Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 All Forums > SQL Server 2005 Forums > SQL Server Administration (2005)
 Script taking too long to execute?

Author  Topic 

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-08 : 07:51:16

Hi All,

Need some expert advice on the below scenario.
As per the business need, we are increasing the size of certain columns in the database.
The database size is 40 GB.
Basically, this is what happens inside the script:
1. Drop all the constraints (including primary keys).
2. Run a series of ALTER TABLE statements to increase the size of the columns in all the tables having those columns.
3. Re-create the indexes.

It was taking more than 5 hours, and in the middle we ran out of log space. What I did was increase the log space to 12 GB, change the recovery model to SIMPLE, and start the script again. This time it completed within 2 hours.

Once everything was fine, I shrank the log file and changed the recovery model back to FULL.
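
The recovery-model switch being described is a single statement each way; a minimal sketch, with db1 standing in for the actual database name:

```sql
-- Switch to SIMPLE before the long-running maintenance script.
ALTER DATABASE db1 SET RECOVERY SIMPLE;

-- ... run the maintenance script here ...

-- Switch back afterwards.
ALTER DATABASE db1 SET RECOVERY FULL;
-- In FULL recovery, take a full or differential backup at this point
-- to restart the transaction log backup chain.
```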

The same script has to be executed against one more database, which is 60 GB and contains the same set of tables. I changed its recovery model to SIMPLE as well for minimal logging, but for this one it is taking more than 5 hours.

What should I do to boost the performance?
Do I also need to increase the size of tempdb for creating the primary keys (since all the data in the table has to be sorted)?

Now i have few questions

1. Dropping the constraints should not take much time. Am I right, given that I have changed the recovery model to SIMPLE?
2. Creating the constraints/indexes will require more space, so capacity planning is needed for the mdf and ldf files.

How can I estimate the space needed so that I can allocate it beforehand and avoid autogrowth?


3. Re-creating the primary keys will create clustered indexes, whose sorts can take place in tempdb. I have made enough room for this as well: tempdb size is 5442.87 MB with 5373.34 MB available, which shouldn't be a problem while creating the indexes. But why is it taking more time?
4. I ensured no connections are open other than mine.
5. Available RAM on that machine is 2 GB.

Are there any other aspects I need to consider that I might have missed? How can I figure out what's happening inside? What should my next steps be?

Looking for sincere replies.

Thanks in Advance!

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-01-08 : 10:10:18
Make sure when dropping indexes that you drop your nonclustered indexes before the clustered index. When recreating them, create the clustered index first.
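
A minimal sketch of that ordering, using placeholder table and index names (not from this thread):

```sql
-- Dropping: nonclustered indexes first, clustered primary key last.
DROP INDEX IX_TName_SecId ON dbo.TName;          -- nonclustered
ALTER TABLE dbo.TName DROP CONSTRAINT PK_TName;  -- clustered PK

-- ... the ALTER COLUMN statements run here ...

-- Recreating: clustered primary key first, nonclustered indexes after.
ALTER TABLE dbo.TName ADD CONSTRAINT PK_TName PRIMARY KEY CLUSTERED (ID);
CREATE NONCLUSTERED INDEX IX_TName_SecId ON dbo.TName (SecId);
```

Doing it in the other order rebuilds every nonclustered index twice: once when the clustered index is dropped (they are rewritten against the heap) and again when it is recreated.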
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-08 : 13:19:39
What happens if we miss the order?

What else can be done to improve performance and reduce the time?
Go to Top of Page

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-01-08 : 13:35:23
Make sure you do what russell said, as that's how to make it faster.

Use the SORT_IN_TEMPDB option for the index creations if you are using Enterprise Edition.

Stop shrinking the files; the shrinks are what lead to the autogrowths.
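
A sketch of the syntax, with placeholder table, index, and column names:

```sql
-- The intermediate sort for the index build happens in tempdb
-- rather than in the user database's files.
CREATE UNIQUE CLUSTERED INDEX PK_TName
ON dbo.TName (ID)
WITH (SORT_IN_TEMPDB = ON);
```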

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-09 : 11:23:56
Hi guys,

I have an important point to figure out here. It is taking an enormous amount of time, so I stopped the script execution.

I found a few front-end applications were connected to the server, so I killed all those connections.

Next, I also stopped SQL Server Agent and ensured no jobs were running.

I tested one ALTER script with the NOCHECK option, trying to add a PRIMARY KEY, which is again clustered.

EXAMPLE:

ALTER TABLE [DBA].[TNAME]
WITH NOCHECK
ADD CONSTRAINT [PK_ID] PRIMARY KEY CLUSTERED
(ID,
DATEPERIOID,
SECID,
ELEMENTID,
INSTID,
RACID
) WITH FILLFACTOR = 90 ON [PRIMARY]

This is the statement I have.

Enabling this constraint is taking 40 minutes; I have clearly observed that.

To reduce the logging in the log file, before running the script I changed the recovery model to SIMPLE rather than going with FULL.

Here are quick questions.

1. I have observed that the table contains 2 crore (20 million) rows.
It took 40 minutes.
My question is: I am just adding the primary key constraint here, with a clustered index and NOCHECK, which means no check should happen against the existing data and it should simply create the constraint. So why is it taking so long?

I am executing the space-monitoring script for tempdb and for the database against which I am running the ALTER script. I can see there is a lot of free space available in all the data files, as well as in tempdb.

While executing, I can only see small MB-sized changes in db1log.ldf, nothing more than that.

Tempdb also looks good!

System configuration

tempdb.mdf 3 GB
tempdb.ldf 2 GB

db1_01.mdf 10 GB
db1_02.ndf 10 GB
db1_03.ndf 10 GB
db1_04.ndf 10 GB
db1_05.ndf 10 GB
db1log.ldf 12 GB


I executed the script below to monitor whether anything is happening in the files (e.g. any index creation).
But it is taking an extremely long time.


USE tempdb
go
SELECT Name, Filename,
CONVERT(Decimal(15,2),ROUND(a.Size/128.000,2)) [Currently Allocated Space (MB)],
CONVERT(Decimal(15,2),ROUND(FILEPROPERTY(a.Name,'SpaceUsed')/128.000,2)) AS [Space Used (MB)],
CONVERT(Decimal(15,2),ROUND((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000,2)) AS [Available Space (MB)]
FROM dbo.sysfiles a (NOLOCK)
go

use db1
go

SELECT Name, Filename,
CONVERT(Decimal(15,2),ROUND(a.Size/128.000,2)) [Currently Allocated Space (MB)],
CONVERT(Decimal(15,2),ROUND(FILEPROPERTY(a.Name,'SpaceUsed')/128.000,2)) AS [Space Used (MB)],
CONVERT(Decimal(15,2),ROUND((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000,2)) AS [Available Space (MB)]
FROM dbo.sysfiles a (NOLOCK)
go



My question is why it is taking so much time.
Question 1: Even though I haven't specified the WITH CHECK option, is SQL Server doing a row-by-row constraint check?
Question 2: When creating the index in the above case, does it rearrange the 2 crore (20 million) rows in the data file, since it is a clustered index? Is any reorganization being done inside the data files (the PRIMARY filegroup) in this case?

I am looking forward to the experts' thoughts!

It took the whole day to change/run the scripts, and the script is still executing.

Can anybody explain what is happening inside SQL Server in such a scenario?

Please bear with my language. I am totally frustrated!

Thanks in Advance
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-09 : 11:28:27
Guys,

We are creating in exactly the order Russell suggested:

1. Primary key clustered indexes
2. Then unique indexes
3. Then foreign keys

Even then it is taking a very long time!
Go to Top of Page

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-01-09 : 16:03:47
You can try setting the database to single-user mode:

ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE

Make sure everything is rolled back before you start adding indexes.

Don't bother with CHECK/NOCHECK; just create your index. Do you really need the fill factor?

When you create a clustered index, the data must be physically sorted as defined by the clustering key. This will take some time and consume a lot of CPU and I/O. If there are nonclustered indexes on the table then they are rebuilt too which is why we told you to drop nonclustered first, and create clustered first.
Go to Top of Page

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-01-09 : 18:06:21
100% fill factor should be used in almost all cases in SQL Server 2005 and 2008. We did extensive performance testing on it and found only 100% performed the best. We were trying to reduce how quickly fragmentation was occurring, but switching fill factors was not a solution for us.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-10 : 01:16:31
Do you mean to say that instead of a 90% fill factor I should use 100%?

If I use 100%, what will be the effect on that table for future updates?

Thank You.

Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-10 : 01:17:31

Here is one of the SQL statements that is taking a long time:

ALTER TABLE TNAME ALTER COLUMN VERSIONID CHAR(10);

The table contains 4 crore (40 million) rows.

It took 40 minutes to execute the above statement.

Before running it, I set the database to the SIMPLE recovery model.

I used the script below to monitor which files in the filegroups were being modified, and found that changes were being made to the .LDF file; it took 40 minutes.


USE tempdb
go
SELECT Name, Filename,
CONVERT(Decimal(15,2),ROUND(a.Size/128.000,2)) [Currently Allocated Space (MB)],
CONVERT(Decimal(15,2),ROUND(FILEPROPERTY(a.Name,'SpaceUsed')/128.000,2)) AS [Space Used (MB)],
CONVERT(Decimal(15,2),ROUND((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000,2)) AS [Available Space (MB)]
FROM dbo.sysfiles a (NOLOCK)
go

use db1
go

SELECT Name, Filename,
CONVERT(Decimal(15,2),ROUND(a.Size/128.000,2)) [Currently Allocated Space (MB)],
CONVERT(Decimal(15,2),ROUND(FILEPROPERTY(a.Name,'SpaceUsed')/128.000,2)) AS [Space Used (MB)],
CONVERT(Decimal(15,2),ROUND((a.Size-FILEPROPERTY(a.Name,'SpaceUsed'))/128.000,2)) AS [Available Space (MB)]
FROM dbo.sysfiles a (NOLOCK)
go





1. My question is: if the recovery model were set to FULL, how long would it take? It should take more time, right?
Please correct me if I am wrong!

2. Also, I want to know what is written into the log. Does it write each and every change to the log just as in FULL, only that when it reaches a threshold value, say 70-80%, it overwrites the log file? Please advise!

Rather than SIMPLE, can I go for the BULK_LOGGED recovery model so that I can reduce the writes to my .LDF file? I am saying this because even if I lose the data, I still have the script with me to run the ALTER again.

So, will the BULK_LOGGED recovery model reduce my time?
Go to Top of Page

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-01-10 : 10:57:48
Trust me that you want a 100% fill factor. My systems are very write-heavy and have no issues with the 100% fill factor. Read performance decreases dramatically with anything lower than 100%.

1. Switching recovery models won't really help with performance, as the transaction is still logged.
2. In SIMPLE mode, the log is truncated after the transaction completes. In FULL, the log records are kept until the tlog backup runs. But keeping them until the tlog backup completes is not a performance issue, just a disk space issue.

Switching recovery models will not help with performance at all. It just saves on disk space.

You need to check your I/O by running PerfMon to see if you are bottlenecked there. Check what values you get for Avg. Disk sec/Read and Avg. Disk sec/Write for the Logical Disk object. They should be no higher than 0.012 most of the time.
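
As a cross-check from inside SQL Server 2005, a sketch of reading the same per-file latencies from sys.dm_io_virtual_file_stats (0.012 seconds is 12 ms here):

```sql
-- Average I/O stall per read/write, in ms, per database file,
-- accumulated since the last SQL Server restart.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.name AS file_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id
ORDER BY avg_write_ms DESC;
```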

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-10 : 12:08:51
Thanks friend.
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-10 : 12:30:54
Hi Kizer,

quote:
You need to check your I/O by running PerfMon to see if you are bottlenecked there. Check what values you get for Avg. Disk sec/Read and Avg. Disk sec/Write for the Logical Disk object. They should be no higher than 0.012 most of the time.


This is the first time I am using PerfMon.

What are the steps involved to monitor I/O in PerfMon? Is it the counters concept you are talking about?

Can you list out the steps? I can try it and see.

Go to Top of Page

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-01-10 : 18:08:28
START / RUN / type in PERFMON / <Enter>

Go to System Monitor and hit the + sign

Tick the "Select counters from computer" radio button and type in the name of your SQL Server

In the Performance Object drop-down, choose Physical Disk

Add the counters you're interested in

As Tara suggested, measure Avg. Disk sec/Read and Avg. Disk sec/Write. I'd add Avg. Disk Queue Length as well
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-11 : 01:31:47
Hi Kizer,

I just ran it on my local machine, and I am getting the stats below for the counter. You mean to say that the Average value below should be greater than 0.012, right?

Last: 0.000 | Average: 0.000 | Minimum: 0.000 | Maximum: 0.003
Color | Scale | Counter | Instance | Parent | Object | Computer
____ | 1000.00 | Avg. Disk sec/Write | _Total | --- | Physical Disk | ---

Thank You So much!

Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-11 : 01:35:34
One more thing I want to ask: if the value is less than the one you specified, what can be done as the next step?

Also, I would like to know which other counters we can monitor with respect to SQL Server, and what the corresponding benchmarks are.

If you have any links, please send them to me.

Thank You.
Go to Top of Page

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-01-11 : 13:31:36
The values should be less than 0.012. Add the counters for the Logical Disk object. It should be "Avg. Disk sec/Read" and "Avg. Disk sec/Write". Both counters should be less than 0.012. Instead of adding _Total, add the counters where the database files (mdf, ldf, ndf) are located as we don't care about the OS drive or other drives that don't involve the databases.

See sql-server-performance.com for what counters to monitor.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-16 : 01:16:00
Thank You very much
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2010-01-16 : 04:54:40
If you are increasing the size of columns, would it be faster to:

BCP out the data (NOTE: export in PK-sorted order)

Drop FKs

Drop the table

Recreate the table with the new column size / constraints / clustered primary key

Import the data with BCP using the "sorted in PK order" hint

Recreate secondary indexes and FKs
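
Under the stated assumption that the data file was exported in PK order, the reload step can be sketched in T-SQL with BULK INSERT (bcp with -h "ORDER(ID ASC), TABLOCK" is the command-line equivalent; all names and paths below are placeholders):

```sql
-- Recreate the table with the widened column and the clustered PK.
CREATE TABLE dbo.TName (
    ID        INT      NOT NULL,
    VERSIONID CHAR(10) NOT NULL,   -- the widened column
    CONSTRAINT PK_TName PRIMARY KEY CLUSTERED (ID)
);

-- Reload the native-format export.
BULK INSERT dbo.TName
FROM 'D:\export\TName.dat'
WITH (
    DATAFILETYPE = 'native',
    ORDER (ID ASC),   -- declares the file is already sorted on the clustering key
    TABLOCK           -- enables minimal logging under SIMPLE/BULK_LOGGED recovery
);
```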
Go to Top of Page

frank.svs
Constraint Violating Yak Guru

368 Posts

Posted - 2010-01-29 : 09:49:28
Friends,

I would like to attach some screenshots to this forum post but could not find any option to upload attachments.

How can I paste my screenshots in this forum?

Sorry if I have asked the wrong question, but I want to know.
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2010-01-29 : 12:18:05
You have to upload the images to a public gallery, like photobucket, and then link to that from here.

There is currently no option in this forum to upload photographs [to be hosted directly on sqlteam.com]
Go to Top of Page