sql_er
Constraint Violating Yak Guru
267 Posts
Posted - 2007-07-06 : 16:47:12
Guys,

We have an INDEXDEFRAG job that runs every few days. On our heaviest environment it takes about 30 minutes to run, and while it is running we get many stored procedure timeouts (a timeout occurs when a stored procedure takes more than 30 seconds to complete). All of our stored procedures finish in under a second during normal operation, and when the defragmentation job is off we have no timeouts.

I know that we need to defragment, since rows are constantly being inserted into and deleted from our tables, which fragments the indexes. However, we do not want timeouts either.

Can anyone clue me in as to why the timeouts occur during this process, and whether there is any way to modify it to reduce their number? The script I use to perform the INDEXDEFRAG is shown below (I got it from a website).

/* Perform a 'USE <database name>' to select the database
   in which to run the script. */

-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR(128)
DECLARE @execstr   VARCHAR(255)
DECLARE @objectid  INT
DECLARE @indexid   INT
DECLARE @frag      DECIMAL
DECLARE @maxfrag   DECIMAL

-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 10.0

-- Declare cursor over all base tables
DECLARE tables CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

-- Create the table to hold the DBCC SHOWCONTIG output
CREATE TABLE #fraglist (
    ObjectName     CHAR(255),
    ObjectId       INT,
    IndexName      CHAR(255),
    IndexId        INT,
    Lvl            INT,
    CountPages     INT,
    CountRows      INT,
    MinRecSize     INT,
    MaxRecSize     INT,
    AvgRecSize     INT,
    ForRecCount    INT,
    Extents        INT,
    ExtentSwitches INT,
    AvgFreeBytes   INT,
    AvgPageDensity INT,
    ScanDensity    DECIMAL,
    BestCount      INT,
    ActualCount    INT,
    LogicalFrag    DECIMAL,
    ExtentFrag     DECIMAL)

-- Open the cursor
OPEN tables

-- Loop through all the tables in the database
FETCH NEXT FROM tables INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Do the showcontig of all indexes of the table
    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
        WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT FROM tables INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

-- Declare cursor for the list of indexes to be defragged
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexId, LogicalFrag
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
  AND INDEXPROPERTY(ObjectId, IndexName, 'IndexDepth') > 0

-- Open the cursor
OPEN indexes

-- Loop through the indexes
FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', '
        + RTRIM(@indexid) + ') - fragmentation currently '
        + RTRIM(CONVERT(VARCHAR(15), @frag)) + '%'
    SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid) + ', '
        + RTRIM(@indexid) + ')'
    EXEC (@execstr)
    FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
END

-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes

-- Delete the temporary table
DROP TABLE #fraglist

Thanks a lot!
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2007-07-06 : 17:05:04
That's just how it works, so increase your timeout value so that the queries will at least complete.

Tara Kizer
http://weblogs.sqlteam.com/tarad/
 |
sql_er
Constraint Violating Yak Guru
267 Posts
Posted - 2007-07-06 : 17:14:59
I think there must be a way around it. It is not very enjoyable to click on a website link and then wait a minute for it to return results ...

Since we run INDEXDEFRAG at night, the load is at least lighter ...

Thank you
 |
Haywood
Posting Yak Master
221 Posts
Posted - 2007-07-06 : 17:18:12
INDEXDEFRAG will acquire some locks as it runs, unlike REINDEX, which locks the table in its entirety. It sounds like your procedures are encountering blocking while the defrag runs. You may want to visit the procedures in question and see if you can reduce their locking, thereby reducing the blocking that occurs during the INDEXDEFRAG.

You may also want to look at the physical design of the tables/indexes that produce the heaviest blocking during the defrag. There may be logical issues with the storage of the data that cause the defrag to acquire larger locks to complete an I/O operation.
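A minimal sketch for confirming whether blocking is actually occurring while the defrag runs (assuming SQL Server 2000/2005, where master..sysprocesses is available):

-- Sessions that are currently blocked, and who is blocking them
SELECT
    spid,                   -- the blocked session
    blocked,                -- spid of the session doing the blocking
    waittime,               -- milliseconds spent waiting so far
    lastwaittype,           -- e.g. LCK_M_S for a shared-lock wait
    DB_NAME(dbid) AS dbname
FROM master..sysprocesses
WHERE blocked <> 0

Run it a few times during the defrag window; if your application spids keep appearing in the spid column, blocking (rather than a pure resource bottleneck) is the likely cause.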
 |
sql_er
Constraint Violating Yak Guru
267 Posts
Posted - 2007-07-06 : 17:26:32
Haywood,

All the stored procedures in question use WITH (NOLOCK) on all the tables involved. Can there be any other type of locking going on?

Thanks
 |
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2007-07-06 : 17:28:21
You might have a resource bottleneck on the server. I would check CPU, disk I/O, memory, etc. with Performance Monitor to see if you can identify the problem.

Another thing I would look at is whether it is the DBCC INDEXDEFRAG or the DBCC SHOWCONTIG that is causing the problem. DBCC SHOWCONTIG can be very resource intensive, so make sure that is not the culprit.

Also, you should try being more selective with your reindexing, to make sure you are only doing as much as necessary and that it is actually improving performance. If the majority of your transactions are small, direct lookups against a small number of rows, reindexing that often is probably not necessary.

Another thing to look at is your transactional procedures themselves. Maybe they are inefficient or have poor query plans, and the resource constraints are highlighting the problem.

CODO ERGO SUM
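As a concrete sketch of that selectivity, the index cursor in the posted script could also filter on size, so that small indexes, where fragmentation rarely matters, are skipped entirely; the 1000-page threshold below is an illustrative assumption, not a recommendation:

-- Only defrag indexes that are both fragmented and large enough to matter
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexId, LogicalFrag
FROM #fraglist
WHERE LogicalFrag >= @maxfrag
  AND CountPages > 1000   -- hypothetical threshold: skip small indexes
  AND INDEXPROPERTY(ObjectId, IndexName, 'IndexDepth') > 0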
 |
Haywood
Posting Yak Master
221 Posts
Posted - 2007-07-06 : 17:31:43
If they're all running NOLOCK, it may not be blocking, but you should still check whether blocking is occurring.

Given that, I'd have to agree that there may be resource issues, and you should follow Michael's advice. Also, one thing I've done to alleviate the blocking that SHOWCONTIG causes is to run the SHOWCONTIG on a restored reporting server (restores don't eliminate fragmentation) and build my defrag scripts from its output to execute on production. It's a bit more work, but on busy servers it can be worth the effort...
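A minimal sketch of that script-generation step, run against #fraglist on the restored server (object IDs survive a restore, so the generated commands remain valid on production; PRODDB is a placeholder database name):

-- On the reporting server: emit the commands instead of executing them,
-- then run the output on production
SELECT 'DBCC INDEXDEFRAG (PRODDB, ' + RTRIM(ObjectId) + ', '
     + RTRIM(IndexId) + ')   -- ' + RTRIM(ObjectName)
FROM #fraglist
WHERE LogicalFrag >= 10.0
  AND INDEXPROPERTY(ObjectId, IndexName, 'IndexDepth') > 0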
 |
sql_er
Constraint Violating Yak Guru
267 Posts
Posted - 2007-07-06 : 17:54:03
Michael and Haywood - thanks for the advice. I will try these things out!
 |
spirit1
Cybernetic Yak Master
11752 Posts
Posted - 2007-07-06 : 18:00:22
I'm going to write the same answer here that I gave you on sql-server-performance:

Are your stored procedures doing inserts or updates that would require table locking? Or page locks that escalate to table locks? Because even if you use WITH FAST, updates and inserts that require an exclusive table lock can't be performed while SHOWCONTIG holds its locks on the table, because there's no such thing as a NOLOCK insert/update.

_______________________________________________
Causing trouble since 1980
blog: http://weblogs.sqlteam.com/mladenp
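To see what those locks actually look like during that window, sp_lock (available on SQL Server 2000/2005) is enough for a first pass:

-- Dump all locks currently held or requested; TAB entries against your
-- hot tables while the job runs indicate table-level locking
EXEC sp_lock
-- or limit it to a single session (52 is a hypothetical spid):
EXEC sp_lock 52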
 |
Kristen
Test
22859 Posts
Posted - 2007-07-09 : 03:30:12
"We have INDEXDEFRAG job running every few days."If you ran it every night it should have less work to do, and may block for shorter times."All the stored procedures in question are using WITH(NOLOCK) on all the tables used"Scares the bejesus out of me when I hear of people using NOLOCK as a tool-of-choice in their designs.Kristen |
 |
sql_er
Constraint Violating Yak Guru
267 Posts
Posted - 2007-07-09 : 10:50:07
Spirit1 - The stored procedures are doing SELECTs only, and these selects are not very complicated. As stated earlier, during normal times all of these stored procedures take under a second to return results.

Kristen:
1. We did run it every night initially, and I am aware that it has less work to do that way. However, to have no timeouts on at least some days (even if there are more on the others), we decided to run it only twice a week.
2. What is so wrong with WITH (NOLOCK)? Although our data is constantly being updated, we don't mind doing dirty reads.

Thanks for the info!
 |
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2007-07-09 : 11:02:55
quote: Originally posted by sql_er
...
2. What is so wrong with WITH(NOLOCK)? Although our data is constantly being updated, we don't mind doing dirty reads
...

What you are saying is that it is OK to return information that is incorrect. A dirty read means the data you get may not be in a consistent state and may never actually be committed. It is a very bad practice to rely on this as a routine method.

CODO ERGO SUM
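If this environment is on SQL Server 2005 or later (an assumption; the thread does not say), row versioning is one way to get non-blocking reads without the dirty-read problem. A sketch, with MyDatabase as a placeholder name:

-- Assumption: SQL Server 2005+. Readers see the last committed version
-- of each row instead of blocking or reading uncommitted data; the row
-- versions are kept in tempdb.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
-- Plain SELECTs then stop blocking behind writers, so the NOLOCK hints
-- can be removed instead of risking inconsistent results.

Note that the ALTER waits for exclusive access to the database, so it needs a quiet moment to complete.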
 |
Kristen
Test
22859 Posts
Posted - 2007-07-09 : 11:11:16
"we decided to run them only twice a week"I don't really understand that. Apart from whether it might time out anyway, or not, why wouldn't you want your tables optimal every day if you could? Int he absence of some scheduled downtime, say, once a week and if you have a similar "quiet" period every night, why wouldn't you run it every night?"we don't mind doing dirty reads"Well that's fine then. Personally I can't imagine a case in which it was an acceptable Business Rule for end users to see dirty reads.Kristen |
 |
sql_er
Constraint Violating Yak Guru
267 Posts
Posted - 2007-07-09 : 12:14:22
Michael, I agree with you. It is just that I am maintaining what was originally designed, and I don't think I can change much here. Although we might have issues, it is not so very bad: the stored procedures run against tables into which data is constantly being inserted by applications (which we assume are working flawlessly), and no updates are ever done, just constant inserts.

Kristen, our scheduled downtimes are once every 3 months. As for 'quiet times', we have them at night, although there is still some activity, and some people here are unhappy that we have timeouts ...

Thanks a lot!
 |