SQL Server Forums
 Indexing Result (DBCC ShowContig)

AskSQLTeam
Ask SQLTeam Question

USA
0 Posts

Posted - 12/09/2005 :  07:26:29
Nandip writes "I have an MS SQL 2000 database of around 2.5 GB. It's an OLTP database, so there are a lot of modifications made on a daily basis to one of the main tables. As a result the transaction log grows every few days, and queries also run slowly after 4-5 days. Hence I have been reindexing the database every 4-5 days to speed things up.

These are the steps I take, all during off-peak time:

1) Shrink the log file:
BACKUP LOG [Databasename] WITH TRUNCATE_ONLY
DBCC SHRINKFILE(Databasename_Log,1,NOTRUNCATE)
DBCC SHRINKFILE(Databasename_Log,1,TRUNCATEONLY)

2) Run DBCC SHOWCONTIG on the heavily modified table to check its fragmentation level:
DBCC SHOWCONTIG(tablename)

3) If the results show that the table needs to be defragmented and the indexes need to be rebuilt, I run DBCC DBREINDEX:
DBCC DBREINDEX(tablename,'',0)
Rebuilding all the indexes on this table takes around 15-20 minutes.

4) After that has executed, I run DBCC SHOWCONTIG again to see whether the table has been defragmented correctly. The things I look at, as I also read in BOL, are:
a) Extent Switches should be as close as possible to Extents Scanned
b) Logical Scan Fragmentation and Extent Scan Fragmentation should be close to 0
c) Avg. Bytes Free per Page should be low, and Avg. Page Density (full) should be high

5) Once the indexes are rebuilt, the data file grows by 2 GB and the log file by 800 MB to 1 GB, so the first thing I do is shrink the transaction log again with the same commands shown in step 1.


6) After this I check the database size and unallocated space:
sp_spaceused @updateusage = 'True'
Once this command executes, I see a large amount of unallocated space. (I don't have an exact idea of how this unallocated space works - is it generally high? If someone could explain this to me I would appreciate it.)

7) Since I am not sure about the unallocated space, or whether the size should be kept at what it shows, I go ahead and shrink the data file as well:
DBCC SHRINKFILE(Databasename_Data,1,NOTRUNCATE)
(this takes around 15-20 minutes to execute)
DBCC SHRINKFILE(Databasename_Data,1,TRUNCATEONLY)

8) When I then check the space used again with sp_spaceused @updateusage = 'True', the unallocated space column is reduced drastically, bringing the total size of the database file back to what it was originally.

9) After this I update the statistics manually (sp_updatestats).

10) Once the statistics have been updated, I recompile all the necessary stored procedures and triggers.

All of this is done within an hour at most.

Now I have a couple of questions:

1) Am I doing this the right way? Is there another way to rebuild the indexes, increase the query speeds, and keep the database size limited?

2) One thing I notice is that after I complete all these steps, I run DBCC SHOWCONTIG on the table one more time to make sure the index rebuild hasn't been undone by the successive steps I took. The one thing I do see change is Logical Scan Fragmentation, which goes back up to around 95%; the rest of the figures stay the same. Does this mean that after I shrank the data and log files and updated the statistics, my data got fragmented again? If so I would have to run DBCC DBREINDEX again, but that would again increase the log and data files."
SQLServerDBA_Dan
Aged Yak Warrior

USA
752 Posts

Posted - 12/09/2005 :  08:36:26
First, here is a good write up on reindexing: http://sql-server-performance.com/rd_index_fragmentation.asp

Second, unless you have an extremely space-restricted machine I would not shrink anything. It's mostly a waste of time, IMO. The reindex process takes much longer because each time it fills the log, the log has to grow. You probably have growth set to the default 10%, meaning that if the log file is truncated and shrunk down to 1 MB, it has to do a lot of growing to get back to the 1 GB you see at the end of the reindexing. Some people change the recovery model to simple prior to reindexing. Others just make sure that there is very little used space and a lot of free space in the log before reindexing.
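A rough sketch of the simple-recovery approach (the database name, table name, and backup path here are just placeholders for your own):

```sql
-- Switch to simple recovery so the rebuild is minimally logged
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

-- Rebuild all indexes on the heavily modified table
DBCC DBREINDEX ('MyBigTable', '', 0)

-- Switch back to full recovery, then take a full backup
-- so the log backup chain starts again
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backup\MyDatabase.bak'
```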

Third, if you use DBCC INDEXDEFRAG then you will need to update statistics manually afterwards, whereas DBCC DBREINDEX updates statistics for you. Running sp_updatestats can be a good idea if you didn't end up reindexing many tables, but keep in mind that DBREINDEX has already updated the statistics on the tables it touched.
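In rough outline the defrag route looks like this (the database, table, and index names are placeholders):

```sql
-- INDEXDEFRAG compacts the index in place but does not update
-- statistics, so follow it with a manual UPDATE STATISTICS.
-- DBCC DBREINDEX would do both in a single step.
DBCC INDEXDEFRAG (MyDatabase, MyBigTable, MyIndex)
UPDATE STATISTICS MyBigTable
```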

"Does this mean that after i shrinked the data and log files and updated the statistics, my data again got fragmented."

You may also want to look over the MS Webcasts on indexing. I know one of them went into detail on DBCC DBREINDEX and what to look at. If it isn't this one (http://support.microsoft.com/default.aspx?scid=kb en-us 814324) then try the others.

Have fun


Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

USA
7020 Posts

Posted - 12/09/2005 :  10:55:35
I think your problem is that you have the database set to full recovery mode, and you are not doing transaction log backups, so the transaction log just keeps growing.

What you should do is:
1. Set up a daily job to do a full backup of the database (if you aren't taking one already)
2. Set up a job to back up the transaction log for that database every 15 minutes

This will keep the size of the transaction log file under control, and you will also have backups to be able to recover your database.
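The job step itself only needs to run something along these lines (the database name and path are examples only):

```sql
-- With regular log backups in FULL recovery, log space is reused
-- and the file stops growing; each backup appends to the file below
BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.bak'
```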

You can read about how to do this in SQL Server Books Online.


quote:
Originally posted by AskSQLTeam

Nandip writes "I have a MS Sql 2000 Database around 2.5 GB. Its an OLTP database. Hence there are lot of modifications done on daily basis to one of the main table. As a result of this the transaction log grows up every few days.


CODO ERGO SUM

paulrandal
Yak with Vast SQL Skills

USA
899 Posts

Posted - 12/10/2005 :  11:06:20
Don't shrink your database. Shrink causes index fragmentation - I've specifically documented this in the BOL for SQL Server 2005.

You only need to worry about fragmentation in indexes used for range scans - see our whitepaper below for more details.

http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx

Regards

Paul Randal
Lead Program Manager, Microsoft SQL Server Storage Engine
(Legalese: This posting is provided "AS IS" with no warranties, and confers no rights.)

derrickleggett
Pointy Haired Yak DBA

USA
4184 Posts

Posted - 12/10/2005 :  13:10:10
Paul, we used that white paper and some articles from Kalen to create our own customized indexing process. We have it table-driven so we can pass parameters in on what should be reindexed or defragged. We can specify tables to have higher priority, variations in fragmentation, time to run, etc. It also gives us a detailed report each day so we can do analysis and change fill factors, index design, etc.

It's a good white paper.


MeanOldDBA
derrickleggett@hotmail.com

When life gives you a lemon, fire the DBA.

paulrandal
Yak with Vast SQL Skills

USA
899 Posts

Posted - 12/10/2005 :  18:32:01
Thanks for the feedback - great to hear it's useful. We'll be releasing a version for SQL Server 2005 in Q1 CY06.

Best regards.

Paul Randal
Lead Program Manager, Microsoft SQL Server Storage Engine
(Legalese: This posting is provided "AS IS" with no warranties, and confers no rights.)

Kristen
Test

United Kingdom
22431 Posts

Posted - 12/11/2005 :  02:57:01
What they all said:

Don't shrink anything that occurs in normal daily use. The disk space will be needed again tomorrow ... and it's expensive to get it back again.

IMO you should set your database files to grow by a fixed amount, rather than a percentage. Extending a 2.5GB database by 10% takes several minutes, IME, and things start to time out etc.
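Something like this sets a fixed growth increment (the logical file names and sizes are examples only; check yours with sp_helpfile):

```sql
-- Grow by a fixed 100 MB / 50 MB per increment instead of 10%,
-- so each growth is quick and predictable
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 100MB)
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 50MB)
```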

If a table is big then use DEFRAG instead of REINDEX. Reindexing copies the whole shooting-match to a new part of the database and then deletes the old pages - hence the increase in space you are seeing. DEFRAG does it in place, the database is available while it runs, and it can be interrupted without losing the work done so far.

Make sure you are running in either SIMPLE recovery mode [which means you will ONLY be able to restore to a full backup] or FULL [which means that a) you MUST take transaction log backups regularly (every 15 minutes or hourly would be good) and b) you will be able to restore to a point in time].

Take a look at Tara's blog for some ideas [and code!] on admin procedures:

http://weblogs.sqlteam.com/tarad/category/95.aspx

Kristen
SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC