 DBCC SHRINKDATABASE


Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-07-31 : 17:28:21
Ok,

I'm currently doing some monthly maintenance on our databases, and after reviewing some of our DTS packages I encountered a few that were running the DBCC SHRINKDATABASE statement. A little research showed that this statement alone takes about 1 hour and 10 minutes to run on a 6.5 GB database. Is this a waste of time? I'm only learning the ways of the DBA, but something tells me that we don't really have to do this. What are possible scenarios in which this needs to be done on a regular basis (daily!)?

--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

Kristen
Test

22859 Posts

Posted - 2006-07-31 : 17:46:22
"Is this a waste of time?"

Yes, YES, YES, YES, YES, YES

The ONLY scenario where SHRINK is needed is where you have done a one-time exceptional delete. The rest of the time, the space that you used will be needed tomorrow by something similar, and doing a SHRINK will just cause the database to re-grow tomorrow - at the cost of considerable CPU resources and disk fragmentation.
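
For that rare one-off case, a targeted DBCC SHRINKFILE against a single file is usually a better tool than a blanket DBCC SHRINKDATABASE, since you can pick the file and the target size. A minimal sketch (the database name, file name, and target size here are hypothetical):

```sql
-- One-off cleanup after an exceptional bulk delete.
-- Database and logical file names are made up for illustration.
USE MyDatabase
GO

-- Shrink only the data file that held the deleted data,
-- to a 4096 MB target, instead of shrinking everything:
DBCC SHRINKFILE (MyDatabase_Data, 4096)
GO
```

Leaving a sensible target size (rather than shrinking to the minimum) avoids the immediate re-grow Kristen describes.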

Don't Shrink!

Kristen

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-07-31 : 20:33:19
Gotcha. I'd just like to add (for the record) that I didn't make these DTS packages and that I hate DTS in general. There...I said it...all in favor say "aye"

--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

jen
Master Smack Fu Yak Hacker

4110 Posts

Posted - 2006-07-31 : 20:50:21
You may want to research further the reason behind this statement in the packages. The creator may have been trying to prevent the packages from breaking by introducing the shrink: HDD space may not have been enough for the next DTS run, and maybe he had already requested additional hardware but management didn't see the impact and rejected his request...something like that.

I find DTS packages valuable.

--------------------
keeping it simple...

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-07-31 : 21:31:33
Shame on you! DTS are badbadbad! hehe...

I did actually ask the guy who created it why it was there, and naturally it was a disk space issue in the SAN at some point, but I didn't confront him with what I felt about it. But in a time when you can pay $9.95 a month for 100 GB of disk space at some web host, I have a hard time accepting the sad face from the sysadmin claiming that we'll have an upgrade next fall.

--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

jen
Master Smack Fu Yak Hacker

4110 Posts

Posted - 2006-07-31 : 21:38:53
are you working where I work???

--------------------
keeping it simple...

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-07-31 : 21:41:36
Apparently

--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-07-31 : 22:31:43
There are actually a couple of reasons I can think of to shrink a database:

1. You are running out of space in the filesystem.
2. You are going to migrate the database to another host or filesystem and want the files as small as possible for the copy.
3. You switched the recovery model from FULL or BULK_LOGGED to SIMPLE.

None of these situations would be something you do routinely. These should be rare occurrences.
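
As a sketch of scenario 3 (database and logical log file names are hypothetical, and the target size is made up), the recovery-model switch and the one-time log shrink that follows it might look like:

```sql
-- Switch the recovery model so the log stops accumulating
-- (SQL Server 2000 syntax; names are illustrative only):
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO

USE MyDatabase
GO

-- One-time shrink of the now largely empty log file
-- down to a 512 MB target:
DBCC SHRINKFILE (MyDatabase_Log, 512)
GO
```

After this one-off operation there should be no need to keep shrinking on a schedule.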

Routinely shrinking datafiles will lead to external file fragmentation - which can really hurt performance and be hard to identify. If this job has been running for a long time, you might have datafiles with hundreds of thousands of fragments. To be clear, I don't mean hundreds or thousands; I mean 100,000+ fragments per datafile.




-ec

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-07-31 : 22:59:40
Well ec...you'll be (as I) thrilled to know that this shrinking business is going on every single day, and has probably been like that for at least a few years. Now how did I check that fragmentation again...? Hm...I'll find it somewhere.

--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-07-31 : 23:03:29
quote:
Originally posted by Lumbago

Well ec...you'll be (as I) thrilled to know that this shrinking business is going on every single day, and has probably been like that for at least a few years. Now how did I check that fragmentation again...? Hm...I'll find it somewhere.




I use the contig command-line tool from Sysinternals: http://www.sysinternals.com/Utilities/Contig.html

Use the -v and -a switches only. This will analyze the fragmentation but not try to fix it. To fix it, you have to take the database offline.
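
The invocation described above might look like this (the path is hypothetical - point it at your own .mdf/.ldf files):

```
contig -a -v D:\MSSQL\Data\MyDatabase.mdf
```

With -a, contig only reports the fragment count for the file; it makes no changes, so it is safe to run while the database is online.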

It would be interesting to see what the damage is. The worst I have ever seen is 230,000+ file fragments. I suppose the fragmentation problem is worse on a filesystem that is close to being full or has gotten completely full before (if that is any consolation).


-ec

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-07-31 : 23:06:35
DBCC SHOWCONTIG does the trick...does this tell you anything? One thing that I didn't mention that might be relevant is that the entire database is recreated every day, truncating every table and filling it up again. That might prevent some fragmentation, won't it? Here's what I got:
DBCC SHOWCONTIG scanning 'lan' table...
Table: 'lan' (2133582639); index ID: 0, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 47455
- Extents Scanned..............................: 5935
- Extent Switches..............................: 5934
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.95% [5932:5935]
- Extent Scan Fragmentation ...................: 26.22%
- Avg. Bytes Free per Page.....................: 1571.0
- Avg. Page Density (full).....................: 80.59%
DBCC execution completed. If DBCC printed error messages,
contact your system administrator.


--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-07-31 : 23:08:53
DBCC SHOWCONTIG is for internal fragmentation only. The CONTIG tool from sysinternals will show you external fragmentation.

Just point contig at one of your datafiles and run with the analyze and verbose options turned on (I think -a -v).



-ec

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2006-08-01 : 08:47:59
Hm, this wasn't too bad was it? ->
Summary:
Number of files processed : 32
Average fragmentation : 24.8125 frags/file
Highest value is for msdbdata.mdf with 358 fragments...

--
Lumbago
"Real programmers don't document, if it was hard to write it should be hard to understand"

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2006-08-01 : 10:37:37
quote:
Originally posted by Lumbago

Hm, this wasn't too bad was it? ->
Summary:
Number of files processed : 32
Average fragmentation : 24.8125 frags/file
Highest value is for msdbdata.mdf with 358 fragments...





Not bad. That isn't even worth worrying about.


-ec
   
