
 Cannot shrink a large database


kencowin
Starting Member

5 Posts

Posted - 2007-01-03 : 20:53:38
Hello.

Have an 85GB database made up of a bunch of tables. Only 3 of the tables have any real size to them, and each of those has 20-35GB of allocated space. According to SHOWCONTIG, these tables are only using 2 percent of the space on each page. All the stats I can produce show me that all the data in this database could fit in 11GB and that I have way too many pages in the 3 tables mentioned.

Some of these tables have clustered indexes and some have only nonclustered indexes. This database has been growing in leaps and bounds (8GB a month) to get to 85GB. The drive the data and log (using 400MB) files are on only has 28GB of free space, so I have that going for me. To boot, whoever set up this server with SQL Server for the application used Personal Edition of SQL Server 2000 rather than Standard or Enterprise.

Tried running SHRINKFILE to get the size down to 40GB, but it only took off a few GB. I used the compress option to move pages to the front of the file and also set the maximum percentage of free space to leave in the files to 15-20%. But again, no change.
Am I fighting the fact that this is Personal Edition, that I do not have much room left on the drive for SQL Server to do its thing, or something else?

Kristen
Test

22859 Posts

Posted - 2007-01-04 : 02:28:12
Sounds like you need to defrag the indexes (NOT rebuild them if you are tight for space!)

"and also to indicate a fill factor of 15-20%"

If you mean by this that the indexes are set to 15-20% fill factor that may well be the problem - that's a lot of slack in each index. Can't remember if DEFRAG honours the Fill Factor, but if it does you may not get much improvement.
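As a sketch only (the object names here are placeholders, substitute your own database, table and index):

-- Defragment one index in place; it runs online and can be stopped part way through
DBCC INDEXDEFRAG (YourDatabase, 'YourTable', 'YourIndexName')
GO
-- Then re-check the fragmentation
DBCC SHOWCONTIG ('YourTable') WITH ALL_INDEXES
GO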

See also: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55210&SearchTerms=Optimising+your+indexes+with+DBCC+INDEXDEFRAG

Kristen

MuadDBA

628 Posts

Posted - 2007-01-04 : 09:59:44
Yes, you should change those fillfactors, or investigate why they were set so low. That's the reason your DB is so big but has so much free space on each page.

kencowin
Starting Member

5 Posts

Posted - 2007-01-04 : 12:28:29
Kristen,

Thanks for responding. I mis-typed in my problem description. The 15-20% I typed was the amount of free space to reserve when doing the SHRINKDATABASE rather than a fill factor percentage. Sorry.

On reading about DEFRAG, it looks like it uses the fill factor setting currently in effect for an index. I checked the fill factor setting for the indexes on the large tables in the database and it is set to 90%. With the DBREINDEX command one can set a fill factor to override the current/original setting, but the current 90% sounds good.
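For reference, the rebuild form I was reading about looks something like the line below (I have not run it, given how little drive space is left):

-- Rebuild every index on the table; a fill factor of 0 keeps each index's existing setting
DBCC DBREINDEX ('act_log', '', 0)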

I did SHOWCONTIG on the three indexes in one of these large tables and got the results below, where the numbers for each index looked about the same:

DBCC SHOWCONTIG scanning 'act_log' table...
Table: 'act_log' (52195236); index ID: 3, database ID: 7
LEAF level scan performed.
- Pages Scanned................................: 1730
- Extents Scanned..............................: 266
- Extent Switches..............................: 271
- Avg. Pages per Extent........................: 6.5
- Scan Density [Best Count:Actual Count].......: 79.78% [217:272]
- Logical Scan Fragmentation ..................: 97.86%
- Extent Scan Fragmentation ...................: 97.74%
- Avg. Bytes Free per Page.....................: 863.9
- Avg. Page Density (full).....................: 89.33%

I did SHOWCONTIG on the table itself and got these results:

DBCC SHOWCONTIG scanning 'act_log' table...
Table: 'act_log' (52195236); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 281041
- Extents Scanned..............................: 280925
- Extent Switches..............................: 280924
- Avg. Pages per Extent........................: 1.0
- Scan Density [Best Count:Actual Count].......: 12.51% [35131:280925]
- Extent Scan Fragmentation ...................: 99.59%
- Avg. Bytes Free per Page.....................: 7914.3
- Avg. Page Density (full).....................: 2.22%

Now the values on the table itself do look bad to me. 1 page per extent, scan density of 12%, avg free space per page of 7914 bytes, and avg page density of 2% all look bad and would account for a table using way too much space for the amount of data actually there.

Doing a sp_spaceused on the database and one of the large tables within shows:

Database Database_size Unallocated_space
AHD 86,766.75 MB 431.20 MB

Reserved Data Index_size Unused
88,273,464 KB 11,440,992 KB 336,624 KB 76,495,848 KB

Table Name Rows Reserved Data Index_size Unused
act_log 450,120 36,084,184 KB 4,505,280 KB 68,344 KB 31,510,560 KB

Do you still think a defrag on the indexes will do the trick? Would I need to follow the defrag up with another action or two to get this database down from 80+GB to the 20GB level?

Thanks

kencowin
Starting Member

5 Posts

Posted - 2007-01-04 : 12:35:10
CrazyJoe,

I mis-typed the fill factor being 15-20%. I should have said the amount of free space to leave was 15-20% (on EM's Shrinkdatabase screen). So, in effect I was setting the data to be shrunk down to about 80-85% full.

Sorry for the confusion. I also posted some new info to chew on.

This problem has definitely caused me to dig into areas I had not before....

Thanks

Kristen
Test

22859 Posts

Posted - 2007-01-04 : 13:09:50
My recollection is:

Scan Density -- 100% is best, 0% worst
Logical Scan Fragmentation - Lower is better
Extent Scan Fragmentation - Lower is better
Avg. Bytes free per page - Lower is better
Avg. Page density (full) - Higher is better

so it doesn't look good!

I reckon you should try a DEFRAG. It's interruptible - so if it runs for ages you could kill it and check the SHOWCONTIG to see if it's made any improvement.

"Would I need to follow the Defrag up with another action or two to get this database down from 80+GB to the 20GB level ?"

You'll need another Shrink attempt once the indexes are more optimised - there will be more free-space in the database, reclaimable using Shrink.

Defrag is not going to update the Statistics - so you may need to do that too.
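Very roughly, something like this (just a sketch - substitute your own object names, and run it at a quiet time):

-- 1) Defrag the big indexes (interruptible, stays online)
DBCC INDEXDEFRAG (YourDatabase, 'YourBigTable', 'YourIndexName')
GO
-- 2) Reclaim the freed pages, leaving about 15% free space in the files
DBCC SHRINKDATABASE (YourDatabase, 15)
GO
-- 3) INDEXDEFRAG does not update the statistics, so refresh them afterwards
EXEC sp_updatestats
GO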

Kristen

kencowin
Starting Member

5 Posts

Posted - 2007-01-05 : 22:35:44
Thanks Kristen and Crazyjoe. Will do some defragging of the indexes this weekend (slow time for production). Will also create a temp clustered index on an appropriate field(s) to get the data re-sequenced and hopefully packed tighter. Then delete the temp clustered index.
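The per-table plan will be roughly this (the column and index names are just made up for illustration):

-- Temporary clustered index to re-sequence and re-pack the data pages
CREATE CLUSTERED INDEX tmp_cl_repack ON act_log (id) WITH FILLFACTOR = 90
GO
-- Drop it again afterwards; the data stays packed until new activity fragments it
DROP INDEX act_log.tmp_cl_repack
GO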

By the way, I created a test database on the same instance and copied a couple of these larger tables. One table was 5GB on the original production database and the copied table on the test database only took 22MB!! This table did not have a clustered index. The other table I copied was 28GB with a clustered index and the copied version only took up 1.2GB.

Hoping not to have to do such drastic moves (copy individual tables to a temp database and then copy them back to original db) and that the DEFRAG and building temporary clustered indexes will pack these tables tight. Then I will shrink the database.

Stay tuned.

Thanks

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2007-01-06 : 01:33:03
If the tables do not have any clustered indexes it will probably be faster to do this on each table:
1. Script out all the table definitions, including indexes and constraints, and save it in a text file for later reference.
2. Drop all non-clustered indexes, non-clustered unique constraints, and the non-clustered primary key constraint on the table. If you drop constraints, make sure you do it when the database is not being used.
3. If the table has a clustered index or constraint, run a DBCC DBREINDEX on the table using a fill factor of 80%. If the table doesn't have a clustered index or constraint, then create a clustered index with a fill factor of 80% and leave it. If you have a primary key constraint on the table, it might be a good choice; re-create it as a clustered primary key with a fill factor of 80%.
4. Re-add all the remaining indexes with a fill factor of 80%. (A rough SQL sketch of steps 2-4 is below.)
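For one table, that would look something like this (all the object and column names here are just examples):

-- Step 2: drop the non-clustered indexes and constraints (example names)
DROP INDEX act_log.IX_act_log_date
ALTER TABLE act_log DROP CONSTRAINT PK_act_log
GO
-- Step 3: re-create the primary key as a clustered constraint with an 80% fill factor
ALTER TABLE act_log ADD CONSTRAINT PK_act_log PRIMARY KEY CLUSTERED (id) WITH FILLFACTOR = 80
GO
-- Step 4: re-add the remaining non-clustered indexes, also at 80%
CREATE NONCLUSTERED INDEX IX_act_log_date ON act_log (log_date) WITH FILLFACTOR = 80
GO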

This may not be an ideal index plan, but it is likely many times better than what you have now. Frankly, it sounds like nothing could make your current situation worse. More than likely, you will also see a lot better query performance.

After you do this, you should develop a plan to re-index or defrag the indexes on a regular basis to not let the situation get out of control again.




CODO ERGO SUM

Kristen
Test

22859 Posts

Posted - 2007-01-06 : 02:29:59
"make sure you do it when the database is not being used"

Just to emphasize what MVJ said: if you drop the constraints then data can be entered into the database which violates the constraints. You won't then be able to reapply the constraints later (OR you will have duff-data!!), hence make sure the database is not being used.

"then create a clustered index with a fill factor on 80%"

I like MVJ's "starting with some improvements is better than nothing", but I think that IF your Clustered Index is for a key based on an IDENTITY column you should use 100% instead. If you aren't sure for any reason then stick to 80%.

Kristen

kencowin
Starting Member

5 Posts

Posted - 2007-01-07 : 15:09:34
All,

Thanks for your input !

Yesterday, when usage was low for this application, there were about 9 tables of any size (2-35GB) and all but one had no clustered index. I found that if I created a clustered index on the primary key field (all were an id field) using a 95% fill factor, I got anywhere from a 50% to 90% space gain. Doing further defrags and reindexes did not seem to help any more than that. Then I deleted the newly created clustered index.

After working with all 9 tables, the database was still sitting at 88GB, but it had about 30GB of free space. Then I did a Shrinkfile with the compress option. After 2 hours, the database was at 53GB with an additional 20GB free. The shrinking and compressing seemed to turn up still more free space but had not released it.

Planned on doing some more shrinking next weekend, but did not want to wait another two hours yesterday.
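From what I have read, something along these lines should release the rest of that free space (the logical file name below is a guess - I will get the real one from sp_helpfile first):

-- Find the logical file names for the database
EXEC sp_helpfile
GO
-- Shrink the data file down toward a target size (in MB)...
DBCC SHRINKFILE (AHD_Data, 35000)
-- ...or just give back the unused space at the end of the file without moving any pages
DBCC SHRINKFILE (AHD_Data, TRUNCATEONLY)
GO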

I am thinking I should re-create those clustered indexes and leave them out there. Pretty much all the tables I was working with have an id field as their key, where rows are continually added and the id value keeps going up. There is little data deleted once it is added at this point. So, I do not have to worry much about later page splits from adding keys in the middle.

Definitely opportunities for further improvements, but getting this space knocked down will take the pressure off.