eddiefdz
Starting Member
21 Posts |
Posted - 2010-04-06 : 21:24:17
|
I have a database whose MDF file is 357 GB in size. I know the data doesn't take up that much, because when I run a full backup, the backup is only about 220 GB. I assume that means the MDF file just has blank space in it. What is the best way to recover that unused space on the disk and shrink the MDF file down to an adequate size?

Thanks!

Eddie Fernandez
IT Director
MTech |
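(For reference, one way to confirm how much of each data file is actually empty is to query `FILEPROPERTY` per file. This is a generic sketch, not something from this thread — run it in the context of the database you want to inspect:)

```sql
-- Approximate free space inside each file of the current database.
-- size and SpaceUsed are reported in 8 KB pages, so divide by 128 for MB.
SELECT
    name                                           AS logical_name,
    size / 128                                     AS file_size_mb,
    FILEPROPERTY(name, 'SpaceUsed') / 128          AS space_used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_space_mb
FROM sys.database_files;
```

The gap between `file_size_mb` and `space_used_mb` is the slack space the backup doesn't include (a full backup only copies used pages, which is why the backup is smaller than the MDF).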
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
eddiefdz
Starting Member
21 Posts |
Posted - 2010-04-07 : 09:34:54
|
Well, the reason I want to shrink the file is that every time I run maintenance on the database, such as index defragmentation, it causes that empty space to keep growing. I normally run monthly maintenance, which grows the empty file space by about 20 GB each time, and I don't want it to keep growing without me being able to shrink it. My database grows by about 1 GB or so a day. I don't mind leaving some empty space available, but 130 GB of empty space is ridiculous. Is there a way to shrink it directly through SQL? I know there is a shrink-file option when you right-click on the database, but is that the correct way to do it?

Eddie Fernandez
IT Director
MTech |
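(For what it's worth, the right-click option just runs `DBCC SHRINKFILE` under the covers, so yes, it can be done directly in T-SQL. A hedged sketch — the database and logical file names below are placeholders, not from this thread, and note the caveats about fragmentation discussed later in this thread before actually running it:)

```sql
USE YourDatabase;  -- hypothetical database name
GO
-- Shrink the data file down to a target size given in MB.
-- 250000 MB here leaves ~30 GB of headroom over a 220 GB data set.
DBCC SHRINKFILE (N'YourDatabase_Data', 250000);  -- hypothetical logical file name
```

Be aware that a shrink moves pages toward the front of the file and typically leaves indexes heavily fragmented, so shrinking and then immediately rebuilding indexes tends to just regrow the file.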
|
|
Kristen
Test
22859 Posts |
Posted - 2010-04-07 : 10:40:12
|
If the next maintenance job will add the 130 GB of empty space back, then do NOT shrink the file - that will cause fragmentation the next time the file is re-extended, which will adversely affect performance.

There are various ways around the ginormous amount of space that SQL can use for rebuilding indexes - using a defragmentation method instead of an index rebuild, for example. Also, only rebuild indexes which are above a threshold level of fragmentation (rather than rebuilding everything).

If you are only doing it once a month then most/many indexes may be badly fragmented. Doing a more "lightweight" maintenance routine more frequently may help reduce the maximum working space that is required at any one time [compared to doing it infrequently] - but you will have to ditch the standard "index rebuild" methods to achieve this. |
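(The threshold-based approach above can be sketched with `sys.dm_db_index_physical_stats`. The 5%/30% cut-offs below are the commonly cited rule of thumb, not thresholds mandated by this thread, and the query is illustrative only:)

```sql
-- Suggest an action per index based on fragmentation in the current database.
-- REORGANIZE works in place; REBUILD copies the index and needs working space.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent,
    CASE
        WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
        ELSE 'leave alone'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.index_id > 0  -- skip heaps (no clustered index)
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Preferring `ALTER INDEX ... REORGANIZE` for moderately fragmented indexes avoids the "copy the whole index" working-space demand that keeps growing the file.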
|
|
eddiefdz
Starting Member
21 Posts |
Posted - 2010-04-07 : 10:50:16
|
Well, the maintenance job will not add the 130 GB back in one shot; it will grow the empty space back little by little. It took over a year's worth of maintenance to get the empty space up to the 130, so that's why I thought that shrinking it down would help. I don't mind if it grows little by little, as long as it's not taking up such a huge amount. I am assuming that if I shrink the file, the MDF file should come down to about the size of the backup, correct? The nightly full backup is at 220 GB - when I shrink the file, can I expect the MDF file to go down to about 220 GB in size?

Eddie Fernandez
IT Director
MTech |
|
|
Kristen
Test
22859 Posts |
Posted - 2010-04-07 : 11:11:50
|
If the MDF file has grown in the last year (you say "My database grows by about 1 gig or so a day"), and you have not done any major purge/deletion, then it needs the slack space.

Maintenance won't use up the slack space little-by-little, but the normal nature of an Index Rebuild is to copy the whole lot to a "fresh area", and if there isn't a suitable "fresh area" that may (I don't know for sure) cause it to extend the file - so that the index can be contiguous.

Index Defrag doesn't do the copy-thing; it just redistributes the entries, in situ, so that the index pages are balanced. Hence my suggestion that that may put less pressure on file size requirements.

If you don't need to reclaim the slack space for some other purpose on that disk, I wouldn't bother - the first time it is needed again (e.g. for a one-time data operation) and the file is re-extended, fragmentation will occur, and every time you shrink and the file then regrows, fragmentation will potentially get worse.

If you have done a large one-time purge of old data, or someone accidentally ran a task that caused huge expansion of the file, then a shrink back to "normal size" (i.e. bigger than "minimum size") would be in order, but I wouldn't do it otherwise - particularly on a file that large, where performance is presumably important.

The only downsides of having an overly large file are:

1) It's consuming disk space that could be made available to something else. If you have enough disk space then this isn't important.

2) A restore-from-backup will pre-create a file of the original size. If you want to move the database to another, smaller machine, that could be an issue. (If so, shrink it as part of the move-to-other-machine process, not as a "routine" maintenance task.)

You can check how much free space is available, including per table/index, which may reveal fragmentation levels that need sorting out - e.g. if you have tables with NO clustered indexes which are not getting defragged by your current index rebuild routines. |
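(A quick way to find those tables with no clustered index - heaps - which common index-rebuild routines often skip. This is a generic sketch against the system catalog, not a query taken from this thread:)

```sql
-- Heaps have an entry in sys.indexes with index_id = 0.
-- These tables are not touched by clustered-index maintenance.
SELECT
    OBJECT_SCHEMA_NAME(t.object_id) AS schema_name,
    t.name                          AS table_name
FROM sys.tables  AS t
JOIN sys.indexes AS i
    ON i.object_id = t.object_id
   AND i.index_id  = 0
ORDER BY schema_name, table_name;
```

`sp_spaceused 'schema.table'` on any table this returns will then show its reserved vs. used space.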