influent
Constraint Violating Yak Guru
367 Posts
Posted - 2008-07-01 : 16:10:33
I have a SQL 2000 database that consists of two data files and a log file. On the production server the data files total 12 GB and the log file is 8 GB. I've created a script that trims the database down by deleting 99% of the non-lookup data, so that it's small enough to be put in source control and used by developers to run integration tests against function calls from the website code. My problem is that even though DBCC SHRINKFILE shrinks the log file wonderfully, it only gets the data files down to 8.5 GB total, and it's hard for me to believe there's still that much data. Is there something else I can try? There are about 190 tables with mostly integer data and an average of probably 50 rows per table after the deletion script runs. I don't feel as though I can trust what Mgmt Studio says is the amount of space available (0.04 MB).
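One way to sanity-check where the space actually sits is to compare allocated extents against used extents per file. A minimal sketch, with MyDB standing in for the real database name:

USE MyDB  -- placeholder name; substitute the real database
GO
-- Refresh the space accounting first, then report database totals
EXEC sp_spaceused @updateusage = 'TRUE'
GO
-- Per-file allocation: TotalExtents vs. UsedExtents (1 extent = 64 KB)
DBCC SHOWFILESTATS
GO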
tkizer
Almighty SQL Goddess
38200 Posts
sodeep
Master Smack Fu Yak Hacker
7174 Posts
Posted - 2008-07-01 : 16:25:06
"I have a SQL 2000 database that consists of two data files and a log file. The data files total 12 GB and the log file is 8 GB, on the production server. I've created a script that will trim down the database by deleting 99% of the non-lookup data"By shrinking Production DB, you are going to shuffle index pages and cause fragmentation. |
 |
influent
Constraint Violating Yak Guru
367 Posts
Posted - 2008-07-01 : 16:37:27
Thanks Tara, I tried DBCC UPDATEUSAGE and it didn't help. When I create the database from scratch with no data it's only 5 MB, so something weird is going on. Any other ideas?
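For anyone following along, the fullest form of that command also corrects row counts rather than just page and extent counts. A sketch, with MyDB as a placeholder:

-- COUNT_ROWS corrects sysindexes row counts as well as page/extent counts
DBCC UPDATEUSAGE ('MyDB') WITH COUNT_ROWS
GO
-- Then re-check what the totals report
EXEC sp_spaceused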
 |
tkizer
Almighty SQL Goddess
38200 Posts
influent
Constraint Violating Yak Guru
367 Posts
Posted - 2008-07-01 : 16:59:06
I'm not sure if DBCC DBREINDEX ever gets run on the production server because the client controls that, but I hadn't run it on the copy I'm using for testing until just now. I ran the following and then ran SHRINKFILE again, and it's down to 8.1 GB now, still a far cry from where I think it should be (less than 1 GB).

DECLARE @TableName sysname

DECLARE cur_reindex CURSOR FOR
SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

OPEN cur_reindex
FETCH NEXT FROM cur_reindex INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Reindexing ' + @TableName + ' table'
    DBCC DBREINDEX (@TableName, ' ', 100)
    FETCH NEXT FROM cur_reindex INTO @TableName
END
CLOSE cur_reindex
DEALLOCATE cur_reindex
GO
 |
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2008-07-01 : 21:51:20
You can use this script to verify that you do not have any tables using a large amount of space:

Script to analyze table space usage
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=61762

If you have no large tables, you may not be doing the shrinkfile correctly. Try using the script on the link below to shrink the data file:

Shrink DB File by Increment to Target Free Space
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=80355

One last thing to consider is any table that does not have a clustered index. If you have tables without clustered indexes, a reindex will not be able to remove unused space. If you have tables with a small number of rows that the "analyze table space usage" script shows using a large amount of space, that is a likely cause. The easiest way to fix that is to create a clustered index on the table.

CODO ERGO SUM
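To list which tables are heaps (no clustered index), the SQL 2000 system tables can be queried directly. A minimal sketch:

-- In SQL 2000 a heap has a sysindexes row with indid = 0 and no row with indid = 1
SELECT o.name AS TableName
FROM sysobjects o
WHERE o.xtype = 'U'
  AND NOT EXISTS (SELECT 1 FROM sysindexes i WHERE i.id = o.id AND i.indid = 1)
ORDER BY o.name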
 |
steel
Posting Yak Master
107 Posts
Posted - 2008-07-02 : 05:51:56
Hi. To shrink the log file of the database, use this procedure:

Step 1: In Query Analyzer, run: BACKUP LOG <name of the database> WITH NO_LOG.
Step 2: Open Enterprise Manager, change the database view to taskpad, go to Shrink Database, select Files, select the log file, apply the last option, and shrink it.

To shrink the data file, first make sure there are no active connections to the database; if there are, stop all applications using the database, and then shrink the data file the same way the log file was shrunk. Also, shrink the data file in small increments.
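The same steps can be run entirely from Query Analyzer. A sketch, with MyDB and MyDB_Log as placeholders for the database and the logical log file name:

-- WARNING: NO_LOG truncates the log and breaks the log backup chain
BACKUP LOG MyDB WITH NO_LOG
GO
-- Target size is in MB; use the logical file name from sysfiles
DBCC SHRINKFILE (MyDB_Log, 100)
GO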
 |
sodeep
Master Smack Fu Yak Hacker
7174 Posts
Posted - 2008-07-02 : 08:13:36
quote: Originally posted by steel

Hi. To shrink the log file of the database, use this procedure:
Step 1: In Query Analyzer, run: BACKUP LOG <name of the database> WITH NO_LOG.
Step 2: Open Enterprise Manager, change the database view to taskpad, go to Shrink Database, select Files, select the log file, apply the last option, and shrink it.
To shrink the data file, first make sure there are no active connections to the database; if there are, stop all applications using the database, and then shrink the data file the same way the log file was shrunk. Also, shrink the data file in small increments.

If you do this, you are breaking the log chain, so you won't be able to recover to a point in time.
 |
rmiao
Master Smack Fu Yak Hacker
7266 Posts
Posted - 2008-07-03 : 23:44:05
Do a full backup right after shrinking.
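For example (the database name and backup path are placeholders):

-- A fresh full backup restarts the backup chain after BACKUP LOG ... WITH NO_LOG
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak' WITH INIT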
 |
influent
Constraint Violating Yak Guru
367 Posts
Posted - 2008-07-07 : 16:03:02
Firstly, the log file doesn't matter for the integration-testing version of the database I'm trying to create (no need to restore), so I will just drop it and recreate it. Secondly, I deleted all rows from all tables to see what size the database would be, and it was down to 12 MB, so clearly the data was the problem in one way or another. Before I deleted all the data I found one table that still had 7800 rows and no clustered index, but that doesn't seem like enough to cause 8 GB worth of data files, especially considering almost all the other tables had clustered indexes on them.
 |
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2008-07-07 : 16:51:57
quote: Originally posted by influent

...Before I deleted all the data I found one table that still had 7800 rows and no clustered index, but that doesn't seem like enough to cause 8 GB worth of data files, especially considering almost all the other tables had clustered indexes on them.

If you deleted all the data, then you should not have a table with 7800 rows. You might not think a table with only 7800 rows could use 8 GB, but I saw exactly the same thing last week with a table that had only three integer columns and only 180 rows using 4 GB of space. When I added a clustered index, the table shrank to the expected size.

CODO ERGO SUM
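A sketch of that fix, with the table, column, and index names as placeholders:

-- Building a clustered index rewrites the heap and releases its unused pages
CREATE CLUSTERED INDEX IX_MyTable_ID ON dbo.MyTable (ID)
GO
-- If the table must remain a heap, the index can be dropped afterward;
-- the reclaimed space stays released
-- DROP INDEX dbo.MyTable.IX_MyTable_ID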
 |
influent
Constraint Violating Yak Guru
367 Posts
Posted - 2008-07-07 : 19:05:04
Thanks to MVJ's "Script to analyze table space usage" I figured out that my two most important tables use an insane amount of space per row, so even though there are only around 150,000 rows in each, they add up to almost 6 GB total! They each have a bunch of varchar columns, so I'm sure that plays a big part. The bigger of the two has a 9:5 ratio of index size to data size; does that automatically mean something is wrong, or does it just mean I have a lot of indexes (or neither)?

BTW MVJ, do I need to define the word "before" for you? ;)
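For a per-table breakdown of data versus index space, sp_spaceused also works at the table level. A sketch, with dbo.MyBigTable as a placeholder:

-- Reports rows, reserved, data, index_size, and unused for one table
EXEC sp_spaceused 'dbo.MyBigTable', @updateusage = 'TRUE'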
 |
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2008-07-07 : 19:53:27
quote: Originally posted by influent

...BTW MVJ, do I need to define the word "before" for you?...

No. Guess you don't need any more help.

CODO ERGO SUM
 |