Author |
Topic |
maramor
Starting Member
18 Posts |
Posted - 2009-03-01 : 12:00:41
|
Here is the issue. I have a 350GB database; after deleting some data I am using 80GB of that 350GB. Now I want to shrink it. NO, DON'T TELL ME NOT TO SHRINK... I UNDERSTAND THAT. WE RELOCATED THE DATA; IT WILL NOT BE GROWING. NO, DON'T TELL ME ABOUT LDF SHRINK, IT'S THE MDF. YES, I RAN THE COMMANDS CORRECTLY... As you can see I am getting really frustrated. Don't take it personally. I have searched and searched and can't find anything.

I have 65,000 tables. This is a PeopleSoft database, running SQL 2005 Std x64. The database was on a SQL 2000 server at one time and we upgraded to 2005; since then I have kept a copy on this server for shrink testing. The shrink failed on the original machine as well. My theory is that data, or deleted data, is stuck at around the 235GB mark. I don't know how to find it, delete it, or relocate it. I added an NDF and relocated all but about 20GB; that took 26 hours to complete.

I tried to generate scripts for all tables, but it hangs at determining objects. I can select small batches, but again, I have 65,000 tables. I tried SSIS to export; it fails or hangs. We have deleted all indexes and all views, stopping just short of deleting all tables, which would not solve the issue anyway as I would then lose the data.

WHAT ON GOD'S EARTH DO I NEED TO DO? WHAT DO YOU WANT TO SEE?

Thanks,
Matt |
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2009-03-01 : 13:08:58
|
What's not working? You say what you've tried, but not why you're trying strange things like import/export or dropping indexes.

Shrinking the data file requires just one command. It doesn't matter where in the file the data is; SQL will relocate it (which is one of the reasons we say not to shrink regularly: it fragments indexes while moving data earlier in the file).

DBCC SHRINKFILE(<name of primary data file>, 92160) should shrink the primary data file to 90GB (the target size is specified in MB).

If that doesn't work, explain what has happened (error, wrong size, hung, etc).

--
Gail Shaw
SQL Server MVP |
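Spelled out as a runnable sequence (a sketch only: the database and logical file names below are placeholders, not confirmed by this thread; check sys.database_files for the real ones):

```sql
USE FNDEV;  -- placeholder database name
GO

-- List logical file names and current sizes (size is in 8 KB pages).
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- DBCC SHRINKFILE takes its target size in MB, so 90 GB = 92160.
DBCC SHRINKFILE (FNDEV_Data, 92160);
```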
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-01 : 19:53:04
|
I did explain what was not working, or at least attempted to: the shrink. It stops at 235GB, as stated above. I understand that shrinking is normally a bad thing and that fragmentation occurs. As I attempted to explain, I have a 350GB database which, after the data was removed, holds about 80GB. It is not going to grow for a long, long time, if at all. When we shrink, it stops at 235GB.

Sorry, frustration got the better of me. I have tried the normal shrink and the various other things listed in my last post. After every attempt I reran a shrink for testing. It gets to a magic 235GB and stops every time. I have tried everything I can think of.

Recent news: my manager brought in some help. We were able to export all the data out via bcp, truncate, and shrink in about 3 hours. We are in the process of importing the data back in now.

Now, can anyone explain why? How would I go about finding what is at 235GB that is preventing the shrink? I read about running index defrag, but what about table defrag? Is it better to defrag the indexes or just rebuild all of them every night? We removed indexes, views, and everything but the tables; why would export and truncate work?

Thanks,
Matt |
 |
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2009-03-02 : 03:15:07
|
quote: Originally posted by maramor
I did explain what was not working or at least attempted... Shrink. It stops at 235gb as stated above.

As in it hangs? As in it fails with an error? As in it shrinks, but not to the size you want? Please remember I can't see your server, I don't know what you've done, and I can't read your mind.

quote:
As I attempted to explain, I have a 350gb database which after data being removed is about 80gb. It is not going to grow for a long long time if at all. When we shrink it stops at 235gb.

I don't suppose you have the output of sp_spaceused run on the database? Without any evidence or other info, I'd guess that some pages were locked and shrink couldn't move them.

quote:
Sorry, frustration got the better of me.

Please don't take it out on us. We are trying to help, and yelling at people in your post is likely to make them ignore you.

quote:
Now, can anyone explain why? How would I go about finding what is at 235gb that is preventing the shrink?

I doubt you can now. Anything that would have given info has been lost with the export/shrink/import.

quote:
I read about running index defrag but what about table defrag?

Indexes get defragmented, not tables. If a table has a clustered index, rebuild that and you essentially rebuild the entire table. If the table is a heap, it's an unordered set of data and, with no order, there's no meaning to fragmentation.

quote:
Is it better to defrag the index or just rebuild all of them every night?

The usual rule of thumb is: reorg if fragmentation is between 10% and 30%, rebuild if it's above 30%. Nightly is probably overkill unless you have indexes that fragment horrendously. Weekly is good to start with.

--
Gail Shaw
SQL Server MVP |
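That rule of thumb can be sketched against sys.dm_db_index_physical_stats (available in SQL 2005); the table and index names below are illustrative, not from this thread:

```sql
-- Find fragmented indexes in the current database.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent AS frag_pct
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND i.name IS NOT NULL;   -- skip heaps

-- Then, per index (names are illustrative):
ALTER INDEX IX_Some ON dbo.SomeTable REORGANIZE;  -- 10-30% fragmentation
ALTER INDEX IX_Some ON dbo.SomeTable REBUILD;     -- above 30%
```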
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-02 : 21:18:06
|
To further explain: it stops, as in it just stops, as if it had completed, at 235GB. I assume data is stuck. Looking at the files shows exactly what we hope to see: 80GB used and 200+ GB free. Even SSMS says something like 76% free. I used an incremental shrink and it still stops at 235GB.

Is there some method to find which set of data is the problem? Whether the database keeps working during the extract is a moot point; I have a copy to work with for a few more days, and if I can solve it faster, it's better for us. I would think that you could look and find that such-and-such a table exists on this extent, and find any gaps in said tables. Using this you should be able to map out 235GB in extents and pages, find which table sits around the 235GB mark, and then rebuild its index.

I have 65,000 tables and umpteen billion indexes, so dropping and rebuilding all of them is not going to happen. In fact, Generate Scripts hangs and never processes the request unless I select a small subset, which is fine, but I need to know which object is the problem in order to generate scripts to rebuild it. Plus we document for others, and from what I have been able to find this is not documented anywhere. I am not trying to be a pain; thank you for your help. I will get you an sp_spaceused when I get back into the office.

As of this point our plan is the extract/shrink/reimport. We have a shell created from our tests that we will use for the reimport. This shell has no data and holds all the tables, indexes, and such. We will be performing a pre-production attempt in a test environment to mimic the exact steps needed and benchmark the time it will take. Once this is complete I will lose any ability to review why this was so much trouble.

Thanks,
Matt |
 |
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2009-03-03 : 07:08:04
|
No errors? No errors in the error log? Are there any other people using the server? Are there any open transactions or locks held?

There's no way to find what table is in that piece of the file without reading page by page (using an undocumented command) and manually looking up the object names. Even if you did find what table was at that point, I'm not sure it would help. A fragmented index will not stop a shrink.

--
Gail Shaw
SQL Server MVP |
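For the record, the undocumented route looks roughly like this (a sketch only: DBCC PAGE is unsupported, its output format varies between builds, and the page number and object id below are illustrative, not from this thread):

```sql
-- Send DBCC PAGE output to the client instead of the error log.
DBCC TRACEON (3604);

-- Dump the header of a page near where the shrink stops
-- (file id 1, page 30000000 is an illustrative value).
-- The header's "Metadata: ObjectId" identifies the owning object.
DBCC PAGE ('FNDEV', 1, 30000000, 0);

-- Map that object id back to a table name (id is illustrative).
SELECT OBJECT_NAME(1977058079);

DBCC TRACEOFF (3604);
```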
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-03 : 20:35:32
|
I was unable to run sp_spaceused today. There are no errors anywhere when running a shrink; it just does not go lower than 235GB. There is no one using the server and no open transactions. It gives the same result on every server, and even from a fresh restore.

It looks like we really don't have a clue what this exact issue is. Whatever the problem is, you have to truncate the data out of the tables in order to recover from this state. As was mentioned above, perhaps it is a heap, or a table without a key or identifiable piece of data? I am just about giving up on this. If there is anything you would like me to provide, let me know now, as we plan to remove this database any day now. We will end up performing the bcp out-and-back-in process, which is a nightmare and a half, riddled with risk.

I would like to thank everyone for putting up with my attitude on this project. I apologize now.

Thanks,
Matt |
 |
|
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts |
Posted - 2009-03-03 : 22:39:48
|
Have you run DBCC CHECKDB to verify that your database is not corrupt?

CODO ERGO SUM |
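A minimal form of that check (the database name is a placeholder; NO_INFOMSGS keeps the output down to actual errors):

```sql
DBCC CHECKDB ('FNDEV') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```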
 |
|
robvolk
Most Valuable Yak
15732 Posts |
Posted - 2009-03-03 : 23:14:32
|
One other thing to check is your transaction log. If the log file is growing too large for the disk it's on, or if it reaches its maximum size, the shrink won't finish. Although you should receive an error to that effect, e.g. "Disk is full." |
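A quick way to check this while the shrink runs (a sketch; nothing here is specific to this thread):

```sql
-- Percentage of each database's log currently in use.
DBCC SQLPERF (LOGSPACE);

-- Log file size and growth ceiling for the current database
-- (size and max_size are in 8 KB pages; max_size of -1 means unlimited).
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE max_size WHEN -1 THEN NULL ELSE max_size * 8 / 1024 END AS max_size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';
```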
 |
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2009-03-04 : 03:27:42
|
quote: Originally posted by maramor
I was unable to run the sp_spaceused today.

Can you run it tomorrow? It would be really useful to see what that returns. Also check what Rob and Michael suggest.

--
Gail Shaw
SQL Server MVP |
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-10 : 00:23:07
|
Sorry for the delay; we had a production issue where an admin forgot to include the daylight saving patch on several new servers. What a pain that was to get all figured out; we had to back-track all systems to verify... ARG!!! Anyway, we had no issues. YAY!!

This is after the data was removed and at least one shrink attempted:

database_name   database_size   unallocated space
FNDEV           316134.81 MB    237319.63 MB

reserved        data            index_size      unused
80441400 KB     51706728 KB     28619800 KB     114872 KB

Thanks,
Matt |
 |
|
GilaMonster
Master Smack Fu Yak Hacker
4507 Posts |
Posted - 2009-03-10 : 04:12:59
|
Nothing in there to indicate a problem. If you have some downtime, could you run DBCC UPDATEUSAGE and then sp_spaceused again, just to be 100% sure those values are correct?

Are you sure there are no open transactions and no locks held? Check DBCC OPENTRAN and sys.dm_tran_locks.

--
Gail Shaw
SQL Server MVP |
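Those checks, sketched out (run in the database being shrunk):

```sql
-- Oldest active transaction in the current database, if any.
DBCC OPENTRAN;

-- Locks currently held against this database (SQL 2005 DMV).
SELECT resource_type, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```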
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-10 : 13:37:05
|
I did UPDATEUSAGE already. This is straight from a backup: the database was restored and the shrink processes performed. I even detached and re-attached it, with the same results.

Thanks,
Matt |
 |
|
hope2help
Starting Member
4 Posts |
Posted - 2009-03-12 : 07:31:48
|
I think I just finished sorting out a similar problem. Using SQL Internals Viewer [url]http://www.sqlinternalsviewer.com/[/url] I determined which table was stuck at the end of the file.

Otherwise you could try using the "Metadata: ObjectId" value from:

dbcc traceon(3604)
dbcc page({dbName}, {fileId}, {usedPages})
dbcc traceoff(3604)

>> using dbcc showfilestats to get UsedExtents (x8 for usedPages)

I bcp'd this table out, dropped it, shrank the file, then recreated the table. For me it was only the single table causing the problem. |
 |
|
hope2help
Starting Member
4 Posts |
Posted - 2009-03-12 : 07:43:00
|
Sorry, I just realised that if you are going to try the dbcc page option, you will want to use TotalExtents, not UsedExtents. You may need to take off a couple of pages until you find an object. I didn't go this way myself; I cheated and used the visual app mentioned above. I thought of this later, so it's not proven. |
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-12 : 09:21:18
|
dbcc showfilestats

Fileid  FileGroup  TotalExtents  UsedExtents  Name                FileName
1       1          3774011       279          FNDEV89TE_Data      U:\FNDEV89TE_Data.MDF
3       1          1280000       1258570      FNDEV89TE_NEW_Data  T:\FNDEV89TE_NEW_Data.ndf

-- 3774011 * 8 = 30192088
-- 1280000 * 8 = 10240000

dbcc traceon(3604)
dbcc page(FNDEV89TE, 1, 30192088)
dbcc traceoff(3604)

Msg 8968, Level 16, State 1, Line 1
Table error: DBCC PAGE page (1:30192088) (object ID 0, index ID 0, partition ID 0, alloc unit ID 0 (type Unknown)) is out of the range of this database.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Merging back to 1 file... will try again. Also checking SQL Internals Viewer... it looks really good. The only issue I am having at the moment is slowness because of the size. I will let you know shortly.

Thanks,
Matt |
 |
|
hope2help
Starting Member
4 Posts |
Posted - 2009-03-12 : 20:01:13
|
Sorry, I shouldn't have posted the unproven method. The error is due to pages being numbered from 0. The concept assumes that after an attempted shrink, the problematic object will be right at the end of the allocated extents. Anyway, try:

set nocount on

declare @lastPage bigint, @inspectPage bigint
declare @fileStats table (
    Fileid int,
    [FileGroup] int,
    TotalExtents bigint,
    UsedExtents bigint,
    [Name] sysname,
    [FileName] varchar(1024)
)

insert into @fileStats
exec('dbcc showfilestats with no_infomsgs')

select @lastPage = TotalExtents * 8 - 1 from @fileStats -- page id is 0 based
select @inspectPage = @lastPage - 7 -- look at first page in the last allocated extent
-- can move backwards from here in quantities of 8, looking at the first page of each extent, until you find an object

dbcc traceon(3604) with no_infomsgs
dbcc page('ISMIS_Laptop', 1, @lastPage) with no_infomsgs
dbcc traceoff(3604) with no_infomsgs |
 |
|
hope2help
Starting Member
4 Posts |
Posted - 2009-03-12 : 22:42:55
|
I think my poor testing skills are evident here. This line should have referenced @inspectPage, not @lastPage:

dbcc page('FNDEV89TE', 1, @inspectPage) with no_infomsgs |
 |
|
maramor
Starting Member
18 Posts |
Posted - 2009-03-13 : 14:18:27
|
Found the table; the first test worked. Now I have to tell my boss that the 3 weeks of bcp export/import testing might be a wash. Thank you very much.

Thanks,
Matt |
 |
|
|