SQL Server Forums
Problem with block fragmentation ?!

andyhro
Starting Member

Germany
8 Posts

Posted - 09/06/2012 :  09:03:21
Hello,

we have a table (see below) with 40,000,000 records. The table uses 160 GB.

I deleted 20,000,000 records from the table.
The table still uses 160 GB!!!
After a REBUILD or REORGANIZE there is no change!

I think the problem is extent fragmentation.
How can I reduce that?
[See DBCC SHOWCONTIG below]

Thanks in advance.

Andy

-----------------------------------------------
CREATE TABLE [dbo].[Dokument](
    [id] [int] NOT NULL,
    [erstellungAm] [datetime] NOT NULL,
    [erstellungDurch] [char](8) NOT NULL,
    [referenzedObjId] [int] NOT NULL,
    [referenzedObjtype] [varchar](255) NOT NULL,
    [dokVal] [varbinary](max) NULL,
    [typ] [tinyint] NOT NULL,
    [referenzedObjMan] [varchar](255) NOT NULL,
    [archivTyp] [tinyint] NULL,
    CONSTRAINT [PK_Dokument] PRIMARY KEY NONCLUSTERED
    (
        [id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]

--------------------------------------------------
CREATE CLUSTERED INDEX [dokument_refObj_idx] ON [dbo].[Dokument]
(
    [referenzedObjId] ASC,
    [referenzedObjtype] ASC,
    [referenzedObjMan] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
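[For reference, the REBUILD/REORGANIZE attempts mentioned above would look something like this - a sketch, using the table and index names from the DDL:]

```sql
-- Rebuild every index on the table (offline unless ONLINE = ON is available)
ALTER INDEX ALL ON [dbo].[Dokument] REBUILD;

-- Or the lighter-weight, always-online alternative:
ALTER INDEX ALL ON [dbo].[Dokument] REORGANIZE;
```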

andyhro
Starting Member

Germany
8 Posts

Posted - 09/06/2012 :  09:05:39
DBCC SHOWCONTIG scanning 'Dokument' table...
Table: 'Dokument' (2105058535); index ID: 1, database ID: 12
TABLE level scan performed.
- Pages Scanned................................: 265983
- Extents Scanned..............................: 33259
- Extent Switches..............................: 51870
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 64.10% [33248:51871]
- Logical Scan Fragmentation ..................: 0.01%
- Extent Scan Fragmentation ...................: 6.18%
- Avg. Bytes Free per Page.....................: 768.0
- Avg. Page Density (full).....................: 90.51%


sp_spaceused 'Dokument'

name      rows       reserved        data            unused
--------  ---------  --------------  --------------  ----------
Dokument  22193445   160,626,200 KB  158,826,624 KB  733,728 KB



Edited by - andyhro on 09/06/2012 09:16:18

andyhro
Starting Member

Germany
8 Posts

Posted - 09/10/2012 :  05:11:46
Does no one have a tip?

Edited by - andyhro on 09/10/2012 05:12:32

lionofdezert
Aged Yak Warrior

Pakistan
885 Posts

Posted - 09/10/2012 :  09:02:29
andyhro, deleting records never reduces the database size. Space, once acquired by the database, can only be reclaimed by using DBCC SHRINKDATABASE. But never forget to rebuild all indexes after the shrink.
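[The suggestion above, as commands - a sketch; the database name MyDb is a placeholder:]

```sql
-- Release unused space in the database files back to the OS
DBCC SHRINKDATABASE (MyDb);

-- Shrinking heavily fragments indexes, so rebuild them afterwards
ALTER INDEX ALL ON [dbo].[Dokument] REBUILD;
```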

--------------------------
http://connectsql.blogspot.com/

andyhro
Starting Member

Germany
8 Posts

Posted - 09/12/2012 :  08:18:14
Hello lionofdezert,

I know about "DBCC SHRINKDATABASE".
But I do not want to shrink the DB!
Shrinking does not help if the table itself does not release the space!
The question is: why does my table Dokument keep the space?

Below is an example.
After deleting records from the table TEST, the space is freed!

The table Dokument still keeps its reserved space!
Why?





BEFORE DELETE
name      rows       reserved   data       index_size
--------  ---------  ---------  ---------  ----------
TEST ...  1,790,332  69,440 KB  69,192 KB  272 KB


AFTER DELETE
name      rows       reserved   data       index_size
--------  ---------  ---------  ---------  ----------
TEST ...  1,013,694  39,936 KB  39,528 KB  208 KB



Edited by - andyhro on 09/12/2012 08:19:13

jeffw8713
Aged Yak Warrior

USA
769 Posts

Posted - 09/12/2012 :  13:40:44
This is because of the varbinary(max) column. That column is stored differently - large values go off-row onto separate LOB pages - and those allocated pages are not recovered when you delete rows from the table.

You should be able to run DBCC CLEANTABLE on that table and recover the space. If not, your only option will be to build a new table and copy the data over.
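[The suggested command, as a sketch - the database name MyDb is a placeholder, and the third argument limits how many rows are processed per transaction:]

```sql
-- Reclaim space from dropped variable-length / LOB columns,
-- 100000 rows per internal transaction
DBCC CLEANTABLE (MyDb, 'dbo.Dokument', 100000);
```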

andyhro
Starting Member

Germany
8 Posts

Posted - 10/10/2012 :  02:21:07
Thank you.
I have tried this, without success.

So the only way is ==> recreating the table?
Unfortunately I can't recreate the table that easily.
There is not enough space on disk, and I don't have an offline window of many hours for the data copy.

It's a pity that there is probably no solution.

jeffw8713
Aged Yak Warrior

USA
769 Posts

Posted - 10/11/2012 :  14:13:00
Another option might be:

1) Create new column
2) Copy data from old column to new column
3) Delete old column
4) Run DBCC CLEANTABLE

No guarantees though.
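[The steps above sketched as T-SQL - the new column name and batch size are illustrative, MyDb is a placeholder, and a rename step is added so the column keeps its original name:]

```sql
-- 1) Add a new LOB column
ALTER TABLE [dbo].[Dokument] ADD [dokVal_new] varbinary(max) NULL;
GO
-- 2) Copy the data over in batches to keep transactions short
--    (repeat until 0 rows are affected)
UPDATE TOP (10000) [dbo].[Dokument]
SET [dokVal_new] = [dokVal]
WHERE [dokVal_new] IS NULL AND [dokVal] IS NOT NULL;
GO
-- 3) Drop the old column and rename the new one
ALTER TABLE [dbo].[Dokument] DROP COLUMN [dokVal];
EXEC sp_rename 'dbo.Dokument.dokVal_new', 'dokVal', 'COLUMN';
GO
-- 4) Reclaim the space left by the dropped column
DBCC CLEANTABLE (MyDb, 'dbo.Dokument', 100000);
```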

robvolk
Most Valuable Yak

USA
15635 Posts

Posted - 10/11/2012 :  15:18:35
A few observations:

1. DBCC CLEANTABLE only recovers space if you've dropped columns, not deleted rows.
2. Did you perform ALTER INDEX [dokument_refObj_idx] ON [dbo].[Dokument] REORGANIZE WITH (LOB_COMPACTION = ON) ? That will recover space from deleted rows and consolidate the varbinary(max) columns. Based on the extent fragmentation I don't believe this was done.
3. Why is your clustered index so wide? You have an integer and 2 varchar(255) columns. Can you use the ID column instead (make your primary key clustered)?
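[Points 2 and 3 sketched as statements - note that reindexing a 160 GB table is itself a long operation, so this is not tested against the real system:]

```sql
-- Point 2: compact LOB pages and reclaim space from deleted rows
ALTER INDEX [dokument_refObj_idx] ON [dbo].[Dokument]
REORGANIZE WITH (LOB_COMPACTION = ON);

-- Point 3: make the narrow id column the clustered key instead
-- (both existing indexes must be dropped and recreated)
ALTER TABLE [dbo].[Dokument] DROP CONSTRAINT [PK_Dokument];
DROP INDEX [dokument_refObj_idx] ON [dbo].[Dokument];
ALTER TABLE [dbo].[Dokument]
ADD CONSTRAINT [PK_Dokument] PRIMARY KEY CLUSTERED ([id]);
CREATE NONCLUSTERED INDEX [dokument_refObj_idx] ON [dbo].[Dokument]
([referenzedObjId], [referenzedObjtype], [referenzedObjMan]);
```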
SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC