Problem with extent fragmentation?!

Author  Topic 

andyhro
Starting Member

8 Posts

Posted - 2012-09-06 : 09:03:21
Hello,

we have a table (see below) with 40,000,000 records. The table uses 160 GB.

I deleted 20,000,000 records from the table.
The table still uses 160 GB!!!
A REBUILD or REORGANIZE changes nothing!

I think the problem is extent fragmentation.
How can I reduce it?
[See DBCC SHOWCONTIG below]

Thanks in advance.

Andy

-----------------------------------------------
CREATE TABLE [dbo].[Dokument](
[id] [int] NOT NULL,
[erstellungAm] [datetime] NOT NULL,
[erstellungDurch] [char](8) NOT NULL,
[referenzedObjId] [int] NOT NULL,
[referenzedObjtype] [varchar](255) NOT NULL,
[dokVal] [varbinary](max) NULL,
[typ] [tinyint] NOT NULL,
[referenzedObjMan] [varchar](255) NOT NULL,
[archivTyp] [tinyint] NULL,
CONSTRAINT [PK_Dokument] PRIMARY KEY NONCLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]

--------------------------------------------------
CREATE CLUSTERED INDEX [dokument_refObj_idx] ON [dbo].[Dokument]
(
[referenzedObjId] ASC,
[referenzedObjtype] ASC,
[referenzedObjMan] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
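[Editor's note: on SQL Server 2008 the newer DMV gives a clearer picture than DBCC SHOWCONTIG, because it reports the LOB_DATA allocation unit (where the varbinary(max) values live) separately from the in-row data. A minimal sketch, assuming the database is called MyDb:]

```sql
-- Per-allocation-unit fragmentation and page density for dbo.Dokument.
-- 'DETAILED' mode is needed so avg_page_space_used_in_percent is populated.
SELECT index_id,
       alloc_unit_type_desc,            -- IN_ROW_DATA vs. LOB_DATA
       page_count,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
       DB_ID('MyDb'),                   -- placeholder database name
       OBJECT_ID('dbo.Dokument'),
       NULL, NULL, 'DETAILED');
```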

andyhro
Starting Member

8 Posts

Posted - 2012-09-06 : 09:05:39
DBCC SHOWCONTIG scanning 'Dokument' table...
Table: 'Dokument' (2105058535); index ID: 1, database ID: 12
TABLE level scan performed.
- Pages Scanned................................: 265983
- Extents Scanned..............................: 33259
- Extent Switches..............................: 51870
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 64.10% [33248:51871]
- Logical Scan Fragmentation ..................: 0.01%
- Extent Scan Fragmentation ...................: 6.18%
- Avg. Bytes Free per Page.....................: 768.0
- Avg. Page Density (full).....................: 90.51%


sp_spaceused 'Dokument'
name.......rows...........reserved...........data................unused
------------------------------------------------------------------------
Dokument...22193445.......160.626.200 KB.....158.826.624 KB......733.728 KB



andyhro
Starting Member

8 Posts

Posted - 2012-09-10 : 05:11:46
Does nobody have a tip?

lionofdezert
Aged Yak Warrior

885 Posts

Posted - 2012-09-10 : 09:02:29
andyhro, deleting records never reduces database size. Space once acquired by the database can only be reclaimed using DBCC SHRINKDATABASE. But never forget to rebuild all indexes after SHRINKDATABASE.

--------------------------
http://connectsql.blogspot.com/
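[Editor's note: a minimal sketch of the advice above; MyDb is a placeholder database name, and the fillfactor matches the table's existing index definitions:]

```sql
-- Shrink the database files, leaving 10% free space...
DBCC SHRINKDATABASE (MyDb, 10);

-- ...then rebuild all indexes on the table, because shrinking
-- heavily fragments them.
ALTER INDEX ALL ON dbo.Dokument REBUILD WITH (FILLFACTOR = 90);
```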

andyhro
Starting Member

8 Posts

Posted - 2012-09-12 : 08:18:14
Hello lionofdezert,

I know about DBCC SHRINKDATABASE.
I do not want to shrink the DB!
Shrinking does not help if the table itself does not release the space!
The question is: why does my table Dokument keep holding the space?

Below is an example.
After deleting records from table TEST, the space is freed!

The table Dokument, however, still reserves the space!
Why?





BEFORE DELETE
name.......rows...........reserved.......data...........index_size
------------------------------------------------------------------
TEST.......1.790.332......69.440 KB......69.192 KB......272 KB


AFTER DELETE
name.......rows...........reserved.......data...........index_size
------------------------------------------------------------------
TEST.......1.013.694......39.936 KB......39.528 KB......208 KB



jeffw8713
Aged Yak Warrior

819 Posts

Posted - 2012-09-12 : 13:40:44
This is because of the varbinary(max) column. That column is stored differently - and the allocated pages are not recovered when you delete the rows from the table.

You should be able to run DBCC CLEANTABLE on that table and recover the space. If not, your only option will be to build a new table and copy the data over.
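[Editor's note: a sketch of the suggested command; MyDb is a placeholder database name, and the optional batch size limits transaction and log growth on a large table:]

```sql
-- Reclaim space left behind by dropped variable-length / LOB columns,
-- processing 100000 rows per transaction.
DBCC CLEANTABLE ('MyDb', 'dbo.Dokument', 100000);
```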

andyhro
Starting Member

8 Posts

Posted - 2012-10-10 : 02:21:07
Thank you.
I have tried this without success.

The only way is ==> NEW CREATION THE TABLE ?
Unfortunately I can't as easily recreate the table.
There is not enough space on disc and I have not a offline time of many hours to data-copy.

It's very pity that there is probably no solution.

jeffw8713
Aged Yak Warrior

819 Posts

Posted - 2012-10-11 : 14:13:00
Another option might be:

1) Create new column
2) Copy data from old column to new column
3) Delete old column
4) Run DBCC CLEANTABLE

No guarantees though.
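[Editor's note: the steps above could be sketched as follows; the column name dokVal2 is made up, MyDb is a placeholder, and on 20M+ rows the copy step should be batched:]

```sql
-- 1) Create new column
ALTER TABLE dbo.Dokument ADD dokVal2 varbinary(max) NULL;

-- 2) Copy data from old column to new column (batch this in production)
UPDATE dbo.Dokument SET dokVal2 = dokVal;

-- 3) Delete old column
ALTER TABLE dbo.Dokument DROP COLUMN dokVal;

-- 4) Run DBCC CLEANTABLE to reclaim the dropped column's space
DBCC CLEANTABLE ('MyDb', 'dbo.Dokument');

-- Optionally restore the original column name
EXEC sp_rename 'dbo.Dokument.dokVal2', 'dokVal', 'COLUMN';
```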

robvolk
Most Valuable Yak

15732 Posts

Posted - 2012-10-11 : 15:18:35
A few observations:

1. DBCC CLEANTABLE only recovers space if you've dropped columns, not deleted rows.
2. Did you perform ALTER INDEX [dokument_refObj_idx] ON [dbo].[Dokument] REORGANIZE WITH (LOB_COMPACTION = ON) ? That will recover space from deleted rows and consolidate the varbinary(max) values. Based on the extent fragmentation I don't believe this was done.
3. Why is your clustered index so wide? You have an integer and 2 varchar(255) columns. Can you use the ID column instead (make your primary key clustered)?
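[Editor's note: point 2 spelled out as a runnable statement; this is the documented way to compact LOB pages after deletes:]

```sql
-- Reorganize the clustered index and compact the varbinary(max)
-- (LOB) pages freed by the deleted rows.
ALTER INDEX [dokument_refObj_idx] ON [dbo].[Dokument]
REORGANIZE WITH (LOB_COMPACTION = ON);
```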