
 Log Shipping - Shrinking Log and Data Files


magictech
Starting Member

44 Posts

Posted - 2004-12-21 : 15:50:57
I'm having problems managing log and data file sizes while log shipping is configured and running properly. I have a re-indexing job that runs weekly, and after this job runs, the log and data files of the database that's being log-shipped each grow by an additional 40 GB. I would like to be able to shrink the data and log files of the primary database back to their normal sizes without breaking log shipping. I realize the process of shrinking data and log files will automatically truncate the transaction log, which will break log shipping. Is there a way to shrink these files without breaking log shipping? Any information would be greatly appreciated.

Regards

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-21 : 16:26:51
Shrinking the database does not truncate the tlog unless you tell it to. What command are you running?

And you should NOT be shrinking the files if the reindexing job is just going to need the space again. Your database takes a huge performance hit during the shrink and then again when the files have to expand. I'd recommend adding disk space to support the file sizes that the reindexing job needs.
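
For example, a plain DBCC SHRINKFILE on its own leaves the log chain (and therefore log shipping) alone. A minimal sketch, assuming the logical names reported by sp_helpfile (the data file name here is just a placeholder; check your real ones first):

USE Staging
GO
-- Shrink to a target size in MB, with no BACKUP LOG ... WITH TRUNCATE_ONLY anywhere,
-- so the backup chain that log shipping relies on is untouched.
DBCC SHRINKFILE ('Staging2_Data', 20000)
DBCC SHRINKFILE ('Staging2_Log', 200)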

Tara

magictech
Starting Member

44 Posts

Posted - 2004-12-21 : 17:22:58
Thanks for responding. I'm using the script below to shrink the log file, which I believe truncates the transaction log in the process. I would like to be able to shrink the log and data files without breaking log shipping, and truncating the log will break it.

SET NOCOUNT ON
DECLARE @LogicalFileName sysname,
@MaxMinutes INT,
@NewSize INT

-- *** MAKE SURE TO CHANGE THE NEXT 4 LINES WITH YOUR CRITERIA. ***
USE [Staging] -- This is the name of the database
-- for which the log will be shrunk.
SELECT @LogicalFileName = 'Staging2_Log', -- Use sp_helpfile to
-- identify the logical file
-- name that you want to shrink.
@MaxMinutes = 10, -- Limit on time allowed to wrap log.
@NewSize = 200 -- in MB

-- Setup / initialize
DECLARE @OriginalSize int
SELECT @OriginalSize = size -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName
SELECT 'Original Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),@OriginalSize) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(@OriginalSize*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName

CREATE TABLE DummyTrans
(DummyColumn char (8000) not null)

-- Wrap log and truncate it.
DECLARE @Counter INT,
@StartTime DATETIME,
@TruncLog VARCHAR(255)
SELECT @StartTime = GETDATE(),
@TruncLog = 'BACKUP LOG ['+ db_name() + '] WITH TRUNCATE_ONLY'
-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)

EXEC (@TruncLog)

-- Wrap the log if necessary.
WHILE @MaxMinutes > DATEDIFF (mi, @StartTime, GETDATE()) -- time has not expired
      AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName) -- the log has not shrunk
      AND (@OriginalSize * 8 / 1024) > @NewSize -- the value passed in for new size is smaller than the current size
BEGIN -- Outer loop.
    SELECT @Counter = 0
    WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
    BEGIN -- update
        INSERT DummyTrans VALUES ('Fill Log') -- Because it is a char field it inserts 8000 bytes.
        DELETE DummyTrans
        SELECT @Counter = @Counter + 1
    END -- update
    EXEC (@TruncLog) -- See if a trunc of the log shrinks it.
END -- outer loop
SELECT 'Final Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),size) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName
DROP TABLE DummyTrans
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-21 : 17:25:36
I take it that you did not write this code. This part of the code is creating the query to do the truncate:

@TruncLog = 'BACKUP LOG ['+ db_name() + '] WITH TRUNCATE_ONLY'

But why do you want to shrink it if you are just going to take a performance penalty when it needs to expand again?

Tara

Hippi
Yak Posting Veteran

63 Posts

Posted - 2004-12-21 : 23:01:42
[quote]Originally posted by tduggan

I take it that you did not write this code. This part of the code is creating the query to do the truncate:

@TruncLog = 'BACKUP LOG ['+ db_name() + '] WITH TRUNCATE_ONLY'

But why do you want to shrink it if you are just going to take a performance penalty when it needs to expand again?

Tara
[/quote]
Tara,
So what will you do if the transaction log is too big?
In my experience, I make a full backup, detach the database, and reattach after deleting the big log file.
And I have a plan to do a full backup every day and a transaction log backup every two hours.
Do you think that's good enough?
Thanks a bunch.

hgorijal
Constraint Violating Yak Guru

277 Posts

Posted - 2004-12-22 : 01:36:44
If the transaction log is growing too big constantly, you simply increase the frequency of log backups (or log shipping). It's more than ok to do it even every 15 minutes, if required.

Detaching, attaching, etc. is not a viable production procedure. It just increases downtime, disconnects all the users, and could eventually ..... get you fired.

For an occasional increase in log file size, simply shrink the log file after the next log backup.
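
A rough sketch of what that looks like, once the regular log-shipping log backup has run (the file name and target size are only examples):

USE Staging
GO
DBCC SQLPERF (LOGSPACE)                -- check how full the log actually is
-- Shrink only the physical log file; no TRUNCATE_ONLY, so log shipping keeps working.
DBCC SHRINKFILE ('Staging2_Log', 500)  -- target size in MB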

Hemanth Gorijala
Exchange a Dollar, we still have ONE each._______Exchange an Idea, we both have TWO each.

Kristen
Test

22859 Posts

Posted - 2004-12-22 : 04:16:22
quote:

I have a re-indexing job that runs weekly, and after this job runs, the log ... grows by an additional 40 GB

How about re-indexing less aggressively?

We fiddled around with a threshold for re-indexing - starting at 95% and working down until we saw a change in performance. This reduced the number of tables that needed reindexing. We also changed our maintenance routine to use DEFRAG, instead of REINDEX, on tables that had more than 10,000 pages [I think]. (But I'm not sure if that made a difference to the log size.)

Does reindexing a clustered index cause the other indexes on the table to be reindexed? If so then also reindexing those will boost the log file size too (i.e. if you are looping round all tables & indexes).

What about REBUILD STATS - does that take much log space? Again, I think there is no point doing those after a REINDEX (although is a Reindex the equivalent of a FULL SCAN Rebuild Stats?)
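
Going back to the threshold idea, here's a rough sketch of how it can be driven from DBCC SHOWCONTIG (the 90% scan density and 10,000-page cutoffs are just the sort of numbers we played with, not gospel):

-- Capture fragmentation info for every index in the current database.
CREATE TABLE #frag (
    ObjectName CHAR(255), ObjectId INT, IndexName CHAR(255), IndexId INT,
    Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
    AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
    AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL,
    BestCount INT, ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

-- Only bother with indexes below the scan-density threshold, and prefer
-- INDEXDEFRAG over DBREINDEX on the big ones to keep the log smaller.
SELECT ObjectName, IndexName, ScanDensity, CountPages,
       CASE WHEN CountPages > 10000 THEN 'DBCC INDEXDEFRAG'
            ELSE 'DBCC DBREINDEX' END AS SuggestedAction
FROM #frag
WHERE ScanDensity < 90 AND IndexId BETWEEN 1 AND 254
ORDER BY ObjectName, IndexName

DROP TABLE #frag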

Kristen

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-22 : 12:22:45
Completely agree with hgorijal.

Kristen, no the reindex is not the equivalent of a full scan with update stats.

MS has said that reindexing isn't necessary on small-scale systems and that indexdefrag can be done on those systems instead. And indexdefrag doesn't use nearly as much tlog space as dbreindex. Oh, and btw, MS said small-scale systems are 10GB and under. I thought that was strange of them to say. But then again, their test systems are a few terabytes.
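
For anyone who hasn't used it, the call itself is just this (the table and index names here are made up):

-- INDEXDEFRAG is online and does its work in many small transactions,
-- so log backups can run during it and each one stays fairly small.
DBCC INDEXDEFRAG (Staging, Orders, IX_Orders_CustomerId) WITH NO_INFOMSGS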

Tara

Kristen
Test

22859 Posts

Posted - 2004-12-22 : 13:11:18
"[i]no the reindex is not the equivalent of a full scan with update stats[i]"

So does a reindex do the equivalent of an UPDATE STATISTICS with a partial/sample scan, or no update of statistics at all?

Do I need to REINDEX the nonclustered indexes if I have already reindexed the clustered index on that table? (Does reindexing the clustered index force recreation of all the other indexes on that table?)

Thanks

Kristen

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-22 : 13:15:36
Reindexing doesn't do statistics. They are separate operations. IMO, update statistics should be run more often than dbreindex. Given the article that I read about dbreindex vs. indexdefrag, MS agrees too. I was doing dbreindex once a day; now I'm down to once a week. I do update stats once per day though. I also have the auto update statistics option turned on for the database, as recommended by MS.
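
For what it's worth, the stats job itself can be as simple as this (the table name and sample rate are only examples):

-- Update statistics on all user tables in the current database.
EXEC sp_updatestats

-- Or target one table with an explicit sample rate.
UPDATE STATISTICS dbo.Orders WITH SAMPLE 20 PERCENT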

BOL does not state that reindexing a clustered index forces recreation of the other indexes on the table. But I seem to recall someone making that claim here. I'm going to do some digging on this.

Tara

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-22 : 13:18:33
Found it on Brad's site:

quote:

If you do a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.

And Brad's a god when it comes to SQL Server performance. MS even referenced his web site once in a seminar I went to. Someone was asking about performance, and the MS guy referred him to Brad's site for the details.

Tara

Kristen
Test

22859 Posts

Posted - 2004-12-22 : 13:22:35
Hehe! Cheers Tara, that'll reduce Log space and wasted CPU cycles a fair bit.

Enterprise / Platinum Web Maintenance Tools Version coming up - a Must Purchase option for our existing customers during this festive season!

Kristen

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-22 : 13:26:54
I think I'll be changing my dbreindex stored proc for this. It currently rebuilds all indexes. I need to adjust it a little so that on tables that have a clustered index (which almost all should anyway), it doesn't bother with the nonclustered ones. Not sure why I didn't consider this before.

Tara

hgorijal
Constraint Violating Yak Guru

277 Posts

Posted - 2004-12-23 : 02:20:40
Kristen/Tara,
Rebuilding all indexes when you reindex the clustered index was a bug in SQL 2000 that was fixed in SP2. All indexes are rebuilt only if the clustered index is NOT "unique".

Here is a KB article on the bug....

http://support.microsoft.com/default.aspx?scid=kb;en-us;304519

So, as long as your clustered indexes are unique, you do not have to change your code.
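
If you want to check which ones you have, something along these lines lists the clustered indexes that are not unique (just a sketch against the system tables):

-- Clustered indexes (indid = 1) on user tables where the index itself is not
-- unique - rebuilding these also rebuilds the table's nonclustered indexes.
SELECT o.[name] AS TableName, i.[name] AS ClusteredIndexName
FROM sysindexes i
INNER JOIN sysobjects o ON i.[id] = o.[id]
WHERE i.indid = 1
  AND o.type = 'U'
  AND INDEXPROPERTY(o.[id], i.[name], 'IsUnique') = 0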

cheers...

Hemanth Gorijala
Exchange a Dollar, we still have ONE each._______Exchange an Idea, we both have TWO each.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-23 : 12:17:00
I am so confused by that knowledge base article. So do the nonclustered indexes get rebuilt if you reindex a clustered index that is unique? And what about if it isn't unique?

Tara

hgorijal
Constraint Violating Yak Guru

277 Posts

Posted - 2004-12-24 : 01:20:33
No; it's the other way around.
Non-clustered indexes get rebuilt if you reindex a clustered index that is NOT unique.

Here is the deal... When a clustered index is initially created, all the non-clustered indexes are rebuilt to hold the "clustering key value" of the clustered index instead of the RID (Row Identifier). Note that in this case the non-clustered indexes are not pointing to the physical location of the data; they hold only the corresponding clustering key value. This is the key factor in the re-indexing behavior.

When a "UNIQUE" clustered index is rebuilt, the "clustering key value" does not change, so even though the data pages have moved, the non-clustered indexes are not affected - they no longer refer to the data pages directly.

But when a "NON-UNIQUE" clustered index is created, a 4-byte uniquifier is added to the non-unique key for each duplicate row, making the "clustering key value" unique. When this index is rebuilt, that uniquifier is updated if necessary. This can change the "clustering key value", thereby affecting the non-clustered indexes and resulting in their rebuild.

In short...

When you Rebuild a...
Unique Clustered Index -- only that index is rebuilt.
Non-Unique Clustered Index -- ALL indexes are rebuilt.

Hemanth Gorijala
Exchange a Dollar, we still have ONE each._______Exchange an Idea, we both have TWO each.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2004-12-30 : 13:17:19
Thanks Hemanth! Here's my updated sproc:



SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

----------------------------------------------------------------------------------------------------
-- OBJECT NAME : isp_DBCC_DBREINDEX
--
-- AUTHOR : Tara Duggan
-- DATE : May 11, 2004
--
-- INPUTS : @dbName - name of the database
-- OUTPUTS : None
-- DEPENDENCIES : None
--
-- DESCRIPTION : This stored procedure runs DBCC DBREINDEX for each of the indexes in the database.
--
-- EXAMPLES (optional) : EXEC isp_DBCC_DBREINDEX @dbName = 'GT'
--
-- MODIFICATION HISTORY :
----------------------------------------------------------------------------------------------------
-- 12/22/2004 - Tara Duggan
-- If table has a clustered index, just rebuild that index; otherwise rebuild all.
--
-- 12/30/2004 - Tara Duggan
-- If table has a non-unique clustered index, just rebuild that index; otherwise rebuild all.
----------------------------------------------------------------------------------------------------
CREATE PROC isp_DBCC_DBREINDEX
(@dbName SYSNAME)
AS
SET NOCOUNT ON

DECLARE @objName SYSNAME
DECLARE @idxName SYSNAME
DECLARE @SQL NVARCHAR(4000)
DECLARE @ID INT
DECLARE @RowCnt INT

CREATE TABLE ##Indexes
(
Indexes_ID INT IDENTITY(1, 1) NOT NULL,
IndexName SYSNAME NOT NULL,
ObjectName SYSNAME NOT NULL
)

-- non-unique clustered indexes
SET @SQL = ''
SET @SQL = @SQL + 'INSERT INTO ##Indexes (IndexName, ObjectName) '
SET @SQL = @SQL + 'SELECT i.[name], o1.[name] '
SET @SQL = @SQL + 'FROM ' + @dbName + '.dbo.sysindexes i '
SET @SQL = @SQL + 'INNER JOIN ' + @dbName + '.dbo.sysobjects o1 '
SET @SQL = @SQL + 'ON i.[id] = o1.[id] '
SET @SQL = @SQL + 'INNER JOIN ' + @dbName + '.dbo.sysobjects o2 '
SET @SQL = @SQL + 'ON i.[name] = o2.[name] '
SET @SQL = @SQL + 'WHERE i.indid = 1 AND '
SET @SQL = @SQL + 'o1.type = ''U'' AND '
SET @SQL = @SQL + 'OBJECTPROPERTY(o2.[id], ''IsUniqueCnst'') = 0 '

EXEC sp_executesql @statement = @SQL

-- nonclustered indexes and unique clustered indexes, but only on tables that do not have a non-unique clustered index
SET @SQL = ''
SET @SQL = @SQL + 'INSERT INTO ##Indexes (IndexName, ObjectName) '
SET @SQL = @SQL + 'SELECT i.[name], o.[name] '
SET @SQL = @SQL + 'FROM ' + @dbName + '.dbo.sysindexes i '
SET @SQL = @SQL + 'INNER JOIN ' + @dbName + '.dbo.sysobjects o '
SET @SQL = @SQL + 'ON i.[id] = o.[id] '
SET @SQL = @SQL + 'WHERE i.indid >= 1 AND i.indid < 255 AND '
SET @SQL = @SQL + 'o.type = ''U'' AND '
SET @SQL = @SQL + '(i.status & (64 | 8388608)) <= 0 AND '
SET @SQL = @SQL + 'o.[name] NOT IN (SELECT ObjectName FROM ##Indexes)'

EXEC sp_executesql @statement = @SQL

SELECT TOP 1 @ID = Indexes_ID, @idxName = IndexName, @objName = ObjectName
FROM ##Indexes
ORDER BY Indexes_ID

SET @RowCnt = @@ROWCOUNT

WHILE @RowCnt <> 0
BEGIN

SET @SQL = 'DBCC DBREINDEX(''' + @dbName + '.dbo.' + @objName + ''', ' + @idxName + ', 0) WITH NO_INFOMSGS'

EXEC sp_executesql @statement = @SQL

SELECT TOP 1 @ID = Indexes_ID, @idxName = IndexName, @objName = ObjectName
FROM ##Indexes
WHERE Indexes_ID > @ID
ORDER BY Indexes_ID

SET @RowCnt = @@ROWCOUNT

END

DROP TABLE ##Indexes

RETURN 0

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO



I'll be updating my blog shortly.

Tara

hgorijal
Constraint Violating Yak Guru

277 Posts

Posted - 2004-12-31 : 04:23:52
B-E-A-Utiful.
It feels good to see such good, straightforward, no-nonsense code with NO CURSORS. I wish my developers would learn to do that.

Hemanth Gorijala
Exchange a Dollar, we still have ONE each._______Exchange an Idea, we both have TWO each.

dsweeney
Starting Member

3 Posts

Posted - 2005-12-28 : 18:05:30
Back to the original question: is the only solution to reduce log size to re-index less aggressively?

My problem is the same - we have a 70 GB database trying to log ship to a remote site. Everything works OK until the weekly reindex - then the next log file is 35+ GB and takes over 3 days to transfer on a T1.

Any way around this?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2005-12-28 : 18:08:42
During the reindex, you can back up the tlog more frequently so that the tlog backup files are smaller.

You might want to consider running INDEXDEFRAG more often than DBREINDEX. Take a look at this article for details:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx

Tara Kizer
aka tduggan

dsweeney
Starting Member

3 Posts

Posted - 2005-12-29 : 10:53:45
Thanks for the link - I'll check the fragmentation and see if we can use INDEXDEFRAG more often than DBREINDEX.

Your second comment brings up another question about the trans log backups during the reindex: the trans log backups fail during the reindex job. Is that by design, or should they still be running (the database takes several hours to reindex - over 6 hours)?

Still, doesn't SQL 2000 commit all the changes at once? That would still have a net result of a large log file.
