SQL Server Forums

Limit SQL Server log file growth rate

agismark
Starting Member

8 Posts

Posted - 01/15/2013 :  17:17:39
Hello,
I have a big SQL Server database: 2.3 billion rows, about 76 GB in size.
My problem is that I want to convert a column to smalldatetime, but during this operation the .ldf file grows so large that it takes my entire disk space (it got up to 350 GB) and then the query exits with an error.
Is there any way to keep the .ldf small?

I have already shrunk my .ldf from the database options.
The database recovery model is set to simple.

Thank you,
Agis

Edited by - agismark on 01/15/2013 17:37:20

James K
Flowing Fount of Yak Knowledge

3562 Posts

Posted - 01/15/2013 :  18:46:52
Take a look at this thread; people have recommended various approaches: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=182080

The one I favored (which nobody else seemed to support, for reasons hitherto unknown to me) is as follows:

a) Add a new column of type smalldatetime to the table.
b) Copy the data from the existing column to the new column. Be sure to do it in small chunks (see the sketch below); since the recovery model is set to simple, the log file growth will be limited by the largest chunk that you copy.
c) Drop the existing column.
d) Rename the new column to the old column name.
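
Roughly, that could look like this (just a sketch; the table and column names are borrowed from the posts further down, and the IS NULL filter assumes the new column starts out empty):

-- a) add the new column; a NULLable column is a quick metadata-only change
ALTER TABLE [ais].[dbo].[imis position report]
    ADD [time2] smalldatetime NULL;
GO

-- b) copy in small chunks; the IS NULL filter makes each pass touch new rows
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) [ais].[dbo].[imis position report]
    SET [time2] = convert(smalldatetime, left([date], 19))
    WHERE [time2] IS NULL;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy
END
GO

-- c) drop the existing column, then d) rename the new one to the old name
ALTER TABLE [ais].[dbo].[imis position report] DROP COLUMN [date];
GO
EXEC sp_rename 'dbo.[imis position report].time2', 'date', 'COLUMN';
GO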

agismark
Starting Member

8 Posts

Posted - 01/15/2013 :  20:19:44
Thanks a lot James,
here is the code that seems to work for me:

WHILE (2 > 1)
BEGIN
    BEGIN TRANSACTION
    UPDATE TOP (10000) [ais].[dbo].[imis position report]
    SET [time2] = convert(smalldatetime, left(date, 19))
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
    -- 1 second delay
    WAITFOR DELAY '00:00:01'
END -- WHILE
GO

agismark
Starting Member

8 Posts

Posted - 01/15/2013 :  22:45:21
This code didn't work in the end. I got this error message:
Msg 9002, Level 17, State 4, Line 21
The transaction log for database 'ais' is full due to 'ACTIVE_TRANSACTION'.
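
For reference, the reason the engine gives can also be read straight from the catalog:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'ais';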


sodeep
Flowing Fount of Yak Knowledge

USA
7174 Posts

Posted - 01/15/2013 :  23:41:55
Is your recovery model simple? Change it to simple and work in chunks; you can then truncate the log at each interval.

agismark
Starting Member

8 Posts

Posted - 01/16/2013 :  00:25:34
Yes, it is set to simple.

I entered the shrink command, but I get this error:
Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.
Please check my code and advise a change. My code is:


declare @metritis int;
set @metritis = 5000000;
declare @msg varchar;

WHILE (2 > 1)
BEGIN
    BEGIN TRANSACTION
    UPDATE TOP (5000000) [ais].[dbo].[imis position report]
    SET [time2] = convert(smalldatetime, left(date, 19))
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
    set @metritis = @metritis + 5000000;
    set @msg = cast(@metritis as varchar)
    RAISERROR (@msg, 10, 1) WITH NOWAIT
    -- 1 second delay
    WAITFOR DELAY '00:00:01'
    DBCC SHRINKDATABASE(ais, 10, TRUNCATEONLY)
END -- WHILE
GO

James K
Flowing Fount of Yak Knowledge

3562 Posts

Posted - 01/16/2013 :  06:18:56
I see a couple of problems with the way you have written the query.

1. What the error message said: you cannot run a shrink operation inside a user transaction. In fact, don't try to shrink at all yet.
2. The UPDATE statement will update the same 5,000,000 rows on every pass, because you have no WHERE clause (and no ORDER BY). It will probably take the first n rows based on the clustered key and update them repeatedly.

So what you probably need is something like this:

UPDATE  [ais].[dbo].[imis position report]
SET [time2] = convert(smalldatetime, left(date, 19)) 
WHERE YourBIgIntPKColumn <= 5000000;
GO

UPDATE  [ais].[dbo].[imis position report]
SET [time2] = convert(smalldatetime, left(date, 19)) 
WHERE YourBIgIntPKColumn BETWEEN 5000001 AND 10000000
GO
... etc


Also, initially at least, I would suggest that you run this manually for a few chunks, one chunk at a time, and monitor the log file growth. You can monitor the log file size and the space used with this:

SELECT * FROM sys.dm_os_performance_counters
WHERE instance_name = 'ais'
AND counter_name IN ('Log File(s) Size (KB)','Log File(s) Used Size (KB)')

When the used size gets close to the file size, issue a CHECKPOINT command (look up CHECKPOINT on MSDN). After you issue the checkpoint, check the log file sizes again.

This should give you an idea of how to manage the log file growth. I never issue manual CHECKPOINT commands myself; I let the SQL Server engine, which is much smarter than I am, manage them. In your case it may be required, unless you want to wait for the automatic checkpoints. A checkpoint can impact the performance of other queries and other databases on the system, so test to see how it behaves before you do it in production.
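
For example, between chunks (this just combines the chunked UPDATE above with a manual checkpoint; YourBIgIntPKColumn is the same placeholder as before):

UPDATE [ais].[dbo].[imis position report]
SET [time2] = convert(smalldatetime, left(date, 19))
WHERE YourBIgIntPKColumn BETWEEN 10000001 AND 15000000;
GO

-- in SIMPLE recovery, the checkpoint marks the committed part of the log
-- as reusable, so the .ldf can stop growing between chunks
CHECKPOINT;
GO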

agismark
Starting Member

8 Posts

Posted - 01/16/2013 :  21:01:04
Thanks James,
It seems there was something wrong with the whole database, so I created it again. The import will finish tomorrow and I will manipulate the data then.

jackv
Flowing Fount of Yak Knowledge

United Kingdom
2008 Posts

Posted - 01/17/2013 :  01:52:19
You could also add a second transaction log file on a separate drive, run the queries, and then drop the second log file.
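
A sketch of that (the logical name, path, and sizes are placeholders):

-- add a temporary second log file on a drive with free space
ALTER DATABASE ais
ADD LOG FILE (
    NAME = ais_log2,
    FILENAME = 'E:\SQLLogs\ais_log2.ldf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);
GO

-- ... run the conversion queries ...

-- drop it once it holds no active log records; SQL Server refuses the
-- REMOVE while the file is still in use, and in SIMPLE recovery a
-- CHECKPOINT usually clears it
CHECKPOINT;
GO
ALTER DATABASE ais REMOVE FILE ais_log2;
GO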

Jack Vamvas
--------------------
http://www.sqlserver-dba.com
SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC