
 Transaction Log & Recovery Model


scelamko
Constraint Violating Yak Guru

309 Posts

Posted - 2006-05-20 : 14:46:04
Guys,

I have a stored procedure spDelete:

create procedure spDelete (@id bigint) as
set nocount on
begin
begin transaction
DELETE FROM TABLEA WHERE KEY < @id
commit
begin transaction
DELETE FROM TABLEB WHERE KEY < @id
commit
.
.
set nocount off
end

When I execute this procedure it deletes all the data from the tables where KEY < @id (it deletes 6 million rows from 7 tables).

During this process my transaction log fills up to 41 GB, and my tempdb data file also grows. I am concerned about the transaction log growth during the execution of the procedure; however, after the procedure completes, the data file and log file shrink (the database is set to auto shrink). The database is currently set to the simple recovery model.

Is there any way, after each commit in the stored procedure, to ensure that the data is flushed out of the transaction log, instead of the database shrinking after the stored procedure is run?

Also, if I change my recovery model from simple, does it affect the way the data is flushed out of the transaction log?

Any suggestions and input would help.

Thanks


nr
SQLTeam MVY

12543 Posts

Posted - 2006-05-20 : 14:58:14
Your begin tran and commit don't do anything, as every SQL statement is already run in its own implicit transaction.
The only way to prevent log growth is to split up the delete, e.g.
set rowcount 10000               -- process at most 10000 rows per delete
select 1                         -- seed @@rowcount so the loop starts
while @@rowcount > 0
    delete TABLEA WHERE KEY < @id
select 1
while @@rowcount > 0
    delete TABLEB WHERE KEY < @id
set rowcount 0                   -- remove the row limit again

This will allow the transaction log space to be reused during the deletes.
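
For illustration, a sketch of how this batching could be applied inside the original procedure. The procedure name spDeleteBatched is made up; TABLEA, TABLEB, KEY and @id come from the post above, and KEY is bracketed here because it is a reserved word:

create procedure spDeleteBatched (@id bigint) as
begin
    set nocount on

    set rowcount 10000                     -- limit every delete to 10000 rows per pass
    select 1                               -- seed @@rowcount so the loop starts
    while @@rowcount > 0
        delete TABLEA where [KEY] < @id    -- each pass commits on its own
    select 1
    while @@rowcount > 0
        delete TABLEB where [KEY] < @id
    set rowcount 0                         -- remove the row limit again

    set nocount off
end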

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

nr
SQLTeam MVY

12543 Posts

Posted - 2006-05-20 : 15:02:00
And setting the database to autoshrink is usually a bad idea. Growing the files again afterwards is resource intensive, and in your scenario it will happen continually.
Leave the log to grow, and shrink it manually if it grows due to unusual circumstances.
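
For what it's worth, a sketch of a manual, one-off shrink after such a run. MyDatabase and MyDatabase_Log are placeholders for your actual database and logical log file name, and 500 MB is just an example target:

use MyDatabase
exec sp_helpfile                       -- lists the logical data and log file names for the database
dbcc shrinkfile (MyDatabase_Log, 500)  -- shrink the log file to roughly 500 MB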

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

scelamko
Constraint Violating Yak Guru

309 Posts

Posted - 2006-05-22 : 09:04:30
nr, thank you for your reply.

I have to admit that I do not see any advantage in splitting the delete statements, since they still have to be run as part of the same stored procedure.

I mean

set rowcount 10000
select 1
while @@rowcount > 0
delete TABLEA WHERE KEY < @id
select 1
while @@rowcount > 0
delete TABLEB WHERE KEY < @id
set rowcount 0

and

delete from tablea where key < @id

will generate the same amount of log, but the question is when the log space is freed up. I noticed that my log space is only freed up after the stored procedure has completed execution.
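
One way to check this is to watch the log while the procedure runs; a sketch, run from a second connection (MyDatabase is a placeholder for your database name):

dbcc sqlperf (logspace)       -- log size and percent of log space used for every database
dbcc opentran ('MyDatabase')  -- shows the oldest active transaction holding up log truncation, if any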

Maybe it is prudent to split the DML statements into two stored procedures instead of splitting the delete statements themselves.

Any thoughts? Please feel free to correct me if I am wrong in my premises.

Thanks

Kristen
Test

22859 Posts

Posted - 2006-05-22 : 09:12:32
"will generate the same amount of the log"

I don't think it will if your database recovery model is set to SIMPLE, OR your transaction log backup is frequent enough that it occurs during the task.

(but only if there is no "outer" BEGIN TRANSACTION / COMMIT block)

Either way it will use a lot less TEMPDB space
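
To illustrate the "outer transaction" caveat: if the batched deletes are wrapped in one surrounding transaction, as in the sketch below, none of the log space can be reused until the final commit, even under SIMPLE recovery (table and column names are from the earlier posts):

begin transaction                      -- outer transaction keeps every batch active in the log
    set rowcount 10000
    select 1
    while @@rowcount > 0
        delete TABLEA where [KEY] < @id
    set rowcount 0
commit transaction                     -- only now can the log space be freed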

Kristen

nr
SQLTeam MVY

12543 Posts

Posted - 2006-05-22 : 09:17:26
If the log entries are being freed it probably means you are using simple recovery.
That will allow the log to be freed between statements and it shouldn't matter if you put them in different SPs.
The one with the loop should allow the log to be freed between each statement.
It might be that it is too quick or uses too many resources for the log to clear - you could try putting in a delay occasionally.
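
A sketch of what that delay might look like, assuming a 5-second pause per batch is acceptable (table and column names are from the earlier posts):

declare @rows int

set rowcount 10000
set @rows = 1
while @rows > 0
begin
    delete TABLEA where [KEY] < @id
    set @rows = @@rowcount             -- capture the count before anything else resets @@rowcount
    waitfor delay '00:00:05'           -- give checkpoints a chance to clear the log between batches
end
set rowcount 0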

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
   
