
Commit Frequency vs DBCC SHRINKFILE


Rahul Raj
Starting Member

41 Posts

Posted - 2014-11-05 : 23:08:20
Hi All,

I am new to SQL Server and currently working on archiving records from SQL Server. The records to be deleted number around 10 lakh (about a million), so I was thinking of using DBCC SHRINKFILE to free the inactive virtual logs so that the stored procedure doesn't fail midway through processing due to insufficient space in the log file.

I am referring to an existing stored proc in which the logic is as follows:
- check the size of the log file (I believe this gives the inactive virtual log size that can be freed once SHRINKFILE is executed later in the code)
- define a cursor to read the column (based on which records will be deleted from the other tables) from a table
- fetch the column
- delete from 10 tables based on the column fetched
- commit
- check whether 1000 loops are complete; if yes, shrink the log
- fetch the next column value and go back to the delete step

My question: instead of committing after each loop, can we change the commit frequency so that the records are committed after every 100 or 1000 loops? Will that save me from the log file getting full, so that I can eliminate the SHRINKFILE logic completely?

Also, the size of the log file (virtual logs) is calculated before the loop, and the shrink is executed once 1000 delete loops are complete. Do I need to calculate the log size again for the second shrink, as presently it is calculated only once before the loop?
Will it be useful, given that we have already freed the inactive virtual logs the first time?

Thanks for your help. I would appreciate an early response.

James K
Master Smack Fu Yak Hacker

3873 Posts

Posted - 2014-11-06 : 08:28:17
quote:
I am new to SQL Server and currently working on archiving records from SQL Server. The records to be deleted number around 10 lakh (about a million), so I was thinking of using DBCC SHRINKFILE to free the inactive virtual logs so that the stored procedure doesn't fail midway through processing due to insufficient space in the log file.
.......
....... etc


I must admit that most of what you said does not make sense to me. I am assuming that your database recovery model is either full or bulk-logged. If I am wrong and the database is using the simple recovery model, you don't need to do anything special - just delete in small batches and, if you like, issue CHECKPOINT commands often.

If you are in full or bulk-logged recovery, delete in small batches and take frequent log backups. Taking log backups marks inactive virtual log files as reusable. In those recovery models, the ONLY way to clear the log is to take log backups. SQL Server will not shrink or release log space that has not been cleared.
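If you are not sure which recovery model the database is using, one way to check it yourself (a quick sketch using the standard sys.databases catalog view):

-- Shows SIMPLE, FULL, or BULK_LOGGED for the current database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = DB_NAME();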
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-11-09 : 23:18:24
Hi James,

Many Thanks for the response.

I will check on the recovery model with the DBA, or is there a way I can check it myself?

You have suggested issuing checkpoints frequently in the case of the simple recovery model. Does that mean that once a checkpoint is issued, the virtual logs become inactive, and later, when I use the DBCC SHRINKFILE command, space will be released?

In the current SP logic I do not see any CHECKPOINT specified; is it issued internally?
A COMMIT is performed after all tables inside a loop (one iteration) are deleted (based on a key), and then a SHRINKFILE is performed after 1000 such DELETE loops, which clears up space from the database log file.
Would the space problem be avoided if the COMMIT were performed after every 100 loops (or some other feasible count) instead of after every loop?

Thanks in Advance!
Go to Top of Page

jeffw8713
Aged Yak Warrior

819 Posts

Posted - 2014-11-10 : 13:46:02
DBCC SHRINKFILE does not free up space in the log file. It shrinks (makes smaller) the log file - causing the file to grow again when you need more log space. To avoid filling the transaction log during large operations, you need to perform that operation in smaller batches.

Here is an example:


Declare @rowsAffected int = 1
      , @batchSize int = 200000;

Set @rowsAffected = 1;

While @rowsAffected > 0
Begin
    Delete Top (@batchSize)
    From dbo.{Your Table Here}
    Where {filters};

    Set @rowsAffected = @@rowcount;

    Checkpoint;     -- If in SIMPLE recovery model
    Backup Log ...  -- If in FULL/BULK recovery models
End;


If the database is in SIMPLE recovery, issue a CHECKPOINT following each delete. If the database is in the FULL or BULK_LOGGED recovery model, perform a transaction log backup instead.

Set the batch size to something reasonable for your system.
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-11-11 : 23:00:02
Hi Jeff,

I tried implementing the logic you provided, but when I printed the values of @rowsAffected and @@rowcount after the delete, they were different and I am not sure why. The value of @@rowcount was correct, but @rowsAffected was zero even though I used Set @rowsAffected = @@rowcount; right after the delete processing.
I am investigating this.

Can you please tell me whether I need to use the SET command every time to assign the value to @rowsAffected, or is it a pointer to the row count?

Go to Top of Page

jeffw8713
Aged Yak Warrior

819 Posts

Posted - 2014-11-12 : 13:57:06
The variable @rowsAffected is used to terminate the loop when there are no more records to be deleted. We set @rowsAffected to @@rowcount to get the number of rows affected by the previous statement (the delete).

As soon as there are no rows affected - @@rowcount becomes 0 and the loop will end.
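One thing to watch out for: @@rowcount is updated by nearly every statement that runs after the delete (including PRINT and variable assignment), so it has to be captured into the variable immediately. A small illustration (the table and filter here are just placeholders):

DECLARE @rowsAffected int;

DELETE TOP (1000) FROM dbo.SomeTable WHERE SomeFilter = 1;

SET @rowsAffected = @@ROWCOUNT;   -- capture immediately after the DELETE

PRINT @rowsAffected;              -- still holds the number of rows deleted
PRINT @@ROWCOUNT;                 -- no longer reflects the DELETE at this point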
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-11-16 : 23:29:54
Hi Jeff,

Thanks! It's working now

Also, the ID being used for archival doesn't have permission to shrink the log, and the DBAs have refused to grant SHRINKFILE access.

Can someone please suggest an alternative to shrinking the log file? The volume of records to be deleted is quite high (around 1 TB) and I fear that the log file may get full.
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2014-11-17 : 07:04:55
quote:
Originally posted by Rahul Raj

Can someone please suggest an alternative to shrinking the log file? The volume of records to be deleted is quite high (around 1 TB) and I fear that the log file may get full.



Back up the log regularly during the delete operation, and delete the rows in "reasonable" sized batches (that gives the backup the opportunity to mark space used by completed transactions in the log file as available for reuse).

Our normal log backup interval is 15 minutes, but during database maintenance (purging stale records and rebuilding indexes) we shorten the backup interval to every 2 minutes. An alternative would be to trigger a log backup after each iteration of a batch-deletion loop, as in the sketch below.
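A rough sketch of triggering the backup from inside the loop itself (the database name and backup path are placeholders, and this applies to FULL/BULK_LOGGED recovery only, assuming the account running it has backup permission):

WHILE @rowsAffected > 0
BEGIN
    DELETE TOP (@batchSize)
    FROM dbo.{Your Table Here}
    WHERE {filters};

    SET @rowsAffected = @@ROWCOUNT;

    -- let the log backup mark the completed transactions' space as reusable
    BACKUP LOG YourDatabase
        TO DISK = N'X:\Backups\YourDatabase_log.trn';
END;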
Go to Top of Page

jeffw8713
Aged Yak Warrior

819 Posts

Posted - 2014-11-18 : 13:55:54
You can also add a wait in the batch loop - to give the system time to clear the log records (in simple recovery) or for the log backup to be performed. If you use a reasonable batch size for the delete - the transaction log won't grow at all and there will be no reason to perform a shrink.
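For example, something like this inside the existing batch loop (the 5-second delay is just an illustrative value):

    -- after the DELETE and the CHECKPOINT / log backup for the batch:
    WAITFOR DELAY '00:00:05';   -- pause 5 seconds before the next batch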
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-11-20 : 20:28:40
Hi All,

Thanks for your valuable inputs.

I am proceeding with deletion in smaller batches and committing frequently to enable the log file backup. The recovery model is SIMPLE, so I understand that the transaction log backup will be done automatically.
Any other suggestions are welcome. Thanks for your help!!
Go to Top of Page

gbritton
Master Smack Fu Yak Hacker

2780 Posts

Posted - 2014-11-21 : 09:20:54
Transaction log backups are never done automatically.
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-11-23 : 07:30:58
Hi gbritton,

I will be deleting in smaller batches and committing frequently. The recovery model is SIMPLE.

Can you please suggest how to avoid the transaction log space error?
Go to Top of Page

James K
Master Smack Fu Yak Hacker

3873 Posts

Posted - 2014-11-24 : 08:25:20
Do what JeffW suggested, i.e., insert a CHECKPOINT statement between each batch of deletes. That is all you have to do.
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-12-03 : 23:31:51
Hi All,

Thank you all for your valuable suggestions. I am planning to implement the logic below:

Declare @rowsAffected int = 1
      , @finish int = 1
      , @batchSize int = 30
      , @perform_checkpoint int = 100000;

create table #XYZ (cust_id ...);             -- temp table; same data type as cust_id in the source table

insert into #XYZ (cust_id)
select ...                                   -- all eligible cust_id records

Set @finish = 1;
While @finish > 0
Begin

    select top (@batchSize) cust_id from #XYZ;

    set @finish = @rowsAffected;             -- or else will do select count(*) from #XYZ

    Delete From dbo.Table1
    Where cust_id in (select cust_id from #XYZ);

    set @rowsAffected = @rowsAffected + @@rowcount;

    IF @rowsAffected > @perform_checkpoint   -- not sure if this syntax is correct
    Begin
        Checkpoint;                          -- it is set to the Simple recovery model
        set @rowsAffected = 0;
    End

    Delete from dbo.Table2
    Where cust_id in (select cust_id from #XYZ);

    set @rowsAffected = @rowsAffected + @@rowcount;

    IF @rowsAffected > @perform_checkpoint   -- not sure if this syntax is correct
    Begin
        Checkpoint;                          -- it is set to the Simple recovery model
        set @rowsAffected = 0;
    End
    .
    .
    .
    Delete from dbo.Table3
    Where cust_id in (select cust_id from #XYZ);

    Set @rowsAffected = @@rowcount;

    Checkpoint;                              -- If in SIMPLE recovery model
    -- Backup Log (not required in the simple recovery model)
End;

There are around 50 tables from which the records need to be deleted based on cust_id.

Please let me know whether this is fine.

Thanks for your help!
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-12-04 : 21:15:52
Hi All,

I have slightly modified the above code. Please find the pseudo code below and suggest whether this will work:

Declare @rowsAffected int = 1
      , @batchSize int = 100
      , @perform_checkpoint int = 100000
      , @to_be_done int
      , @cust_id int;                        -- same data type as cust_id in <custtable>

update <custtable>
set delete_flag = 'Y'
where date < '2014-01-01';

set @to_be_done = @@rowcount;

create table #temptable (cust_id int);

while @to_be_done > 0                        -- loop till all the eligible records are deleted
Begin

    delete from #temptable;                  /* empty the temp table for the next loop */

    insert into #temptable (cust_id)         /* all eligible cust_id records */
    select top (@batchSize) cust_id          /* limit to a small batch of records */
    from cust_table
    where delete_flag = 'Y';

    declare cust_cursor cursor for
        select cust_id
        from #temptable;

    open cust_cursor;
    fetch next from cust_cursor into @cust_id;

    while @@fetch_status = 0                 /* second while loop inside a while loop? */
    begin
        begin try

            Delete From dbo.Table1
            Where cust_id in (select cust_id from #temptable);
            /* these tables can sometimes contain around 100000 records as well.
               Should I issue a checkpoint after each delete, and what happens
               in case of a rollback? */

            Delete from dbo.Table2
            Where cust_id in (select cust_id from #temptable);
            .
            .
            .
            Delete from dbo.Table3
            Where cust_id in (select cust_id from #temptable);

            IF @@TRANCOUNT = 0               /* can someone please explain what the "transaction = 0" check does? */
            begin
                commit transaction;
            end
        end try
        begin catch
            if xact_state() <> 0
                rollback transaction;

            close cust_cursor;
            deallocate cust_cursor;
        end catch

        fetch next from cust_cursor into @cust_id;
    end

    close cust_cursor;
    deallocate cust_cursor;

    set @to_be_done = @to_be_done - @batchSize;

    checkpoint;  /* after each batch of 100 cust_id values (loops), issue a checkpoint.
                    Control then goes back to the first while loop (while @to_be_done > 0).
                    Not sure if we can have 2 while loops, or please suggest an alternative */

END

Thanks for your help
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-12-05 : 22:12:07
Hi All,

I have gone through various links online and found the interesting points below (by Paul S. Randal):

" A log record is no longer needed in the transaction log if all of the following are true:

The transaction of which it is part has committed.
The database pages it changed have all been written to disk by a checkpoint.
The log record is not needed for a backup (full, differential, or log).
The log record is not needed for any feature that reads the log (such as database mirroring or replication)."

All the above conditions are true in my case, as the transaction is committed, a checkpoint is issued, the recovery model is SIMPLE, and mirroring and replication are not enabled (I think, but I still need to check on the mirroring and replication part - any idea how that can be checked?).
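A possible way to check the mirroring and replication part from a query window (a sketch using the standard catalog views, run in the database in question):

-- Mirroring: mirroring_state_desc is NULL when the database is not mirrored
SELECT DB_NAME(database_id) AS database_name, mirroring_state_desc
FROM sys.database_mirroring
WHERE database_id = DB_ID();

-- Replication: all zeros means the database is not published/subscribed
SELECT name, is_published, is_subscribed, is_merge_published, is_distributor
FROM sys.databases
WHERE name = DB_NAME();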

I have also searched all the SPs in all the databases in my application but couldn't find any of them issuing a CHECKPOINT statement. I believe issuing a checkpoint is mandatory for the VLFs to be marked as truncated, but if none of them issues a CHECKPOINT statement, how is the transaction log managed? Please advise.

Thanks for your help!
Go to Top of Page

Rahul Raj
Starting Member

41 Posts

Posted - 2014-12-07 : 21:20:41
Hi,

Can someone please help with this?

Thanks!
Go to Top of Page
   
