
 Temporarily avoiding writing into trans log


sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-09 : 16:11:39
Guys,

Is there a way to temporarily disable logging to the transaction log?

In our system, we perform purging of our database every night, where the purging consists of 2 steps:

1. For each table, insert the data to be deleted into a corresponding "purged" table, where it remains for one day only.

2. For each table, delete the unnecessary data (i.e. the same data now stored in the purged tables from step 1).

During these 2 steps the transaction log grows, and since we perform transaction log backups, the backup at that time is huge. We are running a bit low on hard disk space, and I'd like to disable logging to the transaction log while these operations are performed.

I really don't care about being able to recover this data.

I thought that one option is to set the database to simple recovery, then perform the purging of the database, and then change back to full.

However, I think the trans log can grow even if the recovery model is simple [although you won't be able to retrieve any changes].

So, is there a way to delete a portion of a table [or insert into it] so that no data is written to the transaction log? (I know that we can use TRUNCATE if we need to remove a whole table's contents without logging.)


Thanks a lot

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-03-09 : 16:25:14
You cannot disable logging to the transaction log.

Even TRUNCATE TABLE is logged (but just the page deallocations).

You are correct about SIMPLE recovery model. It still uses the transaction log.

Tara Kizer

sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-09 : 16:49:11
Got it.

Thanks!

Kristen
Test

22859 Posts

Posted - 2007-03-10 : 03:24:42
"I'd like to disable logging into the transaction log when these operations are performed"

This is what we do:

Loop with an "appropriate size" number of deletions per iteration (we check the elapsed time and adjust the number-per-iteration up or down so that we don't swamp the server if it gets busy).

Backup the TLog after each iteration

Or put the logging tables in a separate database that uses the SIMPLE recovery model (you still need to loop, so that you don't end up with a single transaction containing all the deleted records in one piece).
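As a rough sketch of the loop-and-backup approach (every table, column, database, and path name here is hypothetical; on SQL Server 2000, SET ROWCOUNT is the usual way to cap the rows a DELETE touches):

```sql
-- Hypothetical example: purge dbo.AuditLog in capped batches,
-- backing up the transaction log between iterations.
DECLARE @BatchSize int
SET @BatchSize = 10000

SET ROWCOUNT @BatchSize        -- SQL 2000: cap rows affected per statement

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.AuditLog
    WHERE CreatedDate < DATEADD(day, -30, GETDATE())

    IF @@ROWCOUNT = 0 BREAK    -- nothing left to purge

    -- each batch commits as its own transaction, so this backup
    -- can clear the log space the batch just used
    BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.trn'

    WAITFOR DELAY '00:00:00.300'   -- let queued queries get in
END

SET ROWCOUNT 0                 -- always reset when done
```

Adjust @BatchSize up or down from the elapsed time per iteration, as described above.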

However, don't bother with any of this if your Reindex operation creates a bigger log file!

Another approach is to copy the records-to-keep to a temp table, DROP the original table, and RENAME the temp table back - though you need to deal with the blocking effect of this on a 24/7 system, and it assumes that you are only keeping a small percentage of the records. The logging impact will be very small. Sadly we only delete 1/7th of the rows each day - maybe a carousel of tables would be better for us:

Drop Table7
Rename Table6 to Table7
...
Rename Table1 to Table2
Create new Table1
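In T-SQL that carousel might be sketched like this (table names and columns are hypothetical; indexes, constraints, and permissions would need recreating on the new Table1):

```sql
-- Daily rotation: Table7 holds the oldest day and is simply dropped,
-- which logs only page deallocations rather than every deleted row.
DROP TABLE dbo.Table7

EXEC sp_rename 'dbo.Table6', 'Table7'
EXEC sp_rename 'dbo.Table5', 'Table6'
EXEC sp_rename 'dbo.Table4', 'Table5'
EXEC sp_rename 'dbo.Table3', 'Table4'
EXEC sp_rename 'dbo.Table2', 'Table3'
EXEC sp_rename 'dbo.Table1', 'Table2'

-- Fresh table for today's rows (the real column list would go here)
CREATE TABLE dbo.Table1 (ID int NOT NULL, LoggedAt datetime NOT NULL)
```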

Kristen

beadlesm
Starting Member

5 Posts

Posted - 2007-03-11 : 15:56:05
Agree completely with Kristen - also, with looping, you can put in a very short waitfor statement (300 ms) if necessary between loops. That allows any other queued processes that need to access the table to get in and do their work during the waitfor. If you do it right, long deletions can run throughout the production day and no one even knows it.

Another approach (if the amount to delete is huge and the amount to keep is small) is to bcp out the amount to keep, truncate the table, bcp in. This is more of an after hours approach.

I prefer the looping method, though, because if there is a problem during looping, only a relatively short transaction needs to be rolled back, rather than a long million-row transaction that locks the table up until the rollback is completely finished. (Have you ever seen an overnight transaction roll back during the production day? Not pretty. Not even rebooting frees it up.)

For a wide table, I use loops of anywhere between 500 and 2,500 rows per loop. For narrow tables, anywhere from 2,500 to 5,000 or more might be good. Test out your looping with appropriate SELECT statements for just a few loops.

Kristen
Test

22859 Posts

Posted - 2007-03-12 : 08:33:19
Hehehe ... we obviously care a lot less about end-user performance because our default is 200,000 rows per loop!

(We also have the WAITFOR to let Queued processes run; should have remembered that :( )

Kristen

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-03-12 : 12:38:27
We do 10,000 rows per loop, occasionally 100,000. Anything beyond that on our system affects other processes. We run these at night where we have very little end user activity, but we do have a ton of back-end activity.

Tara Kizer

sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-12 : 17:12:28
Thanks for all the inputs.

Related question: If I do not use BEGIN TRAN, but just start deletions of some data from a table, is it still considered a transaction? If so, why ever use BEGIN TRAN then? I'm a bit confused.

Thanks

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-03-12 : 17:17:24
Multiple statements are not grouped into a single transaction unless you use BEGIN TRAN; each statement runs (and commits) on its own.

Tara Kizer

sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-12 : 18:29:47
In that case, how would the transaction log be affected? From what I understood, rather than doing:
BEGIN TRAN
delete 1,000,000 rows
COMMIT

I should do:
WHILE (BLA)
BEGIN TRAN
DELETE 1000
wait for 3 secs
COMMIT

That will minimize locking and also trans log growth (as I'd be able to back up the trans log in between the while iterations).

But what if I don't have BEGIN TRAN at all - i.e. I just perform the deletion as is - what effect does it have on the trans log? Would it be the same as doing:
BEGIN TRAN
DELETE ALL
COMMIT

i.e. I will not have any control over my trans log growth?

Thanks a lot

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-03-12 : 18:41:12
You don't need BEGIN TRAN in your example as you aren't performing multiple DML operations. In your example, you are performing one DELETE statement. No transaction is needed in that case.

There is no point to a BEGIN TRAN if you don't check the value of @@ERROR or do some other sort of checking to determine whether or not to COMMIT TRAN or ROLLBACK TRAN.
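A minimal sketch of the pattern described here - two related DML statements that must succeed or fail together, with @@ERROR checked after each (all object names and the @OrderID value are made up):

```sql
DECLARE @OrderID int
SET @OrderID = 42

BEGIN TRAN

DELETE FROM dbo.OrderLines WHERE OrderID = @OrderID
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN          -- abandon the batch; nothing was changed
END

DELETE FROM dbo.Orders WHERE OrderID = @OrderID
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END

COMMIT TRAN         -- both deletes succeeded; make them permanent
```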

Tara Kizer

Kristen
Test

22859 Posts

Posted - 2007-03-12 : 22:23:34
If you do

a) delete 1,000,000 rows

b) DELETE 1000
wait for 3 secs & then loop

then a) will put 1,000,000 rows in the log, b) will put 1,000 rows in the log on each loop iteration. However, b) will allow the log to be "cleared" after each 1,000 rows - so if a TLog backup runs it can then clear those 1,000 rows, and the transactions for each preceding 1,000 rows that are still in the logs. If you are using SIMPLE Recovery Model then they will be cleared "immediately" [depending on when the next checkpoint is].

If you put a BEGIN TRANSACTION ... COMMIT around the code then the log records cannot be cleared until the COMMIT - no matter whether you use 1,000,000 or 1,000 rows; the 1,000-rows method may cause less blocking of the table.
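To illustrate: with one explicit transaction around the whole loop, the batching no longer helps the log at all (a sketch with hypothetical names):

```sql
BEGIN TRAN  -- one transaction around the whole purge: an anti-pattern here

WHILE EXISTS (SELECT * FROM dbo.BigTable WHERE Archived = 1)
BEGIN
    SET ROWCOUNT 1000
    DELETE FROM dbo.BigTable WHERE Archived = 1
    SET ROWCOUNT 0
    -- BACKUP LOG is not even allowed inside an explicit transaction,
    -- and the open transaction pins all the log records anyway
END

COMMIT TRAN  -- only now can a log backup (or a checkpoint, in SIMPLE)
             -- make the space reusable
```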

I take the view that I don't care [in the sense that I don't mind that some data is left behind] if my deletion fails half way through. The remaining data will get deleted next time, or I'll get an alert that the process has bugs in it!

Kristen

sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-13 : 11:13:11
Tara and Kristen: Thanks for the info.

I am, however, still puzzled.

Tara, if I don't use BEGIN TRAN, then my deletion is not considered a transaction, correct? However, the deletion will be written to the trans log file. But since it is not a transaction, I cannot roll back. In that case, what is the use of that data in the transaction log?


Would the following be the best approach (as suggested) in this case? Please advise:

1. Back up the data to be purged into a table in a non-production environment, sitting in a database with the simple recovery model. This operation should not grow the trans log of the production database where this table currently sits.

2. Delete data as such:
WHILE EXISTS(RECORDS TO DELETE)
BEGIN
BEGIN TRAN
DELETE 1000
WAIT 3 SECS
COMMIT

BACKUP TRANS LOG
END
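Step 1 above might look something like this - an INSERT across databases, where the archive database uses SIMPLE recovery (every name here is hypothetical):

```sql
-- The INSERT is logged in ArchiveDB's own log, which SIMPLE recovery
-- clears at the next checkpoint; reading from ProdDB does not grow
-- ProdDB's transaction log.
INSERT INTO ArchiveDB.dbo.PurgedOrders
SELECT *
FROM ProdDB.dbo.Orders
WHERE OrderDate < DATEADD(day, -90, GETDATE())
```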


Thank you

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-03-13 : 12:54:14
The data in the transaction log is so that you can restore to a point in time if needed.

Your example still does not need a transaction. It does not have multiple DML statements plus you aren't checking @@ERROR or doing any other checking to determine commit or rollback.

You need to understand the purpose of BEGIN TRAN/COMMIT TRAN/ROLLBACK before using it.

Tara Kizer

mcrowley
Aged Yak Warrior

771 Posts

Posted - 2007-03-13 : 13:19:59
quote:

if I don't use BEGIN TRAN, then my deletion is not considered a transaction, correct? However, the deletion will be written to the trans log file. But since it is not a transaction, I cannot roll back. In that case, what is the use of that data in the transaction log?


It is still an implicit transaction. If someone trips on the power cord halfway through your delete, SQL Server will use the information in the transaction log to reconstruct what should be there.

sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-13 : 14:39:38
Tara, I think I understand it now. Using BEGIN TRAN/COMMIT TRAN/ROLLBACK basically gives me a control over when and whether to commit or rollback (i.e. I can perform checking and commit or rollback, depending on the outcome of my check)

Alternatively, if I perform an implicit transaction [i.e. not using BEGIN TRAN] (as stated by mcrowley), then if I make a mistake, the only way to roll back would be to either stop the query (in case it is still running) or restore the transaction log to the point in time before this incident occurred (in case the query finished running already).

As such, my latest approach would be as stated above, but excluding BEGIN TRAN/COMMIT, unless I specifically decide to perform error checking after the deletion has been performed!

Thanks a lot

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-03-13 : 14:45:01
Correct on all points!

Tara Kizer

Kristen
Test

22859 Posts

Posted - 2007-03-13 : 14:47:48
"I cannot roll back. In that case, what is the use of that data in the transaction log?"

Let's say that on the very last row to be deleted SQL Server discovers an FK violation ... or a trigger raises an error ... or there is a deadlock ... etc. ... then SQL Server will roll back the "implicit" transaction - so either all 1,000,000 rows are safely deleted, or NONE of them. Plus a power cut halfway through is also safe. So it's still worth having a log file!

"WAIT 3 SECS"

You don't need to wait that long. Just a few hundred milliseconds should be enough - to let other queries get in from the queue.

"Alternatively, if I perform an implicit transaction [i.e. not using BEGIN TRAN] (as stated by mcrowley), then if I make a mistake, the only way to roll back would be to either stop the query (in case it is still running) or restore the transaction log to the point in time before this incident occurred (in case the query finished running already)."

Correct.

Kristen

sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-13 : 17:22:21
Thanks a lot!

X002548
Not Just a Number

15586 Posts

Posted - 2007-03-14 : 12:24:59
SELECT * INTO newTable FROM Table WHERE criteria to keep is met
Rename Table to oldTable
Rename newTable to Table
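As a sketch (object names are hypothetical; SELECT INTO is minimally logged under the SIMPLE or BULK_LOGGED recovery model, but indexes, constraints, triggers, and permissions all have to be recreated on the new table):

```sql
-- Copy only the rows to keep into a fresh table
SELECT *
INTO dbo.Orders_new
FROM dbo.Orders
WHERE OrderDate >= DATEADD(day, -30, GETDATE())

-- Swap the new table in under the original name
EXEC sp_rename 'dbo.Orders', 'Orders_old'
EXEC sp_rename 'dbo.Orders_new', 'Orders'

-- Once verified, discard the old data cheaply
DROP TABLE dbo.Orders_old
```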



Brett

8-)

Hint: Want your questions answered fast? Follow the direction in this link
http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx

Add yourself!
http://www.frappr.com/sqlteam




sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2007-03-16 : 16:41:59
Brett,

That would be much more complicated, I think, as I have lots of indexes to add to all those tables - not to mention that it is possibly more time-consuming to recreate the tables and then add the indexes than to purge the unnecessary stuff (as some tables have tens of millions of records).

Thanks for the advice though!
