
Topic: Log Files

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-03 : 16:42:23
Our transaction log sometimes grows out of control...
Should I put the database into FULL mode and do a backup of the log every 10 minutes?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-07-03 : 16:45:49
It depends. What event is causing the log to grow out of control? What recovery model are you using now? Do you require the ability to restore to a point in time for data recovery?

Tara Kizer
http://weblogs.sqlteam.com/tarad/

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-03 : 18:06:30
SIMPLE mode right now.
But sometimes the log grows out of control... it cost me 4 hours of downtime today.

So I was thinking of doing FULL mode... and the backups.

Then I wouldn't have to do a shrink...

I will be investigating what caused it to grow...

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2007-07-03 : 18:37:18
Did you do a large bulk insert without specifying BATCHSIZE? That can grow the log to an enormous size even in simple recovery mode.
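For example, something like this commits every 10,000 rows so the log can clear between batches (table name and file path are just examples):

-- committing every 10,000 rows lets the log clear between
-- batches under the SIMPLE recovery model
BULK INSERT dbo.ImportTable
FROM 'C:\data\import.txt'
WITH (BATCHSIZE = 10000)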


elsasoft.org

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-07-03 : 19:10:56
Switching to the FULL recovery model will not fix your problem. Since you are using SIMPLE and the log still grew quite a bit, you've got some process that is running a large transaction. Backing up the transaction log only backs up transactions that have completed. It's the same with SIMPLE; it's just that the tlog clears out once the transaction completes.

So you need to find out what caused the problem and rewrite it to work in smaller batches.
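For example, something like this keeps each transaction small (table name and batch size are just examples; SET ROWCOUNT is how you limit rows per statement in SQL 2000):

-- delete in 10,000-row batches so each transaction stays small
-- and the log can clear between them
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM BigTable WHERE Archived = 1
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0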

Tara Kizer
http://weblogs.sqlteam.com/tarad/

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-03 : 19:48:20
Thanks, best not to go changing the databases to FULL then.

I will investigate what was happening and see if I can find out.

Someone did run an import and it apparently did not finish; it was from an Excel doc into the application.

They then ran it again and the log grew to 10 gig, so it must be the file... it was the wrong format, but the application hung on to it.

I'll see what else I can find out.

I suppose I need to get a log analyzer tool.

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-07-04 : 00:02:05
You just need to check for open transactions with 'dbcc opentran (db_name)', kill that dead transaction, then run 'backup log db_name with truncate_only'.
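For example (the db name and spid here are just examples; use the spid that dbcc opentran reports):

-- show the oldest open transaction and its spid
DBCC OPENTRAN ('mydb')
-- kill the offending connection (52 is just an example spid)
KILL 52
-- clear the log without keeping a backup (fine under SIMPLE)
BACKUP LOG mydb WITH TRUNCATE_ONLY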

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-04 : 08:02:49
"You just need to check for open transactions with 'dbcc opentran (db_name)', kill that dead transaction, then run 'backup log db_name with truncate_only'."

Just to clarify for next time: I have SIMPLE mode on the databases,
so run DBCC OPENTRAN and kill them... then
do the BACKUP LOG db_name WITH TRUNCATE_ONLY.


TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-04 : 08:04:17
Is there a way to write a script to check if the LOG for the database is over 15 gig and then do the shrink command, or would I do the backup log command, BACKUP LOG db_name WITH TRUNCATE_ONLY?
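Something like this maybe (I am guessing at the 'mydb_Log' logical name; run it inside the database):

-- sysfiles.size is in 8KB pages, so dividing by 128 gives MB
DECLARE @logMB int
SELECT @logMB = size / 128 FROM sysfiles WHERE name = 'mydb_Log'
IF @logMB > 15 * 1024
    DBCC SHRINKFILE ('mydb_Log', 2048)  -- target size in MB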

Kristen
Test

22859 Posts

Posted - 2007-07-04 : 08:55:56
Not really a good idea to shrink the log; it costs a lot of resources to grow it back.

OK, so if a one-time transaction has caused the problem then it might be worth doing a one-time shrink, but if you schedule it and it runs every day (say) then you actually do need 15GB of space, and it's better not to keep fragmenting the files with Shrink+Grow.

Setting up something that alerts you when the log has grown bigger than you would like is probably a good idea; then you can decide to find out what caused it or, if it happens often, to increase the size before you get the alert messages!
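A bare-bones version for a scheduled job (the 'mydb_Log' name and the 15GB threshold are just examples):

-- raise a severity-16 error that a SQL Agent alert can pick up,
-- rather than shrinking automatically
DECLARE @logMB int
SELECT @logMB = size / 128 FROM sysfiles WHERE name = 'mydb_Log'  -- size is in 8KB pages
IF @logMB > 15 * 1024
    RAISERROR ('Log for mydb has grown to %d MB', 16, 1, @logMB)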

Kristen

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-04 : 09:56:44
I have just reproduced the problem: it's one of our ERP applications. When they bring in a file that is the wrong format, it makes the log grow.

I just did it now and it grew 8 gigs in 5 minutes... until I killed off the process in SQL.

So now I'm left with the LOG file at 10 gig; it was 2 before I started...

Database mode is SIMPLE.

So I ran BACKUP LOG CP5TEST WITH TRUNCATE_ONLY but it remains at 10 gig.

So am I only left with doing a shrink?


Thanks

b.veenings
Yak Posting Veteran

96 Posts

Posted - 2007-07-04 : 10:31:10
If you have enough space on the mdf you can use the following:

USE [yourdatabase]
GO
CHECKPOINT
GO
-- the data in the log is now written to the physical db
-- now you can shrink the log file with:
DBCC SHRINKFILE (logical_log_name, preferred_size_in_MB)
GO

It is possible that data in the log file has not been fully written to the mdf, because the log also contains select statements, loops, etc.

cheers.
Benjamin

Need an SQL consultant?
check www.veeningsengineering.nl

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2007-07-04 : 10:41:03
BACKUP LOG CP5TEST WITH TRUNCATE_ONLY will not make the file smaller - it only resets the pointer to the beginning of the file.


elsasoft.org

Kristen
Test

22859 Posts

Posted - 2007-07-04 : 11:45:15
You should reuse the log space, not shrink it. It's just going to grow again, and take resources and fragment the file.

Kristen

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-04 : 12:03:25
Thanks all for your help...
Had no space so had to shrink...

I'll remember the checkpoint part next time.

Kristen
Test

22859 Posts

Posted - 2007-07-04 : 12:28:41
"Had no space so had to shrink..."

Yup, I guessed that, but it looks like it's going to happen again before you have got the cause fixed, so it would be better to wait until you get it fixed, I reckon...

Kristen

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-07-04 : 17:22:15
You can only shrink it if there is free space in the log file; the log backup will clear out committed transactions and free up space.
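You can check how much of the log is actually free with:

-- reports log size and percent used for every database
DBCC SQLPERF (LOGSPACE)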

TRACEYSQL
Aged Yak Warrior

594 Posts

Posted - 2007-07-05 : 08:11:15
Just to put this in one step for next time:

1. DBCC OPENTRAN ('DBNAME')
2. Kill off the processes
3. USE [yourdatabase]
GO
CHECKPOINT
GO
-- the data in the log is now written to the physical db
-- now you can shrink the log file:
4. DBCC SHRINKFILE (logical_log_name, preferred_size_in_MB)
GO

(How do I get the logical name? In EM I see Transaction_Log is the name... the database name is DELTEKCP.)

How's that?

b.veenings
Yak Posting Veteran

96 Posts

Posted - 2007-07-05 : 09:51:14
Choose Properties on your database and look at the Transaction Log tab; on the left you can see the name, that's the logical file name.
Good luck.
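You can also get it from T-SQL if you run this in that database:

-- lists each file's logical name, physical path and size
EXEC sp_helpfile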

Need an SQL consultant?
check www.veeningsengineering.nl

b.veenings
Yak Posting Veteran

96 Posts

Posted - 2007-07-05 : 09:59:54
To prevent this problem in the future, use the following:

Go to the properties of your db --> Options tab --> set your database on full recovery and set the auto shrink to on.

Every time an auto checkpoint has been executed, the log file shrinks automatically.

Just to prevent performance problems, do a rebuild of your indexes or dbcc indexdefrag [indexname];
that would help.

Need an SQL consultant?
check www.veeningsengineering.nl

Kristen
Test

22859 Posts

Posted - 2007-07-05 : 10:33:31
"set your database on full recovery"

Won't make any difference to the growth of the log file when a large single transaction, like the OP has described, occurs. What will happen is that the log will grow until the next log backup is made, rather than just until the next checkpoint as in a database using the Simple Recovery Model. I have no idea how AutoShrink will interact with this, but there is a risk that it will break the tlog backup chain, making restores of successive tlog backups impossible, which would mean that the OP might as well stick with the Simple Recovery Model!

"set the auto shrink to on"

Is a really bad idea because of the fragmentation it will cause to the database files, and the resources that will be consumed re-extending the log file frequently.

"dbcc indexdefrag [indexname]"

On its own it won't help much unless the statistics are also updated.
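For example (database, table and index names are just examples):

-- defragment one index, then refresh the statistics on its table
DBCC INDEXDEFRAG (mydb, BigTable, IX_BigTable_Col1)
UPDATE STATISTICS BigTable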

Kristen