 transaction logs are huge!!!

Author  Topic 

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-12 : 17:36:23
My transaction logs for a single day add up to 74 GB in total... what is wrong?

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2006-12-12 : 17:43:53
You are not doing enough log backups.


Peter Larsson
Helsingborg, Sweden
Go to Top of Page
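For anyone following along, a log backup in SQL Server 2000 is a single BACKUP LOG statement run on a schedule, and each log backup is what allows the inactive portion of the log to be truncated and reused. A minimal sketch, using the AgriMrpPrd database name that appears later in the thread and a placeholder backup path:

-- Minimal log backup; run on a schedule (e.g. every 15 minutes)
BACKUP LOG AgriMrpPrd
TO DISK = N'D:\MSSQL\BACKUP\AgriMrpPrd\AgriMrpPrd_log.bak'   -- placeholder path
WITH NOINIT, NAME = N'AgriMrpPrd log backup'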

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-12 : 17:49:36
I'm doing a transaction log backup every 15 minutes and a full backup every day.

I heard there is this command:

BACKUP LOG AgriMrpPrd WITH TRUNCATE_ONLY

but I don't know how to use it yet.
Go to Top of Page
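For reference, TRUNCATE_ONLY is an option of the BACKUP LOG statement in SQL Server 2000. It discards the inactive log without backing it up, which breaks the log backup chain, so it is normally a last resort. A sketch is below; the logical log file name is an assumption.

-- Discard the inactive portion of the log without backing it up (breaks the backup chain)
BACKUP LOG AgriMrpPrd WITH TRUNCATE_ONLY
-- The physical file stays large until it is shrunk afterwards
DBCC SHRINKFILE (AgriMrpPrd_log)   -- assumed logical name of the log file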

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-12 : 18:09:04
or this:

USE [pubs]
DBCC SHRINKFILE (N'pubs_log', 1)
Go to Top of Page

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-12 : 18:39:33
Anyone?
Go to Top of Page

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2006-12-12 : 18:41:33
You don't want to truncate the log just because the file size is large. Why do you think the file size shouldn't be that big? How big is your database?

Tara Kizer
Go to Top of Page

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-12 : 20:47:46
The database is about 100 GB for a full backup. I would like to cap the transaction log size at 20 GB.
Go to Top of Page

vaddi
Posting Yak Master

145 Posts

Posted - 2006-12-13 : 00:22:57
Maybe you can use this SQL:

BACKUP LOG [Database Name]
TO DISK = N'D:\MSSQL\BACKUP\database name\databasename_log.bak'
WITH NOINIT, NOUNLOAD, NAME = N'DBName hourly log backup', SKIP, STATS = 10, NOFORMAT

Hope this helps

Thanks
Go to Top of Page

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-13 : 00:31:29
Does that create the log backup schedule, or is it a one-off run that creates a single transaction log backup?
Go to Top of Page

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2006-12-13 : 11:53:43
quote:
Originally posted by rnbguy

The database is about 100 GB for a full backup. I would like to cap the transaction log size at 20 GB.



Then why don't you shrink the file? DBCC SHRINKFILE...

Tara Kizer
Go to Top of Page
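A sketch of what that shrink could look like for the 20 GB target mentioned above. DBCC SHRINKFILE takes the logical file name and a target size in MB; the logical name AgriMrpPrd_log here is an assumption, and sp_helpfile will list the real one.

USE AgriMrpPrd
EXEC sp_helpfile                         -- lists the logical file names for this database
DBCC SHRINKFILE (AgriMrpPrd_log, 20480)  -- target size is in MB, so 20480 MB = 20 GB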

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2006-12-13 : 11:54:37
quote:
Originally posted by rnbguy

Does that create the log backup schedule, or is it a one-off run that creates a single transaction log backup?



It's a one-off.

Tara Kizer
Go to Top of Page
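Since it's a one-off, scheduling it in SQL Server 2000 usually means a SQL Server Agent job or a maintenance plan. Below is a rough sketch using the msdb job procedures; the job name, database name, path, and schedule values are placeholders and worth checking against Books Online before use.

USE msdb
EXEC sp_add_job @job_name = N'AgriMrpPrd log backup'
EXEC sp_add_jobstep @job_name = N'AgriMrpPrd log backup',
     @step_name = N'Back up log',
     @subsystem = N'TSQL',
     @database_name = N'master',
     @command = N'BACKUP LOG AgriMrpPrd TO DISK = N''D:\MSSQL\BACKUP\AgriMrpPrd_log.bak'' WITH NOINIT'
EXEC sp_add_jobschedule @job_name = N'AgriMrpPrd log backup',
     @name = N'Every 15 minutes',
     @freq_type = 4,               -- daily
     @freq_interval = 1,
     @freq_subday_type = 4,        -- repeat in units of minutes
     @freq_subday_interval = 15
EXEC sp_add_jobserver @job_name = N'AgriMrpPrd log backup', @server_name = N'(local)'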

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-12-13 : 14:10:01
quote:
Originally posted by rnbguy

I'm doing a transaction log backup every 15 minutes and a full backup every day.

I heard there is this command:

BACKUP LOG AgriMrpPrd WITH TRUNCATE_ONLY

but I don't know how to use it yet.



If you are doing transaction log backups every 15 minutes, 24x7, and that is still happening, then you probably have some process running that inserts or updates a large amount of data in one transaction. It might also be from a re-index operation.

If you have a single large transaction, the transaction log file will continue to grow until the transaction completes.

If you want to prevent this from happening, you will need to review your processes and see if they can be revised.







CODO ERGO SUM
Go to Top of Page
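Two built-in commands can help confirm that diagnosis: DBCC SQLPERF(LOGSPACE) shows the size and percent-used of every database's log, and DBCC OPENTRAN reports the oldest active transaction, which is what keeps the log from being truncated. For example:

DBCC SQLPERF(LOGSPACE)        -- log size and % used for every database
DBCC OPENTRAN ('AgriMrpPrd')  -- oldest open transaction in the database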

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-13 : 17:53:54
Thanks guys, the problem is fixed, and we are revising our processes now.
Go to Top of Page

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2006-12-14 : 05:52:36
Care to share the solution?


Peter Larsson
Helsingborg, Sweden
Go to Top of Page

vaddi
Posting Yak Master

145 Posts

Posted - 2006-12-14 : 10:34:41
I am wondering whether I need to back up the system databases every 30 minutes as well. I take the prod DB's log backups every 30 minutes. Should I do the same for the master, msdb, and model databases?

Thanks
Go to Top of Page

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-12-14 : 10:54:36
quote:
Originally posted by vaddi

I am wondering whether I need to back up the system databases every 30 minutes as well. I take the prod DB's log backups every 30 minutes. Should I do the same for the master, msdb, and model databases?

Thanks



No, you can't.

When you have questions, you should post a new topic, instead of posting on an unrelated topic.




CODO ERGO SUM
Go to Top of Page

rnbguy
Constraint Violating Yak Guru

293 Posts

Posted - 2006-12-14 : 18:03:11
We basically decided to do this process every morning:

Set the database recovery model to Simple
Shrink the log files
Set the recovery model back to Full
Do a full backup
Go to Top of Page
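In T-SQL, that morning sequence looks roughly like the sketch below (the logical log file name and backup path are assumptions); the reply that follows explains the drawbacks.

ALTER DATABASE AgriMrpPrd SET RECOVERY SIMPLE
DBCC SHRINKFILE (AgriMrpPrd_log)                     -- assumed logical log file name
ALTER DATABASE AgriMrpPrd SET RECOVERY FULL
BACKUP DATABASE AgriMrpPrd
TO DISK = N'D:\MSSQL\BACKUP\AgriMrpPrd_full.bak'     -- placeholder path
WITH INIT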

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-12-14 : 22:38:19
quote:
Originally posted by rnbguy

We basically decided to do this process every morning:

Set the database recovery model to Simple
Shrink the log files
Set the recovery model back to Full
Do a full backup


That isn't really a solution. You are breaking your transaction log chain and invalidating any possible recovery to a point in time, or a recovery forward from a prior day's backup. Also, you are shrinking the transaction log file every day, so you are probably getting a lot of physical fragmentation.

You should identify what is causing the transaction log to grow so large and see what you might be able to do about that before resorting to such a radical hack.







CODO ERGO SUM
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2006-12-15 : 02:39:23
"so you are probably getting a lot of physical fragmentation"

IME you will definitely be getting physical file fragmentation. You are also losing a lot of time each day re-growing the file. If the file needs to be n GB to get you through the day, then leave it at that size - worry about shrinking it only when it grows exceptionally beyond that.

Just to emphasize the roll-forward point MVJ made:

Say you want to recover to yesterday lunchtime. You restore yesterday morning's full backup, and then plan to restore the TLog backups up to lunchtime. However, let's assume there is an error restoring yesterday morning's FULL backup. Then you can restore the previous morning's full backup and all the following TLog backups instead. So you have two "outs".

If you break the backup chain (by setting the DB to Simple, truncating the logs without backing them up, etc.), then you can no longer do that.

I have had more than one incident where a disk error trashed the database but wasn't noticed for a couple of days. The full backups were hosed, but the TLog backups were fine. So we restored a full backup from before the issue, then all the TLog backups, and didn't lose any data at all. We couldn't have done that with a broken backup chain.

Kristen
Go to Top of Page
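The roll-forward Kristen describes is a RESTORE sequence along these lines; the file names, times, and STOPAT value are placeholders. The full backup is restored WITH NORECOVERY, each log backup is applied in order, and the last one stops at the chosen point in time.

RESTORE DATABASE AgriMrpPrd
FROM DISK = N'D:\MSSQL\BACKUP\AgriMrpPrd_full.bak'
WITH NORECOVERY

RESTORE LOG AgriMrpPrd
FROM DISK = N'D:\MSSQL\BACKUP\AgriMrpPrd_log_0800.bak'
WITH NORECOVERY

-- ...apply the remaining log backups in sequence...

RESTORE LOG AgriMrpPrd
FROM DISK = N'D:\MSSQL\BACKUP\AgriMrpPrd_log_1230.bak'
WITH STOPAT = '2006-12-14 12:00:00', RECOVERY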

TonyTheDBA
Posting Yak Master

121 Posts

Posted - 2006-12-15 : 05:24:25
We had a nice issue when I arrived in post: a database-driven website that recorded booking information and room availability for the hotels and guest houses in the town. Bookings could be made via the web, but also directly with the hotels, so various sources were needed to keep the availability up to date. The problem was that the database server was slow, the website was slow, the other stats databases on the server were slow, and disk space was also a problem... the transaction log was HUGE (2 GB database, 60 GB log!). The database was in Simple mode and backed up weekly!! (Yes, we had no DBA, and this was the vendor's implementation, and I don't think they had anyone who understood SQL Server either.)

I changed the recovery model and the backup routine, including an hourly log backup (the business was happy with that!), shrank the log, and set a max growth of 2 GB. An hour later the log was full. I removed the max size, and the log grew to 10 GB! Puzzled, I fired up Profiler: a massive number of inserts was taking place. I kept watching and determined that they were deleting all the records in the availability table and then inserting all the records from a text file, which is where the application stored all the changes to availability... all 20 million rows! Hourly!

I had a little conversation with the vendors, using very short words, and explained that while it appeared to work at their site with a test DB of 5 hotels, it didn't at ours. Our tourism department was instructed NOT to pay any support invoices until they changed the application.

"Good engineering is the difference between code running in 8 minutes or eight hours. It affects real people in real ways. It's not a matter of opinion any more than a bird taking flight is a matter of opinion." - H. W. Kenton

--
Regards
Tony The DBA
Go to Top of Page

vaddi
Posting Yak Master

145 Posts

Posted - 2006-12-15 : 10:51:41
Hey, even I have the issue of the log file growing large. Every night we re-organize the indexes, which makes the log grow.

We are very short on disk space, so I am forced to shrink the log file every night.

I take the log backups every hour and delete them the next day. Even the hourly backups grow up to 1000 KB.

Is there any way I can improve the performance without shrinking the log file daily?

Thanks
Go to Top of Page
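One option worth investigating for the nightly re-index, assuming the job uses DBCC DBREINDEX rather than DBCC INDEXDEFRAG: switch the database to the Bulk-Logged recovery model for the duration of the rebuild, since index rebuilds are minimally logged under that model. The trade-off is that a log backup containing the bulk operation cannot be used for point-in-time restore, so take a log backup right before and right after. A sketch with placeholder names:

BACKUP LOG [ProdDB] TO DISK = N'D:\MSSQL\BACKUP\ProdDB_log.bak' WITH NOINIT   -- before switching
ALTER DATABASE [ProdDB] SET RECOVERY BULK_LOGGED
DBCC DBREINDEX ('dbo.BigTable')            -- the nightly index maintenance; table name is a placeholder
ALTER DATABASE [ProdDB] SET RECOVERY FULL
BACKUP LOG [ProdDB] TO DISK = N'D:\MSSQL\BACKUP\ProdDB_log.bak' WITH NOINIT   -- after switching back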