
Simple Recovery yet 26 Gig log file !!!


pjsz
Starting Member

5 Posts

Posted - 2007-09-06 : 11:51:05
Hi all. This is my first post. This forum has helped me out greatly many times. I cannot find anything addressing our issue.

Basically, our databases use the Simple recovery model, yet the log file grows out of control and I cannot figure out why. Currently a log file is 26 gigs, yet the data is only 4 gigs and the recovery model is Simple. I am thinking that checkpoints are not getting issued as they should, but I do not know why. We back up the database once a week currently, by default.

thanks

pjsz

Kristen
Test

22859 Posts

Posted - 2007-09-06 : 11:56:06
"Currently a log file is 26 gigs"

Is this the size of the LDF file, or some other measure of log-file-size?

If it is the LDF size, the likelihood is that it once upon a time grew to that size (perhaps because the database was in the Full recovery model at that time) and has never been shrunk.

If you haven't already done so, a one-time shrink, and then seeing whether the file grows again, would be in order. Note that there may be performance implications in doing this. If the file does grow back again you should plan to defrag the physical file.
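Something along these lines is the usual one-time shrink (MyDatabase and MyDatabase_Log are just placeholders - sp_helpfile will show the real logical name):

USE MyDatabase
EXEC sp_helpfile -- note the logical name of the log file

CHECKPOINT -- in Simple recovery this should truncate the inactive part of the log
DBCC SHRINKFILE (MyDatabase_Log, 500) -- shrink the LDF back to about 500 MB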

See http://sqlteam.com/forums/topic.asp?TOPIC_ID=55210&SearchTerms=Database%20Shrinking,Shrinking,Shrink

Kristen

pjsz
Starting Member

5 Posts

Posted - 2007-09-06 : 14:28:47
Thanks Kristen, and thanks for the quick reply. To answer your question, 26 G is the size of the LDF. I used sp_helpfile to see the size of the log. I know the log has been truncated and shrunk since the switch to Simple recovery. It has been steadily growing this week and took 70 seconds to grow 10% to get to the 26 G point, which is when we noticed the problem.

When you say defrag the physical drive, you mean in Windows, right? The drive the logs are on? I will check that. Also I will look for any uncommitted transactions. The weird thing is we do not have "Begin Trans" anywhere in our code, which makes this more puzzling to me. Someone else may be running some queries on it though. Our code is old VB6 doing many operations on very un-normalized databases.

Thanks for help and link.


jen
Master Smack Fu Yak Hacker

4110 Posts

Posted - 2007-09-06 : 14:48:37
when you switch from full to simple, you need to do a one-time backup to set the change in place

it should then always free up by itself so you don't have to shrink it all the time
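Something like this, if it still needs doing (MyDatabase and the backup path are just examples):

-- check which recovery model the database is actually using
SELECT DATABASEPROPERTYEX('MyDatabase', 'Recovery')

-- switch to Simple, then take a one-off full backup
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak'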

--------------------
keeping it simple...

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2007-09-06 : 15:06:15
well, you could be running some really long-running transactions that blow the tlog out to 26GB in size. This is usually caused by the maintenance plan optimization job.

what sql server service pack are you running?

also, with your current backup schedule you can lose up to a week's worth of data. Is that acceptable?



-ec

Kristen
Test

22859 Posts

Posted - 2007-09-06 : 15:08:01
"took 70 seconds to grow 10% to get to the 26 G point. week which is when we noticed the problem"

I recommend that you use fixed expansion sizes, rather than percent, for large databases. Otherwise the time-taken gets proportionately larger too! (Expansion takes considerably longer with SQL 2000 than SQL 2005, and may lead to processes timing out etc.)
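A sketch of what I mean, assuming the logical log file name is MyDatabase_Log (sp_helpfile will show the real one):

-- change the log file from 10% growth to a fixed 500 MB increment
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 500MB)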

"When you say defrag physical drive , you mean in windows right?, the drive the logs are on?"

Yes. You can defrag just the file itself using CONTIG from SYSINTERNALS (I think it can operate with the file open, but we always do it in a maintenance window with the database offline, or SQL Service stopped, AND have a backup just-in-case!)

"i will look for any uncommitted transactions"

If you have shrunk the LDF file, and it has grown back, then I reckon that is the most likely candidate.

There are ways of finding if there are uncommitted transactions, but off hand I can't remember the procedure.

And your earlier comment about CHECKPOINTS would be worth investigating too.

Kristen

pjsz
Starting Member

5 Posts

Posted - 2007-09-06 : 15:32:13
quote:
Originally posted by eyechart

well, you could be running some really long-running transactions that blow the tlog out to 26GB in size. This is usually caused by the maintenance plan optimization job.

what sql server service pack are you running?

also, with your current backup schedule you can lose up to a week's worth of data. Is that acceptable?




select @@version gives me
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)


Ahh, it looks like it is Service Pack 2. Is that significant? Must check. I am a complete newbie here, a programmer playing DBA. It is fun.

Re the backup plan, customers are supposed to back it up. It is not their biggest problem in a failure; the db is replicated in real time to a backup server (useless in case of corruption). We should probably be doing incremental backups throughout the day, but it is the customer's choice.

Thanks

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2007-09-06 : 15:42:11
quote:
Originally posted by pjsz
select @@version gives me
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)


Ahh, it looks like it is Service Pack 2. Is that significant? Must check. I am a complete newbie here, a programmer playing DBA. It is fun.

Re the backup plan, customers are supposed to back it up. It is not their biggest problem in a failure; the db is replicated in real time to a backup server (useless in case of corruption). We should probably be doing incremental backups throughout the day, but it is the customer's choice.

Thanks



that SP2 is for the OS. We are more interested in the SQL build, 2039 in this case, which means SP4 for SQL 2000. I just wanted to see if you were running an old release of SQL (which you aren't).

as Kristen pointed out, this problem is probably due to some long-running uncommitted transaction. You can look for the longest-running transaction in your database using DBCC OPENTRAN. Look that up in BOL or Google it for more details.
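For example (run it in the problem database, or pass the database name):

USE MyDatabase
DBCC OPENTRAN -- reports the oldest active transaction, if any, in the current database

-- or by name
DBCC OPENTRAN ('MyDatabase')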



-ec

pjsz
Starting Member

5 Posts

Posted - 2007-09-06 : 16:02:21
quote:
Originally posted by Kristen

"took 70 seconds to grow 10% to get to the 26 G point. week which is when we noticed the problem"

I recommend that you use fixed expansion sizes, rather than percent, for large databases. Otherwise the time-taken gets proportionately larger too! (Expansion takes considerably longer with SQL 2000 than SQL 2005, and may lead to processes timing out etc.)

"When you say defrag physical drive , you mean in windows right?, the drive the logs are on?"

Yes. You can defrag just the file itself using CONTIG from SYSINTERNALS (I think it can operate with the file open, but we always do it in a maintenance window with the database offline, or SQL Service stopped, AND have a backup just-in-case!)

"i will look for any uncommitted transactions"

If you have shrunk the LDF file, and it has grown back, then I reckon that is the most likely candidate.

There are ways of finding if there are uncommitted transactions, but off hand I can't remember the procedure.

And your earlier comment about CHECKPOINTS would be worth investigating too.

Kristen



"DBCC Opentran" shows the open trans. We have none. Not surprised. I looked for open transactions on an 8 gig log file. It had none.

I would use a fixed expansion size when I can find out what the stable size will be. We don't even use transactions. I bet I could witness a checkpoint in a Profiler trace.

It is quite the puzzle. I bet it is something simple and stupid we did. I checked the recovery interval with "sp_configure 'recovery interval'" and it is set to 0. That is the default and what MS recommends, and I believe it ends up being equivalent to something under 60 seconds. I might play with that setting some, say set it to 1, and see what happens.
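Roughly what that check (and the experiment) look like - 'recovery interval' is an advanced option, so 'show advanced options' may need enabling first:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'recovery interval' -- 0 = automatic, roughly a one-minute recovery target

-- to experiment with an explicit 1-minute setting:
-- EXEC sp_configure 'recovery interval', 1
-- RECONFIGURE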

thanks

Kristen
Test

22859 Posts

Posted - 2007-09-07 : 03:26:11
"I would used fixed expansion size when i can find out what the stable size will be"

Just to be sure we're on the same wavelength:

This isn't the max size of the log file, just the amount of each expansion. I'm surprised that your expansion of 10% of a 26GB file, i.e. a 2.6GB extension, is only taking 70 seconds. Either the system didn't have many user queries at the time, or it's some nice hardware! IME on busy systems file expansion can easily take tens of minutes, and cause all sorts of timeouts. So setting it to, say, 500MB, or less, would be prudent. At worst you will have more expansions, of smaller size each, and the downside of that is increased physical fragmentation of the file - which you can combat by defragging it once in a while (which you ought to do anyway ...), e.g. using CONTIG.EXE as I mentioned above.

Might be worth tracking how much of the log is actually used through the day, in case the culprit that starts the run-away growth can be found.
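One simple way to track it (scheduled as a job that logs the output somewhere):

-- log file size and percentage used for every database on the server
DBCC SQLPERF (LOGSPACE)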

I have seen stuff before about dangling transactions that never get cleared (dunno if DBCC OPENTRAN shows them up).

DBCC LOGINFO may help to show what is going on
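For instance, run in the problem database:

USE MyDatabase
DBCC LOGINFO -- one row per virtual log file; Status = 2 marks VLFs that are still in use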

There was an Sproc to force a log to truncate by Andrew Zanevsky - sorry, I don't have a link to it, but Google may find it. It iteratively created transactions to force the log to get to the "next free block", or something like that, such that the last used block could be released, and then it would start reusing the log from the start.

Other than that

BACKUP LOG MyDatabaseName WITH TRUNCATE_ONLY

should clear the log

Kristen

pjsz
Starting Member

5 Posts

Posted - 2007-09-07 : 11:30:03
Thanks Kristen. You are absolutely correct about the growth size. I should set it to 500 MB right now to prevent any issues until I resolve this. The servers are Dells bought this year or late last year, so they are probably pretty nice.

I will look into Contig.exe and Defragmenting. Thank you.

Another thought occurred to me -- could the database growth cause the trans log to grow very big? I am thinking that if the DB is growing and moving pages around, that might cause some heavy transactions to take place. The suspect database grows about 15 gigs a week, from 5 G to 20 G, and is cleared out weekly.

cheers

Kristen
Test

22859 Posts

Posted - 2007-09-07 : 15:10:37
"could the Database growth cause the trans log to grow very big?"

Don't think so. Growing the MDF doesn't necessarily imply big LDF files. However, it does imply inserting new data (or updating rows and making them bigger, in which case they have to be moved to a "larger" free space), which will get logged. But if it's one insert at a time, that should just cause the same piece of log file to be reused over and over; whereas inserting 1,000,000 rows in a single transaction will be a big piece of log file!

"I am thinking if the DB is ... moving pages"

The only thing that makes it move pages around is a reindex or defrag, and that WILL cause a lot of logging, possibly in large transactions. But that should be identifiable as the LDF growing wildly (or filling up rapidly) at specific times, which in turn can possibly be associated with a scheduled job, or failing that with something tangible that you can monitor and catch with SQL Profiler.

Bulk imports or large deletions would be another candidate, but they have basically got to be a) large transactions and b) run over an extended period of time to muck up the checkpointing sufficiently to start causing the sort of problems you are seeing - unless there is some Config setting that's mucked up (as you speculated earlier)

"The suspect database that grows about 15 gigs a week from 5 g to 20 G and is cleared out weekly"

What does "cleared out" entail?

DELETE MegaBigTable WHERE MyDate < @OneWeekAgo

is going to generate a huge transaction. You'd need a better way of tackling that sort of housekeeping - IF you are using a blunt-weapon approach like that. But that still doesn't lead (in normal operating environments) to the LDF not releasing transactions and the LDF file not being re-used.
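A rough sketch of the sort of batched approach I mean (table and column names are only placeholders), which keeps each transaction small so the same piece of log can be reused between batches:

DECLARE @OneWeekAgo datetime
SET @OneWeekAgo = DATEADD(day, -7, GETDATE())

SET ROWCOUNT 10000 -- SQL 2000 style: limit each DELETE to 10,000 rows
WHILE 1 = 1
BEGIN
    DELETE MegaBigTable WHERE MyDate < @OneWeekAgo
    IF @@ROWCOUNT = 0 BREAK -- nothing left to delete
END
SET ROWCOUNT 0 -- reset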

Some sort of "amount of log used" graph through the day to pinpoint when it starts uncontrolled growth (and whether the used-space reduces by itself, or not) would help.

Kristen

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-09-07 : 23:56:25
If the db grew 15 GB, the log will grow as well if the data was loaded in a big batch.

Zoroaster
Aged Yak Warrior

702 Posts

Posted - 2007-09-09 : 20:03:01
I'm a bit confused by the responses; my understanding was that Simple recovery mode did not shrink the physical LDF file, but did make the logical space available for reuse?



Future guru in the making.

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-09-09 : 20:30:54
That's correct unless you enabled db option 'auto shrink'.

Kristen
Test

22859 Posts

Posted - 2007-09-10 : 02:59:27
"my understanding was that simple recovery mode did not shrink the physical LDF file but did make the logical space available for reuse?"

Correct, but the OP's database log seems to carry on growing and appears not to be reusing the space as expected.
   
