 SQL Server not using all tlog files

MuadDBA

628 Posts

Posted - 2007-10-01 : 13:45:17
I thought I understood that SQL Server would write to all log files in a round-robin or even parallel fashion if there was more than one transaction log file on the database?

I have a large DB, about 800GB, and I have set up five 20GB transaction log files. Two were original with the database; I added three more and adjusted them all so they are the same size. I then kicked off some pretty intensive queries (DBCC DBREINDEX jobs, four of them running simultaneously).

The server has 8 hyperthreaded processors and a ton of RAM (16GB), so I think I should be seeing several threads writing to all of the log files at once, or at least in a round robin fashion. Instead, only one log file is getting written to. All files are on the same drive.

Am I mistaken here, or is something out of whack? What should I check?

nr
SQLTeam MVY

12543 Posts

Posted - 2007-10-01 : 13:53:09
How are you checking which files are getting written to?
How much data is being used in the file that is being used?



==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

Haywood
Posting Yak Master

221 Posts

Posted - 2007-10-01 : 14:11:09
Logs are written sequentially. The engine will not take advantage of multiple log files the same way it will for multiple datafiles.
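One hedged way to see this for yourself: the undocumented (but widely used) DBCC LOGINFO command lists the virtual log files (VLFs) inside each physical log file, and Status = 2 marks the active ones. Watching it over time typically shows activity walking sequentially through one file, not round-robin across all of them. The database name below is just a placeholder.

```sql
-- Sketch only: DBCC LOGINFO is undocumented but commonly used on SQL 2000.
-- It returns one row per virtual log file (VLF); Status = 2 means active.
USE YourBigDatabase   -- hypothetical database name
DBCC LOGINFO
-- Columns include FileId, FileSize, StartOffset, FSeqNo, Status.
-- Note which FileId the Status = 2 rows sit in, rerun it later,
-- and you should see the active region move sequentially.
```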

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2007-10-01 : 14:37:40
i would not advise using multiple logfiles. it complicates certain recovery scenarios and, as was pointed out, there are no performance advantages to having multiple logfiles (unlike multiple database datafiles).

The only time I add a 2nd logfile is in an emergency if we are running low on space. I always remove the additional logfiles afterward so that we are only running a single logfile.



-ec

Kristen
Test

22859 Posts

Posted - 2007-10-01 : 14:42:00
"and a ton of RAM (16GB)"

Not specifically relevant, but I presume you are using Enterprise version of SQL Server (and I guess you need an Enterprise Version of Windows too)

Kristen

MuadDBA

628 Posts

Posted - 2007-10-01 : 14:46:54
quote:
Originally posted by nr

How are you checking which files are getting written to?
How much data is being used in the file that is being used?



I am using fn_virtualfilestats to check the writes to the log files.

byteswritten is 538146304 for one file, 114688 for another, and zero for the other three.
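For reference, per-file numbers like those can be pulled with something along these lines (a sketch; the database name is a placeholder, and on SQL 2000 the function argument needs to be a variable or constant, hence the DECLARE):

```sql
-- SQL Server 2000 syntax: system TVFs need the :: prefix.
-- fileid = -1 returns stats for all files in the database.
DECLARE @dbid int
SET @dbid = DB_ID('YourBigDatabase')   -- hypothetical database name

SELECT  DbId,
        FileId,
        BytesWritten,
        NumberWrites
FROM    ::fn_virtualfilestats(@dbid, -1)
ORDER BY FileId
-- Join FileId to sysfiles (fileid, name) to tell the log files
-- apart from the data files.
```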

MuadDBA

628 Posts

Posted - 2007-10-01 : 14:49:08
quote:
Originally posted by eyechart

i would not advise using multiple logfiles. it complicates certain recovery scenarios and, as was pointed out, there are no performance advantages to having multiple logfiles (unlike multiple database datafiles).

The only time I add a 2nd logfile is in an emergency if we are running low on space. I always remove the additional logfiles afterward so that we are only running a single logfile.




Certainly an understandable position, however, I am seeing waits for log write operations at times, and was advised at PASS to try to use multiple log files to reduce wait time for log writes.

MuadDBA

628 Posts

Posted - 2007-10-01 : 14:53:22
quote:
Originally posted by Kristen

"and a ton of RAM (16GB)"

Not specifically relevant, but I presume you are using Enterprise version of SQL Server (and I guess you need an Enterprise Version of Windows too)

Kristen



Yep.

Kristen
Test

22859 Posts

Posted - 2007-10-01 : 15:01:51
"reduce wait time for log writes"

Stupid question time: You've got them on a separate spindle already, and RAID10 or somesuch, rather than some poxy RAID5 or the back-of-an-envelope USB drive [:D]?

Just for my curiosity, how much are you writing to the TLogs in, say, a busy-hour-long-period?

Kristen

MuadDBA

628 Posts

Posted - 2007-10-01 : 15:05:05
Better (I think). I've got them on an EMC SAN, so the one drive is actually a bunch of hypervolumes (I don't have specific spindle info, though).

I am tracking the changes using fn_virtualfilestats every 5 minutes so I can see how much is written over an hour period. I'll post back once I have that info.
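That kind of sampling can be sketched as below (table and names are made up): snapshot fn_virtualfilestats into a table on a schedule, then the bytes written per interval is the difference between consecutive samples for each FileId.

```sql
-- Hypothetical monitoring table; run the INSERT every 5 minutes
-- (e.g. from a SQL Agent job) in the database being watched.
CREATE TABLE dbo.FileStatsSample (
    SampleTime   datetime NOT NULL DEFAULT GETDATE(),
    FileId       int      NOT NULL,
    BytesWritten bigint   NOT NULL
)

DECLARE @dbid int
SET @dbid = DB_ID()   -- current database

INSERT dbo.FileStatsSample (FileId, BytesWritten)
SELECT FileId, BytesWritten
FROM   ::fn_virtualfilestats(@dbid, -1)
-- Per-interval throughput = BytesWritten delta between
-- consecutive SampleTime rows for the same FileId.
```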

Kristen
Test

22859 Posts

Posted - 2007-10-01 : 15:11:10
You backing up the Tlogs (i.e. FULL Recovery model)? The size of them for a previous 60-elapsed-busy-minutes would give a quickie-answer ...

MuadDBA

628 Posts

Posted - 2007-10-01 : 15:16:37
This server is in simple mode :(

Kristen
Test

22859 Posts

Posted - 2007-10-01 : 15:21:48
Ah, OK. Did the guys at PASS know that / were they proposing this solution for a SIMPLE recovery model?

It seems to me that on a checkpoint the logs are cleared and will be reused. Perhaps a completely different scenario to the FULL recovery model, where sequential writing goes on for some time [i.e. until the TLog backup] and thus can perhaps more readily "span" multiple files.

Just thinking out loud, I have ZERO knowledge in this area.
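One hedged way to watch that reuse happening under SIMPLE recovery is DBCC SQLPERF(LOGSPACE), which reports log size and percent used for every database:

```sql
-- Reports total log size and 'Log Space Used (%)' per database.
DBCC SQLPERF(LOGSPACE)
-- Under SIMPLE recovery the used percentage should fall back after
-- each automatic checkpoint; under FULL it keeps climbing until the
-- next BACKUP LOG, which is when the active region could span files.
```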

Kristen

MuadDBA

628 Posts

Posted - 2007-10-01 : 15:28:22
Yeah, you could be right. I had a similar thought after I realized this DB is in simple mode. We did not discuss that part (it's always the simple oversights).

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2007-10-01 : 15:36:59
quote:
Originally posted by crazyjoe
Certainly an understandable position, however, I am seeing waits for log write operations at times, and was advised at PASS to try to use multiple log files to reduce wait time for log writes.



I think that is bad advice. I would put the single tlog file on RAID 10 disk for more performance.



-ec

MuadDBA

628 Posts

Posted - 2007-10-01 : 15:41:14
Thanks EC. I will look into whether or not our SAN guys will configure some RAID10 hypervolumes for us.

Kristen
Test

22859 Posts

Posted - 2007-10-01 : 15:46:53
"I would put the single tlog datafile on RAID 10 disk for more performance"

<Sulk!>

I've always wanted an EMC SAN, and now I'll have to tell Santa Claus not to bring me one.

MuadDBA

628 Posts

Posted - 2007-10-01 : 15:53:40
Believe me, if we didn't need the SAN for BCP purposes, I'd be pushing ultra hard for some of the PATA or SATA storage boxes I have seen configured. For a fraction of the cost of a SAN you can get some truly phenomenal performance out of those little drives.

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2007-10-01 : 15:55:30
quote:
Originally posted by Kristen

I've always wanted an EMC SAN and now I'll have to tell Santa Claus not to bring me one



I'm not sure about EMC, but other SAN vendors allow you to configure the LUNs as RAID5 or RAID10 depending upon what you want.



-ec

eyechart
Master Smack Fu Yak Hacker

3575 Posts

Posted - 2007-10-01 : 16:01:27
quote:
Originally posted by crazyjoe

Believe me, if we didn't need the SAN for BCP purposes, I'd be pushing ultra hard for some of the PATA or SATA storage boxes I have seen configured. For a fraction of the cost of a SAN you can get some truly phenomenal performance out of those little drives.




You can also use Veritas Storage Foundation as a replacement LVM for win2k3. This would allow you to RAID 10 the LUNs from your SAN for better performance. VSFW also allows for dynamic disk to be used in a cluster, which is extremely useful.



-ec

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-10-01 : 22:39:47
Do you put the log files on a separate array?