Author |
Topic |
jason.williams14
Starting Member
10 Posts |
Posted - 2009-02-09 : 20:00:44
|
As the new DBA here, I have been given responsibility for our SQL Server environment (we are mostly an Oracle shop). I am learning tons!

So I was going through our servers and databases (12 servers, 45 databases) getting info on each one (crazy what I came across... lots of work ahead). Anyway, I came across one database that is about 10 GB total, is in full recovery mode, and has a full backup at night and t-log backups every 30 minutes.

Running DBCC LOGINFO('mydb') came back with 227 VLF entries. Scrolling through the results, I saw that record 194 was status 2 (in use).

I would like to shrink the t-log and try to size it properly. Before I do that, I want to take a backup in case something goes wrong. I just wanted to lay out my steps below to make sure I have everything set:

1. BACKUP LOG databasename TO devicename
This should give me a backup I can return to, correct? Would it be best to do a full backup first, then a log backup like above?

2. DBCC SHRINKFILE(transactionloglogicalfilename, TRUNCATEONLY)

3. Size the log:
ALTER DATABASE databasename MODIFY FILE (NAME = transactionloglogicalfilename, SIZE = newtotalsize)

Does that look right? Appreciate the help.

Cheers,
Jason |
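[Editor's note: as a sketch only, the three steps described above might look like the following in T-SQL. The database name (MyDB), logical log file name (MyDB_log), backup paths, and sizes are all placeholders, not values from this thread:]

```sql
USE [master];

-- 1. Take a full backup first for safety, then back up the log.
BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_full.bak';
BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn';

USE [MyDB];

-- 2. Shrink the log file. The second argument is a target size in MB;
--    shrinking to a tiny size and regrowing in one step gives a clean,
--    predictable VLF layout.
DBCC SHRINKFILE (N'MyDB_log', 1);

-- 3. Regrow the log to the intended working size in a single operation.
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = N'MyDB_log', SIZE = 500MB);
```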
|
jason.williams14
Starting Member
10 Posts |
Posted - 2009-02-09 : 20:13:38
|
quote: Originally posted by tkizer

Why do you want to shrink it? Are you really that low on disk space that you need to reclaim this tiny amount? Do you realize you are creating a performance issue if you shrink it?

How big is the MDF file? How big is the LDF file?

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog
I guess I don't have to shrink it. My only reasoning for doing so was that I have 227 VLF entries in the log file, and from what I read, that is around the number where performance issues start.

Also, this is sort of a 'prod/test' DB right now (don't ask, long story... not happy about it) and it is relatively small. But this DB is going to be hit by over 5,000 users in our company. I do realize there is a performance penalty, and I am not planning on shrinking it later or often. My whole thinking was to size the transaction log properly as well as schedule my log backups. If I can get this done correctly, then say I size it to 500 MB and clear out the empty entries in the t-log, I should be set, in theory. I think this DB's t-log grew for quite some time before the transaction log backups were put in place.

I misquoted earlier: the .mdf file is a little over 1 GB, and the .ldf file is around 900 MB.

But like I said, this DB will see a lot of people hitting it later this year. Trying to line everything up beforehand.

Thanks,
Jason |
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2009-02-09 : 20:16:50
|
Shrinking the log file shouldn't change what you are seeing in DBCC LOGINFO. It is not abnormal to have an LDF file be about the same size as the MDF file. Typically it'll be about 25% of the size of the MDF, but it just depends on what kind of maintenance jobs you have in place. If there's a job that rebuilds all indexes every night, then what you've described could be normal.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog |
 |
|
jason.williams14
Starting Member
10 Posts |
Posted - 2009-02-09 : 20:23:39
|
quote: Originally posted by tkizer

Shrinking the log file shouldn't change what you are seeing in DBCC LOGINFO. It is not abnormal to have an LDF file be about the same size as the MDF file. Typically it'll be about 25% of the size of the MDF, but it just depends on what kind of maintenance jobs you have in place. If there's a job that rebuilds all indexes every night, then what you've described could be normal.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog
Interesting. I did not know that. I was under the impression that excessive VLF entries could be an issue, that the only way to get rid of them is to shrink the log file, and that sizing the transaction log up front (instead of just letting it autogrow) is the better practice.

I will dig around some more and see what I can find out.

Thanks,
Jason |
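[Editor's note: for reference, one way to count VLFs is to capture DBCC LOGINFO output into a temp table. DBCC LOGINFO is undocumented and its column layout varies between versions; the seven-column shape below matches SQL Server 2005/2008, the era of this thread:]

```sql
-- Count VLFs in the current database; Status = 2 marks VLFs still in use.
CREATE TABLE #loginfo (
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    Status      INT,
    Parity      TINYINT,
    CreateLSN   NUMERIC(25, 0)
);

INSERT INTO #loginfo EXEC ('DBCC LOGINFO');

SELECT COUNT(*)                                        AS vlf_count,
       SUM(CASE WHEN Status = 2 THEN 1 ELSE 0 END)     AS active_vlfs
FROM #loginfo;

DROP TABLE #loginfo;
```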
 |
|
jason.williams14
Starting Member
10 Posts |
Posted - 2009-02-09 : 20:33:05
|
quote: Originally posted by tkizer

You may want to increase the frequency of your tlog backups if you want to manage the size better. We back up ours every 15 minutes. I'm not aware of any performance issues related to having lots of entries in the tlog. Could you provide supporting documentation on that?

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog
I might change ours to every 15 minutes; it is currently every 30.

Let me see if I can dig it up. I *think* it was by Paul Randal, but don't quote me. :)

Cheers,
Jason |
 |
|
jason.williams14
Starting Member
10 Posts |
Posted - 2009-02-09 : 20:35:38
|
Here are two I found:

[url]http://sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx[/url]
[url]http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/07/25/sql-2000-yes-lots-of-vlf-s-are-bad-improve-the-performance-of-your-triggers-and-log-backups-on-2000.aspx[/url]

Cheers,
Jason |
 |
|
saurabhsrivastava
Posting Yak Master
216 Posts |
Posted - 2009-02-10 : 15:33:37
|
Size your log file appropriately. If you autogrow the log file in small chunks, you may end up with a larger number of VLFs than if you extend the log file in bigger chunks. |
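[Editor's note: per the Kimberly Tripp post linked earlier in this thread, on SQL Server of this era each log growth event adds a fixed number of VLFs (roughly 4 for growths under 64 MB, 8 up to 1 GB, 16 above 1 GB), so many small autogrowths multiply the VLF count. A sketch with hypothetical names and sizes:]

```sql
-- Pre-size the log in one large step, and set a large fixed growth
-- increment so any future autogrowth also adds only a handful of VLFs.
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = N'MyDB_log', SIZE = 8GB, FILEGROWTH = 1GB);
```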
 |
|
jason.williams14
Starting Member
10 Posts |
Posted - 2009-02-10 : 17:15:41
|
quote: Originally posted by saurabhsrivastava

Size your log file appropriately. If you autogrow the log file in small chunks, you may end up with a larger number of VLFs than if you extend the log file in bigger chunks.
That is the million dollar question: how do you size your t-logs correctly? What are some suggested methods?

Thanks,
Jas |
 |
|
saurabhsrivastava
Posting Yak Master
216 Posts |
Posted - 2009-02-11 : 23:00:27
|
Sizing depends on what kind of operations are occurring in the database engine. Keep in mind that the purpose of the log file is recovery of transactions; you don't need the t-log records once their transactions are flushed to the data file. You can control the size of the transaction log by controlling/tuning the behavior of transactions on the server. For example:

1) Some folks reindex the whole database whether or not all the indexes are fragmented. Reindexing only the indexes that need it will not only reduce overhead but also generate less log growth.
2) Do updates/inserts/deletes in smaller batches. Analyze the nature of your transactions and tune them.

After all that tuning, if you observe the t-log settling around 10 GB, just set it to 10 GB and keep monitoring data/user growth to estimate future needs. I hope you got the idea. |
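[Editor's note: as an illustration of point 1, the sketch below (SQL Server 2005+) lists only the indexes worth maintaining. The 5%/30% thresholds are common rules of thumb, not values from this thread:]

```sql
-- List fragmented indexes in the current database and suggest an action,
-- rather than rebuilding everything and bloating the transaction log.
SELECT OBJECT_NAME(ips.object_id)            AS table_name,
       i.name                                AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'OK' END                    AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.index_id > 0;   -- skip heaps
```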
 |
|