Author |
Topic |
bigtimeadmin
Starting Member
3 Posts |
Posted - 2006-07-25 : 11:10:00
|
We used to use Oracle and have now switched over to SQL Server 2005 for our DBs. We send our logfiles out over the internet for backup purposes; in Oracle we used checkpoints to limit the size of these files to 5 MB each. It worked great. Now we need to find a way to limit the size of the .TRN files and we can't find one. The files are backed up hourly but some of them are over 100 MB, which limits our ability to transmit them. Is there a way to limit the size of these files to, say, 10 MB? We don't want to just back up more regularly because some are small - and we don't want to take a performance hit. Thanks |
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2006-07-25 : 11:35:23
|
Nope. You can see the size of the active portion of the file and decide whether to back up, probably. Usually, though, the files are big due to a single long-running transaction, and that has to all be in a single backup file. Would zipping them up help?

==========================================
Cursors are useful if you don't know SQL. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
|
|
bigtimeadmin
Starting Member
3 Posts |
Posted - 2006-07-25 : 11:39:54
|
No, the DB is very active and records many small transactions. The files get zipped up before they are transmitted across. Are there any 3rd-party utilities anyone knows of that will do this? I can't believe they don't allow "checkpoint" type activity to limit the size of the .trn files. Can you set the backup to produce the files more often during the day - when it is busiest - and hourly during off hours? |
|
|
AndrewMurphy
Master Smack Fu Yak Hacker
2916 Posts |
Posted - 2006-07-25 : 12:02:47
|
"I can't belive they don't allow "checkpoint" type activity to limit the size of the .trn files." The checkpoint activity is a BACKUP.What happens in ORACLE after a 'echeckpoint' on the TRN....and if your database then goes wallop before the next BACKUP? How do you restore from the last backup, and apply all o.s transactions? |
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2006-07-25 : 12:26:41
|
Maybe you can schedule a stored procedure to do the backup which runs very frequently but only does the backup depending on DBCC SQLPERF ( LOGSPACE ), so it won't back up if very little log space is used. It means that you could get a lot of very small files if there was a long-running transaction - you could maybe get round that by looking at the transaction start times.

==========================================
Cursors are useful if you don't know SQL. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
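[A sketch of the idea above. The database name, backup path, and the 10% threshold are all made-up illustrations, not values from the thread - tune them to taste.]

```sql
-- Capture DBCC SQLPERF(LOGSPACE) output into a temp table,
-- then back up the log only if enough of it is actually in use.
CREATE TABLE #logspace
(
    DatabaseName   sysname,
    LogSizeMB      float,
    LogSpaceUsedPc float,
    Status         int
);

INSERT #logspace
    EXEC ('DBCC SQLPERF ( LOGSPACE )');

IF EXISTS (SELECT *
           FROM   #logspace
           WHERE  DatabaseName   = 'MyDatabase'   -- assumed database name
           AND    LogSpaceUsedPc > 10.0)          -- assumed threshold
BEGIN
    DECLARE @file varchar(260);
    SET @file = 'D:\Backups\MyDatabase_'          -- assumed backup folder
              + REPLACE(REPLACE(CONVERT(varchar(19), GETDATE(), 120), ':', ''), ' ', '_')
              + '.trn';
    BACKUP LOG MyDatabase TO DISK = @file;
END

DROP TABLE #logspace;
```

Schedule that as a job every few minutes and, as nr says, quiet periods produce no files at all - but a single long-running transaction will still land in one (large) backup.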
|
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2006-07-25 : 18:18:16
|
To achieve this with SQL Server you should back up more frequently.

As for Oracle, the tlog mechanism is totally different. In Oracle you have online redo and offline redo. The size of the redo logs is fixed when they are created, and the offline and online redo logs are exactly the same size. When your transactions fill up one online redo log, the redo process moves to the next log and begins filling it up. Usually you will have multiple online redo logs in a group (10 or 20) so that it can advance to the next log as the previous one is filled. Because the online redo logs are used in a circular fashion, you have to make sure that the old redo logs are archived so that they can be re-used. After advancing to the next log, the archiver process archives the last redo log to the offline redo area. After that log is archived, a bit is set telling Oracle that the contents of the redo log have been archived and that it can be overwritten when the online redo process loops back around.

In Oracle you need to make sure that you have adequate space in the offline redo area to handle your workload. You also need to make sure that you have enough online redo in the online group to handle potential interruptions in the archiving process.

Now, that was probably way more information than anyone here needs to know about Oracle redo logs.

-ec |
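[For reference, this is why the OP's 5 MB files were so easy to get in Oracle: each redo log group is created at a fixed size. A sketch - the group number and path are illustrative only:]

```sql
-- Oracle: each online redo log group is created at a fixed size (here 5 MB),
-- so every archived (offline) redo log comes out at most that size.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 5M;

-- Inspect the existing groups, their sizes, and archive status:
SELECT group#, bytes, archived, status FROM v$log;
```

SQL Server has no equivalent knob: a .TRN file contains whatever log has accumulated since the previous log backup.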
|
|
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts |
Posted - 2006-07-25 : 18:49:17
|
There is really not much performance hit from frequent transaction log backups in SQL Server. Just schedule transaction log backups more often.

CODO ERGO SUM |
|
|
Kristen
Test
22859 Posts |
Posted - 2006-07-26 : 07:09:52
|
Although I'm bound to say that if you could configure SQL Server to generate TLog backups every, say, 10 MB, that would stop a lot of the "How do I shrink my LDF file" questions ...

Kristen |
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2006-07-26 : 07:25:50
|
>> Although I'm bound to say that if you could configure SQL Server to generate TLog backups every, say, 10 MB, that would stop a lot of the "How do I shrink my LDF file" questions ...

Nope - it still couldn't clear anything past the earliest open transaction. It would mean a big change to the architecture to get round that.

==========================================
Cursors are useful if you don't know SQL. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
|
|
Kristen
Test
22859 Posts |
Posted - 2006-07-26 : 08:01:00
|
Indeedie, I was just thinking it was a nice alternative methodology to "every 10 minutes".

I wonder if clients would be more comfortable with 10 MB being a better restorable-unit-of-work than 10 minutes. Probably needs a combination of the two - after 10 minutes they are starting to forget exactly what they did, whereas on the day when they get 50 temps in to bash some data in, 10 minutes might be quite expensive to repeat! And of course TLog backups every 10 minutes during low-usage periods means a lot of files when there is a need to restore.

Kristen |
|
|
bigtimeadmin
Starting Member
3 Posts |
Posted - 2006-07-26 : 13:45:15
|
Kristen, is there a way to back up the files every 15 minutes from 9-5 and then hourly after that? Or is it one parameter for all? |
|
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2006-07-26 : 14:40:02
|
quote: Originally posted by bigtimeadmin Kristen, is there a way to back up the files every 15 minutes from 9-5 and then hourly after that? Or is it one parameter for all?
You could set up two separate schedules to accomplish this if you wanted. But I wouldn't even bother. There is really no problem with backing up the tlog that frequently (as MVJ stated earlier).

-ec |
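[For the two-schedule route, something along these lines should work with SQL Server 2005's Agent procedures. The job and schedule names are made up, and an existing log-backup job is assumed; Agent's daily time windows are expressed as HHMMSS integers:]

```sql
USE msdb;

-- Every 15 minutes between 09:00 and 17:00
EXEC dbo.sp_add_schedule
     @schedule_name        = 'TLog every 15 min (business hours)',
     @freq_type            = 4,        -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,        -- units of minutes
     @freq_subday_interval = 15,
     @active_start_time    = 90000,    -- 09:00:00
     @active_end_time      = 170000;   -- 17:00:00

-- Hourly from 17:00 to midnight (a companion schedule for
-- 00:00-09:00 can be added the same way if needed)
EXEC dbo.sp_add_schedule
     @schedule_name        = 'TLog hourly (off hours)',
     @freq_type            = 4,
     @freq_interval        = 1,
     @freq_subday_type     = 8,        -- units of hours
     @freq_subday_interval = 1,
     @active_start_time    = 170001,
     @active_end_time      = 235959;

-- Attach both schedules to the same log-backup job
EXEC dbo.sp_attach_schedule @job_name = 'Backup TLog - MyDatabase',
                            @schedule_name = 'TLog every 15 min (business hours)';
EXEC dbo.sp_attach_schedule @job_name = 'Backup TLog - MyDatabase',
                            @schedule_name = 'TLog hourly (off hours)';
```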
|
|
Kristen
Test
22859 Posts |
Posted - 2006-07-27 : 05:09:31
|
Not entirely sure I agree with you, EC.

When you have to restore, I think the physical number of files is an issue that is worth considering. Savvy DBAs will have stuff that will just script the whole recovery deal based on MSDB data, or a DIR of filenames, or something similar. Others will have 50% of the scripting stuff and need some manual editing. The rest will be ticking checkboxes in Enterprise Manager ... and the remainder (using my patent-applied-for non-100% arithmetic!) will be having to specify filenames to the data centre for tape recovery before getting to the stage of being able to do one of the previous steps. The more the files, the more the manual effort during a disaster recovery, I reckon.

If you are NOT physically moving the files OFF the server as soon as they are created, little-and-very-often is very little better than once-in-a-while:
 o If the server is toast, both strategies are the same.
 o If the restore is to just-before a user accident (DELETE FROM ...), then you can make one final TLog backup before then doing a restore to point-in-time.
 o If the restore is because of a DBCC CHECKDB error, you may still be able to make the final TLog backup, but the risk of not being able to is considerable.

A more frequent TLog backup will have the positive benefit of keeping the LDF file smaller - particularly during index defrags/rebuilds - a moot point for people with 9-to-5-only scheduling of their TLog backups, and index rebuilds during the night! Either way, I reckon the TLog backup frequency needs to be considered using a right-sizing approach.

Kristen |
|
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2006-07-27 : 09:26:46
|
quote: Originally posted by KristenWhen you have to restore I think the physical number of files is an issue that is worth considering. Savvy DBAs will have stuff that will just script the whole recovery deal based on MSDB data, or DIR of filenames, or something similar.
That is why you need to throw in differential backups now and again. This is also one reason why maintenance plans are evil and you should always roll your own backup script - the maintplan wizard doesn't support differentials.

-ec |
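[The differential itself is just one WITH option. A sketch - the database name and file paths are made up:]

```sql
-- A differential contains everything changed since the last FULL backup,
-- so a restore needs only: full + latest diff + the log backups after it.
BACKUP DATABASE MyDatabase
    TO DISK = 'D:\Backups\MyDatabase_diff.bak'
    WITH DIFFERENTIAL;

-- Restore sequence (NORECOVERY leaves the db ready for further restores):
RESTORE DATABASE MyDatabase FROM DISK = 'D:\Backups\MyDatabase_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDatabase FROM DISK = 'D:\Backups\MyDatabase_diff.bak' WITH NORECOVERY;
RESTORE LOG      MyDatabase FROM DISK = 'D:\Backups\MyDatabase_1815.trn' WITH RECOVERY;
```

The log backups taken before the diff can be skipped entirely at restore time, which is what cuts down the file-shuffling Kristen describes.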
|
|
Kristen
Test
22859 Posts |
Posted - 2006-07-27 : 10:01:38
|
Hmmm ... didn't really think of that - which is a bit pathetic, as we DO differentials. Old age is a terrible thing, eh?!

Kristen |
|
|
Kristen
Test
22859 Posts |
Posted - 2006-07-27 : 10:03:56
|
Further thought: in the ABSENCE of diff backups, the TIME to restore the TLog backups can be very significant. The server has basically got to repeat all the work the users did.

So, have I got this right? If the SQL box is running at an average 50% CPU then it will take 5 minutes to restore a TLog backup which was generated over a 10-minute timespan? Assuming a more modest 10% utilisation, it will take 2.4 hours to restore 24 hours' worth of TLog backups ...

Kristen |
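[The back-of-the-envelope model here is simply restore time = utilisation x original timespan - a naive assumption, since replaying log records is not the same workload as running the original transactions, but the arithmetic checks out:]

```sql
-- Naive model: redo time = average utilisation x original wall-clock span.
SELECT 0.50 * 10 AS minutes_to_restore_a_10min_log;   -- 5.0 minutes
SELECT 0.10 * 24 AS hours_to_restore_24h_of_logs;     -- 2.4 hours
```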
|
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2006-07-27 : 10:39:24
|
quote: Originally posted by Kristen Further thought: in the ABSENCE of diff backups, the TIME to restore the TLog backups can be very significant. The server has basically got to repeat all the work the users did. So, have I got this right? If the SQL box is running at an average 50% CPU then it will take 5 minutes to restore a TLog backup which was generated over a 10-minute timespan? Assuming a more modest 10% utilisation, it will take 2.4 hours to restore 24 hours' worth of TLog backups ... Kristen
It doesn't really matter IMHO. If you have 12 hours of transactions to restore, it will take the same amount of time to restore regardless of the number of individual transaction log backup files you are processing - unless you have a diff.

Also, more frequent tlog backups reduce the amount of data you can potentially lose in case of a catastrophic failure. If you only take tlog backups every 4 hours, you can lose up to 4 hours of data. If your tlog datafile is corrupt and you are unable to complete that last incremental backup, you will not be able to perform a point-in-time recovery into that last 4-hour window.

So, I usually go with fairly frequent tlog backups, interspersed with diffs to make recovery simpler (and faster).

-ec |
|
|
Kristen
Test
22859 Posts |
Posted - 2006-07-27 : 11:03:10
|
Sorry, I think we are saying the same thing: I was alluding to the need for fairly frequent DIFF backups on machines which are under heavy CPU load, otherwise the recovery time from full backup + TLog backups alone is likely to be unacceptable.

Kristen |
|
|
|