kpsreenath
Yak Posting Veteran
52 Posts
Posted - 2006-02-12 : 15:13:17
Hello,

I need to review the current maintenance plans for all the production servers. I am putting down the details of one of the big databases on one server.

1) High activity, 24x7 support
It is in full recovery mode with a daily full backup to tape (BrightStor) and hourly transaction log backups.
Data file size - 50 GB
Log file size - ~2 GB

What would be the general approach to building a maintenance plan for a database like this? To be more specific, how often should I run CHECKDB, reindex, and update statistics? Are there any other things I need to be aware of beyond reindexing, updating statistics, CHECKDB, etc.?

Thanks in advance

Sreenath
Kristen
Test
22859 Posts
Posted - 2006-02-13 : 10:57:10
There are a few articles linked from:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55210&SearchTerms=Maintenance

I would NOT back up direct to tape (if that is what you are doing); I would back up to disk and then from disk to tape.

For a "High activity 24x7" database [unless it's high activity READING only!] I would reduce the TLog backups to every 10~15 minutes.

I think you should be looking at using DBCC INDEXDEFRAG once a day (ideally you have a "quiet-ish period") on tables where the fragmentation is getting a bit wild. We use REINDEX on very small tables ONLY, DEFRAG on the bigger ones (see the T-SQL sketch after this post).

You need to be careful about how you delete "stale" data. We delete in excess of 8,000,000 rows a day. That is done in a loop that deletes them in small chunks and then waits a bit before doing the next chunk - it auto-adjusts the number of rows depending on how busy the server is (a simplified sketch follows this post).

We do a full UPDATE STATS every night using a scan of the full table, not just a sample (might not be realistic for 50GB though).

Once a day we automatically RECOMPILE sprocs whose average performance [elapsed time] for the day has worsened by more than 50% against the "long term average" execution time.

You should restore all backups to another machine and run CHECKDB on it - that proves that your backups are in good enough shape to be capable of being restored! You should do this starting from your tape backups to prove that they can be restored too, but "log shipping" from your live server will give you a standby server, which is probably sensible for business continuity, so then the backup-tape-restore test is only needed as a fire-drill for full-blown disaster recovery (a sketch of the restore-and-check step follows this post).

I think you should physically defragment the database's physical files periodically (i.e. when you have a scheduled maintenance window to take the database offline for a while). The Sysinternals folks have a utility for single-file defrag.

You should watch how much the LDF/MDF files are getting extended, set the amount they extend by to sensible values, and preferably autoMagically extend the files when they are close to their current limit during the "quiet time" - files extending, and some statistics recalculating, during a busy period put an enormous strain on the system [i.e. resources being unavailable for long enough that things start timing out] in my personal experience.

You should have things tracking the size of files and predicting when they will be full (plus alerting on sudden file-size changes!).

Kristen
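A hedged sketch of the DEFRAG-versus-REINDEX and nightly statistics points above, using SQL Server 2000-era commands; dbo.Orders, PK_Orders, the fill factor and the "once a day" choice are illustrative assumptions, not values from the post:

-- How fragmented is the clustered index? (object names are illustrative)
DBCC SHOWCONTIG ('dbo.Orders', 'PK_Orders') WITH TABLERESULTS

-- Bigger tables: defragment in place, which stays mostly online and does not rebuild the index
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'PK_Orders')

-- Very small tables, or inside a maintenance window: full rebuild (90 = assumed fill factor)
DBCC DBREINDEX ('dbo.Orders', 'PK_Orders', 90)

-- Nightly statistics refresh using a full scan rather than a sample
UPDATE STATISTICS dbo.Orders WITH FULLSCAN

On SQL Server 2005 the same decisions would more likely be driven from sys.dm_db_index_physical_stats, but the split between "defrag the big ones, rebuild the small ones" is the same.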
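A minimal T-SQL sketch of the chunked-delete idea, assuming a hypothetical table dbo.AuditLog with a CreatedDate column and a 90-day retention window; the fixed batch size and delay stand in for the auto-adjusting logic Kristen describes:

-- Purge stale rows in small batches so locks are held only briefly
SET ROWCOUNT 5000                         -- batch size (assumption; tune for your load)

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.AuditLog              -- hypothetical table
    WHERE CreatedDate < DATEADD(day, -90, GETDATE())   -- hypothetical 90-day retention

    IF @@ROWCOUNT = 0 BREAK               -- nothing left to purge

    WAITFOR DELAY '00:00:05'              -- pause so normal work gets a look-in
END

SET ROWCOUNT 0                            -- always reset

SET ROWCOUNT works on SQL Server 2000 and 2005; on 2005 you could use DELETE TOP (n) instead.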
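A sketch of the disk-first backup and the restore-plus-CHECKDB verification step; the database name BigDB, the file paths and the logical file names are assumptions to replace with your own:

-- Full and log backups go to disk; the BrightStor job then sweeps the files to tape
BACKUP DATABASE BigDB
    TO DISK = 'D:\SQLBackup\BigDB_full.bak'
    WITH INIT

BACKUP LOG BigDB
    TO DISK = 'D:\SQLBackup\BigDB_log.trn'
    WITH INIT

-- On the test/standby server: prove the backup actually restores, then check it
RESTORE DATABASE BigDB_Verify
    FROM DISK = 'D:\SQLBackup\BigDB_full.bak'
    WITH MOVE 'BigDB_Data' TO 'E:\SQLData\BigDB_Verify.mdf',
         MOVE 'BigDB_Log'  TO 'F:\SQLLog\BigDB_Verify.ldf',
         REPLACE

DBCC CHECKDB ('BigDB_Verify')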
kpsreenath
Yak Posting Veteran
52 Posts
Posted - 2006-02-13 : 16:31:16
Kristen, thanks a lot... this is really helpful, I appreciate it. Again, thanks a lot.

Sreenath