
 Best Practice on Large Databases


rikleo2001
Posting Yak Master

185 Posts

Posted - 2007-10-10 : 17:01:24
Hello Experts,

I need your expert advice.
I am managing a SQL Server 2000 database which is up to 40 GB now. Here is what I am doing to maintain it.

1. Taking a full backup every night
2. Taking a differential backup 3 times during the day (don't need TLog backups)
3. Weekly re-indexing of all indexes
4. Weekly index defragmentation after re-indexing
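The backup part of the routine above boils down to something like this on SQL Server 2000 (a sketch; the database name and backup paths are placeholders):

```sql
-- Nightly full backup (YourDB and the paths are placeholders)
BACKUP DATABASE YourDB
    TO DISK = 'E:\Backups\YourDB_Full.bak'
    WITH INIT

-- Differential backup, run three times during the day
BACKUP DATABASE YourDB
    TO DISK = 'E:\Backups\YourDB_Diff.bak'
    WITH DIFFERENTIAL, INIT
```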

That's all.

Is this enough to successfully manage a large database, or are there further steps I should take to make it more robust?

Many Thanks in advance



SKR

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-10-10 : 17:15:30
You do not need to run a defragmentation right after you have reindexed. Reindexing takes care of it.

Do you have an update statistics job? We run the reindex on Sundays and then update statistics Monday-Saturday.
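On SQL Server 2000, an update-statistics job can be as simple as the following (a sketch; the table name is a placeholder):

```sql
-- Update statistics for all user tables in the current database
EXEC sp_updatestats

-- Or target a specific large table with a full scan
UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN
```

Schedule it through SQL Server Agent on the days you are not reindexing.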

40GB is not considered a large database. It's probably in the medium realm.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

rikleo2001
Posting Yak Master

185 Posts

Posted - 2007-10-10 : 17:22:00
Thanks Tara.
I will get rid of the defragmentation process.
I don't have an update statistics job, but my database is set to the Simple recovery model and Auto Update Statistics is on. Is this enough, or do I need to run it as a separate process?

Many Thanks

SKR

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-10-10 : 17:26:10
Auto update stats isn't enough according to articles that I have read (specifically over at sql-server-performance.com). The recommendation is to also have a job that runs it manually.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

rikleo2001
Posting Yak Master

185 Posts

Posted - 2007-10-10 : 17:38:21
Thanks Tara. I have read that article now and will schedule the job soon.

Thank you so much for your help.

SKR

Kristen
Test

22859 Posts

Posted - 2007-10-11 : 00:56:37
"2. Taking Differential Backup 3 Times during a day (Don't need TLog Backup)"

If you don't need a TLog backup why do you need 3 x Diff backups a day?

Sounds like you have some sort of "recover to a point in time" requirement, in which case you would be much better off with TLog backups every 10 minutes; that should give you a much greater chance of recovery after a disaster.
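A scheduled TLog backup is a one-liner (a sketch; it assumes the database is in the Full or Bulk-Logged recovery model, and the name and path are placeholders):

```sql
-- Run every 10 minutes from a SQL Server Agent job
BACKUP LOG YourDB
    TO DISK = 'E:\Backups\YourDB_Log.trn'
    WITH NOINIT
```

Note this will not work under the Simple recovery model; you would need to switch to Full first.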

With a 40GB database you might find that a full re-index of your big tables causes the database files to grow, leaving lots of free space in the files, which in turn leads to long restore times under SQL 2000. You could look at de-fragmenting large indexes rather than re-indexing them.

We Re-index / De-frag every night, not just weekends, but only for tables which are fragmented (using DBCC SHOWCONTIG).
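On SQL 2000 that check-then-defrag approach looks roughly like this (a sketch; the table and index names are placeholders, and the fragmentation threshold is a judgment call):

```sql
-- Check fragmentation for a table (look at Logical Scan Fragmentation)
DBCC SHOWCONTIG ('dbo.YourLargeTable') WITH FAST

-- If fragmented, defragment the index online
-- (much lighter than DBCC DBREINDEX, and it keeps the log smaller per transaction)
DBCC INDEXDEFRAG (YourDB, 'dbo.YourLargeTable', 'IX_YourIndex')
```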

Tara has some good routines in her blog (here at SQL Team)

Kristen

sachinsamuel
Constraint Violating Yak Guru

383 Posts

Posted - 2007-10-11 : 01:30:21
Adding a few more points to the experts' comments.

1) Carefully monitor free space on your database server if space is a bottleneck for you. Keep some buffer space: when you defragment your indexes, the SQL Server log file grows, and you might run out of space if you have little to spare.

2) Again, as Kristen mentioned, create jobs or change your maintenance plan to take transaction log backups. This will not only allow point-in-time recovery but will also keep the database log size under control.

3) Instead of defragmenting the entire database, you can defragment only the transaction-intensive tables. This will save time.

4) I recommend taking a transaction log backup every 10 minutes instead of a differential backup three times a day. This avoids the locks and high server resource usage that the 2nd and 3rd differential backups (which will have a lot of changes to back up) can cause during business hours.

5) Configure mail/page/SMS alerts on your database server, so that if a database backup fails you can attend to it ASAP.
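For point 5, SQL Server 2000's Agent can notify an operator when a backup job fails; a sketch (the operator name, e-mail address, and job name are placeholders, and SQL Mail must already be configured):

```sql
-- Create an operator to receive alerts
EXEC msdb.dbo.sp_add_operator
    @name = 'DBA_OnCall',
    @email_address = 'dba@example.com'

-- Notify that operator by e-mail when the backup job fails
EXEC msdb.dbo.sp_update_job
    @job_name = 'Full Backup - YourDB',
    @notify_level_email = 2,  -- 2 = on failure
    @notify_email_operator_name = 'DBA_OnCall'
```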

regards
Sachin

Don't sit back because of failure. It will come back to check if you are still available. -- Binu
   
