
 Maintenance, indexes, fragmentation & file groups


kghammond
Starting Member

4 Posts

Posted - 2006-02-22 : 11:06:23
I am trying to get my head around many of the issues related to maintaining some of our larger databases.

Our largest database is roughly 10 GB. Our largest table has 15,783,725 rows, with a data size of 9.8 GB and an index size of 2.8 GB according to the taskpad view. It has one clustered index and 13 other indexes. The clustered index has a prefix of PK_***. Four of the indexes have a prefix of IX_***, and the remaining nine have a prefix of _WA_Sys_***.
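
As an aside, I gather the _WA_Sys_* entries are usually auto-created column statistics rather than true indexes. Something like this should confirm it (table name is a placeholder for our real one):

```sql
-- Sanity check: _WA_Sys_* entries in sysindexes are usually
-- auto-created column statistics, not real indexes.
-- 'MyBigTable' is a placeholder name.
SELECT name,
       INDEXPROPERTY(id, name, 'IsStatistics')     AS is_statistics,
       INDEXPROPERTY(id, name, 'IsAutoStatistics') AS is_auto_statistics
FROM sysindexes
WHERE id = OBJECT_ID('MyBigTable')
  AND indid BETWEEN 1 AND 254
```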

I have been reading a lot about reindexing, checkdb, shrinkdb, etc.

First question, my current rough idea for our new maintenance plan looks like this:
Daily:
1) checkdb
2) backup db
3) backup tlog (tlog backups will run throughout the day as well)

Weekly:
1) checkdb
2) backupdb
3) shrinkfile db
4) backup tlog
5) shrinkfile tlog
6) file system defrag
7) reindex (not an indexdefrag)
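
For reference, the checkdb/backup steps above map to T-SQL along these lines (database name and backup paths are placeholders, not our real ones):

```sql
-- Sketch of the daily steps; all names and paths are placeholders.
DBCC CHECKDB ('MyDatabase')

BACKUP DATABASE MyDatabase
    TO DISK = 'E:\Backups\MyDatabase.bak'
    WITH INIT

-- Repeated throughout the day on its own schedule:
BACKUP LOG MyDatabase
    TO DISK = 'E:\Backups\MyDatabase_log.trn'
    WITH INIT
```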

I keep reading that shrinking the database files is wasted I/O, but we are renting SAN space, so we need to be as efficient as possible with disk space. We plan to shrink the db before the tlog backup so that we can reclaim the tlog space generated while shrinking the db.

I have read that putting your indexes into a separate filegroup can reduce fragmentation and thus improve performance. Given the size of our indexes, this seems to make sense to me. Should I move all indexes, clustered and nonclustered, to the separate filegroup?

Also, someone previous to me had created a second db file for this database, but they are both in the same file group. Should I leave that as is, or do I need to do something about the second file?

Is there anything I am missing? I think I have caught all the main points for performing regular maintenance. I have already collected, created and modified a set of stored procs to do most of these tasks, so it will just be a matter of scripting a few jobs.

Thank you,
Kevin

Kristen
Test

22859 Posts

Posted - 2006-02-22 : 13:17:49
3) shrinkfile db
5) shrinkfile tlog

No no no ... Please not!

7) reindex (not an indexdefrag)

Don't see any point to that. DEFRAG will probably do just fine; REINDEX is going to move the whole index around the MDF file, and probably make life different, if not worse!

I would defrag every night (assuming you have a quiet period), I doubt it will take very long if you restrict it to only tables which have, say, >20% fragmentation
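
You can get the fragmentation figure from DBCC SHOWCONTIG - a sketch, with placeholder database/table names and the clustered index (indid 1) as the example:

```sql
-- "Logical Scan Fragmentation" in the output is the % to watch.
-- 'MyDatabase' / 'MyBigTable' are placeholders; 1 = clustered index id.
DBCC SHOWCONTIG ('MyBigTable', 1) WITH TABLERESULTS

-- If it comes back above ~20%, defrag it in place:
DBCC INDEXDEFRAG ('MyDatabase', 'MyBigTable', 1)
```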

"I keep reading that shrinking the database files is wasted I/O"

It's not wasted I/O as such, but there is a truly massive cost to re-acquiring the space, and it will most probably fragment the files too (if you need more details of the real-world impact this has, pls ask). If you do some massive one-time deletion of stale data, then fine. But if you shrink a 5GB log to, say, 1GB during the Sunday backup and by Monday evening it's back at 4.5GB, there is no point. If the system needs 5GB of log space to function, then let it have it!
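
If you want evidence of whether the log really stays small, watch the percentage in use over the week before deciding any shrink is worthwhile:

```sql
-- Reports log file size and percent-used for every database.
DBCC SQLPERF (LOGSPACE)
```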

I don't know enough to answer your filegroup questions ...

"Is there anything I am missing?"

Whilst you are taking care of performance etc. with housekeeping it would be good to double check that the hardware is sufficient:

Does your hardware already have at least 3 disk channels (Data, TLogs and "Other"), TLogs configured for mirrored disks, and LOTS of drives in the RAID array for Data? (I can never remember what the magic numbers are for the RAID types - I'm a software jock, not a hardware junkie!)

A decent amount of RAM would be a good idea too.

Kristen

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2006-02-22 : 13:46:47
Kristen has covered most of what I'd have posted, but I just want to re-emphasize what a waste shrinking is. Never shrink your database files unless you know for certain that you no longer need that space. There is a huge performance penalty if you shrink the file and the system then needs extra space: it takes quite a bit of time to expand the file, and the shrink itself causes performance problems while it is running. And never use the autoshrink option - but you didn't mention that anyway. I don't even have that option turned on in development.
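
To double-check that autoshrink is off (database name is a placeholder):

```sql
-- Returns 1 if autoshrink is on, 0 if off; then turn it off if needed.
SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoShrink')

ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF
```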

Here's a great MS article on why, for a database size such as yours, you can get away with DBCC INDEXDEFRAG and not use DBCC DBREINDEX:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx

And yes, putting your indexes into their own filegroup is a good idea. We have done this for one of our databases, and probably should do it for the others as well.
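
Moving a nonclustered index is just a rebuild onto the new filegroup. Roughly like this - every name and path here is a placeholder, and the index column list is invented for illustration:

```sql
-- Add a filegroup plus a file for it, then rebuild an index onto it.
-- All names, paths and columns are placeholders.
ALTER DATABASE MyDatabase ADD FILEGROUP Indexes

ALTER DATABASE MyDatabase
    ADD FILE (NAME = MyDatabase_ix,
              FILENAME = 'F:\Data\MyDatabase_ix.ndf',
              SIZE = 1GB, FILEGROWTH = 100MB)
    TO FILEGROUP Indexes

CREATE INDEX IX_MyBigTable_OrgDate
    ON MyBigTable (org_id, entry_date)
    WITH DROP_EXISTING
    ON Indexes
```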

Here are my database maintenance routines if you don't already have a stored procedure for one of the tasks:
http://weblogs.sqlteam.com/tarad/archive/2004/07/02/1705.aspx

Tara Kizer
aka tduggan

kghammond
Starting Member

4 Posts

Posted - 2006-02-22 : 14:41:36
OK, I have been sold on the no shrinking. We were originally concerned because both our db and tlog have been as much as 50% larger than what is currently in our db and tlog. For example, our two files are broken down like this:

Size     In Use   Free
----------------------
2.7GB    1.5GB    1.2GB
10.4GB   5.5GB    4.9GB

If I am understanding reindexing properly and we have a 2.9 GB index on one table, then it will use much of that free space during reindexing.

Our updated maintenance plan looks like this:
Daily:
1) checkdb
2) backup db
3) backup tlog (tlog backups will run throughout the day as well)

Weekly:
1) checkdb
2) backup db
3) backup tlog
4) file system defrag
5) reindex (not an indexdefrag, using a stored proc >20% or so)

One note about our database usage: we have relatively few writes and relatively few new rows added on a daily basis. This database is primarily used for many SELECTs, and our primary table is usually joined to other tables in those SELECTs.

I don't know if it is possible to physically order our data so that common rows are contiguous within the files. Our data is generally time-dependent and tied to different organizations, so a SELECT statement will generally read all rows that are within the last four months and belong to organization x.

So I think the goal of defragmentation in our case is to keep the index contiguous, to make reads of the index faster and more efficient. I think I am understanding this correctly.

Back to the file groups questions:
1) Moving the indexes to a separate file group will improve fragmentation issues and thus make index access more efficient, correct?

2) Since we have two files existing but in the same file group, I can leave these alone for now even though they are different sizes?

As far as physical server specs and disk I/O, we plan to migrate this to a cluster for HA, so we are placing all db's, tlog's and tempdb on a SAN volume, since they will need to reside on the SAN for the cluster anyway. This way we can work out any SAN related I/O issues prior to moving to our cluster.

Running an OS level defrag is not as necessary if we don't shrink the db files, but it also won't hurt because it should not move the files at all if they have not grown or shrunk.

Thank you Tara for the link to your stored procs. I had already been looking at them earlier from a different topic I was reading.

Now I also need to do more reading into indexdefrag vs. reindex.

Kristen
Test

22859 Posts

Posted - 2006-02-23 : 05:08:00
"If I am understanding reindexing properly and we have a 2.9 GB index on one table, then it will use much of that free space during reindexing"

For this approach you will "carry" 2.9GB of slack in your database. It will take that space to rebuild the index, and then it will free up the original space. That free space will get used up (partially at least!) before the next reindex, so the next one may have to extend the DB by a further 2.5GB to get contiguous space (I have no idea if that's how it works - someone could no doubt enlighten me!)

DEFRAG works differently (not sure why you are not keen on it? EDIT: From your final point you may not be!) - it reorganises the data in-situ. It does not block access whilst it's doing this, and can therefore be aborted (e.g. if it is running for too long) - whereas if you abort a REINDEX you have to wait for the ROLLBACK, which could take as long as the REINDEX itself has been running at that point in time!

"I don't know if it is possible to physically order out data so that common rows are contiguous within the files"

A clustered index will maintain the data in the order of the clustered index - if that helps you?
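
e.g. for your "last four months for organization x" queries, something like this (column names invented, purely to show the shape of it):

```sql
-- Hypothetical sketch: cluster on the columns the common SELECTs
-- filter by, so the rows they read end up physically adjacent.
-- Column and index names are made up.
CREATE CLUSTERED INDEX CIX_MyBigTable_OrgDate
    ON MyBigTable (org_id, entry_date)
    WITH DROP_EXISTING
```

Bear in mind that rebuilding the clustered index rebuilds all the nonclustered indexes too, so on a 15-million-row table that is not a small operation!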

"Running an OS level defrag is not as necessary if we don't shrink the db files, but it also won't hurt because it should not move the files at all if they have not grown or shrunk"

You might want to have a look at what the files are set to extend by, particularly if it's set to a percentage. IME extending by more than about 100MB at a time cripples our servers if they happen to be busy at the time (so then lots of timeouts start happening and the problem gets worse as web users hit RETRY and so on). And if you are set to a small-ish size, and extensions happen often, then defragging the physical files will be of benefit - periodically at least. We only ever do ours during scheduled downtime, but if your servers are not 24/7 then it may be part of your scheduled maintenance.
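
You can see and fix the growth settings along these lines (database and logical file names are placeholders):

```sql
-- sysfiles shows the growth setting per file (in 8KB pages, or as a
-- percentage if that status bit is set). Names here are placeholders.
USE MyDatabase
SELECT name, size, growth, status FROM sysfiles

-- Change a percentage growth to a sane fixed size:
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_data, FILEGROWTH = 100MB)
```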

Kristen
   
