Vinnie881
Master Smack Fu Yak Hacker
1231 Posts
Posted - 2007-05-11 : 01:24:57
I have an application that stores invoices in a specific table, TAR_Invoice. Most accounting packages I have seen move closed invoices to a history table, TAR_InvoiceHistory or something similar. Is there any performance benefit to moving the records? (It would basically just be deleting the records from TAR_Invoice and inserting them into TAR_InvoiceHistory.) It would keep the number of records down, but since I am just deleting the data, and not actually shrinking the DB, am I going to gain any performance from the fewer records in TAR_Invoice? What's everyone's thoughts on the best practice?
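For concreteness, the kind of move being described might look like the sketch below - assuming both tables share the same columns, and assuming a hypothetical ClosedDate column marks closed invoices (the column name is illustrative, not from the original schema):

```sql
-- Move closed invoices to the history table inside one transaction,
-- so the insert and delete succeed or fail together.
BEGIN TRANSACTION;

INSERT INTO TAR_InvoiceHistory
SELECT *
FROM TAR_Invoice
WHERE ClosedDate IS NOT NULL;

DELETE FROM TAR_Invoice
WHERE ClosedDate IS NOT NULL;

COMMIT TRANSACTION;
```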
harsh_athalye
Master Smack Fu Yak Hacker
5581 Posts
Posted - 2007-05-11 : 02:00:10
The scope of a search in SQL Server is definitely narrowed down by having fewer records. Besides, if you use the simple recovery model, or the full recovery model with regular backups, you should not worry about shrinking the database.

Harsh Athalye
India.
"The IMPOSSIBLE is often UNTRIED"
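As an aside, under the full recovery model it is the routine transaction-log backup - not a shrink - that lets log space be marked reusable. A minimal sketch, with a hypothetical database name and backup path:

```sql
-- Under FULL recovery, regular log backups truncate the inactive
-- portion of the log so the space can be reused instead of the
-- LDF file growing without bound.
BACKUP LOG AccountingDB
TO DISK = 'D:\Backups\AccountingDB_log.trn';
```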
Vinnie881
Master Smack Fu Yak Hacker
1231 Posts
Posted - 2007-05-11 : 02:50:24
I was under the impression that when there are a large number of deletes going on, I needed to run the DBCC SHRINKDATABASE command. Is this not the case? I am using full recovery with daily backups, but I did not know that would allow me not to worry about shrinking the DB. Can you please clarify a little more? Thanks
harsh_athalye
Master Smack Fu Yak Hacker
5581 Posts
Posted - 2007-05-11 : 03:06:43
Vinnie, the general idea is that when you run DBCC SHRINKDATABASE to reclaim free space, SQL Server will have to allocate new pages the next time you want to insert data into the table, and allocating a page from disk is a very costly operation. Whereas if you don't shrink the database, it will simply reuse the free space left behind the next time it needs to insert data.

Harsh Athalye
India.
"The IMPOSSIBLE is often UNTRIED"
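For reference, this is the command under discussion (the database name is a placeholder). As described above, running it routinely mostly guarantees costly page allocations and file regrowth on subsequent inserts:

```sql
-- Shown for reference only: reclaims free space in the files,
-- but the database will likely just have to grow again later.
-- The second argument is the target percentage of free space
-- to leave after the shrink.
DBCC SHRINKDATABASE (AccountingDB, 10);
```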
Kristen
Test
22859 Posts
Posted - 2007-05-11 : 04:44:59
Try not to shrink the database. When it regrows, it a) is a resource-intensive operation and b) fragments the file.

If you "move" records from "Current" to "History" you will create data in the log file; this may cause that file to extend (but it will settle at a "high water mark"). If you use the full recovery model this data will also be in the T-log backups, but there are no other issues if you are on the simple recovery model.

If you are moving HUGE numbers of rows you may need more sophisticated handling - e.g. a loop processing the data in "batches" of a sensible size, while making sure that T-log backups happen VERY frequently during the process to reduce the impact of size growth on the LDF file. This all adds complexity to the process ... but if it's only a few thousand rows each time I don't think you have anything to worry about, or code against.

Personally I would move the stale stuff to a History table. Performance on the Current table will be constant and predictable, whereas the History table will degrade over time - and that sounds acceptable to me. You can create a VIEW as a UNION ALL of both the Current and History tables for occasions where you need to search both (and if you are careful to abide by the rules for horizontal partitioning you should be able to ALWAYS use the view for queries, as SQL Server will only touch whichever of the two tables the query ACTUALLY references).

Kristen
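The batched approach and the combined view described above might be sketched as follows - assuming SQL Server 2005 or later (for DELETE TOP ... OUTPUT), matching column lists in both tables, and a hypothetical ClosedDate column; names and the batch size are illustrative:

```sql
-- Batched move: DELETE ... OUTPUT moves each batch of rows
-- atomically, so the insert and delete cannot drift apart.
-- Processing in bounded batches keeps each transaction small,
-- letting frequent log backups reclaim space between batches.
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM TAR_Invoice
    OUTPUT deleted.* INTO TAR_InvoiceHistory
    WHERE ClosedDate IS NOT NULL;

    SET @rows = @@ROWCOUNT;
    -- Run T-log backups frequently while this loop is active.
END;
GO

-- Combined view over Current and History for searches spanning both:
CREATE VIEW TAR_InvoiceAll
AS
SELECT * FROM TAR_Invoice
UNION ALL
SELECT * FROM TAR_InvoiceHistory;
```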
harsh_athalye
Master Smack Fu Yak Hacker
5581 Posts
Posted - 2007-05-11 : 07:19:58
Not to forget: if you are using SQL 2005, it has a pretty sophisticated table partitioning feature, which was not the case with SQL 2000.

Harsh Athalye
India.
"The IMPOSSIBLE is often UNTRIED"
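A brief sketch of the SQL Server 2005 feature being referred to - all names, columns, and boundary dates are illustrative, not from the original thread:

```sql
-- Partition invoices by year: rows are placed into ranges defined
-- by the partition function, mapped to filegroups by the scheme.
CREATE PARTITION FUNCTION pfInvoiceYear (datetime)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01');

CREATE PARTITION SCHEME psInvoiceYear
AS PARTITION pfInvoiceYear ALL TO ([PRIMARY]);

CREATE TABLE TAR_InvoicePartitioned
(
    InvoiceID   int      NOT NULL,
    InvoiceDate datetime NOT NULL
)
ON psInvoiceYear (InvoiceDate);
```

This keeps current and historical rows in one logical table while SQL Server touches only the partitions a query actually needs, which removes the manual move-to-history step.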
pootle_flump
1064 Posts
Posted - 2007-05-11 : 07:37:46
The shrink also introduces fragmentation.