johnsql
Posting Yak Master
161 Posts
Posted - 2006-12-09 : 09:36:26
Hi, I have a question about database performance. Over time, rows are inserted into a table and the table becomes bigger and bigger. So if we run a query such as SELECT or UPDATE, does the processing time get worse because the table is bigger than before? Selecting rows must take more time when there is a larger population of rows to filter through. If so, what are the best solutions to this problem?

Thanks

Johnsql
jezemine
Master Smack Fu Yak Hacker
2886 Posts
Posted - 2006-12-09 : 09:58:27
Beyond a certain size, the bigger the table, the worse performance will generally become. You need to give a lot more thought to indexes, etc., if your table has 100m rows compared to 100 rows. And if you are reduced to a table scan (for a whole-table aggregate over some column, for example), be prepared to wait (hope you have fast disks!).

The biggest table I have ever had to work with had 2b rows, and it was a real pig. It wasn't a transactional table; rather, it was the result of a large calculation, so I could index the heck out of it. Still, things slow down when you are working with 2b rows.

Do you really need all that data in there? And how many rows are we realistically talking about here? Perhaps archive off what you don't need in real time.

http://www.elsasoft.org
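[For what it's worth, here is a minimal T-SQL sketch of the archive-off idea. All table and column names are hypothetical, and in a real system you would likely batch the DELETE to keep the transaction log manageable:

    BEGIN TRANSACTION;

    -- Copy rows older than the cutoff into an archive table of the same shape
    -- (dbo.Orders / dbo.OrdersArchive are made-up names for illustration)
    INSERT INTO dbo.OrdersArchive (OrderID, CustomerID, OrderDate, Amount)
    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM dbo.Orders
    WHERE OrderDate < '20050101';

    -- Then remove those rows from the live table
    DELETE FROM dbo.Orders
    WHERE OrderDate < '20050101';

    COMMIT TRANSACTION;
]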
Kristen
Test
22859 Posts
Posted - 2006-12-10 : 01:18:33
Check the query plan of your queries and make sure that you do not have any unexpected table scans - e.g. you forgot to index something important, but it ran really fast with the 10 rows in the test system!

Other than that, don't worry about it until tables get to 100,000s of rows. Even then you probably don't have to worry until tables get to 1,000,000s or even 10,000,000s of rows - if you got Step 1 correct! But as they grow it is worth checking the query plans again to make sure that none of them are going off the rails.

We check for slow-running queries, and we do this in two ways. First, we look at the queries with the slowest average run time. Some of these are massive sprocs that take 10 minutes to run anyway, so they aren't much help.

Second, we check total elapsed time per day alongside average execution time. This leaves out a sproc that takes 10 minutes but only runs once an hour, in favour of the ones that run 200,000 times a day for 500ms each - any optimisation we can make to those gives us the biggest bang for buck.

You'll also need to start worrying about good housekeeping at that point - index rebuild / defrag, update statistics - or buy faster hardware!!

Kristen
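[To make the above concrete: you can see a plan without running the query via SET SHOWPLAN_TEXT ON (or the estimated execution plan in Management Studio), and on SQL Server 2005 and later you can query the plan cache directly for the elapsed-time checks Kristen describes. A rough sketch - the TOP count is arbitrary, and the DMV reports times in microseconds, so they are converted to milliseconds here:

    -- Rank cached statements by total elapsed time (SQL Server 2005+).
    -- Counters reset when the server restarts or a plan is evicted.
    SELECT TOP 20
        qs.execution_count,
        qs.total_elapsed_time / 1000 AS total_elapsed_ms,
        qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
        st.text AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;

For the housekeeping side, ALTER INDEX ALL ON <table> REBUILD and UPDATE STATISTICS <table> cover the rebuild/defrag and statistics parts on SQL 2005 (DBCC DBREINDEX and DBCC INDEXDEFRAG are the SQL 2000 equivalents).]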