Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
steph2004
Starting Member
3 Posts

Posted - 2009-06-03 : 13:49:36
Hi,

I have a log table with around 100 million records. The logs cover the last year of web activity. If I query a few days, a week, or even a month, the result set is relatively small, a million rows at most, and the query is pretty fast. The query looks like this:

    select count(distinct fk_tbl_visits_id), count(*), sum(nbPages),
           DATEADD(dd, DATEDIFF(dd, 0, dateadd(hh, 0, date)), 0)
    from tbl_logs
    where date between '2009-05-18' and '2009-05-25'
    group by DATEADD(dd, DATEDIFF(dd, 0, dateadd(hh, 0, date)), 0)
    order by count(*) desc

But if I run this query over six months or even the full year, the result set is several million rows and the query is really slow. The execution plan is the same, with the same indexes used, and so on. What slows the query down are the aggregate functions: count(*), sum, and even the group by.

So, is there any trick to speed up group by or aggregate functions over several million rows?

Thanks for any tips,
Stephane
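[Editor's note: the `DATEADD(dd, DATEDIFF(dd, 0, date), 0)` expression in the query truncates a datetime to midnight of its day, so the query is grouping hits per day. One common trick for this pattern on SQL Server 2005+ is to pre-aggregate per (day, visit) with an indexed view, so the 100-million-row scan is paid once at index build time. The sketch below is an illustration, not the poster's code: the view and index names are hypothetical, and it assumes `nbPages` is non-nullable (indexed views disallow `SUM` over a nullable column) and that `count(distinct ...)` can be recovered by counting the one-row-per-visit-per-day view rows.]

    -- Hypothetical indexed view: one row per (day, visit),
    -- maintained automatically as tbl_logs changes.
    CREATE VIEW dbo.vw_DailyVisitStats
    WITH SCHEMABINDING
    AS
    SELECT
        DATEADD(dd, DATEDIFF(dd, 0, date), 0) AS logDay,
        fk_tbl_visits_id,
        COUNT_BIG(*) AS nbRows,      -- COUNT_BIG(*) is required in indexed views
        SUM(nbPages) AS nbPages
    FROM dbo.tbl_logs
    GROUP BY DATEADD(dd, DATEDIFF(dd, 0, date), 0), fk_tbl_visits_id;
    GO
    -- Materializes the view; this is the one expensive scan.
    CREATE UNIQUE CLUSTERED INDEX IX_vw_DailyVisitStats
        ON dbo.vw_DailyVisitStats (logDay, fk_tbl_visits_id);
    GO
    -- The yearly report now aggregates the much smaller view.
    -- NOEXPAND forces use of the view's index (needed on Standard Edition).
    SELECT
        logDay,
        COUNT(*)     AS nbDistinctVisits,  -- one view row per visit per day
        SUM(nbRows)  AS nbHits,
        SUM(nbPages) AS nbPages
    FROM dbo.vw_DailyVisitStats WITH (NOEXPAND)
    GROUP BY logDay
    ORDER BY nbHits DESC;

The trade-off is extra write cost on every insert into `tbl_logs`; if the log table is insert-heavy, a nightly-refreshed summary table achieves the same effect without the per-insert overhead.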
TG
Master Smack Fu Yak Hacker
6065 Posts

Posted - 2009-06-03 : 14:14:18
>> and the query is really slow.

How slow? Your full-year query is aggregating 100 million rows; that is a lot of work, especially with count(distinct ...). Even sorting several million rows by the aggregate result is going to be expensive.

Be One with the Optimizer
TG