I am getting mails in the morning saying that a job ran for quite long yesterday. For example, a job that used to run for 30 minutes took around 2 hours. I don't see any specific issues with the databases mentioned. There are optimization jobs (reindexing and update statistics), backup jobs, etc., which all ran successfully, so from this I understand there is no problem as such with the database. I checked the server CPU, memory, etc., which are all at acceptable levels. The other thing I can think of is that while the job ran, other concurrent jobs might have held locks on some of the tables it needed, which would have made it run longer than expected. But the next morning, after the fact, how do I analyze this? Is there anything I can infer from the logs? How do I go about investigating issues like this, and what factors should I consider? Thanks in advance, Sreenath
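One place to start the next morning is the run history that SQL Server Agent already keeps in msdb. A minimal sketch for comparing recent run durations of one job (the job name 'NightlyETL' is a hypothetical placeholder; substitute your own):

```sql
-- Compare recent run durations for one Agent job from msdb's history tables.
-- 'NightlyETL' is a hypothetical job name; substitute your own.
SELECT j.name,
       h.run_date,      -- YYYYMMDD as an integer
       h.run_time,      -- HHMMSS as an integer
       h.run_duration,  -- HHMMSS as an integer, e.g. 13000 = 1 hr 30 min
       h.run_status     -- 1 = succeeded
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE j.name = 'NightlyETL'
  AND h.step_id = 0      -- step 0 is the overall job-outcome row
ORDER BY h.run_date DESC, h.run_time DESC;
```

Dropping the `step_id = 0` filter shows per-step durations, which can narrow a 2-hour run down to the one step that actually blew up.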
It's possible that the "update statistics" job changed the execution plan to one which doesn't perform as well. Post the current execution plan for the slow job's queries. Otherwise you will have to engage Profiler for the next run and log all system activity during that time, in the hope of finding the needle in the haystack.
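If the suspicion is specifically blocking, one way to avoid logging *everything* in Profiler is to have the server raise a "blocked process report" event only when a request waits on a lock beyond a threshold, and capture just that event class. A sketch of enabling it (the 20-second threshold is an arbitrary example value):

```sql
-- Raise a "blocked process report" whenever any request waits on a lock
-- longer than 20 seconds. Capture the event in Profiler under
-- Errors and Warnings > Blocked process report during the next run.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 20;  -- seconds; 0 disables
RECONFIGURE;
```

The report is XML that names both the blocked and the blocking session, including the statements involved, so the next morning you can see exactly who was holding the locks.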
...Because the 'old statistics' said there were 10 rows in a table, and the 'new statistics' say there are now 10 million rows in that table, the execution plans for queries against those two tables would (or at least could) differ.
That's the whole purpose of the statistics: to give a reasonably current viewpoint of the state of the nation within the table, to allow the SQL engine to perform at its optimum. Hopefully the changed plan is better than the original...but SOMETIMES it isn't.
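If you suspect the statistics refresh is what swapped in a worse plan, you can check when the statistics on the relevant index were last updated and how many rows they were built over, then refresh them deliberately. A sketch with hypothetical names (dbo.Orders / IX_Orders_Date):

```sql
-- Show the statistics header for one index: last-updated date, row count,
-- and rows sampled. dbo.Orders / IX_Orders_Date are hypothetical names.
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_Date') WITH STAT_HEADER;

-- If the header looks stale or under-sampled, rebuild from a full scan
-- rather than the default sample:
UPDATE STATISTICS dbo.Orders IX_Orders_Date WITH FULLSCAN;
```

Comparing "Rows" vs "Rows Sampled" in the header tells you whether the optimizer was working from a small sample of a large table, which is a classic source of plan regressions after a stats update.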
You should also investigate "parameter sniffing": search here for its effects and resolution.
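For illustration, one common parameter-sniffing mitigation is to force a fresh plan per execution with OPTION (RECOMPILE). All names below are hypothetical; this is a sketch of the technique, not a drop-in fix for the poster's job:

```sql
-- Hypothetical procedure illustrating a parameter-sniffing mitigation:
-- OPTION (RECOMPILE) makes the optimizer build a plan for the actual
-- parameter value on every call, at the cost of a compile each time.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);
END;
```

Without the hint, the plan cached for the first caller's @CustomerId (say, one with 5 orders) gets reused for a customer with 5 million orders, which is exactly the "fast yesterday, slow today" symptom described above.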