Hariarul
Posting Yak Master
160 Posts
Posted - 2007-07-12 : 05:39:44
Hi Experts,

We have a VLDB (a few tables with over 200 million records). This database is used for performance testing by simulating 150 users and executing all the necessary functional flows.

When I examined the Profiler results, I could see some very high values in the Duration column for many events, for example:

152172934621421624325321125512482763903998

Does this mean that the SP or T-SQL statement is taking that much time, in milliseconds, to execute and return its output?

Any help would be greatly appreciated.

Thanks & Regards,
Hariarul
b.veenings
Yak Posting Veteran
96 Posts
Posted - 2007-07-12 : 05:43:41
That's correct.

Need an SQL DB consultant? Check www.veeningsengineering.nl
Hariarul
Posting Yak Master
160 Posts
Posted - 2007-07-12 : 05:55:13
Usually the same procedure that shows these high durations (as in my post) executes in under 3000 milliseconds.

Is the high duration due to the number of users? Any thoughts on how to handle this scenario?
b.veenings
Yak Posting Veteran
96 Posts
Posted - 2007-07-12 : 06:30:11
It depends on the situation: the more users are running the same SP or batch, the more time the proc needs.

Need an SQL DB consultant? Check www.veeningsengineering.nl
mattyblah
Starting Member
49 Posts
Posted - 2007-07-13 : 09:51:47
What is the trace T-SQL you are running? Which event(s) are you tracing?

Edit: Sorry, I saw that you were running Profiler. What is the event class? Is it SQL:BatchCompleted or SP:Completed (I forget their actual names)? If it is, the Duration value usually is how long it took. That's a long-running query!
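One way to check which event classes produced those rows is to pull the saved trace into T-SQL. A minimal sketch, assuming the trace was written out to a file; the path C:\traces\perf.trc is a hypothetical stand-in, and sys.trace_events requires SQL Server 2005 or later:

-- Load a saved trace file and list its slowest events with readable names.
SELECT te.name AS event_name,
       t.TextData,
       t.Duration       -- note: microseconds in SQL 2005+ trace files,
                        -- though Profiler's grid shows milliseconds by default
FROM fn_trace_gettable(N'C:\traces\perf.trc', default) AS t
JOIN sys.trace_events AS te
  ON te.trace_event_id = t.EventClass
WHERE t.Duration IS NOT NULL
ORDER BY t.Duration DESC;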
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2007-07-13 : 12:22:16
A single data point might not be worth anything. For accurate timing, GROUP BY statement and select MIN, MAX, AVG and STDEV. Now you have enough statistics for an accurate assessment.

I have come across several SPs where MIN = 300, MAX = 3000000, AVG = 750. That means only a few executions are taking an incredibly long time. In my cases it has always been table variables that "tripped and fell", meaning the table variable had to flush to tempdb because there were almost a million records in it. Changing the table variable to a temp table has solved the issue every time. After changing to a temp table, MIN = 350, MAX = 920, AVG = 510.

Changing to a temp table also seems to solve the problem with the number of reads. In one stored procedure I revised, it slashed the number of reads from 900000000 to 65000. Yes, I did some other optimizations too.

Peter Larsson
Helsingborg, Sweden
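A sketch of that aggregation, assuming the trace rows have been loaded into a table; TraceData is a hypothetical name, and TextData is cast because the trace column type is ntext:

-- Group trace rows per statement and compute the spread of durations,
-- so a handful of slow outliers stands out against a normal average.
SELECT CAST(t.TextData AS NVARCHAR(4000)) AS statement_text,
       COUNT(*)          AS executions,
       MIN(t.Duration)   AS min_duration,
       MAX(t.Duration)   AS max_duration,
       AVG(t.Duration)   AS avg_duration,
       STDEV(t.Duration) AS stdev_duration
FROM dbo.TraceData AS t                  -- hypothetical table of trace rows
GROUP BY CAST(t.TextData AS NVARCHAR(4000))
ORDER BY max_duration DESC;

And the swap Peter describes, in skeletal form (column names are illustrative):

-- Before: a table variable, which has no statistics and can lead the
-- optimizer astray once row counts grow large.
DECLARE @work TABLE (id INT PRIMARY KEY, amount MONEY);

-- After: a temp table, which gets real statistics and can be indexed.
CREATE TABLE #work (id INT PRIMARY KEY, amount MONEY);
-- ... populate and use #work exactly as @work was used ...
DROP TABLE #work;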
SwePeso
Patron Saint of Lost Yaks
30421 Posts
Posted - 2007-07-13 : 12:24:10
Also notice that when returning a large resultset, Duration is much larger than CPU, because CPU measures only the time needed to perform the task, while Duration covers the complete operation, including sending the resultset back to QA or SSMS.

Peter Larsson
Helsingborg, Sweden
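Against the same hypothetical TraceData table, that gap can be made visible directly; the unit conversion assumes a SQL 2005+ trace, where Duration is in microseconds while CPU is in milliseconds:

-- Events where Duration dwarfs CPU are dominated by waits or by streaming
-- the resultset to the client, not by processing.
SELECT CAST(TextData AS NVARCHAR(4000)) AS statement_text,
       CPU,                                 -- milliseconds of processor time
       Duration,                            -- total elapsed time, microseconds
       Duration / 1000 - CPU AS non_cpu_ms  -- rough time waiting/transferring
FROM dbo.TraceData
ORDER BY non_cpu_ms DESC;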
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2007-07-13 : 14:10:56
For selecting queries to optimize, I usually pay more attention to CPU, because it seems to be more relevant to the actual load on the system.

CODO ERGO SUM
influent
Constraint Violating Yak Guru
367 Posts
Posted - 2008-01-03 : 18:05:21
I'm seeing duration times below 500 in Profiler for a certain stored procedure, but when I run it in Query Analyzer it takes 1500 milliseconds. And the procedure returns one row!?!? Why would that be? The network connection between my computer and the db server is as fast as the connection between the IIS server and the db server.
rmiao
Master Smack Fu Yak Hacker
7266 Posts
Posted - 2008-01-03 : 22:47:25
You can SET STATISTICS TIME ON to display the number of milliseconds required to parse, compile, and execute the sp.
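For example (usp_MyProc is a hypothetical stand-in for the procedure in question):

-- Show parse/compile and execution times on the Messages tab.
SET STATISTICS TIME ON;
EXEC dbo.usp_MyProc;
SET STATISTICS TIME OFF;
-- The output has this shape:
--   SQL Server parse and compile time:
--      CPU time = n ms, elapsed time = n ms.
--   SQL Server Execution Times:
--      CPU time = n ms, elapsed time = n ms.
-- Comparing CPU time to elapsed time separates processing cost from waiting
-- and network transfer, which ties back to the Duration-vs-CPU point above.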