All Forums > SQL Server 2000 Forums > Transact-SQL (2000) > understanding execution plan


cognos79
Posting Yak Master

241 Posts

Posted - 2007-01-02 : 15:51:42
Can anybody explain what each attribute in the execution plan means, and in what terms you use the values, say Estimated Cost: 0.003283 (99%)? What does it mean?

Estimated Row Count: 1

Estimated Row Size: 18

Estimated I/O Cost: 0.00320

Estimated CPU Cost: 0.000080

Estimated number of executes: 1

Estimated Cost: 0.003283(99%)

Estimated Sub tree cost: 0.00328

Thanks

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2007-01-02 : 20:07:15
You may want to take a look at http://www.sql-server-performance.com/nb_execution_plan_statistics.asp

Also, and this might be more info than you want right now: http://www.sql-server-performance.com/sql_server_performance_audit.asp

Kristen
Test

22859 Posts

Posted - 2007-01-03 : 03:57:15
I don't know if it is just me, but I don't find all that stuff very useful.

I just use the Stats output to see the number of LOGICAL Scans and Disk I/Os and worry about the ones that have big numbers. Generally the choices are pretty limited (e.g. try adding an index), and the improvement is either dramatic or non-existent! The simple Statistics report covers that for me.

I also look at the TEXT Query Plan to see which bits are using indexes, and which bits aren't, to guide me to which parts of the query look inefficient and can most likely be improved.

-- Comment in the SHOWPLAN to see the Query Plan, OR the STATISTICS to see the logical stats
-- SET SHOWPLAN_TEXT ON
GO
-- SET STATISTICS IO ON; SET STATISTICS TIME ON

-- ... put query here - e.g.:

SELECT * FROM Northwind.dbo.Products

SET STATISTICS IO OFF; SET STATISTICS TIME OFF
GO
SET SHOWPLAN_TEXT OFF
GO

See http://www.sql-server-performance.com/statistics_io_time.asp for more details

Kristen

cognos79
Posting Yak Master

241 Posts

Posted - 2007-01-03 : 11:33:06
Thanks Kristen... you gave me some insight into using statistics...

I am trying to execute a simple query:


select * from test
where uniqueid = 1

When I use "SET SHOWPLAN_TEXT ON" on this query it says "SET SHOWPLAN statements must be the only statements in the batch." What does that mean?

cognos79
Posting Yak Master

241 Posts

Posted - 2007-01-03 : 11:50:49
Kristen... I found the answer... don't worry about my previous question.
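For anyone who hits the same error: SET SHOWPLAN_TEXT has to be the only statement in its batch, so you separate it from the query with GO. A minimal sketch, using the same `test` table from my question:

```sql
-- SET SHOWPLAN_TEXT must be alone in its batch, so terminate it with GO
SET SHOWPLAN_TEXT ON
GO

-- The query whose plan we want to see (no statements are executed in this mode)
SELECT * FROM test WHERE uniqueid = 1
GO

-- Turn plan display off again, in its own batch
SET SHOWPLAN_TEXT OFF
GO
```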

One more question: if a table has only 3 records and I have a clustered index on it, will the statistics I read be different from when the table has 3 million records? I don't have a table with 3 million records right now, so can I create indexes based on 3 records, and will the same rules apply to 3 million records?


harsh_athalye
Master Smack Fu Yak Hacker

5581 Posts

Posted - 2007-01-03 : 11:55:08
Not really. How do you expect the I/O reads to be the same for 3 records and for 3 million records? Also, it may be the case that for 3 records the optimizer decides not to use the index at all, if a table scan is cheaper than an index lookup.

Harsh Athalye
India.
"The IMPOSSIBLE is often UNTRIED"

cognos79
Posting Yak Master

241 Posts

Posted - 2007-01-03 : 12:27:00
Thanks Harsh... that makes sense... but how can I test for 3 million records? A table with 60,000 records might not have any problem, but in future with 3 million records a problem might arise. How can I approach this situation? Is there a way?

jsmith8858
Dr. Cross Join

7423 Posts

Posted - 2007-01-03 : 12:32:38
Comparing 60,000 to 3 million makes much more sense than comparing 3 to 3 million. If it works fine for 60,000 rows, it should be fine with 3 million. You'll always need to monitor and tweak indexes and joins and such now and then as your data grows and changes. That's what DBAs are paid to do! If not for stuff like that, all they would do is spend 1 minute a day checking backup logs to ensure that they ran... (then again, that's what 95% of the DBAs out there do anyway, I suppose...)

- Jeff

cognos79
Posting Yak Master

241 Posts

Posted - 2007-01-03 : 12:43:07
Oh OK... thanks, Jeff... I am new to databases, so I'm asking...

Kristen
Test

22859 Posts

Posted - 2007-01-03 : 13:54:33
60,000 and 3,000,000 are fine provided that the distribution of data is about the same.

For example, if in your table you have an index on a YES/NO column (daft usually, but to take an extreme example!) and 99% of the rows have a YES then a query looking for NO will most likely use the index.

If the 3,000,000 rows actually have 99% of the rows with a NO value then the query will most likely NOT use the index ...
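A rough way to see this for yourself (a sketch only; the table and column names here are made up for illustration):

```sql
-- Hypothetical table with a heavily skewed YES/NO column
CREATE TABLE SkewDemo (ID int IDENTITY PRIMARY KEY, Flag char(1))
CREATE INDEX IX_SkewDemo_Flag ON SkewDemo (Flag)

-- ... load mostly 'Y' rows and only a handful of 'N' rows ...

-- Few matching rows: the optimizer will most likely seek the index
SELECT * FROM SkewDemo WHERE Flag = 'N'

-- Nearly all rows match: the optimizer will most likely scan instead
SELECT * FROM SkewDemo WHERE Flag = 'Y'
```

Comparing the two query plans (or the STATISTICS IO output) shows that the same index is used or ignored purely because of the data distribution.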

Make sure the statistics are updated before you make your tests - otherwise the query plan might be based on a stale set of statistics, and the optimiser might therefore make different choices.

Kristen

cognos79
Posting Yak Master

241 Posts

Posted - 2007-01-03 : 14:07:04
Kristen... what do you mean by statistics?

Kristen
Test

22859 Posts

Posted - 2007-01-03 : 16:27:01
"wht do you mean by statistics"

SQL Server maintains statistics about each index - and uses them to decide whether a query like

WHERE MyCol = 'AAA'

is likely to yield lots, or few, results from the index. If "few", it will use the index; if "lots", it won't.

UPDATE STATISTICS can be used to freshen the statistics so that they exactly represent the current data. It is generally assumed that the statistics will remain reasonably valid whilst Inserts, Updates and Deletes occur, but they need to be Updated periodically - and especially after inserting lots of test data for the express purpose of testing the Query Plans!
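For example (the table name is just illustrative):

```sql
-- Rebuild the statistics for every index on the table, reading every row
-- rather than a sample, so they exactly match the current data
UPDATE STATISTICS test WITH FULLSCAN

-- Or refresh the statistics for every table in the current database at once
EXEC sp_updatestats
```

FULLSCAN is slower than the default sampling, but for a test table you have just bulk-loaded it gives the optimiser the most accurate picture.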

Kristen

snSQL
Master Smack Fu Yak Hacker

1837 Posts

Posted - 2007-01-03 : 18:04:53
Something that is useful from the execution plan is to look at the estimated row count vs the actual row count. If they are very different then you could have one of two problems:
1. Out of date statistics
2. Bad expressions in your WHERE clause

In the WHERE clause, if possible your expressions need to take the form: column = something
So this is good: CustomerID = 'ALFKI'
But this is not: CustomerID + 'junk' = 'ALFKIjunk'

If you run these two queries in the Northwind database you'll see what I mean
SELECT *
FROM Customers
WHERE CustomerID = 'ALFKI'
SELECT *
FROM Customers
WHERE CustomerID + 'junk' = 'ALFKIjunk'

Note that the first has estimated and actual row counts of 1, whereas the second one estimates 29 rows and has an actual count of 1. The problem is that SQL Server cannot optimize an expression with calculations on both sides of the operator, so it guesses how many rows will be returned and then creates a plan (in this case a bad plan) based on the guess. Expressions that can be optimized are often referred to as sargs (search arguments), or as being sargable. If you can rewrite expressions to make them sargable then the query will perform better.
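To illustrate the kind of rewrite that helps, here is a sketch against Northwind's Orders table: move the calculation off the column so that an index on the column can be used.

```sql
-- Non-sargable: the function is applied to the column, so the optimizer
-- cannot use an index on OrderDate and must evaluate every row
SELECT * FROM Orders WHERE YEAR(OrderDate) = 1997

-- Sargable rewrite: the bare column is compared to a range, so an index
-- on OrderDate can be used to seek directly to the matching rows
SELECT * FROM Orders WHERE OrderDate >= '19970101' AND OrderDate < '19980101'
```

Both queries return the same rows, but only the second gives the optimizer something it can seek on.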

There's much more to it than that, but hopefully that example will help you understand the plan a little better. Inside SQL Server by Kalen Delaney (Microsoft Press) has a very nice discussion of sargs.