 Long covering index or slow stored procedure?


sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2008-08-14 : 12:50:37
Hello,

I have a stored procedure which is running slowly. Looking at the execution plan in Query Analyzer, I saw that the subtree cost is 3.5 and that a bookmark lookup accounts for 65% of that cost.

I added a covering index, and sure enough the subtree cost dropped to 1.75 (although logical reads increased, which is another puzzle).

The problem is that the covering index I added includes many columns [4 varchars and a few ints] (I had no choice, since the query uses all of them and I need the index to cover them all), and they add up to roughly 1,200 bytes. Although the index was allowed to be created, a warning came up saying that if the length of the index key exceeds 900 bytes, inserts might fail. I believe they will fail only if the indexed data in a particular row actually exceeds 900 bytes (since an index key cannot be split across two pages, as I understand it).
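For illustration, the wide index described above might look something like this (table and column names here are hypothetical). In SQL Server 2000 there is no INCLUDE clause, so every covered column must be part of the index key, which is how the key can blow past the 900-byte limit:

```sql
-- Hypothetical table and columns; a sketch, not the poster's actual schema.
-- In SQL Server 2000, all covered columns are key columns, so four wide
-- varchars plus a few ints can push the maximum key length past 900 bytes.
CREATE NONCLUSTERED INDEX IX_Orders_Covering
ON dbo.Orders (Status, CustomerName, ShipCity, ShipRegion,
               OrderID, EmployeeID, ShipVia);
-- SQL Server creates the index but raises a warning to the effect of:
-- "Warning! The maximum key length is 900 bytes. The index
--  'IX_Orders_Covering' has maximum length of 1200 bytes. For some
--  combination of large values, the insert/update operation will fail."
```

The warning is only a warning: rows whose combined key values stay under 900 bytes insert fine, but any row whose indexed values together exceed 900 bytes will fail at insert or update time.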

So although I don't expect any row's data to exceed 900 bytes ... who knows. It seems unsafe to have this index.

What do people do in this case? Just don't add an index and live with higher query costs?

Please advise
Thanks in advance!

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2008-08-14 : 12:53:30
Could you post the stored procedure code? There might be a way to fix this without this big index. Are you able to test forcing the clustered index to be used?
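For reference, forcing the clustered index in SQL Server 2000 is done with a table hint; a minimal sketch, with a hypothetical table and filter:

```sql
-- Hypothetical table/columns. INDEX(1) forces use of the clustered index
-- (seek or scan); INDEX(0) forces a full clustered index scan.
SELECT OrderID, CustomerName, Status
FROM dbo.Orders WITH (INDEX(1))
WHERE Status = 'Open';
```

This is mainly useful as a diagnostic: comparing the hinted plan's cost and logical reads against the bookmark-lookup plan shows whether the optimizer's choice is actually the problem.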

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/


sql_er
Constraint Violating Yak Guru

267 Posts

Posted - 2008-08-14 : 16:44:36
Tara,

Thank you for the suggestion. I was able to achieve the same speed without adding an index. I changed the stored procedure, as suggested.

More specifically, instead of selecting all the fields (which is why I wanted to add a covering index in the first place), I selected just the primary key (the clustered index key) and stored the result in a table variable (it was only the top 10 rows anyway). In a second query I joined the main table to this small table variable on the primary key and retrieved all the other columns. This way the subtree cost dropped to 1.8, logical reads decreased from 130,000 to just 648, and no additional index was needed.
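The rewrite described above might be sketched like this (table, column, and filter names are all hypothetical, since the original procedure wasn't posted):

```sql
-- Step 1: grab only the clustered key for the top 10 rows.
-- The narrow select avoids the bookmark lookup on the wide columns.
DECLARE @Top10 TABLE (OrderID int PRIMARY KEY);

INSERT INTO @Top10 (OrderID)
SELECT TOP 10 OrderID
FROM dbo.Orders
WHERE Status = 'Open'          -- hypothetical original filter
ORDER BY OrderDate DESC;

-- Step 2: join back to the base table on the clustered key to pick up
-- the remaining columns; only 10 seeks against the clustered index.
SELECT o.OrderID, o.CustomerName, o.ShipCity, o.Notes
FROM dbo.Orders AS o
JOIN @Top10 AS t
  ON t.OrderID = o.OrderID;
```

The design point is that the expensive filtering touches only a narrow index, and the wide columns are fetched for just the 10 surviving keys rather than for every candidate row.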

Thank you!