Author |
Topic |
miranwar
Posting Yak Master
125 Posts |
Posted - 2006-11-27 : 09:55:57
|
Hi, I have the following SQL. The query returns around 20,000 rows. The Exhibitor_List table is being accessed via a non-clustered index, but every row is then looked up in the clustered index. The index depth of the clustered index is 2, so I gather each value found in the non-clustered index costs two page reads for the lookup. This appears to be expensive: the lookup accounts for 76% of the query cost, and the query takes about 1 min 20 secs to run. Is there any way I can avoid this lookup? I'm not sure whether a wide covering index over all the columns accessed in the query is sensible, considering there are about 8 or 9 of them. Is there any other strategy I can use to reduce the cost of the lookup? Here is the query:

select el.VAR_EL_NAME, el.VAR_EL_DESCRIPTION, el.K_EV_ID, el.K_EL_ID,
       el.VAR_EL_ADDRESS1, el.VAR_EL_ADDRESS2, el.VAR_EL_ADDRESS3, el.VCADDRESS4,
       el.dRegistrationDate, el.VAR_EL_ALPHABETICALSORT, el.VAR_EL_FAX,
       el.VAR_EL_TEL, el.VAR_EL_WEB, tc.*, iOrder, vcLastName, vcFirstName, 0
from Exhibitor_List el WITH (NOLOCK)
left outer join tblExhibitorParticipant tp (NOLOCK) ON el.K_el_id = tp.iExhibitoriD
left outer join tblParticipant p ON tp.iParticipantID = p.iParticipantID
left outer join tblCountries tc ON tc.iCountryID = el.iCountryID
inner join tblContinent con (NOLOCK) ON tc.iContinentID = con.iContinentID
where 0=0
  and el.var_el_active <> 'N'
  and el.K_EV_id = 100458
  and p.iEventid = 100458
  and con.iContinentID = 8856
ORDER BY dRegistrationDate |
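A quick way to see how much of that 1 min 20 secs is the lookup itself, before changing any indexes, is to compare the full query against a keys-only probe with I/O and timing statistics switched on. This is only a sketch, and it assumes the non-clustered index being used has K_EV_id as its leading column:

SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Keys only: K_EL_ID is the clustering key, so it is carried in every
-- non-clustered index and this probe should not need the bookmark lookup.
-- Comparing its reads/elapsed time against the full query above shows
-- how much the clustered-index lookup is adding.
SELECT el.K_EL_ID
FROM Exhibitor_List el WITH (NOLOCK)
WHERE el.K_EV_id = 100458

SET STATISTICS IO OFF
SET STATISTICS TIME OFF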
|
snSQL
Master Smack Fu Yak Hacker
1837 Posts |
Posted - 2006-11-27 : 12:15:31
|
Non-clustered indexes use the clustered index to look up the row, so that is normal. You could create a covering index, but with that many columns I doubt it would be better. What are you using for the clustered index? If it has a large key, you could consider changing that, which may improve things. I'd suggest starting by running the Index Tuning Wizard (2000) / Database Engine Tuning Advisor (2005). Otherwise, you'll need to post the table structure and the current index definitions. |
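For reference, a quick way to gather what is being asked for here (works on both SQL Server 2000 and 2005):

EXEC sp_help 'dbo.Exhibitor_List'        -- table structure and column types
EXEC sp_helpindex 'dbo.Exhibitor_List'   -- current index definitions and key columns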
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2006-11-27 : 12:35:06
|
I have used the ITW, but the query is still slow. I need to work on the lookup. I have created the following composite index, but the query only executes fast if I select four columns from Exhibitor_List, so I am thinking of having multiple selects for the remaining columns.

CREATE INDEX [test] ON [dbo].[Exhibitor_List]
    ([K_EV_id], [iCountryID], [Var_EL_active], [Var_EL_name],
     [Var_EL_Address1], [Var_EL_Address2], [VAR_EL_Address3], [vcAddress4],
     [Var_EL_Tel], [Var_EL_Fax], [Var_EL_Web],
     [dRegistrationDate], [VAR_EL_AlphabeticalSort])
WITH FILLFACTOR = 90, DROP_EXISTING
ON [PRIMARY]

As an FYI, I got the following message when I created the index:

Warning! The maximum key length is 900 bytes. The index 'test' has maximum length of 1694 bytes. For some combination of large values, the insert/update operation will fail. |
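If this is running on SQL Server 2005, one alternative is worth noting: the 900-byte limit applies only to key columns, and non-key columns can be pushed into an INCLUDE list instead. A hypothetical version of the index above (column names taken from the query; not available on SQL Server 2000, and the text column VAR_EL_DESCRIPTION still cannot be included, so a lookup remains for that column):

CREATE NONCLUSTERED INDEX [test] ON [dbo].[Exhibitor_List]
    ([K_EV_id], [iCountryID], [Var_EL_active])          -- only the filtered columns stay in the key
INCLUDE
    ([Var_EL_name], [Var_EL_Address1], [Var_EL_Address2], [VAR_EL_Address3],
     [vcAddress4], [Var_EL_Tel], [Var_EL_Fax], [Var_EL_Web],
     [dRegistrationDate], [VAR_EL_AlphabeticalSort])     -- wide columns carried at the leaf level only
WITH (FILLFACTOR = 90, DROP_EXISTING = ON)               -- replaces the existing [test] index
ON [PRIMARY]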
 |
|
snSQL
Master Smack Fu Yak Hacker
1837 Posts |
Posted - 2006-11-27 : 12:45:28
|
I doubt if that is the way to go - what does your clustered index look like? |
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2006-11-27 : 12:49:17
|
The clustered index is on K_el_id, which is an identity column. |
 |
|
snSQL
Master Smack Fu Yak Hacker
1837 Posts |
Posted - 2006-11-27 : 12:59:18
|
I'd say you should try making the index on K_el_id non-clustered and making the index on K_EV_id the clustered one. It should certainly improve the query you are working with now; what I can't say is what effect it will have on your other queries and on the insert performance of the table. If you really need to improve this query and the other performance is not badly impacted, it could well be the way to go.

Also - you should probably rebuild the indexes that you currently have, and if you make the change I suggested, then you should schedule a regular index rebuild, because it will cause some fragmentation of the clustered index. |
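Spelled out, the suggested swap might look something like the following. This is a sketch only: the index names are assumed (use sp_helpindex to get the real ones), and if K_el_id is enforced as a primary key constraint you would drop/recreate the constraint rather than the index. Test on a copy of the database first.

DROP INDEX Exhibitor_List.IX_Exhibitor_List_K_el_id      -- assumed name of the current clustered index

CREATE CLUSTERED INDEX IX_Exhibitor_List_K_EV_id
    ON dbo.Exhibitor_List (K_EV_id)                      -- new clustered index on the filter column

CREATE UNIQUE NONCLUSTERED INDEX IX_Exhibitor_List_K_el_id
    ON dbo.Exhibitor_List (K_el_id)                      -- keep a unique index on the identity column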
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-11-27 : 13:01:37
|
How long does

SELECT COUNT(*)
FROM
(
    ... your whole query, except the ORDER BY ...
) AS X

take? If that is quick, then part of your problem is the network transmission time of the 20,000 rows.

Kristen |
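Spelled out against the query from the first post, that test would look something like this. It is a sketch only: the inner select is reduced to a single column here, which is enough for the row count but may also let the optimizer skip the clustered-index lookups, and the ORDER BY is dropped because it is not allowed in a derived table:

SELECT COUNT(*)
FROM
(
    SELECT el.K_EL_ID
    FROM Exhibitor_List el WITH (NOLOCK)
    LEFT OUTER JOIN tblExhibitorParticipant tp WITH (NOLOCK) ON el.K_el_id = tp.iExhibitoriD
    LEFT OUTER JOIN tblParticipant p ON tp.iParticipantID = p.iParticipantID
    LEFT OUTER JOIN tblCountries tc ON tc.iCountryID = el.iCountryID
    INNER JOIN tblContinent con WITH (NOLOCK) ON tc.iContinentID = con.iContinentID
    WHERE el.var_el_active <> 'N'
      AND el.K_EV_id = 100458
      AND p.iEventid = 100458
      AND con.iContinentID = 8856
) AS X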
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-11-27 : 13:04:19
|
This is a problem too:

left outer join tblCountries tc ON tc.iCountryID = el.iCountryID
inner join tblContinent con (NOLOCK) ON tc.iContinentID = con.iContinentID

You've got an inner join hanging off an outer join. Also, your inconsistent style in the ON clauses - the joined table's columns are sometimes on the left and sometimes on the right - makes mistakes take longer to spot. Take pride in your code and be consistent with your styling.

Kristen |
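For illustration only, one way to restructure those joins so the join types match what the WHERE clause is already forcing is sketched below. It assumes exhibitors without a matching country/continent really should be excluded (the current inner join plus the con.iContinentID filter already do that), and it moves the p.iEventid filter into the ON clause so the outer join to tblParticipant keeps exhibitors that have no participant row; if a participant is actually required, those joins should be inner joins instead. The column list is trimmed for brevity.

select el.K_EL_ID, el.VAR_EL_NAME
from Exhibitor_List el WITH (NOLOCK)
inner join tblCountries tc ON tc.iCountryID = el.iCountryID
inner join tblContinent con WITH (NOLOCK) ON con.iContinentID = tc.iContinentID
left outer join tblExhibitorParticipant tp WITH (NOLOCK) ON tp.iExhibitoriD = el.K_el_id
left outer join tblParticipant p ON p.iParticipantID = tp.iParticipantID
                                AND p.iEventid = 100458   -- moved from WHERE to preserve the outer join
where el.var_el_active <> 'N'
  and el.K_EV_id = 100458
  and con.iContinentID = 8856
ORDER BY el.dRegistrationDate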
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2006-11-27 : 14:02:23
|
Kristen, the COUNT(*) query executed in 2 seconds. However, I am finding that if I break the query down, i.e. have multiple select statements to cater for the Exhibitor_List table, then query performance is dramatically improved. The first select returns the first set of columns from EL:

select el.K_EV_ID, el.VAR_EL_NAME, VAR_EL_ADDRESS1, el.VAR_EL_ADDRESS2, VAR_EL_ADDRESS3, VCADDRESS4
from Exhibitor_List el (NOLOCK)
left outer join tblExhibitorParticipant tp (NOLOCK) ON el.K_el_id = tp.iExhibitoriD
left outer join tblParticipant p ON tp.iParticipantID = p.iParticipantID
left outer join tblCountries tc ON tc.iCountryID = el.iCountryID
inner join tblContinent con (NOLOCK) ON tc.iContinentID = con.iContinentID
where 0=0
  and el.var_el_active <> 'N'
  and el.K_EV_id = 100458
  and con.iContinentID = 8856
ORDER BY dRegistrationDate
go

The second select returns the remaining columns from EL:

select el.dRegistrationDate, el.VAR_EL_ALPHABETICALSORT, el.VAR_EL_FAX, el.VAR_EL_TEL, el.VAR_EL_WEB, el.VAR_EL_DESCRIPTION
from Exhibitor_List el (NOLOCK)
left outer join tblExhibitorParticipant tp (NOLOCK) ON el.K_el_id = tp.iExhibitoriD
left outer join tblParticipant p ON tp.iParticipantID = p.iParticipantID
left outer join tblCountries tc ON tc.iCountryID = el.iCountryID
inner join tblContinent con (NOLOCK) ON tc.iContinentID = con.iContinentID
where 0=0
  and el.var_el_active <> 'N'
  and el.K_EV_id = 100458
  and con.iContinentID = 8856
ORDER BY dRegistrationDate
go

Both these queries take 7 seconds to execute. The first select runs purely on the covering index I created above. The second query still does a lookup, because the column Var_EL_Description is a text field and we can't index text columns. Nevertheless, it's far quicker than executing the select all in one go without the composite index. So what I am thinking of doing is putting the results of both selects into a temp table and doing one final join to bring back the resultset. Does this seem sensible to you? |
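Sketched out, the temp-table idea might look something like this. It is hypothetical and simplified: it assumes both halves can be keyed on K_EL_ID, and it leaves out the participant/country joins, so the row set is not identical to the two selects above; the point is just the shape of the approach.

SELECT el.K_EL_ID, el.K_EV_ID, el.VAR_EL_NAME, el.VAR_EL_ADDRESS1,
       el.VAR_EL_ADDRESS2, el.VAR_EL_ADDRESS3, el.VCADDRESS4, el.dRegistrationDate
INTO   #covered                                -- served by the covering index
FROM   Exhibitor_List el WITH (NOLOCK)
WHERE  el.var_el_active <> 'N'
  AND  el.K_EV_id = 100458

SELECT el.K_EL_ID, el.VAR_EL_ALPHABETICALSORT, el.VAR_EL_FAX, el.VAR_EL_TEL,
       el.VAR_EL_WEB, el.VAR_EL_DESCRIPTION
INTO   #wide                                   -- still needs the lookup for the text column
FROM   Exhibitor_List el WITH (NOLOCK)
WHERE  el.var_el_active <> 'N'
  AND  el.K_EV_id = 100458

SELECT c.*, w.VAR_EL_ALPHABETICALSORT, w.VAR_EL_FAX, w.VAR_EL_TEL, w.VAR_EL_WEB, w.VAR_EL_DESCRIPTION
FROM   #covered c
INNER JOIN #wide w ON w.K_EL_ID = c.K_EL_ID
ORDER BY c.dRegistrationDate

DROP TABLE #covered
DROP TABLE #wide

Whether the two scans plus the temp-table join end up cheaper than a single scan with the lookup is something only the execution plan on this data can confirm.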
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-11-27 : 15:13:50
|
I doubt SQL Server optimised the wrapper COUNT(*) so aggressively that it skipped most of the inner subquery, so I expect most of the problem is network transmission time.

Try a query with a similar number and size of columns (make sure that if your "real" query includes TEXT columns, you include a representative set of TEXT data in your test), but with no complexity, e.g.

SELECT TOP 20000 * FROM MyTable

If that takes 1 min 20 seconds, then your issue is network performance.

Kristen |
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2006-11-28 : 06:24:25
|
OK, here are my tests.

First test (var_el_description is a text column):

SELECT TOP 20000 var_el_description FROM exhibitor_list

2463 logical reads. Query execution time 6 seconds.

Second test (all columns):

SELECT TOP 20000 * FROM exhibitor_list

2463 logical reads. Query execution time 67 seconds.

What I don't get is: if both queries take the same number of logical reads, why is the second one much slower? |
 |
|
harsh_athalye
Master Smack Fu Yak Hacker
5581 Posts |
Posted - 2006-11-28 : 06:38:05
|
It's because data is read in pages, so the logical reads don't change, but in the case of the second query the amount of data transmitted is considerably larger, so it takes more time to travel across the network.

Harsh Athalye
India.
"Nothing is Impossible" |
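A rough way to check this explanation from the server side is to compare CPU time with elapsed time for the slow select; elapsed time includes waiting for the client to consume the rows, so a large gap between the two points at the network/client rather than the page reads:

SET STATISTICS TIME ON
SELECT TOP 20000 * FROM exhibitor_list
SET STATISTICS TIME OFF
-- If CPU time is small (a second or two) while elapsed time is close to
-- the 67 seconds observed, the time is being spent returning rows to the
-- client, not reading pages.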
 |
|
miranwar
Posting Yak Master
125 Posts |
Posted - 2006-11-28 : 07:17:22
|
So the logical reads are the pages that are read from the buffer cache (memory); from there the relevant data is extracted and sent back to the client. Is that correct?

It appears that as soon as I add more than 4 columns to the select, the query slows down significantly. How can I detect the bottleneck in the network? |
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-11-28 : 09:44:40
|
"how can i detect the bottleneck in the network?"Check the NIC settings first and make sure they are all set optimally - Full Duplex, or whatever is appropriate.Can you run a single cable direct between the two machines - so the rest of the network is isolated? Could be something else on the network arguing with your Server or WorkstationKristen |
 |
|
|