gripload
Starting Member
2 Posts

Posted - 2003-12-05 : 01:37:48
In general, when you know that one filter in the WHERE clause will be faster than another, is there a performance gain in breaking the quicker filter into a subquery? In my case, I need to compare an integer and do a string lookup (using some string parsing) on another large table. For example:

SELECT *
FROM MyTable
WHERE Field1 < @Var
  AND EXISTS ( SELECT NULL FROM AnotherTable WHERE Value = MyTable.Field2 )

Would a better approach be:

SELECT *
FROM ( SELECT Field2 FROM MyTable WHERE Field1 < @Var ) MyAlias
WHERE EXISTS ( SELECT NULL FROM AnotherTable WHERE Value = MyAlias.Field2 )

Thanks!
gripload
Starting Member
2 Posts

Posted - 2003-12-05 : 01:48:39
I wanted to clarify my question: does it help the Query Optimizer to put the easier filters in a subquery so they are looked at first, or does the Query Optimizer know to evaluate the simplest filters first when they are strung together using ANDs?
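For what it's worth, SQL Server decides the predicate evaluation order itself based on estimated cost, and you can inspect the plan it chose without running the query. A minimal sketch using the poster's table and column names (the DECLARE is added only so the batch parses; the variable's type is an assumption):

```sql
-- Display the chosen query plan without executing the statement
SET SHOWPLAN_TEXT ON;
GO

DECLARE @Var int;

SELECT *
FROM MyTable
WHERE Field1 < @Var
  AND EXISTS ( SELECT NULL FROM AnotherTable WHERE Value = MyTable.Field2 );
GO

SET SHOWPLAN_TEXT OFF;
GO
```

If both query forms produce the same plan, the optimizer has already normalized them and there is nothing to gain from the rewrite.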
jsmith8858
Dr. Cross Join
7423 Posts

Posted - 2003-12-05 : 07:47:45
I wouldn't use a subquery for this type of condition; I'd use an inner join, personally. Not sure if it makes a difference, but try them all out in Query Analyzer to see which is faster. I usually list them one after another, make sure "Show Execution Plan" is checked, and then run them; you will see which ones took what % of the time to execute.

SELECT MyTable.*
FROM MyTable
INNER JOIN AnotherTable ON MyTable.Field2 = AnotherTable.Value
WHERE Field1 < @Var

- Jeff
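To compare the candidates side by side as suggested, you can also turn on I/O and timing statistics and run the variants in one batch. A sketch, assuming the poster's table names; the value assigned to @Var is hypothetical. Note one semantic caveat worth checking before switching: the join form can return a MyTable row more than once if AnotherTable.Value is not unique, whereas EXISTS never does.

```sql
-- Report logical reads and CPU/elapsed time for each statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @Var int;
SET @Var = 100;  -- hypothetical test value

-- Variant 1: EXISTS (never duplicates MyTable rows)
SELECT *
FROM MyTable
WHERE Field1 < @Var
  AND EXISTS ( SELECT NULL FROM AnotherTable WHERE Value = MyTable.Field2 );

-- Variant 2: INNER JOIN (may repeat MyTable rows if
-- AnotherTable.Value contains duplicates)
SELECT MyTable.*
FROM MyTable
INNER JOIN AnotherTable ON MyTable.Field2 = AnotherTable.Value
WHERE Field1 < @Var;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

The Messages tab then shows logical reads and execution times per statement, which is often easier to compare than the plan-cost percentages alone.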