sfjtraps
Yak Posting Veteran
65 Posts
Posted - 2014-03-18 : 16:21:39
I want to know a strategy to query a large data set of serial numbers (possibly 50,000) quickly to verify whether a serial number is a duplicate. My thought was to reduce the amount of data searched by creating a temporary table based on the least significant digit of the serial number, and then querying the serial number against the temporary table. Or, possibly, create two temporary tables to reduce the data set twice (the first based on the least significant digit and the second based on the 2nd to least significant digit), and then query the serial number against the second temporary table.

Any thoughts on this strategy and how to do it?
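For illustration, a minimal sketch of the temp-table idea described above. The table name (dbo.SerialNumbers), column name (SerialNo), and candidate value (@NewSerial) are hypothetical, not from the original post:

-- Hypothetical source table: dbo.SerialNumbers(SerialNo varchar(50))
DECLARE @NewSerial varchar(50) = 'ABC12345';

-- Reduce the data set: keep only rows whose last character matches the candidate's
SELECT SerialNo
INTO #MatchingLastDigit
FROM dbo.SerialNumbers
WHERE RIGHT(SerialNo, 1) = RIGHT(@NewSerial, 1);

-- Check the candidate against the reduced set
IF EXISTS (SELECT 1 FROM #MatchingLastDigit WHERE SerialNo = @NewSerial)
    PRINT 'Duplicate';
ELSE
    PRINT 'Unique';

DROP TABLE #MatchingLastDigit;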
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2014-03-18 : 16:37:34
If the query is properly indexed, I don't see any need to create temporary tables. 50,000 rows is tiny.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/
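A rough illustration of the indexed approach (the table, column, and index names are assumptions for the example, not an existing schema):

-- Assumed table; a unique index on the serial number column turns the
-- duplicate check into an index seek and also blocks duplicate inserts outright.
CREATE TABLE dbo.SerialNumbers
(
    SerialNo varchar(50) NOT NULL
);

CREATE UNIQUE INDEX IX_SerialNumbers_SerialNo
    ON dbo.SerialNumbers (SerialNo);

-- Single-value duplicate check: an index seek, not a table scan
IF EXISTS (SELECT 1 FROM dbo.SerialNumbers WHERE SerialNo = 'ABC12345')
    PRINT 'Duplicate';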
sfjtraps
Yak Posting Veteran
65 Posts
Posted - 2014-03-18 : 16:42:28
If I have 50,000 serial numbers and I need to query 24 serial numbers individually against the 50,000 to make sure none of the 24 are duplicates, would that take less than 10 seconds?
tkizer
Almighty SQL Goddess
38200 Posts
Posted - 2014-03-18 : 16:45:18
If the tables are properly indexed and the query is properly designed, then yes, it should take less than 10 seconds. 10 seconds is an eternity in my world. We measure performance at the millisecond level. I've got one system that defines a timeout as anything over 300 milliseconds.

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/
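As a sketch of how the 24-against-50,000 check might be written as a single set-based query (the candidate values and the dbo.SerialNumbers table/column names are assumptions):

-- Hypothetical: load the 24 candidate serial numbers into a table variable
DECLARE @Candidates TABLE (SerialNo varchar(50) NOT NULL);

INSERT INTO @Candidates (SerialNo)
VALUES ('SER001'), ('SER002'), ('SER003');  -- ... and so on, up to 24 values

-- One query returns any candidates that already exist; with an index on
-- dbo.SerialNumbers.SerialNo this is roughly 24 index seeks, well under 10 seconds.
SELECT c.SerialNo
FROM @Candidates AS c
WHERE EXISTS (SELECT 1
              FROM dbo.SerialNumbers AS s
              WHERE s.SerialNo = c.SerialNo);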
sfjtraps
Yak Posting Veteran
65 Posts
Posted - 2014-03-18 : 16:47:12
Thank you for your assistance!