Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
achamarthy
Starting Member
26 Posts

Posted - 2004-08-05 : 11:13:40
I have a table with 1.5 million records. A join is performed against it to filter out and insert 750,000 records into a temp table. There is further processing on this temp table, and the final dataset is 170 records. My question is: when I use a temp table to store the initial 750,000 records and run the remaining queries against it to get the final dataset, it takes around 20 minutes. But if I store the initial 750,000 records in a static table (a normal table in a database) and run my final queries against that table, it takes around 40 seconds. Why is there so much difference? I thought temp tables act like normal tables and perform close to what a static table does. Do I need to specify anything when I create the temp table to make it perform better? Let me know if there is a way to make the temp table behave better. Or, IS IT A BAD IDEA TO USE A TEMP TABLE TO STORE 750,000 records? Let me know.

Thanks,

create table #PRIMARYSET (
    --[setOrder] [int] IDENTITY (1, 1) primary key NOT NULL,
    [UNSPSCCode] [varchar](8),
    [SupplierIntegrationID] [varchar](36),
    [ParentAccountName] [varchar](100),
    [AccountName] [varchar](100),
    [Country] [varchar](255),
    [CountryID] [int],
    [RegionID] [int],
    [AnnualRevenue] [varchar](100),
    [AnnualRevenueID] [int],
    [EventDate] [DateTime],
    [ISOStatus] [varchar](100),
    [QSStatus] [varchar](100),
    [NumEmployees] [varchar](100),
    [PrimaryLanguage] [varchar](100),
    [LotItemID] [int],
    [EstSpend] [money],
    [HighBidInc] [money],
    [LowBidInc] [money],
    [SupplierBidInc] [money],
    [SubLotInterest] [int],
    [SubBid] [int],
    [SubCostBreakdown] [int],
    [Incumbent] [int],
    [AwardedBus] [int],
    [CBEIntegrationID] [char](36),
    [AttributeID] [int],
    [AttributeValue] [nvarchar](255),
    [Score] [int],
    [Lat] [float],
    [Lon] [float],
    [LotNumber] [nvarchar](50),
    [PenaltyStatusID] [int],
    [PenaltyStatus] [nvarchar](100),
    Primary Key (
        [SupplierIntegrationID], [CBEIntegrationID], [LotNumber],
        [UNSPSCCode], [LotItemID], [AttributeID], [AttributeValue]
    )
)
ditch
Master Smack Fu Yak Hacker
1466 Posts

Posted - 2004-08-05 : 11:19:45
It is a bad idea to use a temp table to store large amounts of records. I had a hell of a night two nights ago on standby (only one hour's sleep). Our server was flatlining; no one could even connect to it. After killing all jobs and bouncing the server, I found some idiot wrote code to store millions of records in a temp table. For larger amounts of data it is advisable to use static tables, or even better, try finding a more elegant SQL solution. Perhaps you could post some of the code here. This place is buzzing with gurus just wanting to play with code.

Duane
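Duane's static-table suggestion can be sketched roughly as below. The staging-table and source-table names here are hypothetical (not from the poster's code), and only a few columns from the posted schema are shown; the point is that a permanent table is created once and reset per run with TRUNCATE, instead of being created and dropped in tempdb on every execution.

```sql
-- Hypothetical permanent staging table; created once, reused per run.
IF OBJECT_ID('dbo.StagingPrimarySet') IS NULL
    CREATE TABLE dbo.StagingPrimarySet (
        SupplierIntegrationID varchar(36) NOT NULL,
        UNSPSCCode            varchar(8)  NULL,
        EstSpend              money       NULL
    );

-- TRUNCATE is minimally logged and resets the table for each run,
-- avoiding the repeated create/drop churn of a 750,000-row temp table.
TRUNCATE TABLE dbo.StagingPrimarySet;

INSERT INTO dbo.StagingPrimarySet (SupplierIntegrationID, UNSPSCCode, EstSpend)
SELECT s.SupplierIntegrationID, s.UNSPSCCode, s.EstSpend
FROM dbo.Suppliers AS s                 -- hypothetical source table
JOIN dbo.Lots AS l                      -- hypothetical join partner
  ON l.SupplierIntegrationID = s.SupplierIntegrationID;
```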
Seventhnight
Master Smack Fu Yak Hacker
2878 Posts

Posted - 2004-08-05 : 11:21:07
I have done something similar (so I know it works); just make sure that your temp table has the same indexes the static table has. I would also try to see if you could reduce the amount of data you need to store, since you are ending up with only 170 records. There must be some other criteria you could use.

Corey
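Corey's point about matching the static table's indexes can be sketched like this; the column subset comes from the posted schema, but the specific index is illustrative, not necessarily the one the static table actually carries:

```sql
-- Create the temp table first, then add the index(es) the static
-- table benefits from, before running the heavy joins against it.
CREATE TABLE #PrimarySet (
    SupplierIntegrationID varchar(36) NOT NULL,
    CBEIntegrationID      char(36)    NOT NULL,
    LotItemID             int         NOT NULL
);

-- Illustrative clustered index on the likely join keys.
CREATE CLUSTERED INDEX IX_PrimarySet_Supplier
    ON #PrimarySet (SupplierIntegrationID, CBEIntegrationID, LotItemID);
```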
achamarthy
Starting Member
26 Posts

Posted - 2004-08-05 : 11:24:47
I thought so... storing such large data is bad. But since my initial join results in 750,000 records, I have to store them somewhere temporarily to perform further processing. I guess even storing them as an ALIAS is going to hurt performance. You can look at the code at this link: http://www.ent.ohiou.edu/~anil/files/StoredProc.txt

The "insert into @PRIMARYSET ....." is what results in the 750,000 rows. Anyway, let me know if anyone has suggestions. Thanks
n/a
deleted
35 Posts

Posted - 2004-08-05 : 16:52:12
Performance tips on tempdb:

Because of the heavy use of your SQL Server's tempdb database, consider locating it on a disk array of its own (such as RAID 1 or RAID 10). This allows disk I/O to be more evenly distributed, reducing disk I/O contention and speeding up SQL Server's overall performance.

If your application uses the tempdb database a lot and causes it to grow larger than its default size, you may want to permanently increase the default size of the tempdb file to something closer to what is "typically" used by your application on a day-to-day basis. Every time SQL Server is restarted, the old tempdb database is deleted and a new one is created. If tempdb is set to a size smaller than what is typically used, and it is set to autogrow, then tempdb has to grow as needed, which incurs some overhead. By having the tempdb file set to its "typical" size when SQL Server is restarted (it is recreated from scratch at the size you set), you don't have to worry about the overhead of tempdb growing during production.

Heavy activity in the tempdb database can drag down your application's performance. This is especially true if you create one or more large temp tables and then query or join them. To help speed queries or joins on large temp tables, be sure the AUTOSTATS database option is turned on for tempdb, and then create one or more indexes on these temp tables that can be used by your queries or joins. This means you will need to create the temp table first and then add the appropriate index(es). In many cases you will find this can substantially speed up your application. But like many performance tips, test this one to see if it actually helps in your particular situation; in some cases the overhead of creating the index(es) is greater than the time saved by using them. Only through testing will you know which option is best in your situation.

HTH
Paul
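Paul's presizing tip boils down to something like the following. The 500 MB figure is purely illustrative (it should reflect your own measured "typical" usage); `tempdev` is the default logical name of tempdb's primary data file.

```sql
-- Presize tempdb's data file to its typical working size so it does
-- not have to autogrow (and pay the growth overhead) during production.
-- The size below is an example only; measure your own workload first.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 500MB);
```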
kselvia
Aged Yak Warrior
526 Posts

Posted - 2004-08-05 : 20:17:23
You mean a table variable, not a temp table, and they are much different above a certain size. There are no statistics kept for table variables, and you can't create indexes on them, so SQL Server cannot properly optimize queries against them. Worse yet, they are actually stored to disk once they reach a certain threshold, so if you thought you were getting some kind of "in-memory" table, you are probably not with 750,000 rows ;) You should probably change it back to a #temp table and consider adding indexes on the key columns of your joins.

--Ken
"Knowledge is a process of piling up facts; wisdom lies in their simplification."
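The distinction Ken draws can be sketched side by side (the column list is abbreviated from the posted schema; the index name is illustrative):

```sql
-- Table variable: no statistics are kept, and CREATE INDEX is not
-- allowed, so the optimizer cannot cost queries against it accurately.
DECLARE @PRIMARYSET TABLE (SupplierIntegrationID varchar(36) NOT NULL);

-- Temp table: lives in tempdb, gets column statistics, and can be
-- indexed after creation, so the optimizer can plan for 750,000 rows.
CREATE TABLE #PRIMARYSET (SupplierIntegrationID varchar(36) NOT NULL);
CREATE INDEX IX_PrimarySet_Supplier
    ON #PRIMARYSET (SupplierIntegrationID);
```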
tkizer
Almighty SQL Goddess
38200 Posts

Posted - 2004-08-05 : 20:21:54
MS documentation suggests that table variables should only be used when the number of rows in the table is less than 10,000. So for 750,000 rows, you certainly don't want to use table variables.

Tara