Author | Topic
logmaster
Starting Member
5 Posts |
Posted - 2007-07-04 : 09:08:25
|
| Hi,

We are running SQL Server 2005 Enterprise Edition with SP2 on a Windows 2003 Enterprise Server SP2 box with an Intel E6600 dual-core CPU and 4GB of RAM. We have a C# application which performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through the rows one at a time, queries another table to get some values, and updates the transaction.

I have set a limit of 2GB of RAM for SQL Server, and when I run the application it performs 5 record updates (the process described above) per second. After roughly 10,000 records, the application slows down to about 1 record per second. I have tried to examine the Activity Monitor, however I can't find anything that might indicate what's causing this.

I have read that there are some known issues with Hyper-Threaded CPUs; however, since my CPU is dual-core, I do not know if the issue applies to those CPUs too, and I have no way to disable one core in the BIOS.

The only thing I have noticed is that if I change the Max Degree of Parallelism when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing it? What does the property change do that makes the server speed up again?

If there is no solution for this problem, does anyone know if there is a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution, however I will use it as a workaround.)

Any advice will be greatly appreciated.

Thanks,
Joe |
|
|
logmaster
Starting Member
5 Posts |
Posted - 2007-07-04 : 09:25:51
|
| Update: I have noticed under Activity Monitor -> Process Info that there are 3 processes under the database I am using that are in suspended mode. Two of them have a wait type of PAGEIOLATCH_SH and one has CXPACKET. Does anyone know what that means? |
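(For reference, PAGEIOLATCH_SH generally means the session is waiting for a data page to be read from disk into the buffer pool, and CXPACKET is a parallel-query coordination wait. A minimal sketch of how these waits can be inspected directly via the SQL Server 2005 DMVs, using only standard system views and no application-specific names:

-- Show currently executing / suspended requests and what they are waiting on
SELECT r.session_id,
       r.status,                -- running / suspended / runnable
       r.wait_type,             -- e.g. PAGEIOLATCH_SH, CXPACKET
       r.wait_time,             -- ms spent in the current wait
       r.blocking_session_id,
       t.text AS current_sql
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.session_id > 50;       -- skip system sessions

Running this while the application is in its slow phase shows whether the time is going into disk reads or parallelism waits.)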
 |
|
|
Kristen
Test
22859 Posts |
Posted - 2007-07-04 : 09:29:15
|
Maybe the transaction log is being extended, and that is taking time. This won't happen next time you run the process (unless it gets SHRINK'd in the meantime, natch!)

Maybe this is all one "transaction", in which case "committing" in smaller batches (if that won't breach your data integrity rules) would help.

Maybe you are using the FULL recovery model and the TLog file is having to be extended to accommodate all the changes you are making.

Maybe your database is having to be extended to allow the records to be stored elsewhere once they are updated (maybe they are getting larger, and won't fit back in their original position).

Maybe the column(s) you are updating are indexed, and the indexes are having to be updated as well as the data (or the data is being moved, as described above, which will mean that the clustered index has to be adjusted or, if there is NO clustered index, then ALL indexes will have to be adjusted). Also, if the indexes are being updated, then maybe the stats (for the index) are going out of shape, or you are causing lots of fragmentation of the table, which then takes longer to search.

Maybe caching is being adversely affected, in which case clearing the cache would help (but will kill the cache for all other users / processes of course).

"if I change the Max Degree of Parallelism ..."

I reckon that might be clearing the cache, or somesuch, which is then causing data to be reloaded.

But better by far would be to do the update in one go (or at least in batches of 1,000 or 10,000), rather than a single record at a time! For that you could accumulate all the changes you want to make, store them in a temporary table (the PK of the record to be changed, and then a column for each column in the table that needs changing) and then update en masse:

UPDATE U
SET    MyCol1 = T.MyCol1,
       MyCol2 = T.MyCol2,
       ...
FROM   MyTable AS U
JOIN   #MyTempTable AS T
       ON T.MyPK = U.MyPK

Kristen |
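(A minimal end-to-end sketch of that approach; MyTable, MyPK, MyCol1 and MyCol2 are the placeholder names from the post, not the real schema:

-- 1. Stage all pending changes in a temp table (PK plus the columns to change)
CREATE TABLE #MyTempTable
(
    MyPK   int         NOT NULL PRIMARY KEY,
    MyCol1 int         NULL,
    MyCol2 varchar(50) NULL
);

-- 2. The application fills the staging table here, e.g. one INSERT per pending
--    change or a bulk load; no UPDATE is issued against the main table yet
INSERT INTO #MyTempTable (MyPK, MyCol1, MyCol2) VALUES (1, 42, 'example');

-- 3. Apply every change in a single set-based statement
UPDATE U
SET    MyCol1 = T.MyCol1,
       MyCol2 = T.MyCol2
FROM   MyTable AS U
JOIN   #MyTempTable AS T
       ON T.MyPK = U.MyPK;

DROP TABLE #MyTempTable;

The single joined UPDATE replaces thousands of one-row updates, which is the main point of the suggestion.)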
 |
|
|
logmaster
Starting Member
5 Posts |
Posted - 2007-07-04 : 10:30:05
|
| Hi Kristen,

Thanks for your detailed reply. The log file is set to simple recovery mode.

I am going to test each one of the possible scenarios you have presented, however I want to start with the last one, which talks about the design. I am going to modify the application to perform batch updates. Which of the following ways is more recommended?

1. Create an array that will store all values per update (PK + data to be updated); once completed, insert the data from the array into a temp table and then update the main table, or
2. Insert records into the temp table one row per update, and then update the main table?

Regards,
Joe |
 |
|
|
Kristen
Test
22859 Posts |
Posted - 2007-07-04 : 11:43:50
|
| Doing (2) will have the same performance issues as your original.

Inserting an "array of data" into a table depends a bit on what your application language can do easily and efficiently. Failing that, you can use a delimited string of all the data and "split" the string at the server; that's pretty efficient (and there is code here which will give you a leg-up).

Kristen |
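(A rough sketch of the "split at the server" idea, assuming a SQL Server 2005 target, which has no built-in STRING_SPLIT; the function name dbo.fn_SplitIntList and its signature are illustrative only and are not the code Kristen refers to:

-- Hypothetical helper: splits a delimited list of integer PK values into a table
CREATE FUNCTION dbo.fn_SplitIntList
(
    @List      varchar(max),
    @Delimiter char(1)
)
RETURNS @Result TABLE (Value int NOT NULL)
AS
BEGIN
    DECLARE @Pos int, @Item varchar(20);

    WHILE LEN(@List) > 0
    BEGIN
        SET @Pos = CHARINDEX(@Delimiter, @List);
        IF @Pos = 0
        BEGIN
            SET @Item = @List;
            SET @List = '';
        END
        ELSE
        BEGIN
            SET @Item = LEFT(@List, @Pos - 1);
            SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List));
        END

        IF LEN(@Item) > 0
            INSERT INTO @Result (Value) VALUES (CAST(@Item AS int));
    END

    RETURN;
END

It would be called along the lines of SELECT Value FROM dbo.fn_SplitIntList('101,102,103', ','), joined back to the main table or used to fill the temp table.)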
 |
|
|
logmaster
Starting Member
5 Posts |
Posted - 2007-07-04 : 19:45:33
|
| Hi,

I have modified the code, however I have realized that the server slows down even before it starts writing the results to the table. I have also tested the size of the log and database when it happens and made sure that the issue has nothing to do with the expansion of the DB or log file.

The only workaround that actually works is clearing the cache by using DBCC FREEPROCCACHE every time the server slows down. This is not my preferred solution, however I can't seem to find any other fix for this issue. Is there anything else I can do to resolve this without having to clear the cache?

Regards,
Joe |
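(A minimal sketch of that workaround plus a narrower alternative; dbo.Transactions is an assumed placeholder name for the table being updated, not the real one:

-- Workaround described above: flush the entire plan cache
DBCC FREEPROCCACHE;

-- Narrower alternative: mark only the plans that reference the updated table
-- for recompilation, instead of clearing the cache for everything
EXEC sp_recompile 'dbo.Transactions';   -- assumed table name

The sp_recompile route avoids penalising unrelated queries when the cache is cleared.)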
 |
|
|
rmiao
Master Smack Fu Yak Hacker
7266 Posts |
Posted - 2007-07-04 : 21:01:58
|
| What's the total db size and the amount of memory on the server? You may need more memory. |
 |
|
|
logmaster
Starting Member
5 Posts |
Posted - 2007-07-04 : 21:16:24
|
| Hi,

Total DB size is 3GB; total RAM is 4GB for the system, of which 2GB is allocated to SQL Server. The application is the only user of the database server. Do you think that memory might be the issue? |
 |
|
|
rmiao
Master Smack Fu Yak Hacker
7266 Posts |
Posted - 2007-07-04 : 21:26:24
|
| Shouldn't be a memory issue in this case. It seems the cached execution plan is out of date. Does the db change a lot? How often do you rebuild indexes or update statistics? |
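(If that maintenance isn't happening, a minimal sketch of the two operations rmiao mentions; dbo.Transactions is again an assumed placeholder for the heavily-updated table:

-- Refresh the optimizer's statistics so cached plans stop relying on stale row counts
UPDATE STATISTICS dbo.Transactions WITH FULLSCAN;

-- Heavier option: rebuild all indexes on the table (SQL Server 2005 syntax),
-- which also refreshes the index statistics with a full scan
ALTER INDEX ALL ON dbo.Transactions REBUILD;
)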
 |
|
|
Kristen
Test
22859 Posts |
Posted - 2007-07-05 : 05:47:56
|
| If you are using an sproc you could use WITH RECOMPILE to force a new execution plan each time. Surprised that this would be needed for a simple INSERT though ... (I think you can force a recompile of ad hoc queries too.)

"Do you think that memory might be the issue?"

I agree with rmiao that it is unlikely. You could check the SQLServer:Memory Manager counters Target Server Memory and Total Server Memory in Performance Monitor. Total should be less than Target (mostly, over a period of time).

Kristen |
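(A minimal sketch of both suggestions; the procedure name and the dbo.Transactions table are assumed placeholders, and the last query uses the sys.dm_os_performance_counters DMV as an alternative to opening Performance Monitor:

-- Stored procedure that gets a fresh execution plan on every call
CREATE PROCEDURE dbo.usp_ApplyTransactionUpdates    -- assumed name
WITH RECOMPILE
AS
BEGIN
    SET NOCOUNT ON;
    -- ... the batched UPDATE ... JOIN #MyTempTable from earlier would go here ...
END
GO

-- Ad hoc statements can request a fresh plan per statement
SELECT COUNT(*)
FROM   dbo.Transactions
OPTION (RECOMPILE);

-- Compare Target vs. Total Server Memory from a query window
SELECT counter_name, cntr_value / 1024 AS memory_mb
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Memory Manager%'
  AND  (counter_name LIKE 'Target Server Memory%'
   OR   counter_name LIKE 'Total Server Memory%');
)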
 |
|
|
|