Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-20 : 19:14:46
We have a server set up with Windows 2000 Server and SQL Server 2000. Using a test table it can do 2000 inserts, selects, and deletes in about 6 seconds.

The server has been just a little slow lately, so we purchased a new one. It has 4GB of high-speed memory, a quad-core processor, and a 15,000 RPM hard drive. It should be much faster than our current server. But running the same test mentioned above took 20+ seconds.

We tried a lot of different things to find a solution. It seems every computer that has had SQL Server installed for a while can run the test in a decent amount of time. But besides the new server already mentioned, we also tried doing a fresh install on a separate computer and it too was very slow. Does SQL Server have to be broken in? What doesn't make sense is that the test table was a new table in each one of these databases.

We have mostly ruled out any network problems. We installed MySQL on the same server and its performance was decent, comparable to our current server. We even tried installing an evaluation of SQL Server 2005, but it too was slow.

What am I missing? Any help or hints would be greatly appreciated.
Kristen
Test
22859 Posts
Posted - 2007-09-20 : 20:25:10
Are you doing maintenance of your database - such as Reindex and Update Statistics?

"It has 4GB of high speed memory ..."

SQL 2000 won't use more than 2GB unless you have the Enterprise version of SQL Server 2000. SQL 2005 can use as much memory as the operating system allows (which will be 4GB unless you use the Enterprise version of Windows Server).

"Using a test table it can do 2000 inserts, selects, and deletes in about 6 seconds."

If you mean: insert 2000 rows, then select them, then delete them, then I think taking 6 seconds is very slow. (Except that if the SELECT is displaying all the data on the screen, that may take several seconds alone, and has nothing to do with SQL Server performance!)

Kristen
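A minimal sketch of that housekeeping, in SQL 2000 syntax (the database and table names are placeholders, not from the thread):

USE YourDatabase
GO
DBCC DBREINDEX ('YourTable')   -- rebuild every index on the table
GO
EXEC sp_updatestats            -- refresh statistics database-wide
GO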
shaunc
Starting Member
28 Posts
Posted - 2007-09-20 : 23:30:41
Here are the steps I'd take to troubleshoot this problem.

First, check the SQL Server logs or the Windows Event Log to see if SQL Server is complaining about anything. I've run into some really obscure warnings and errors in there that took a long time to debug. In some cases they were hardware-specific, down to the CPUs installed.

Next, compare the database options (right-click the DB in Enterprise Manager and go to Properties) between the old, "it works, but it's slowing down" server and the new "it's a beast but it runs slower than the old box, so something is really wrong" server. If you find any discrepancies, set them to match the old server's options.

Following along with Kristen's suggestion, I would recommend that "Auto Create Statistics" be enabled on most if not all of your databases. I also suggest that "Auto Update Statistics" be turned on for any database where the majority of tables see far more SELECTs than they see INSERTs/UPDATEs/DELETEs. If these options were disabled, turn them on at your discretion and then run

EXEC sp_msForEachDB 'DBCC UPDATEUSAGE(?)'

You should set up a weekly job to perform this maintenance. If your databases don't get hit much overnight, it probably won't hurt to schedule this as a nightly job.

Make sure that Auto Close is not enabled on your databases. If this option is turned on, there can be significant setup/teardown overhead involved in every query.

It's possible that your old server was configured to use a large amount of RAM, and your new install is set to allocate RAM dynamically but is not doing a satisfactory job of that. Use the Task Manager to figure out what the SQL Server process is using. If it's low, you may want to set a manual memory config in the server properties. As for what counts as "low," compare to your old server.

If none of the above helps, log in to both the old server and its replacement, paste the slow query into QA, and look at the query plan to see if something has changed.
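A hedged sketch of enabling those options, in SQL 2000 syntax (YourDatabase is a placeholder):

ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS ON
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON
GO
-- then correct the usage counters in every database, as suggested above
EXEC sp_msForEachDB 'DBCC UPDATEUSAGE(?)'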
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-20 : 23:31:46
I even tried it on a brand new database with only one table. It is on a fresh install of Windows too, with nothing else installed. It still takes 20 seconds on the new server.

I am inserting the rows, selecting them, and deleting them one at a time. In other words, I connect to and disconnect from the database 6000 times. That is why it takes 6 seconds on the current server. That seems pretty reasonable, but not 20 seconds on a server that should be better.

Tomorrow we are going to try and isolate the server off of the network, making a network with just the database server and one client computer. That will eliminate problems related to our network.
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-20 : 23:34:51
Well, thanks Kristen and Shaunc. Tomorrow I will try your ideas and probably a few other things. I will report back how it went. |
shaunc
Starting Member
28 Posts
Posted - 2007-09-20 : 23:39:56
In the event of 6,000 distinct connections, the first thing I would look at on my list above is the Auto Close setting. Make sure that's turned off. Good luck! |
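One way to check and fix that, assuming SQL 2000 syntax (YourDatabase is a placeholder):

SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoClose')   -- 1 means Auto Close is on
ALTER DATABASE YourDatabase SET AUTO_CLOSE OFF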
rmiao
Master Smack Fu Yak Hacker
7266 Posts
Posted - 2007-09-20 : 23:43:15
Did you compare perfmon results between the two servers while testing your queries? Does the table have an index? Try SET SHOWPLAN_ALL and SET STATISTICS IO ON when you run the queries, then compare the results.
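For example, a minimal sketch (Profile_Test is the test table posted later in the thread; note that SET SHOWPLAN_ALL must be the only statement in its batch and suppresses execution):

SET SHOWPLAN_ALL ON
GO
SELECT Test2 FROM Profile_Test WHERE Test1 = 42
GO
SET SHOWPLAN_ALL OFF
GO
SET STATISTICS IO ON
SELECT Test2 FROM Profile_Test WHERE Test1 = 42   -- reports logical/physical reads per table
SET STATISTICS IO OFF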
Kristen
Test
22859 Posts
Posted - 2007-09-21 : 04:12:21
"If your databases don't get hit much overnight, it probably won't hurt to schedule this as a nightly job"My policy goes a but further than that! If a server has slack time overnight we always set up Reindex / Update Stats etc. to run EVERY night - why wait until Sunday (say) before the database is back to full power? "In the event of 6,000 distinct connections ..."The network/LAN connectivity to the New Server may be dramatically different to the Old Server, and would hugely effect the performance of this type of test.I suggest you check the raw network performance separately from SQL Server.Kristen |
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-21 : 09:46:17
We took the server off the network completely, and ran the test directly on the machine. The performance was about the same as on the network, so it is not a network issue.

We checked the logs, and found no problems. We tried "Auto Create Statistics" both on and off. We made sure "Auto Close" was not enabled. We tried manually setting the memory usage, using a couple of different configurations. We compared database options with our current server, and they seem to be the same. I ran something to test the processor, and it performed comparably to the current server, so it isn't that.

Right now, we are trying to eliminate any memory problems. We ran memtest and it is only showing a 64k level 1 cache and an unknown level 2 cache. If that is right, then I imagine that could be part of the problem.

Below is the table structure:

CREATE TABLE [Profile_Test] (
    [test1] [int] NOT NULL,
    [test2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    CONSTRAINT [PK_Profile_Test] PRIMARY KEY CLUSTERED ([test1]) ON [PRIMARY]
) ON [PRIMARY]
GO

The profile code:

Public Sub ProfileDB(Optional ByVal Quit As Boolean = False)
    SQL_NonQuery("DELETE FROM Profile_Test")
    Dim testcount As Integer = 2000
    For Index As Integer = 1 To 200000
        Dim lasttime As Integer = 0
        Dim str As String
        Dim profiler As New Stopwatch
        profiler.Start()
        Console.WriteLine("===== Profile #{0} =====", Index)

        'Inserts
        For x As Integer = 0 To testcount - 1
            SQL_NonQuery(String.Format("INSERT INTO Profile_Test(Test1, Test2) VALUES('{0}', '{1}')", x, "XXXXXXXXXXXXXXXXXXX"))
        Next
        lasttime = profiler.ElapsedMilliseconds
        Console.WriteLine("INSERT: {0}", profiler.ElapsedMilliseconds)

        'Selects (note: as posted, this pass actually issues UPDATE statements)
        For x As Integer = 0 To testcount - 1
            str = Get_Value_SQL("UPDATE Profile_Test SET Test2='ZZZZZ' WHERE Test1='{0}'", x)
        Next
        Console.WriteLine("SELECT: {0}", profiler.ElapsedMilliseconds - lasttime)
        lasttime = profiler.ElapsedMilliseconds

        'Deletes
        For x As Integer = 0 To testcount - 1
            SQL_NonQuery(String.Format("DELETE FROM Profile_Test WHERE Test1='{0}'", x))
        Next
        Console.WriteLine("DELETE: {0}", profiler.ElapsedMilliseconds - lasttime)

        profiler.Stop()
        Console.WriteLine("TOTAL: {0}", profiler.ElapsedMilliseconds)
    Next
    If Quit Then
        End
    End If
End Sub
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2007-09-21 : 10:44:59
Did you do any disk IO benchmarks? If the disks on the new server are not performing correctly, that could cause a problem.

You can download the free SQLIO utility from Microsoft and use it to benchmark the IO performance on both the old and new servers. I always do a benchmark with SQLIO on each disk on a server before I even install SQL Server, just to make sure there is no problem.

CODO ERGO SUM
Kristen
Test
22859 Posts
Posted - 2007-09-21 : 11:19:51
"We took the server off the network completely, and ran the test directly on the machine. The performance was about the same as on the network. So it is not a network issue.We checked the logs, ..."That all sounds pretty good."We ran memtest and it is only showing a 64k level 1 cache and unknown level 2 cache. If that is right, that I imagine that could be part of the problem."We have one server with the largest CPU cache that was available at the time, and cost lots more $$$ than our other servers, and it really FLIES compared to our other boxes. So that could be pretty significant Kristen |
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-21 : 11:29:13
Thanks Michael. I tried using SQLIO. The results were impressive.

The current server*:
IOs/sec: 773.56
MBs/sec: 1.51

The new server with the "problem":
IOs/sec: 8786.79
MBs/sec: 17.16

Doesn't seem that is the problem! I will look more into finding what the cache size is.

* - Was running in production at the time, but I still feel the results are pretty indicative of the performance.
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2007-09-21 : 11:46:41
quote: Originally posted by Kyle.Stevens

Thanks Michael. I tried using SQLIO. The results were impressive.

The current server*:
IOs/sec: 773.56
MBs/sec: 1.51

The new server with the "problem":
IOs/sec: 8786.79
MBs/sec: 17.16

Doesn't seem that is the problem! I will look more into finding what the cache size is.

* - Was running in production at the time, but I still feel the results are pretty indicative of the performance.

What were the parameters for that test? The MB/sec numbers look very low for both, but that number depends on the test parameters (8K/64K/128K, sequential/random, read/write, etc.)

CODO ERGO SUM
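For reference, SQLIO takes its parameters on the command line; an 8KB random-read test might be invoked something like this (the duration and parameter file here are illustrative):

sqlio -kR -s120 -frandom -o8 -b8 -LS -Fparam.txt

where -kR means read (-kW for write), -s is the duration in seconds, -o is outstanding I/Os per thread, -b is the I/O size in KB, and -LS captures latency.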
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-21 : 11:57:19
I ran the default test. It came with a file called param.txt and inside it said:

c:\testfile.dat 2 0x0 100

Here is the output from the test (ran on my computer, but the parameters should be the same):

C:\Documents and Settings\kyle.NEPCO2>\\buffalo02\share\IT\sqlio.exe
sqlio v1.5.SG
1 thread reading for 30 secs from file testfile.dat using 2KB IOs over 128KB stripes with 64 IOs per run
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 489.43
MBs/sec: 0.95
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-21 : 11:58:46
We discovered the cache information:

L1: 32k * 4
L2: 4MB * 2

This is better than our old server, so I don't think the problem is cache related.
Kristen
Test
22859 Posts
Posted - 2007-09-21 : 12:06:29
I think the point you raised about "SQL Server seems to slow down after a while" might be relevant. It sounds like there is some housekeeping that you aren't doing. Is the database in FULL Recovery model, and you are not making any TLog backups, perhaps?

It might help to know what your skill level is - if you already know about Recovery Models, Reindex/Defrag and Update Statistics I won't labour the point!!

Kristen
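A quick way to check, in SQL 2000 syntax (YourDatabase is a placeholder):

SELECT DATABASEPROPERTYEX('YourDatabase', 'Recovery')   -- FULL / BULK_LOGGED / SIMPLE
-- if point-in-time restore is not needed, SIMPLE stops the log growing unchecked:
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE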
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-21 : 12:13:55
Kristen, I didn't say SQL Server slows down after a while. You must have misunderstood something I said. That is the mysterious part: the server we have had up for a while runs the tests really well; it is the brand new, freshly installed server that is running slow. And it is with a one-table database made just for the purpose of testing. I don't think there is any housekeeping to do on a brand new database with just one table containing only two columns.

My skills in SQL the language are pretty good, but not in SQL Server administration. Everything you are saying about reindex/defrag/update statistics sounds great, and I would like to try it on our database. But I don't know that it is applicable to a brand new database with just a simple table, do you?

Thanks for all your help. We thought this would be a one-hour job to switch servers, and it has turned into a long process!
Kyle.Stevens
Starting Member
26 Posts
Posted - 2007-09-21 : 12:19:32
Kristen, full recovery mode was on. I changed it to simple. I didn't see any difference in the test though.
Kristen
Test
22859 Posts
Posted - 2007-09-21 : 12:43:01
Sorry, got it the wrong way round - it was your comment "Does SQL Server have to be broken in?" that I was thinking of.

"But I don't know that it is applicable to a brand new database with just a simple table, do you?"

Well, you are stressing the table quite a lot by mega-inserts, selects, and deletes. In the real world the core data would normally persist in a table, the indexes and statistics for the table would get optimised (perhaps once a week, whatever!) and then would be in reasonably good shape thereafter as a small percentage of new data was added each week. Inserting thousands of rows into an empty table will completely skew the statistics for that table. That might trigger numerous "auto update statistics" runs to try to keep the ship on an even keel, and those could be eating up precious CPU etc. But I'm just guessing really.

Might be worth you restoring a backup of your normal database onto the new server and doing some "real world tests", just in case your mega-inserts test is skewing things. If you restore a backup onto the new server you will get the identical index-fragmentation and statistics-updated-ness as the original server had when the backup was taken, so you don't need to worry about housekeeping as such; both would be the same, and you would definitely be comparing apples-with-apples. (Only possible issue is that one or other server may have greater PHYSICAL fragmentation of the database files, but the restore onto a brand new server with acres of disk space will probably create the files using a single contiguous piece of disk space, so the restore-a-backup approach should slightly favour the new server.)

You could capture a broad spectrum of SQL statements using SQL Profiler, and just try cut&paste some of those into Query Analyser and "replay" them. Obviously you need to be a bit thoughtful about any that Update/Delete etc.! But you can safely do those on the new server, so long as they are not dependent on any recently created/changed data that didn't exist when the backup was taken.

SQL Profiler can show the elapsed duration, so if you have that as part of the "capture" from the old server, and also run SQL Profiler on the new server during the "replay", you can probably compare the Duration quite easily. Things that select loads of records are a bit tricky, because if you run them in Query Analyser you also have the display time to take into account, so you might want to select to file or somesuch to mimic the real world.

Kristen
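To confirm the restored copies really are in the same shape on both servers, a minimal check using the table and index names from the earlier post might be:

DBCC SHOWCONTIG ('Profile_Test')                           -- index fragmentation
DBCC SHOW_STATISTICS ('Profile_Test', 'PK_Profile_Test')   -- statistics age and density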
Zoroaster
Aged Yak Warrior
702 Posts
Posted - 2007-09-21 : 13:33:45
Might be a dumb question, but did you make sure that the service pack levels for the O/S and SQL Server are the same on the old and new servers? Also, did you make sure you don't have virus scans running on any of the SQL Server directories?

Future guru in the making.
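A quick way to compare build and service pack level on the two machines (standard SQL Server properties):

SELECT @@VERSION
SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')   -- e.g. 8.00.2039 / SP4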
Kristen
Test
22859 Posts
Posted - 2007-09-21 : 13:40:49
"Might be a dumb question ..."Nah, they are good points |