matt_calhoon
Posting Yak Master
235 Posts
Posted - 2006-05-08 : 02:45:28
Hi there,

Can you add multiple tempdb data files and log files on a live production server without restarting MSSQL? Anything I should be aware of in doing so?

Our DB server is slowing down and CPU is running high for MSSQL due to the amount of load. Many of the applications that run on the web servers use tempdb, and I'm hoping that by adding another tempdb data file on another disk it will reduce the bottleneck. Anyone else done something similar?

SQL 2000 SP3
druer
Constraint Violating Yak Guru
314 Posts
Posted - 2006-05-09 : 13:28:24
Hey Matt,

I just went through this very thing, and yes, I was able to add more data files for TEMPDB without having to bring down my server. The key that I uncovered (after assuming for a very long time that it wouldn't work, based on history) is that you need to create all of the data files with exactly the same size. Apparently the engine behind the scenes first tries to find the largest file, and if it finds one that is "largest" it will use that one all of the time. But if there are multiples that are the same size, then it appropriately just iterates through them for each subsequent process call.

According to what I read, you should have one data file for each of the processors in your system. The person who instructed me had a hyper-threading system, which shows 2 logical processors for each chip, and thus I, like her, set up 8 data files for our 4-processor machine. The server I did this on has very heavy TEMPDB use by the application, and I was immediately able to see I/O shifting among the drives, where previously everything had been pounding the drive that contained the only TEMPDB data file. The following article describes everything: http://support.microsoft.com/kb/328551

In terms of the log file, there is no need for multiple log files, as all processes will use the same log file (just like for any database) until that log file is full, then it will move on to the next one.

Hope it helps,
Dalton

Blessings aren't so much a matter of "if they come" but "are you noticing them."
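For reference, the kind of statement Dalton is describing can be sketched roughly as follows. This is a hedged example only: the logical names, drive paths, and sizes below are hypothetical and would need to match your own disk layout; the point is that every tempdb data file gets the identical SIZE so the engine's proportional-fill allocation spreads work across them.

```sql
-- Add tempdb data files of identical size so allocation rotates
-- across them (names, paths, and sizes here are made up).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'E:\MSSQL\Data\tempdb2.ndf',
          SIZE = 500MB, FILEGROWTH = 50MB)

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3,
          FILENAME = 'F:\MSSQL\Data\tempdb3.ndf',
          SIZE = 500MB, FILEGROWTH = 50MB)

-- Sanity check: confirm all tempdb data files report the same size
USE tempdb
SELECT name, size FROM sysfiles
```

ADD FILE takes effect immediately, so no restart is needed, which matches the original question.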
sureshot
Yak Posting Veteran
72 Posts
Posted - 2006-05-09 : 19:54:36
So you split it into 8 files and the files are on different drives? Or do you have some RAID setup where tempdb lives? Curious.
druer
Constraint Violating Yak Guru
314 Posts
Posted - 2006-05-09 : 22:06:13
I have a local physical hard drive, 2 physical RAID drive arrays, and 1 SAN LUN that I'm using. I have 8 different TEMPDB data files, with 2 on each of the 4 drives. From what I gathered, part of the performance gain comes from splitting the files up across drives; the other part is simply offering another opportunity for a thread/process to use a file. So while 2 of the files are on each of the drive locations, each can be accessed by a different process.

Hope it helps,
Dalton

Blessings aren't so much a matter of "if they come" but "are you noticing them."
mr_mist
Grunnio
1870 Posts
Posted - 2006-05-10 : 03:57:29
quote: Originally posted by matt_calhoon

Hi there, Can you add multiple tempdb data files and log files on a live production server without restarting MSSQL? Anything I should be aware of in doing so? Our DB server is slowing down and CPU is running high for MSSQL due to the amount of load. Many of the applications that run on the web servers use tempdb, and I'm hoping that by adding another tempdb data file on another disk it will reduce the bottleneck. Anyone else done something similar?
Yep. But is the tempdb usage your bottleneck? Adding files to it won't necessarily help if your problems are being caused elsewhere.

-------
Moo. :)
druer
Constraint Violating Yak Guru
314 Posts
Posted - 2006-05-10 : 09:13:50
In my case it was easy to tell: I used Profiler to trap everything that was going on that took longer than a certain duration. In all cases it was stored procedures that were hammering TempDB. I then used Performance Monitor to see what kind of I/O was occurring, and I had disk queuing way over the recommendations, and the bytes-written counter was high and stayed high. When I created the additional files and monitored the same I/O counters on all the drives, I could see the original drive's figures go down and the others' go up, so the load was more distributed.

Their original comment indicated that they know their application makes heavy use of TempDB as well. Of course, that doesn't necessarily mean that the commands they use for TempDB don't also hit tables with 10 joins and no indexes. :)

Hope it helps,
Dalton

Blessings aren't so much a matter of "if they come" but "are you noticing them."
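Alongside Performance Monitor, a server-side way to see whether I/O really is spreading across the new tempdb files is the fn_virtualfilestats function that ships with SQL 2000. A minimal sketch, assuming tempdb is database ID 2 and checking file 1 as an example (get the actual FileId values from tempdb..sysfiles):

```sql
-- Cumulative I/O stats for one tempdb file (DbId 2, FileId 1).
-- Compare NumberWrites/BytesWritten and IoStallMS across all the
-- files before and after the change to see the load redistribute.
SELECT NumberReads, NumberWrites, BytesWritten, IoStallMS
FROM ::fn_virtualfilestats(2, 1)
```

Run it once per file and compare; if the original file's write counts keep growing much faster than the new files', the files likely aren't the same size and proportional fill is favoring one of them.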