AskSQLTeam
Ask SQLTeam Question
0 Posts
Posted - 2006-07-14 : 08:56:31
Ray writes "Hi all,

We currently have a database on a server with 3GB of memory; the server is configured to use dynamic memory up to a maximum of 2.4GB, and it has 4 Xeon CPUs. The users are always complaining of extremely slow queries and slow response times. I am not sure whether to configure SQL Server to use a limited number of CPUs, e.g. only 3 CPUs, leaving the rest for the operating system. Would there be any performance gains here?

The other question is this: we replicate huge tables between databases by BCPing the tables out, after which the receiving server bulk inserts them into a database. At the moment it is very slow to transfer data between the two servers, even though the indexes are dropped on the receiving server. Is there anything we can do to speed up the bulk insert, or is our replication strategy flawed?

Lastly, on a lot of my databases, around 1.8 terabytes, the data file and transaction log files are on the same drive on the SAN. What does this mean performance-wise?

Looking forward to hearing from you. Thanks"
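For reference, the memory cap and CPU limit Ray describes correspond to sp_configure settings on SQL Server 2000. A minimal sketch with illustrative values only (2400MB matches the 2.4GB ceiling mentioned above; the affinity bitmask merely illustrates the "3 of 4 CPUs" idea, which the replies below advise against):

    -- Advanced options must be visible before these settings can be changed
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE

    -- Cap dynamic memory at 2.4GB; the option takes a value in MB
    EXEC sp_configure 'max server memory', 2400
    RECONFIGURE

    -- Bind SQL Server to the first three CPUs (bitmask 0x7 = CPUs 0-2).
    -- Illustrative only; on SQL Server 2000 this may need a service restart.
    EXEC sp_configure 'affinity mask', 7
    RECONFIGURE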
Gawaine79
Starting Member
6 Posts
Posted - 2006-07-14 : 15:49:23
Ok, so let me take a stab at this :D

If SQL Server is taking up all of your processing power, reducing the number of CPUs it can use would only slow down the SQL Server portion of the system while freeing resources for the operating system, so I don't think there would be any performance gains from doing that. You probably want to use Performance Monitor to see which resources are being maxed out.

As far as the huge tables go: I'm not sure whether you copy the full tables or just the differences, but you could potentially use DTS. In DTS you can set how many records are loaded per batch. For example, if you have a table with 1 million rows to transfer, you can load it in sets of 1,000 rows; depending on your system resources, those more manageable pieces can be quicker than loading all 1 million rows at once.

On the SAN issue, it depends on the configuration of the SAN. If you have lots of fast disks in one big array, it may not matter very much. However, if only one or a few disks make up the SAN volume, it will matter, because the number of I/Os your disks can handle comes down to how fast they spin and how many of them there are. If 500 disks make up one drive and the data is spanned evenly across all of them, you can perform far more reads and writes at the same time than if only 2 disks make up the drive. Your SAN software should show you whether the drive is being maxed out on I/Os.

Hope it helps
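The batching idea also applies directly to the BULK INSERT path Ray is already using: the BATCHSIZE option commits every N rows as its own transaction instead of one huge one. A hedged sketch, with a hypothetical table name and file path (the native format assumes the file was produced by a native-mode bcp out):

    -- Load a BCP'd file in 10,000-row batches; each batch commits separately,
    -- so a failure only rolls back the current batch. TABLOCK takes a table
    -- lock for the load, which cuts locking overhead on a quiet server.
    BULK INSERT dbo.BigTable
    FROM 'D:\transfer\BigTable.bcp'
    WITH (
        DATAFILETYPE = 'native',   -- file came from bcp out -n
        BATCHSIZE = 10000,
        TABLOCK
    )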
MichaelP
Jedi Yak
2489 Posts
Posted - 2006-07-14 : 16:33:56
Also, give this a read: http://www.sql-server-performance.com/sql_2000_tools_tutorial.asp

You need to use perfmon to determine where the bottleneck is. Is it CPU? Disk? I suspect disk in this case. You probably don't have enough drives, or a good enough RAID config, to push the amount of data you are talking about here.

Michael

<Yoda>Use the Search page you must. Find the answer you will. Cursors, path to the Dark Side they are. Avoid them, you must. Use Order By NewID() to get a random record you will.</Yoda>

Opinions expressed in this post are not necessarily those of TeleVox Software, inc. All information is provided "AS IS" with no warranties and confers no rights.
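To complement perfmon from inside SQL Server 2000, per-file I/O stall times and server-wide wait statistics can also point at a disk bottleneck. A minimal sketch; the database name is hypothetical, and file id 1 is assumed to be the primary data file:

    -- IoStallMS is the cumulative time (ms) spent waiting on I/O for a file.
    -- A stall figure that grows quickly relative to NumberReads/NumberWrites
    -- suggests the underlying drive cannot keep up.
    DECLARE @db int
    SET @db = DB_ID('MyBigDatabase')   -- hypothetical database name

    SELECT DbId, FileId, NumberReads, NumberWrites, IoStallMS
    FROM ::fn_virtualfilestats(@db, 1) -- file id 1: primary data file

    -- Wait statistics give another view of what SQL Server is waiting on
    DBCC SQLPERF(WAITSTATS)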