niallporter
Starting Member
4 Posts
Posted - 2006-11-29 : 04:35:10
Hi all,

I have a transaction-heavy ERP database that runs on a 4x Xeon, 8GB RAM machine. From careful monitoring with Performance Monitor we've determined that disk I/O is significantly restricting throughput.

Currently the database exists in a single file on a RAID-5 array of 15,000rpm Ultra-320 SCSI disks. We'll be adding more disks to the array soon, so I was thinking about splitting them into two arrays on separate controllers and splitting the database into two files, placing one on each array, to see if that opens up the disk I/O bottleneck at all.

What I'm asking, though, is how do I split an existing database up so it stripes over two files? If I add another file to the database, the new file doesn't get any data to start with, so I guess SQL Server just starts using it from then on. If I restore to a new database from a backup, that backup only has one file in it, so whichever of the files I choose gets used and the other one seems to disappear.

Is there a way of doing what I'm hoping for?

Thanks in advance,
Niall
Westley
Posting Yak Master
229 Posts
Posted - 2006-11-29 : 04:54:14
Hi Niall,

It looks to me like you have a few more options to consider. Are your data and log files all stored on one RAID 5 array? If so, you might like to split them: RAID 5 for data and RAID 1 for the logs (not the best option, but probably your only one) since you are getting new disks anyway.

Make sure you are using the full 8GB of RAM. Are you using SQL Server Enterprise Edition with the AWE option turned on? If your OS is Win2K, make sure you have the /PAE switch along with the /3GB switch; on Win2003 you don't need to worry about that any more. Also make sure you are using indexes correctly and rebuild them as required.

Back to your question about a multi-file database: you can just add another file to the same filegroup (make sure the files are the same size) and SQL Server will use both files to store your data. To really speed up queries you can have more than one filegroup and store different tables in different filegroups. The idea is that each filegroup sits on a different HDD/RAID array, so SQL Server gets more I/O to work with. But since you only have one RAID 5 array, even if you split your files you won't get much extra speed out of it.

Try RAID 10 for data if you have the budget and the disks, and RAID 1 for the logs. If possible, put backups on yet another array.

Hope that helps
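For illustration, adding a second file to an existing filegroup looks roughly like this; a minimal sketch, where the database name, logical file name, path, and sizes are all hypothetical:

    -- Add a second data file to the default (PRIMARY) filegroup.
    -- SQL Server then spreads new allocations across both files
    -- in proportion to their free space.
    ALTER DATABASE MyERPDatabase
    ADD FILE
    (
        NAME = 'ERP_Data2',                 -- logical name (hypothetical)
        FILENAME = 'E:\Data\ERP_Data2.ndf', -- physical path (hypothetical)
        SIZE = 10GB,
        FILEGROWTH = 1GB
    )
    TO FILEGROUP [PRIMARY]

Note that this only affects new allocations; existing pages stay in the original file.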
niallporter
Starting Member
4 Posts
Posted - 2006-11-29 : 05:49:29
Hi Westley, thanks for the reply.

Sorry, I should have been a little more specific. At present the system databases (including tempdb) are on one array, the data files for our user databases are on another, and all the log files for system and user databases are on a third. Each array is on a separate controller (one embedded on the motherboard and two PCI-X cards).

We are using SQL 2000 Enterprise on Windows 2003 Enterprise. AWE is enabled in SQL Server, it is configured to use 6GB of RAM, and Performance Monitor reports it's using about that.

I know the bottleneck is in the I/O subsystem because we commonly see the disk read and write queue performance counters showing work waiting. When a user is running a large Bill of Materials calculation in the ERP app we often see their process blocking other processes, and at the same time we observe the I/O queue counters building up. It's definitely I/O capacity to/from the user database data file array that's limiting server throughput!

I'm afraid, though, that your post doesn't really answer my question. I know I can add another file to the current filegroup and SQL Server will begin using it, but how do I get it to rearrange all the data so the existing stuff is striped across the two files?
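As a cross-check on the Performance Monitor counters, SQL 2000 can report cumulative I/O stalls per database file via fn_virtualfilestats; a sketch, where the database name is hypothetical and FileId 1 is the first data file:

    -- Per-file I/O statistics (SQL 2000 syntax).
    -- IoStallMS is the total time, in milliseconds, that requests
    -- have waited on I/O against this file since the server started.
    SELECT FileId, NumberReads, NumberWrites, IoStallMS
    FROM ::fn_virtualfilestats(DB_ID('MyERPDatabase'), 1)

A large IoStallMS on the user database's data file relative to the other files would corroborate that the data array is the choke point.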
mr_mist
Grunnio
1870 Posts
Posted - 2006-11-29 : 07:37:51
Make sure that your files are the same size in each case; SQL Server allocates to files in proportion to their free space, so unevenly sized files fill unevenly.

-------
Moo. :)
Westley
Posting Yak Master
229 Posts
Posted - 2006-11-29 : 21:12:06
Like Moo said, make sure your files are the same size, as SQL Server picks the file with the most free space to use first (correct me if I'm wrong, but if I remember correctly that came from MS :)).

And you cannot "make" SQL Server use both files for every single table; there is no way to "force" it to. If I/O really is your final finding, what you should do is create another filegroup and add files there, then build your largest or busiest tables in one filegroup and their indexes in the other. If indexes are not really an issue but joined tables are, then put those tables in different filegroups. That way SQL Server reads the data from different disks, the I/O doesn't contend on a single disk, and the overall speed will be better.

You may need to find out what the Bill of Materials part of the ERP app is doing: which SPs it calls and which tables it accesses. It may take some trial and error to find the best layout, but in general, splitting the most heavily joined tables into different filegroups will surely help (make sure they are on different disks, otherwise there's no point).

If I/O is really your issue, have you ever thought of upgrading to a SAN or DAS? And since reads are your issue (the Bill of Materials calculation should be read-only, I assume), RAID 5 should be fine for that. What is your average disk queue length (read and write), how many disks are in your data RAID 5, and what are your Avg. Disk sec/Read and sec/Write?
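A rough sketch of the tables-on-one-filegroup, indexes-on-another idea; every name here (database, filegroups, files, table) is hypothetical:

    -- Two filegroups, each backed by a file on a different array.
    ALTER DATABASE MyERPDatabase ADD FILEGROUP [FG_Tables]
    ALTER DATABASE MyERPDatabase ADD FILEGROUP [FG_Indexes]

    ALTER DATABASE MyERPDatabase
    ADD FILE (NAME = 'ERP_Tables1', FILENAME = 'E:\Data\ERP_Tables1.ndf')
    TO FILEGROUP [FG_Tables]

    ALTER DATABASE MyERPDatabase
    ADD FILE (NAME = 'ERP_Indexes1', FILENAME = 'F:\Data\ERP_Indexes1.ndf')
    TO FILEGROUP [FG_Indexes]
    GO

    -- A busy table on one filegroup, its nonclustered index on the other,
    -- so base-table I/O and index I/O hit different spindles.
    CREATE TABLE dbo.BomLine
    (
        BomLineID int NOT NULL PRIMARY KEY,  -- clustered PK stays with the table
        PartID    int NOT NULL,
        Qty       decimal(9,2) NOT NULL
    ) ON [FG_Tables]

    CREATE NONCLUSTERED INDEX IX_BomLine_PartID
        ON dbo.BomLine (PartID) ON [FG_Indexes]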
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts
Posted - 2006-11-29 : 23:14:08
One simple way to split the data is to create two new files in the current filegroup, one on each disk, and then force SQL Server to move the data into the two new files by executing DBCC SHRINKFILE with the EMPTYFILE option on the current file. Once the current file is empty, you can remove it. A big advantage of this method is that you can move the data while the DB is active and being used.

Make sure the new files both have enough space to hold all the existing data with about 50% overhead before starting the DBCC SHRINKFILE, so that they do not have to grow.

I hope that you haven't made the mistake of putting all the data in the original file in the primary filegroup. If you have, you have a more difficult task, because you cannot remove that file. In that case you will have to create a new filegroup to move the tables to, and will probably need downtime to do it.

CODO ERGO SUM
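A sketch of this approach, assuming the data currently lives in a single non-primary file in a user filegroup; the database, filegroup, file names, paths, and sizes are all hypothetical:

    -- 1. Add two new files to the current filegroup, one per array.
    --    Pre-size them so they will not have to grow during the move.
    ALTER DATABASE MyERPDatabase
    ADD FILE (NAME = 'ERP_Data2', FILENAME = 'E:\Data\ERP_Data2.ndf', SIZE = 30GB)
    TO FILEGROUP [UserData]

    ALTER DATABASE MyERPDatabase
    ADD FILE (NAME = 'ERP_Data3', FILENAME = 'F:\Data\ERP_Data3.ndf', SIZE = 30GB)
    TO FILEGROUP [UserData]
    GO

    -- 2. Move every page out of the old file. EMPTYFILE also stops
    --    SQL Server from placing new allocations in it.
    DBCC SHRINKFILE ('ERP_Data1', EMPTYFILE)
    GO

    -- 3. Drop the now-empty file (this fails for the primary file).
    ALTER DATABASE MyERPDatabase REMOVE FILE ERP_Data1

Because the pages are moved incrementally, the database stays online throughout, as noted above.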
Kristen
Test
22859 Posts
Posted - 2006-11-30 : 01:56:21
"I hope that you haven’t made the mistake of putting all the data in original file in the primary filegroup."That would be the default, right? Unless you put ON MyFileGroupafter each CREATE TABLE statement?So would it be a good idea to do that as a standard procedure? Or just when you want to start rearranging stuff?Kristen |
Westley
Posting Yak Master
229 Posts
Posted - 2006-11-30 : 05:10:01
As standard, I always leave the filegroup created with the database for system objects only, then create another filegroup myself and set it as the default, so all new tables are created in my filegroup instead. Then I can do whatever I like without having to go back to anyone about code changes... or anything to do with releases... :)

It's a bit more work when creating the DB, but it's only a script anyway :)
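A sketch of that setup; the database, filegroup, and file names are hypothetical:

    -- Reserve PRIMARY for system objects and make a user filegroup
    -- the default, so new tables land there automatically.
    ALTER DATABASE MyERPDatabase ADD FILEGROUP [UserData]

    ALTER DATABASE MyERPDatabase
    ADD FILE (NAME = 'ERP_User1', FILENAME = 'E:\Data\ERP_User1.ndf')
    TO FILEGROUP [UserData]

    ALTER DATABASE MyERPDatabase MODIFY FILEGROUP [UserData] DEFAULT
    GO

    -- From here on, CREATE TABLE without an ON clause goes to UserData.
    CREATE TABLE dbo.Example (ExampleID int NOT NULL)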
Kristen
Test
22859 Posts
Posted - 2006-11-30 : 05:24:22
Good idea