
 Article: Benchmarking Disk I/O Performance: Size Matters!


AskSQLTeam
Ask SQLTeam Question

0 Posts

Posted - 2007-10-30 : 07:24:34

This article discusses the impact of test file size when benchmarking SQL Server disk performance.



Read Benchmarking Disk I/O Performance: Size Matters!

powell
Starting Member

3 Posts

Posted - 2007-11-02 : 01:03:22
Good article. What happens when you take a huge file and separate it into several smaller files? Is your I/O guaranteed to improve? For instance, if you have a huge mdf file, say 500GB, and you notice high disk queue length, and you break that file into four data files that are all still on the same logical drive, will your values for disk queue length drop?

lshea
Starting Member

5 Posts

Posted - 2007-11-02 : 11:15:07
If everything else remains the same, having multiple files would not change the disk queue length. After all, as far as the disk drive is concerned, you are doing the same number of I/Os. In fact, you can test this out with either IOMeter or sqlio.exe, both of which allow you to specify multiple test files.
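
For example, something along these lines should work with sqlio (just a sketch; the file names, thread counts, and sizes are placeholders), since the param file takes one line per test file:

D:\sqlio_testfile1.dat 2 0x0 500
D:\sqlio_testfile2.dat 2 0x0 500

sqlio -kR -s10 -frandom -o8 -b8 -LS -Fparam_multi.txt

Run the same options against a single 1000 MB file and compare; the disk queue length and IOs/sec should come out essentially the same.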

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2007-11-02 : 12:32:21
I think it is useful to have a standard test setup that you use to run SQLIO. I created a command file, TEST_DISKS.BAT, that I use to run a series of 16 tests covering sequential and random, read and write, at four different I/O sizes.

I usually create a param file with a test file size of at least 500 MB. I create the param file with 4 or more threads, depending on the number of CPU cores on the server; you should have at least one thread per CPU core. See the sample file PARAM_D.TXT below; the columns are the test file path, the number of threads, a CPU affinity mask, and the test file size in MB.

The example command shown in the bat file sends the test output to a file. I usually run a test of each disk on a server when I first set it up, and leave the output files in the C:\Program Files\SQLIO directory. That way, if a problem develops later, I have the old benchmarks to compare current performance against.



Text of parameter file PARAM_D.TXT:
D:\sqlio_testfile.dat 4 0x0 500


Text of file TEST_DISKS.BAT:

@echo on
rem Sample run
rem c:
rem cd "Program files\SQLIO"

rem Example of how to run test for disk D
rem TEST_DISKS.BAT D >TEST_D_OUTPUT.TXT

@echo *** Start of Test for Disk %1% ***

@echo Short throwaway test to set up the workfile
sqlio -kR -s5 -fsequential -o8 -b8 -LS -Fparam_%1%.txt


@echo Sequential Read 8K
sqlio -kR -s10 -fsequential -o8 -b8 -LS -Fparam_%1%.txt

@echo Sequential Read 64K
sqlio -kR -s10 -fsequential -o8 -b64 -LS -Fparam_%1%.txt

@echo Sequential Read 128K
sqlio -kR -s10 -fsequential -o8 -b128 -LS -Fparam_%1%.txt

@echo Sequential Read 256K
sqlio -kR -s10 -fsequential -o8 -b256 -LS -Fparam_%1%.txt

@echo Sequential Write 8K
sqlio -kW -s10 -fsequential -o8 -b8 -LS -Fparam_%1%.txt

@echo Sequential Write 64K
sqlio -kW -s10 -fsequential -o8 -b64 -LS -Fparam_%1%.txt

@echo Sequential Write 128K
sqlio -kW -s10 -fsequential -o8 -b128 -LS -Fparam_%1%.txt

@echo Sequential Write 256K
sqlio -kW -s10 -fsequential -o8 -b256 -LS -Fparam_%1%.txt

@echo Random Read 8K
sqlio -kR -s10 -frandom -o8 -b8 -LS -Fparam_%1%.txt

@echo Random Read 64K
sqlio -kR -s10 -frandom -o8 -b64 -LS -Fparam_%1%.txt

@echo Random Read 128K
sqlio -kR -s10 -frandom -o8 -b128 -LS -Fparam_%1%.txt

@echo Random Read 256K
sqlio -kR -s10 -frandom -o8 -b256 -LS -Fparam_%1%.txt

@echo Random Write 8K
sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam_%1%.txt

@echo Random Write 64K
sqlio -kW -s10 -frandom -o8 -b64 -LS -Fparam_%1%.txt

@echo Random Write 128K
sqlio -kW -s10 -frandom -o8 -b128 -LS -Fparam_%1%.txt

@echo Random Write 256K
sqlio -kW -s10 -frandom -o8 -b256 -LS -Fparam_%1%.txt

@echo End of Test for Disk %1%

@echo Delete workfile %1%:\sqlio_testfile.dat

del %1%:\sqlio_testfile.dat /f /q

@echo *** Test Done ***






CODO ERGO SUM

powell
Starting Member

3 Posts

Posted - 2007-11-02 : 13:28:41
<<If everything else remains the same, having multiple files would not change the disk queue length.>>

Splitting the files should give SQL Server the go-ahead to perform parallel operations on those files, but you're saying that it doesn't matter because it will still slow down waiting for the disk anyway.

Haywood
Posting Yak Master

221 Posts

Posted - 2007-11-02 : 14:01:07
It's definitely interesting to note the performance drop when you get to the 8GB file size. I'm glad I use 12GB file sizes as a standard of measure, but the fact that it drops off so dramatically points out that I (we) need to do more thorough performance testing up front to find out what our own systems' performance drop-off points are.

lshea
Starting Member

5 Posts

Posted - 2007-11-03 : 00:41:34
quote:
Originally posted by powell
Splitting the files should give SQL Server the go-ahead to perform parallel operations on those files, but you're saying that it doesn't matter because it will still slow down waiting for the disk anyway.



Whether or not SQL Server benefits from multiple files is a different issue. For instance, having multiple files for tempdb may prove to be good for performance. But that has everything to do with SQL Server internal contention on tempdb system structures, and has little to do with disk I/O performance.

Linchi

powell
Starting Member

3 Posts

Posted - 2007-11-04 : 11:18:36
quote:

Whether or not SQL Server benefits from multiple files is a different issue. For instance, having multiple files for tempdb may prove to be good for performance. But that has everything to do with SQL Server internal contention on tempdb system structures, and has little to do with disk I/O performance.

Linchi



OK, I guess what I'm trying to ascertain is whether I stand to gain any performance benefit whatsoever by splitting into multiple files, with all other things remaining the same. I know this has no effect on I/O, but my theory was that, let's say I have four huge tables in the database, so I go ahead and separate the database into four files. Now when SQL Server needs to execute a query that accesses only one of those tables, it has a smaller file to look through, so the performance might be better. In this case, the issue I'm trying to resolve is high disk queue values for read operations on a huge file. Write performance does not seem to be the issue.

Cheers,

Bill

lshea
Starting Member

5 Posts

Posted - 2007-11-07 : 11:03:08
quote:
Originally posted by Michael Valentine Jones

I think it is useful to have a standard test setup that you use to run SQLIO. I created a command file, TEST_DISKS.BAT, that I use to run a series of 16 tests for sequential/random, read/write for different I/O sizes.

...
CODO ERGO SUM



I use sqlio.exe in similar ways. To save you time in picking out the I/O metrics from your log files, you may want to take a look at this little script:
http://sqlblog.com/blogs/linchi_shea/archive/2007/02/21/parse-the-sqlio-exe-output.aspx
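
If you just want a quick look without the script, something like this should pull the headline numbers out of the output files your TEST_DISKS.BAT produces (assuming the TEST_D_OUTPUT.TXT name from your example):

findstr /C:"IOs/sec" /C:"MBs/sec" TEST_D_OUTPUT.TXT

The script at the link above does a more complete job of parsing the output.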

Linchi

ynakache
Starting Member

1 Post

Posted - 2007-11-27 : 04:54:20
Hi all,

I wrote a C# WinForms app that executes sqlio.exe and uses RedirectStandardOutput to parse the output. It builds a DataGridView with the results and settings, and you can later save the results to an Excel or CSV file.

download from : http://www.usaupload.net/d/4qmkm6bg5k4

Comments, tips, and bug reports: send to nakache AT gmail dot com
   
