
Source: http://www.mssqltips.com/sqlservertip/2127/benchmarking-sql-server-io-with-sqlio/

Problem

During a recent SQL Server install everything seemed to go fine. SQL Server 2008 R2 went in with no problems. There was a local drive with a RAID 1 set for the OS and system databases and two direct attached RAID 5 sets: one for transaction log files and one for data files. Was everything okay? Could it be better? How could I tell? In this tip we look at how to maximize your IO throughput.

Solution

There are many factors that go to make up the performance of SQL Server: CPU, RAM, disk I/O performance and, most importantly, the T-SQL code and queries. The Control Panel Device Manager tells you how fast the CPUs are, how much memory is available and the memory speed. The performance of queries is very important, but that's the subject of numerous other articles. It's easy to tell the size of the disks, but not so easy to understand how many I/O operations the disks are capable of achieving. The variety of technologies that can be employed between the Windows operating system (OS) and the physical drives, such as controllers, HBAs, SANs, virtualization and iSCSI, clouds any analysis based on the hardware specs alone.

While disk drives are often purchased to provide the right amount of space, the I/O capacity of the drives is often neglected. This leaves SQL Server in the unfortunate position of needing more I/O operations than the disks can provide. The least that we can do is get the most out of the drives that we have and understand their capacity. That's done with benchmarking.

My favorite tool for benchmarking is the free tool SQLIO, which can be downloaded from Microsoft's download site. There are alternatives, such as IOMeter. IOMeter was originally written by Intel, but is now open source and can be downloaded from www.IOMeter.org. It has the advantage that there are versions for Linux, Solaris and even NetWare. Since SQL Server runs on Windows, all we need is SQLIO, and becoming familiar with a single tool has a lot of advantages. There are other tools from Microsoft, such as SQLIOSim and SQLIOStress, that validate the robustness rather than the performance of disk systems.

After running setup, find the install directory, which is usually "c:\Program Files\SQLIO" on 32 bit systems or "c:\Program Files (x86)\SQLIO" on 64 bit systems. For Intel Windows, SQLIO is a 32 bit program. There's an advanced version for Alpha that's a 64 bit program.

Configure Param.txt

The first step is to modify the file param.txt to tell SQLIO where to find its test file, which is named testfile.dat. By default the file will be in the same directory as the program file, but you probably don't want to benchmark your C: drive. Instead I'm going to benchmark the L: drive, one of the direct attached RAID 5 sets.

Param.txt has four parameters on each line.

Parameter | Description | Value
file name | Name of the test file. | L:\testfile.dat
number of threads | Size of the thread pool. This will be overridden by the command line later. | 8
mask | Affinity mask to bind operations against this file to particular CPUs. I don't use this and just use 0x0 for all CPUs. | 0x0
file size | The size in MB. This should be roughly the size of your database and always larger than your cache, unless you want to test just the cache. I usually use a 100 GB file, but 20 GB (20480 MB) might be okay if you're short on space. | 102400

I generally test only one drive at a time, so I only put one line into this file. You can put in several lines, each with its own test file. Lines beginning with # are treated as comments. My typical param.txt file for testing the L: drive would be:

L:\testfile.dat 8 0x0 102400
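
If I wanted to benchmark two drives in the same run, param.txt would simply gain a line per test file. A sketch of that multi-line format, where the M: drive is hypothetical and only illustrates the layout:

# test files for the L: and M: drives (M: is hypothetical)
L:\testfile.dat 8 0x0 102400
M:\testfile.dat 8 0x0 102400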

Create Testfile

The next step is to run SQLIO once, so that it can create the testfile.dat. This takes a long time to run and I'd rather keep this run separate from any test runs.

sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt

You'll see output like this:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
 sqlio v1.5.SG
 parameter file used: param.txt
 file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
 2 threads writing for 5 secs to file l:\testfile.dat
 using 64KB sequential IOs
 enabling multiple I/Os per thread with 4 outstanding
 size of file l:\testfile.dat needs to be: 107374182400 bytes
 current file size: 0 bytes
 need to expand by: 107374182400 bytes
 expanding l:\testfile.dat ...

Expansion on my system took 15 minutes, so be patient while waiting for this to complete.

Let's run down the most important command line switches that we'll be using.

Switch | Description | Example
-d | The drive or drives to work on. There should already be a testfile.dat on that drive, as specified in param.txt. There can be multiple drive letters, such as in "-dEFG". I test only one drive at a time. | -dL
-B | Controls buffering. N for no buffering, H for only hardware controller buffering, S for Windows software buffering, or Y for both hardware and software buffering. To match what SQL Server is doing, use -BH for just hardware buffering. | -BH
-k | Operation to perform. W for write, R for read. | -kW
-f | Random or sequential I/O. -frandom chooses the block randomly across the file. -fsequential reads or writes sequentially. There are additional choices for testing specific block sizes. | -frandom
-t | Threads to run simultaneously. | -t2
-o | Outstanding requests on each thread. SQL Server threads issue read requests and then wait, so their typical depth is 1. However, the Lazy Writer issues many write requests. A switch of "-o8" would always keep 8 I/O operations queued on each thread. You'll have to balance the number of threads and the number of outstanding requests per thread in order to keep the disk subsystem busy. I usually increase this number until the disks become saturated. | -o8
-s | Seconds to run. This should be long enough to fill any cache and then build the queue to its maximum before processing levels into a steady state. 90 seconds is usually sufficient to get a good picture of what the disks are capable of. | -s90
-b | Block size of each I/O in kilobytes. SQL Server reads and writes data files in 64K blocks. | -b64

There are additional switches available, and a detailed description of each switch is included in the readme.txt file that serves as the documentation for SQLIO.
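
Putting the switches together, here is the shape of the test commands used below, annotated switch by switch; the rem lines are my own commentary and just restate the table above:

sqlio -dL -BH -kR -frandom -t8 -o8 -s90 -b64 testfile.dat
rem -dL       test the L: drive (testfile.dat must already exist there)
rem -BH       hardware controller buffering only, matching SQL Server
rem -kR       read test (W would be a write test)
rem -frandom  random I/O across the file
rem -t8       8 threads
rem -o8       8 outstanding requests per thread (64 in flight in total)
rem -s90      run for 90 seconds
rem -b64      64KB I/Os, the size SQL Server uses for data files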

Here's the complete output from the initial run of SQLIO used to create the test file:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 5 secs to file l:\testfile.dat
using 64KB sequential IOs
enabling multiple I/Os per thread with 4 outstanding
size of file l:\testfile.dat needs to be: 107374182400 bytes
current file size: 0 bytes
need to expand by: 107374182400 bytes
expanding l:\testfile.dat ... done.
using specified size: 102400 MB for file: l:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 2725.80
MBs/sec: 170.3

At the bottom we see the two most important metrics for analyzing the drives: Input/Output operations per second (IOs/sec) and Megabytes per second (MBs/sec). These two numbers are related by the block size used for the test. The L: drive in my tests is a RAID 5 set of four 7200 RPM SATA drives. The allocation unit size is 4096 and the partition offset is 32256. In the article Partition offset and allocation unit size of a disk for SQL Server I described how to determine these numbers and how they can be changed.
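
To make the relationship between the two metrics concrete: MBs/sec is just IOs/sec multiplied by the block size. For the creation run above:

2725.80 IOs/sec x 64 KB / 1024 ≈ 170.4 MBs/sec

which agrees with the reported 170.3.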

Testing

Now to test the L: drive I create a series of tests in a command file (.cmd) and let them run. The number of I/O requests that are outstanding at any one time is the product of two switches: -t for threads and -o for outstanding requests. The tests start with one outstanding request per thread, increase the number of threads until there is one per core in the server, and then increase the number of outstanding requests per thread. Each line doubles the total number of outstanding requests from the previous line. What I'm looking for is the maximum number of I/Os that the L: drive can handle.
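
Rather than typing each command by hand, the whole series can go in a .cmd file that appends every run's output to a single file for later analysis. A minimal sketch of such a harness for the random write series below; the results.txt name is my own choice, not anything SQLIO requires:

@echo off
rem Random write series: one outstanding request per thread,
rem doubling the thread count each run. Output accumulates in results.txt.
for %%T in (1 2 4 8) do (
    sqlio -dL -BH -kW -frandom -t%%T -o1 -s90 -b64 testfile.dat >> results.txt
)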

Random Writes Test

sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat

Here's the output from the run:

C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
1 thread writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 110.06
MBs/sec: 6.87
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
2 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.24
MBs/sec: 6.70
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
4 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 112.67
MBs/sec: 7.04
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
8 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.36
MBs/sec: 6.71

I've extracted the IOs per second and megabytes per second numbers into the table that follows. It looks like one outstanding request is all it takes for the drive to reach its maximum capacity for random writes, because all tests after the first have very similar performance. Random writes are used by the Lazy Writer and by Checkpoints to save pages to disk. The L: drive can perform roughly 110 of these each second. Since the block size switch (-b) was set to 64K, that results in around 7 megabytes per second.

Outstanding I/Os   IOs per Second   MB per second
1 110.06 6.87
2 107.24 6.70
4 112.67 7.04
8 107.36 6.71
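
If the runs were captured to a file as in the harness sketch above, the two metric lines can be pulled out with findstr:

findstr /C:"IOs/sec" /C:"MBs/sec" results.txt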

Random Reads Test

The next set of tests is for random reads. SQL Server does random reads when doing bookmark lookups or when reading from fragmented tables. Here are the tests that I ran:

sqlio -dL -BH -kR -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o128 -s90 -b64 testfile.dat

Here are the results extracted to a table:

Outstanding I/Os   IOs per Second   MB per second
1 100.83 6.30
2 149.90 9.36
4 204.07 12.75
8 261.24 16.32
16 315.76 19.73
32 366.78 22.92
64 420.82 26.30
128 453.46 28.34
256 486.76 30.42
512 488.14 30.50
1024 488.68 30.54

The L: drive plateaus at around 488 I/Os per second, which results in 30.50 MB/sec in throughput. That's an okay number, but we must remember that when there are a large number of bookmark lookups, that's all the physical I/Os that the L: drive can deliver. If the table is very large, the chances of a page being cached are small and it may take one physical read for each lookup, no matter the available memory or cache.

Sequential Reads Test

Performance is best when SQL Server is reading sequentially rather than randomly. This is achieved when SQL Server is able to do seeks or scans on indexes or heaps that are not fragmented. To test sequential reads I'll run these tests:

sqlio -dL -BH -kR -fsequential -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o128 -s90 -b64 testfile.dat

Here are the results.

Outstanding I/Os   IOs per Second   MB per second
1 3646 227
2 5497 343
4 5469 341
8 5550 346
16 5503 343
32 5577 348
64 5574 348
128 5637 352
256 5611 350
512 5673 354
1024 5629 351

Sequential reads are faster than random reads, because the disk heads often don't have to move to satisfy the request. On the L: drive the number of reads climbs to around 5,600 per second and plateaus there.

Sequential Writes Test

There are a variety of times when SQL Server is able to write sequentially. One of them is when it writes to the transaction log file. If there is only one log file on the drive, all writes are to the end of the file and the disk heads don't have to move. I've changed the parameters to use 8K blocks (-b8) to be closer to how log files are used in SQL Server.

sqlio -dL -BH -kW -fsequential -t1 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t2 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t4 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o2 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o4 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o8 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o16 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o32 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o64 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o128 -s90 -b8 testfile.dat

Here are the results:

Outstanding I/Os   IOs per Second   MB per second
1 11,510 90
2 14,888 116
4 15,030 117
8 14,911 116
16 15,006 117
32 14,931 116
64 15,062 117

Each of these I/Os is 8K, smaller than the 64K I/Os used in the tests further above. The number of operations rises rapidly to around 15,000 per second and plateaus at 117 megabytes per second (15,000 x 8 KB / 1024 ≈ 117 MB/sec).

Conclusion

One of the keys to achieving this performance is that the SQLIO test is the only activity on the drive while the test is being run. Similar results will pertain to SQL Server, so long as the drive isn't shared with other activity. The numbers from these tests can be compared to runs with other ways of formatting the drive, different RAID settings, and other drives to get an idea of expected performance and to help optimize the drive. I also keep these results on hand so that if I later suspect something is wrong with a drive, I can re-run the tests and quickly verify whether or not performance has changed.
