2011. 12. 27. 23:45 Brain Trainning/DataBase

Source: http://www.mssqltips.com/sqlservertip/2127/benchmarking-sql-server-io-with-sqlio/

Problem

During a recent SQL Server install everything seemed to go fine. SQL Server 2008 R2 went in with no problems. There was a local drive with a RAID 1 set for the OS and system databases and two direct attached RAID 5 sets; one for transaction log files and one for data files. Was everything okay? Could it be better? How could I tell? In this tip we look at how to maximize your IO throughput.

Solution

There are many factors that go into the performance of SQL Server: CPU, RAM, disk I/O performance and, most importantly, the T-SQL code and queries. The Control Panel Device Manager tells you how fast the CPUs are, how much memory is available and the memory speed. The performance of queries is very important, but that's the subject of numerous other articles. It's easy to tell the size of a disk, but not so easy to understand how many I/O operations the disk is capable of achieving. The variety of technologies, such as controllers, HBAs, SANs, virtualization and iSCSI, that can sit between the Windows operating system (OS) and the physical drives is wide and clouds any analysis based on the hardware specs.

While disk drives are often purchased to provide the right amount of space, the I/O capacity of the drives is often neglected. This leaves SQL Server in the unfortunate position of needing more I/O operations than the disks can provide. The least that we can do is get the most out of the drives that we have and understand their capacity. That's done with benchmarking.

My favorite tool for benchmarking is the free tool SQLIO from Microsoft, which can be downloaded from Microsoft downloads here. There are alternatives, such as IOMeter. IOMeter was originally written by Intel, but is now open source and can be downloaded from www.IOMeter.org. It has the advantage that there are versions for Linux, Solaris and even Netware. Since SQL Server runs on Windows, all we need is SQLIO, and becoming familiar with a single tool has a lot of advantages. There are other tools from Microsoft, such as SQLIOSim and SQLIOStress, that validate the robustness rather than the performance of disk systems.

After running setup, find the install directory, which is usually "c:\Program Files\SQLIO" on 32 bit systems or "c:\Program Files (x86)\SQLIO" on 64 bit systems. For Intel Windows, SQLIO is a 32 bit program. There's an advanced version for Alpha that's a 64 bit program.

Configure Param.txt

The first step is to modify the file param.txt to tell SQLIO where to find its test file which is named testfile.dat. By default the file will be in the same directory as the program file, but you probably don't need to benchmark your C: drive. Instead I'm going to benchmark the L: drive, one of the direct attached RAID 5 sets.

Param.txt has four parameters on each line:

Parameter | Description | Value
file name | Name of the test file. | L:\testfile.dat
number of threads | Size of the thread pool. This will be overridden by the command line later. | 8
mask | Affinity mask to bind operations against this file to particular CPUs. I don't use this and just use 0x0 for all CPUs. | 0x0
file size | The size in MB. This should be roughly the size of your database and always larger than your cache, unless you want to test just the cache. I usually use a 100 GB file, but 20 GB (20480 MB) might be okay if you're short on space. | 102400

I generally test only one drive at a time, so I only put one line into this file. You can put in several lines, each for its own test file. Lines starting with # are treated as comments. My typical param.txt file for testing the L: drive would be:

L:\testfile.dat 8 0x0 102400
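As a quick sanity check, the four-field format can be composed programmatically. This is a sketch using a hypothetical helper of my own (SQLIO itself only reads the file; it ships no such tool):

```python
# Hypothetical helper: compose a one-line SQLIO param.txt entry.
# The four fields (file path, thread-pool size, affinity mask, size in MB)
# mirror the parameter table above.

def param_line(path, threads=8, mask="0x0", size_mb=102400):
    """Return one param.txt line: <file> <threads> <mask> <size MB>."""
    return f"{path} {threads} {mask} {size_mb}"

# Writes the same line shown above for the L: drive test file.
with open("param.txt", "w") as f:
    f.write(param_line(r"L:\testfile.dat") + "\n")
```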

Create Testfile

The next step is to run SQLIO once, so that it can create the testfile.dat. This takes a long time to run and I'd rather keep this run separate from any test runs.

sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt

You'll see output like this:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
 sqlio v1.5.SG
 parameter file used: param.txt
 file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
 2 threads writing for 5 secs to file l:\testfile.dat
 using 64KB sequential IOs
 enabling multiple I/Os per thread with 4 outstanding
 size of file l:\testfile.dat needs to be: 107374182400 bytes
 current file size: 0 bytes
 need to expand by: 107374182400 bytes
 expanding l:\testfile.dat ...

Expansion on my system took 15 minutes, so be patient while waiting for this to complete.

Let's run down the most important command line switches that we'll be using.

Switch | Description | Example
-d | The drive or drives to work on. There should already be a testfile.dat on that drive as specified in param.txt. There can be multiple drive letters, such as "-dEFG". I test only one drive at a time. | -dL
-B | Controls buffering. N for no buffering, H for only hardware controller buffering, S for Windows software buffering, or Y for both hardware and software buffering. To match what SQL Server is doing, use -BH for just hardware buffering. | -BH
-k | Operation to perform. W for write, R for read. | -kW
-f | Random or sequential I/O. -frandom chooses the block randomly across the file. -fsequential reads or writes sequentially. There are additional choices for testing specific block sizes. | -frandom
-t | Threads to run simultaneously. | -t2
-o | Outstanding requests on each thread. SQL Server threads issue read requests and then wait, so their typical depth is 1. However, the Lazy Writer issues many write requests, so a switch of "-o8" would always keep 8 I/O operations queued on each thread. You'll have to balance the number of threads against the number of outstanding requests per thread to keep the disk subsystem busy. I usually increase this number until the disks become saturated. | -o8
-s | Seconds to run. This should be long enough to fill any cache and then build the queue to its maximum before processing levels into a steady state. 90 seconds is usually sufficient to get a good picture of what the disks are capable of. | -s90
-b | Block size of each I/O in kilobytes. SQL Server reads and writes to data files in 64K blocks. | -b64

There are additional switches available, and a detailed description of each switch is included in the readme.txt file that serves as the documentation for SQLIO.

Here's the complete output from the initial run of SQLIO used to create the test file:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 5 secs to file l:\testfile.dat
using 64KB sequential IOs
enabling multiple I/Os per thread with 4 outstanding
size of file l:\testfile.dat needs to be: 107374182400 bytes
current file size: 0 bytes
need to expand by: 107374182400 bytes
expanding l:\testfile.dat ... done.
using specified size: 102400 MB for file: l:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 2725.80
MBs/sec: 170.3

At the bottom we see the two most important metrics for analyzing the drives: Input/Output operations per second (IOs/sec) and Megabytes per second (MBs/Sec). These two numbers are related by the block size used for the test. The L drive in my tests is a RAID 5 set of four 7200 RPM SATA drives. The allocation unit size is 4096 and the partition offset is 32256. In the article Partition offset and allocation unit size of a disk for SQL Server I described how to determine these numbers and how these can be changed.
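The relationship is simple arithmetic: MB/s equals IOs/sec times the block size in KB, divided by 1024. A quick check of the numbers above (my own verification, not part of SQLIO):

```python
def mbs_from_iops(iops, block_kb):
    """MB/s implied by an IOPS figure at a given I/O block size in KB."""
    return iops * block_kb / 1024.0

# 2725.80 64KB IOs/sec from the run above works out to ~170.4 MB/s,
# consistent with the reported 170.3 (SQLIO truncates rather than rounds).
print(mbs_from_iops(2725.80, 64))
```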

Testing

Now to test the L: drive I create a series of tests in a command file (.cmd) and let them run. The number of I/O requests outstanding at any one time is the product of two switches: -t for threads and -o for outstanding requests. The tests start with one outstanding request per thread and increase the number of threads until there is one per core in the server, then increase the number of outstanding requests per thread after that. Each line doubles the number of outstanding requests from the previous line. What I'm looking for is the maximum number of I/Os that the L: drive can handle.

Random Writes Test

sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat

Here's the output from the run:

C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
1 thread writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 110.06
MBs/sec: 6.87
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
2 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.24
MBs/sec: 6.70
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
4 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 112.67
MBs/sec: 7.04
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
8 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.36
MBs/sec: 6.71

I've extracted the IOs per second and megabytes per second numbers into the table that follows. It looks like one outstanding request is all that it takes for the drive to reach its maximum capacity for random writes, because all tests after the first have very similar performance. Random writes are used by the Lazy Writer and by Checkpoints to save pages to disk. The L: drive can perform roughly 110 of these each second. Since the block size switch (-b) was set to 64K, that results in around 7 megabytes per second.

Outstanding I/Os | IOs per Second | MB per Second
1 | 110.06 | 6.87
2 | 107.24 | 6.70
4 | 112.67 | 7.04
8 | 107.36 | 6.71

Random Reads Test

The next set of tests is for random reads. SQL Server does random reads when doing bookmark lookups or when reading from fragmented tables. Here are the tests that I ran:

sqlio -dL -BH -kR -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o128 -s90 -b64 testfile.dat
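Note that the first column of the results table that follows is the total queue depth, i.e. -t times -o for each command above. A quick check (my own arithmetic):

```python
# Total outstanding I/Os for each random-read command above is the
# product of -t (threads) and -o (outstanding requests per thread).
runs = [(1, 1), (2, 1), (4, 1), (8, 1), (8, 2), (8, 4),
        (8, 8), (8, 16), (8, 32), (8, 64), (8, 128)]
totals = [t * o for t, o in runs]
print(totals)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```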

Here are the results extracted to a table:

Outstanding I/Os | IOs per Second | MB per Second
1 | 100.83 | 6.30
2 | 149.90 | 9.36
4 | 204.07 | 12.75
8 | 261.24 | 16.32
16 | 315.76 | 19.73
32 | 366.78 | 22.92
64 | 420.82 | 26.30
128 | 453.46 | 28.34
256 | 486.76 | 30.42
512 | 488.14 | 30.50
1024 | 488.68 | 30.54

The L: drive plateaus at around 488 I/Os per second, which results in 30.50 MB/sec in throughput. That's an okay number, but we must remember that when there are a large number of bookmark lookups, that's all the physical I/Os that the L: drive can deliver. If the table is very large, the chances of a page being cached are small, and it may take one physical read for each lookup no matter the available memory or cache.

Sequential Reads Test

Performance is best when SQL Server is reading sequentially rather than randomly. This is achieved when SQL Server is able to do seeks or scans on indexes or heaps that are not fragmented. To test sequential reads I'll run these tests:

sqlio -dL -BH -kR -fsequential -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o128 -s90 -b64 testfile.dat

Here are the results.

Outstanding I/Os | IOs per Second | MB per Second
1 | 3646 | 227
2 | 5497 | 343
4 | 5469 | 341
8 | 5550 | 346
16 | 5503 | 343
32 | 5577 | 348
64 | 5574 | 348
128 | 5637 | 352
256 | 5611 | 350
512 | 5673 | 354
1024 | 5629 | 351

Sequential reads are faster than random reads, because the disk heads often don't have to move to satisfy the request. On the L: drive the number of reads climbs to around 5600 and plateaus there.

Sequential Writes Test

There are a variety of times when SQL Server is able to write sequentially. One of them is when it writes to the transaction log file. If there is only one log file on the drive, all writes are to the end of the file and the disk heads don't have to move. I've changed the parameters to use 8K blocks (-b8) to be closer to how log files are used in SQL Server.

sqlio -dL -BH -kW -fsequential -t1 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t2 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t4 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o2 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o4 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o8 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o16 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o32 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o64 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o128 -s90 -b8 testfile.dat

Here are the results:

Outstanding I/Os | IOs per Second | MB per Second
1 | 11,510 | 90
2 | 14,888 | 116
4 | 15,030 | 117
8 | 14,911 | 116
16 | 15,006 | 117
32 | 14,931 | 116
64 | 15,062 | 117

Each of these I/Os is 8K, smaller than the 64K I/Os used in the tests further above, and the number of operations rises rapidly to 15,000 and plateaus there, at 117 megabytes per second.

Conclusion

One of the keys to achieving this performance is that the SQLIO test was the only activity on the drive while each test was being run. Similar results will pertain to SQL Server, so long as the drive isn't shared with other activity. The numbers from these tests can be compared with tests run against other ways of formatting the drive, different RAID settings, and other drives to get an idea of the expected performance and help optimize the drive. I also keep these results on hand to re-run if there is ever a suspicion that something is wrong with a drive. The tests can be re-run and I can quickly verify whether or not performance has changed.

posted by LifeisSimple
2011. 3. 25. 16:51 Brain Trainning/Storage
Source: http://social.technet.microsoft.com/wiki/contents/articles/sqlio-disk-subsystem-benchmark-tool-troubleshooting-hangs-and-errors.aspx

SQLIO Disk Subsystem Benchmark Tool: Troubleshooting Hangs and Errors

The SQLIO Disk Subsystem Benchmark Tool is a tool provided by Microsoft (SQLIO from Microsoft Download), which can be used to determine the I/O capacity of a given configuration. However, it comes with the disclaimer that SQLIO is provided ‘as is’ and there is no support offered for any problems encountered when using the tool. Please refer to the EULA.doc for the license agreement prior to using this tool.

Tips on how to prevent or resolve issues when running the SQLIO tool, especially on Windows 7 and Windows Server 2008 R2:

  1. Ensure that you have created an exception for SQLIO in Data Execution Prevention (DEP). In Windows 7 or Windows Server 2008 R2, you can create exceptions in DEP by opening System Properties (sysdm.cpl) and then clicking Advanced. Then, in Performance, click Settings, and click Data Execution Prevention. If "Turn on DEP for all programs and services except those I select" is selected, click Add, then type the full path to SQLIO.EXE, including SQLIO.EXE. For example, C:\Program Files (x86)\SQLIO\SQLIO.EXE. If the other option, "Turn on DEP for essential Windows programs and services only", is selected, you do not need to create an exception.

  • If your SQLIO commands contain -fsequential and SQLIO is crashing, ensure you define a params.txt file that sets the test file size before testing.
  • Ensure that all programs that are operating or could operate on the target drive for testing are off or suspended during the test. Two common examples include:
    1. Ensure that you configure exceptions in any virus scanning software that you have running or that could run during your test for both SQLIO.EXE and the target volume of the test.
    2. Ensure that you disable any drive indexing software. 


posted by LifeisSimple
2011. 3. 23. 13:50 Brain Trainning/Storage

Source: http://www.sqlteam.com/article/benchmarking-disk-io-performance-size-matters

Benchmarking Disk I/O Performance: Size Matters!

By Linchi Shea on 30 October 2007 | Tags: Performance Tuning, Disk Tuning


This article discusses the impact of test file size when benchmarking SQL Server disk performance. (Editor's Note: This is our first article by Linchi Shea. Linchi is a SQL Server MVP and an expert on SQL Server hard drive performance. The article addresses the impact of test file size when using SQLIO.EXE to test hard drive performance and how an incorrect file size can give misleading results.)

If you use a tool such as IOMeter or sqlio.exe--both of which are freely downloadable--to benchmark the I/O performance of a disk, you typically would configure the tool to generate I/Os against a test file that is placed on the disk.

For instance, with sqlio.exe the test file is specified on the command line as follows:

sqlIO -kR -t1 -s30 -dE -o32 -frandom -b8 -BH -LS testfile.dat

In this case, you are benchmarking 8K random reads against the testfile.dat file on the E drive. The sqlio.exe utility will issue the 8K read requests from a single thread and will maintain an I/O queue depth of 32.

The question is: Does the size of the test file matter? If so, what size should you pick?

Benchmark results may lie

I was interested in comparing the 8K random reads performance between the E drive and the F drive on one of my machines. So, I first ran the above command line, and that gave me the following results:

E drive:

IOs/sec: 2,681
MBs/sec: 21
Min_Latency (ms): 0
Avg_Latency (ms): 11
Max_Latency (ms): 310

I then changed the -d parameter value from E to F, and re-ran the command line, and obtained the following results:

F drive:

IOs/sec: 19,871
MBs/sec: 155
Min_Latency (ms): 0
Avg_Latency (ms): 1
Max_Latency (ms): 16

From these results, it is more than obvious that the F drive was far more performant than the E drive, at least for 8K random reads.

Unfortunately, that conclusion would be terribly far from the truth!

"Wait a minute and stop right there! Are you telling me that a drive that can do ~20,000 reads per second is not significantly better than a drive that does only 2600 reads a second?"

Well, that's not what I'm telling you. What I failed to disclose in the little comparison tests described above was that the test file on the F drive was 256MB in size, whereas the test file on the E drive was 128GB in size. It turns out that the size of the test file has such a dramatic impact on the resulting I/O metrics that I was literally comparing apples and oranges.

Running the same sqlio.exe command line against a 256MB test file on the E drive produced the following results:

IOs/sec: 16,635
MBs/sec: 130
Min_Latency (ms): 0
Avg_Latency (ms): 1
Max_Latency (ms): 29

The F drive still performed better than the E drive. But instead of being eight times better, the F drive was about 20% better in I/Os per second.

The impact of test file size

To drive home the fact that the size of the test file can have an overwhelming impact on the disk I/O benchmark results, I ran the following identical command line repeatedly with the testfile.dat set to different sizes:

sqlIO -kR -t1 -s30 -dE -o32 -frandom -b8 -BH -LS testfile.dat

The sizes of testfile.dat I tested included the following:

  • 256MB
  • 512MB
  • 2GB
  • 8GB
  • 32GB
  • 64GB
  • 128GB

Charting the I/Os per second against these test file sizes, with the size of the test file the only parameter changed, shows throughput dropping sharply as the file grows.

For different disk drives, the shape of the curve and the inflexion point will differ. But make no mistake: it is meaningless to state a disk I/O metric without also disclosing the size of the test file.

Why size matters

Among many factors that may help render the disk benchmark results sensitive to test file size, the following two are worth noting:

  • Disk seek area
  • Cache size

Before explaining why these two factors are significant, let me note that when you configure sqlio.exe to do random reads (or writes) there is no data locality. In other words, the I/O requests will not concentrate on a subset of the data, but will be distributed over the entire file.

Disk seek area. A larger file generally occupies a larger area on the disk platter than does a smaller file. For random access, this means that the disk arm has to cover more tracks, leading to a higher seek time on average. Since seek time is often the most significant time-consuming component in processing an I/O request, average I/O performance is therefore expected to drop when you increase the test file size.

Cache size. If an I/O request is satisfied with data already in the cache of a disk subsystem, seek time is no longer a factor. In addition, the more I/O requests that are satisfied only from the cache, the higher the average I/O performance. And the smaller the test file relative to the cache, the more likely the requested data will be found in the cache. It therefore follows that the smaller the test file the higher the average I/O performance.
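As an illustration of the two factors together, here is a toy model of my own (not from the article): treat the probability of a cache hit as the cache-to-file size ratio, and assume fixed latencies for hits and misses.

```python
def expected_iops(file_mb, cache_mb, hit_ms=0.1, miss_ms=10.0):
    """Toy model: uniform random reads with a single outstanding request.
    Hit probability is cache size over file size, capped at 1; the
    latency figures are illustrative assumptions, not measurements."""
    p_hit = min(1.0, cache_mb / file_mb)
    avg_ms = p_hit * hit_ms + (1.0 - p_hit) * miss_ms
    return 1000.0 / avg_ms

# A 256MB file inside a 512MB cache: every read is a cache hit.
print(round(expected_iops(256, 512)))
# A 128GB file that dwarfs the cache: almost every read hits the platter.
print(round(expected_iops(131072, 512)))
```

Even with made-up latencies, the model reproduces the article's qualitative result: the fully cached file looks two orders of magnitude faster on the same physical disk.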

What size to use?

This is a frequently asked question, but unfortunately there is no one-size-fits-all answer. If you read the literature on this topic, you will no doubt find that the most common recommendation is to use a test file that is larger than the cache size of the disk subsystem being benchmarked.

This is generally a good recommendation, but not an absolute truth. It is generally good because as a SQL Server user, you benchmark a disk to make better use of SQL Server for your application databases, and your application databases in general rarely fit entirely into the cache of your disk subsystem.

Then, why would you ever want to use a small test file? The most obvious reason is when you are dealing with databases that are small enough to be cached, and you want to get disk benchmark numbers that are relevant to these databases. There are also specific benchmarking scenarios in which a small test file is a must. Consider the case where you want to measure the read latency between the host OS and the I/O subsystem cache. You should configure your tests to eliminate the influence of the disk media access time, and a test file that is small enough to fit into the cache will do the trick.

posted by LifeisSimple
2011. 3. 23. 11:45 Brain Trainning/Storage
Source: http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/05/25/parsing-sqlio-output-to-excel-charts-using-regex-in-powershell.aspx

Parsing SQLIO Output to Excel Charts using Regex in PowerShell

Today Joe Webb (Blog|Twitter) blogged about The Power of Regex in Powershell, and in his post he shows how to parse the SQL Server Error Log for events of interest.  At the end of his blog post Joe asked about other places where Regular Expressions have been useful in PowerShell so I thought I’d blog my script for parsing SQLIO output using Regex in PowerShell, to populate an Excel worksheet and build charts based on the results automatically.

If you’ve never used SQLIO, Brent Ozar (Blog|Twitter) has an article on SQLServerPedia titled SAN Performance Tuning with SQLIO that includes a lot of information as well as a video tutorial on its use.  I have been using SQLIO quite a bit this year, mainly to gather performance information for systems before and after reconfiguring their storage, to show percent-impact information to the business for storage changes like migrating from RAID 5 to RAID 10 and aligning disk partitions used for database and log file storage.  I have also used it in benchmarking new SAN configurations and the SAN configuration of our Virtual Machine environment to show team members why I have recommended that a number of our physical servers be migrated onto Virtual Machines.

I generally use a param.txt file similar to:

G:\MSSQL\testfile1.dat 4 0x0 32768

which creates a 32GB test file, and I then have a DOS batch file that runs the actual SQLIO tests using a script like:

sqlio -kW -t16 -s10 -o8 -fsequential -b8 -BH -LS -Fparam.txt

sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS G:\MSSQL\testfile1.dat > post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

sqlio -kR -t8 -s360 -o8 -frandom -b8 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

sqlio -kW -t8 -s360 -o8 -fsequential -b8 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

sqlio -kR -t8 -s360 -o8 -fsequential -b8 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

This runs the gamut of tests using 8, 32, 64, 128, and 256K block sizes with 8 threads and 8 outstanding I/Os.  You can certainly expand the test set much further than this and do a very exhaustive testing regimen; I am by no means saying that my tests above are the best fit, but they have sufficiently met my needs for the tuning I have been doing.  Once the tests have been run, the post.txt file will contain an output entry similar to the following for every test that was run.

sqlio v1.5.SG 
using system counter for latency timings, 14318180 counts per second 
16 threads writing for 60 secs to file d:\MSSQL\testfile1.dat 
    using 8KB random IOs 
    enabling multiple I/Os per thread with 8 outstanding 
    buffering set to use hardware disk cache (but not file cache) 
using current size: 2048 MB for file: d:\MSSQL\testfile1.dat 
initialization done 
CUMULATIVE DATA: 
throughput metrics: 
IOs/sec: 18527.91 
MBs/sec:   144.74 
latency metrics: 
Min_Latency(ms): 0 
Avg_Latency(ms): 6 
Max_Latency(ms): 4900 
histogram: 
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ 
%: 78  6  8  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  2 

Linchi Shea (Blog) wrote a blog post three years ago that showed how to Parse the sqlio.exe Output using Perl.  I found this in my search for a smarter way of parsing these long output files I was generating other than manually, but I don’t have Perl installed anywhere and I really felt that I should be able to do this natively using the tools readily available to me on my Windows 7 laptop, so I looked to PowerShell for a solution. 

Reading the file data in is accomplished using Get-Content, which returns a collection of strings.  At first I tried casting this to a [string] object directly, but soon found out that when I did this the resulting string was missing the carriage returns and line feeds that existed in the original file.  This posed a couple of problems for me in parsing the outputs initially, but doing a [string]::Join operation using [Environment]::NewLine as the concatenator allowed an appropriately CRLF-delimited string to be returned.

$filedata = [string]::Join([Environment]::NewLine,(Get-Content $FileName))

Now that I had my file data loaded into the $filedata object, the next step was to split this string based on the “sqlio v1.5.SG” header that is output at the beginning of each test run.  Since PowerShell is .NET based, this is easily accomplished by using the Split() method of System.String, which is the type of the $filedata object.

$Results = $filedata.Split( [String[]]"sqlio v1.5.SG", [StringSplitOptions]::RemoveEmptyEntries )

At first I started to write a foreach loop with this object (let's face it, old habits die hard), but instead I made use of piping, which allows a collection like the one returned by the Split() method to be worked on in a single operation.  Piping in PowerShell is similar to a manufacturing process where an object gets transformed and then passed on until the ultimate result is achieved.  Once an object is piped, it becomes accessible through the use of the $_ variable name, allowing the object to be transformed along the pipeline.  By piping the result of the Split() I was able to write a SELECT statement that utilized Regex to get the information of interest out of each test run.

     select @{Name="Threads"; Expression={[int]([regex]::Match($_, "(\d+)?\sthreads\s(reading|writing)").Groups[1].Value)}},` 
                @{Name="Operation"; Expression={switch ([regex]::Match($_, "(\d+)?\sthreads\s(reading|writing)").Groups[2].Value) 
                                                { 
                                                    "reading" {"Read"} 
                                                    "writing" {"Write"} 
                                                }    }},` 
                @{Name="Duration"; Expression={[int]([regex]::Match($_, "for\s(\d+)?\ssecs").Groups[1].Value)}},` 
                @{Name="IOSize"; Expression={[int]([regex]::Match($_, "\tusing\s(\d+)?KB\s(sequential|random)").Groups[1].Value)}},` 
                @{Name="IOType"; Expression={switch ([regex]::Match($_, "\tusing\s(\d+)?KB\s(sequential|random)").Groups[2].Value) 
                                                { 
                                                    "random" {"Random"} 
                                                    "sequential" {"Sequential"} 
                                                }  }},` 
                @{Name="PendingIO"; Expression={[int]([regex]::Match($_, "with\s(\d+)?\soutstanding").Groups[1].Value)}},` 
                @{Name="FileSize"; Expression={[int]([regex]::Match($_, "\s(\d+)?\sMB\sfor\sfile").Groups[1].Value)}},` 
                @{Name="IOPS"; Expression={[decimal]([regex]::Match($_, "IOs\/sec\:\s+(\d+\.\d+)?").Groups[1].Value)}},` 
                @{Name="MBs_Sec"; Expression={[decimal]([regex]::Match($_, "MBs\/sec\:\s+(\d+\.\d+)?").Groups[1].Value)}},` 
                @{Name="MinLat_ms"; Expression={[int]([regex]::Match($_, "Min.{0,}?\:\s(\d+)?").Groups[1].Value)}},` 
                @{Name="AvgLat_ms"; Expression={[int]([regex]::Match($_, "Avg.{0,}?\:\s(\d+)?").Groups[1].Value)}},` 
                @{Name="MaxLat_ms"; Expression={[int]([regex]::Match($_, "Max.{0,}?\:\s(\d+)?").Groups[1].Value)}}` 

The select returns a new type of object with properties named Threads, Operation, Duration, IOSize, IOType, PendingIO, FileSize, IOPS, MBs_Sec, MinLat_ms, AvgLat_ms, and MaxLat_ms.  This alone is sufficient to proceed to creating an Excel file, but I wanted the information sorted the same way every time I ran this, primarily because I sometimes change how I run my tests and I like consistency in the output.  Being thorough, I fixed this by piping the output of the select to the Sort-Object cmdlet, sorting by IOSize, IOType, Operation, and Threads.
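The sort is just one more stage on the same pipeline; a minimal sketch of that step, assuming $Results already holds the objects produced by the select above:

```powershell
# Re-sort the parsed test runs so the output order is stable across runs,
# regardless of the order the tests were executed in.
$Results = $Results |
    Sort-Object -Property IOSize, IOType, Operation, Threads
```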

Getting the data into Excel was actually fairly simple to do, thanks to Buck Woody(Blog|Twitter) and SQL Rockstar Tom LaRock(Blog|Twitter).  Buck wrote a blog post titled Create an Excel Graph of your Big Tables – With Powershell, and Tom wrote a Simple Talk Article, Monitoring SQL Server Virtual Log File Fragmentation, that use the Interop Excel objects to create an Excel workbook from PowerShell, populate the workbook with data, and build charts using the data.  My code for the Excel integration is based entirely on their examples.

$Excel = New-Object -ComObject Excel.Application 
$Excel.Visible = $true 
$WorkBook = $Excel.WorkBooks.Add() 
$WorkBook.WorkSheets.Item(1).Name = "RawData" 
$WorkBook.WorkSheets.Item(3).Delete() 
$WorkBook.WorkSheets.Item(2).Delete() 
$WorkSheet = $WorkBook.WorkSheets.Item("RawData") 
$x = 2 
$WorkSheet.Cells.Item(1,1) = "Threads" 
$WorkSheet.Cells.Item(1,2) = "Operation" 
$WorkSheet.Cells.Item(1,3) = "Duration" 
$WorkSheet.Cells.Item(1,4) = "IOSize" 
$WorkSheet.Cells.Item(1,5) = "IOType" 
$WorkSheet.Cells.Item(1,6) = "PendingIO" 
$WorkSheet.Cells.Item(1,7) = "FileSize" 
$WorkSheet.Cells.Item(1,8) = "IOPS" 
$WorkSheet.Cells.Item(1,9) = "MBs/Sec" 
$WorkSheet.Cells.Item(1,10) = "Min_Lat(ms)" 
$WorkSheet.Cells.Item(1,11) = "Avg_Lat(ms)" 
$WorkSheet.Cells.Item(1,12) = "Max_Lat(ms)" 
$WorkSheet.Cells.Item(1,13) = "Caption"

$Results | % { 
    $WorkSheet.Cells.Item($x,1) = $_.Threads 
    $WorkSheet.Cells.Item($x,2) = $_.Operation 
    $WorkSheet.Cells.Item($x,3) = $_.Duration 
    $WorkSheet.Cells.Item($x,4) = $_.IOSize 
    $WorkSheet.Cells.Item($x,5) = $_.IOType 
    $WorkSheet.Cells.Item($x,6) = $_.PendingIO 
    $WorkSheet.Cells.Item($x,7) = $_.FileSize 
    $WorkSheet.Cells.Item($x,8) = $_.IOPS 
    $WorkSheet.Cells.Item($x,9) = $_.MBs_Sec 
    $WorkSheet.Cells.Item($x,10) = $_.MinLat_ms 
    $WorkSheet.Cells.Item($x,11) = $_.AvgLat_ms 
    $WorkSheet.Cells.Item($x,12) = $_.MaxLat_ms 
    $WorkSheet.Cells.Item($x,13) = [string]$_.IOSize + "KB " + [string]$_.IOType + " " + ` 
                                [string]$_.Operation + " " + [string]$_.Threads + ` 
                                " Threads " + [string]$_.PendingIO + " pending" 
    $x++}

$WorkBook.Charts.Add() | Out-Null 
$Chart = $WorkBook.ActiveChart 
$Chart.SetSourceData($WorkSheet.Range("H1:H$x")) 
$Chart.SeriesCollection(1).xValues = $WorkSheet.Range("M2:M$x") 
$Chart.SetSourceData($WorkSheet.Range("H1:H$x")) 
$Chart.SeriesCollection(1).xValues = $WorkSheet.Range("M2:M$x") 
$Chart.Name = "IOPS"

$WorkBook.Charts.Add() | Out-Null 
$WorkBook.ActiveChart.SetSourceData($WorkSheet.Range("I1:I$x")) 
$Chart = $WorkBook.ActiveChart 
$Chart.SeriesCollection(1).xValues = $WorkSheet.Range("M2:M$x") 
$Chart.Name = "MBs Sec"

I don’t know why, but the first chart doesn’t refresh properly unless I call SetSourceData() and set the SeriesCollection xValues property twice.  Since the code block that loads the data into the worksheet tracks the rows with the $x variable, the Range for each chart is always row 1 to $x.  Column H happens to hold the IOPS values and column I holds the MBs Sec values in the workbook.  The only thing missing at this point is a parameter declaration at the head of the script file to require an input parameter, $FileName, which is passed to Get-Content to read the information.

param(    [Parameter(Mandatory=$TRUE)] 
        [ValidateNotNullOrEmpty()] 
        [string] 
        $FileName)
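With the param() block in place, the script can be pointed at any SQLIO output file; a usage sketch (the script file name here is an assumption — use whatever you saved the attached script as):

```powershell
# Hypothetical invocation: parse a SQLIO results file and build the
# Excel workbook with the RawData sheet and the two charts.
.\SQLIOResults.ps1 -FileName .\RESULTS.TXT
```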

The end result is a set of charts showing your IOPS and MBs Sec results from the test run.

[Chart: IOPS results by test configuration]

[Chart: MBs/Sec results by test configuration]

The complete script is attached to this blog post.  If you use it let me know your thoughts.

Published Tuesday, May 25, 2010 10:12 PM by Jonathan Kehayias

 Attachment(s): SQLIOResults.zip

posted by LifeisSimple
2011. 3. 21. 14:36 Brain Trainning/Storage
Source: http://sqlserverpedia.com/wiki/SAN_Performance_Tuning_with_SQLIO

SAN Performance Tuning with SQLIO

SQLIO is a free utility from Microsoft that measures storage IO performance. The name "SQLIO" is horribly misleading, because it doesn't really have anything to do with SQL Server.


SQLIO Video Tutorial

In this ten-minute video, Brent Ozar explains how to get started with SQLIO, how to create a batch file to test your SAN, and how to analyze the results.

For more video tutorials like this, check out the SQL Server Tutorials page.


Downloading and Configuring SQLIO

[Download SQLIO from Microsoft]

Notice that I said SQLIO, not SQLIOSIM. Experienced database administrators will often direct you to SQLIOSIM because it's easier to use and mimics SQL Server's disk activity patterns. Here's the catch: it won't necessarily test your SAN to its maximum potential. Your SAN team may indicate that if your SQLIOSIM results aren't fast enough, it's a SQL-related problem, not a SAN-related problem. They may use testing utilities from vendors that mimic results closer to what SQLIO will give you. We want to push the SAN's pedal to the metal and find out how fast it'll go in any situation.

After installing SQLIO, edit the param.txt file and change these two parameters:

  • First parameter - the physical location of the testing file. Change the drive letter to point to the SAN drive you want to test, like T:\testfile.dat.
  • Last parameter - the size of the testing file in megabytes. Increase this to 20480 or larger. Ideally, you want it to be larger than your SAN's cache, because your real databases will be larger than the SAN's cache.
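For reference, each line of param.txt has four space-separated fields: the test file path, the number of threads, a processor affinity mask, and the file size in MB. A param.txt edited per the advice above might look like this (the T: drive and 20 GB size are just examples):

```
T:\testfile.dat 2 0x0 20480
```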

After saving param.txt, run this at the command line in the same directory where SQLIO is installed in order to create the test file:

sqlio -kW -s10 -fsequential -o8 -b8 -LS -Fparam.txt
timeout /T 10

When it finishes, your test file is created, and it's time to run our real tests.

Testing Your SAN Performance

Instead of picking and choosing individual parameters to use, I like to take the shotgun approach: try every possible combination of random versus sequential, low and high numbers of threads, read versus write, etc. The below batch file will take all of the possibilities and run 'em all. Copy/paste this into a text file called SANTest.bat:

sqlio -kW -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t4 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t8 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t16 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t32 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t64 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t4 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t8 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t16 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t32 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t64 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t2 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t4 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t8 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t16 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t32 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t64 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t2 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t4 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t8 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t16 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t32 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t64 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

Whew! And that's just one pass. If you want to do multiple passes of the same tests for consistency's sake (for example, to rule out other servers running on the same SAN and skewing your performance results), paste that same set of 200+ lines into your batch file multiple times.
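Rather than pasting the block repeatedly, a small wrapper batch file can loop over it; a sketch, assuming the SANTest.bat and RESULTS.TXT names used in this article (the pass count of 3 is arbitrary):

```
@echo off
rem Run the full SQLIO test batch three times,
rem appending all passes to a single results file.
for /L %%p in (1,1,3) do (
    echo Starting pass %%p
    call SANTest.bat >> RESULTS.TXT
)
```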

Let's take the first line of the batch file and analyze what it's doing:

sqlio -kW -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat

The most important parameters are:

  • -kW means writes (as opposed to reads)
  • -t2 means two threads
  • -s120 means test for 120 seconds
  • -dM means drive letter M
  • -o1 means one outstanding request (not piling up requests)
  • -frandom means random access (as opposed to sequential)
  • -b64 means 64KB IOs

Do a find & replace in your text file and replace -dM with the drive letter of your choice. If you're testing on your S drive, for example, you would replace -dM with -dS.

Then go to the command prompt in the same directory as SQLIO is installed and type:

SANTEST.BAT > RESULTS.TXT

This will run our newly created batch file and dump the results into a text file. This will take a long time, like six hours or more, and it will be hammering your SAN. Don't run this on a production server, and don't even run it on the same SAN as a production server when the production server is under load because it may time out. I've had instances where this batch file has actually caused a SAN to restart itself, so use this with caution - preferably in a test lab or before your SAN goes live.

Hours later, when it finishes, you'll have a RESULTS.TXT file with lots of juicy metrics about your storage performance.

Importing SQLIO Results into SQL Server

Those text file results are cryptic - time to bring them into our favorite data analysis platform, Microsoft Access. Wait, I'm kidding, put the axe down - we'll import them into SQL Server.

Script to Create the Tables and ETL Stored Procedure

Before we start, create a database that you'll use for SQLIO data storage or designate an existing utility database. This script requires SQL Server 2005 or newer, since it uses the varchar(max) data type.

In that database, run the below script to create the tables for results storage:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SQLIO_Import](
	[RowID] [int] IDENTITY(1,1) NOT NULL,
	[ParameterRowID] [int] NULL,
	[ResultText] [varchar](max) NULL,
	CONSTRAINT [PK_SQLIO_Import] PRIMARY KEY CLUSTERED
	(
		[RowID] ASC
	)
)
GO
CREATE TABLE [dbo].[SQLIO_TestPass](
	[TestPassID] [int] IDENTITY(1,1) NOT NULL,
	[ServerName] [nvarchar](50) NOT NULL,
	[DriveQty] [int] NOT NULL,
	[DriveRPM] [int] NOT NULL,
	[DriveRaidLevel] [nvarchar](10) NOT NULL,
	[TestDate] [datetime] NOT NULL,
	[SANmodel] [nvarchar](50) NOT NULL,
	[SANfirmware] [nvarchar](50) NULL,
	[PartitionOffset] [int] NULL,
	[Filesystem] [nvarchar](50) NULL,
	[FSClusterSizeBytes] [int] NULL,
	[SQLIO_Version] [nvarchar](20) NULL,
	[Threads] [int] NULL,
	[ReadOrWrite] [nchar](1) NULL,
	[DurationSeconds] [int] NULL,
	[SectorSizeKB] [int] NULL,
	[IOpattern] [nvarchar](50) NULL,
	[IOsOutstanding] [int] NULL,
	[Buffering] [nvarchar](50) NULL,
	[FileSizeMB] [int] NULL,
	[IOs_Sec] [decimal](18, 0) NULL,
	[MBs_Sec] [decimal](18, 0) NULL,
	[LatencyMS_Min] [int] NULL,
	[LatencyMS_Avg] [int] NULL,
	[LatencyMS_Max] [int] NULL,
	CONSTRAINT [PK_SQLIO_TestPass] PRIMARY KEY CLUSTERED
	(
		[TestPassID] ASC
	)
)
GO

CREATE PROCEDURE [dbo].[USP_Import_SQLIO_TestPass]
	@ServerName         NVARCHAR(50),
	@DriveQty           INT,
	@DriveRPM           INT,
	@DriveRaidLevel     NVARCHAR(10),
	@TestDate           DATETIME,
	@SANmodel           NVARCHAR(50),
	@SANfirmware        NVARCHAR(50),
	@PartitionOffset    INT,
	@Filesystem         NVARCHAR(50),
	@FSClusterSizeBytes INT
AS
SET nocount off

IF @TestDate IS NULL
	SET @TestDate = Getdate()

/* Add a blank record to the end so the last test result is captured */
INSERT INTO dbo.SQLIO_Import
	(ParameterRowID,
	ResultText)
VALUES
	(0,
	'');

/* Update the ParameterRowID field for easier querying */
UPDATE dbo.sqlio_import
SET    parameterrowid = (SELECT   TOP 1 rowid
	FROM     dbo.sqlio_import parm
	WHERE    parm.resulttext LIKE '%\%'
	AND parm.rowid <= upd.rowid
	ORDER BY rowid DESC)
FROM   dbo.sqlio_import upd

/* Add new SQLIO_TestPass records from SQLIO_Import */
INSERT INTO dbo.sqlio_testpass
	(servername,
	driveqty,
	driverpm,
	driveraidlevel,
	testdate,
	sanmodel,
	sanfirmware,
	partitionoffset,
	filesystem,
	fsclustersizebytes,
	sqlio_version,
	threads,
	readorwrite,
	durationseconds,
	sectorsizekb,
	iopattern,
	iosoutstanding,
	buffering,
	filesizemb,
	ios_sec,
	mbs_sec,
	latencyms_min,
	latencyms_avg,
	latencyms_max)
SELECT   @ServerName,
	@DriveQty,
	@DriveRPM,
	@DriveRaidLevel,
	@TestDate,
	@SANmodel,
	@SANfirmware,
	@PartitionOffset,
	@Filesystem,
	@FSClusterSizeBytes,
	(SELECT REPLACE(resulttext,'sqlio ','')
	FROM   dbo.sqlio_import impsqlio_version
	WHERE  imp.rowid + = impsqlio_version.rowid) AS sqlio_version,
	(SELECT LEFT(resulttext,(Charindex(' threads',resulttext)))
	FROM   dbo.sqlio_import impthreads
	WHERE  imp.rowid + = impthreads.rowid) AS threads,
	(SELECT Upper(Substring(resulttext,(Charindex('threads ',resulttext)) + 8,
	1))
	FROM   dbo.sqlio_import impreadorwrite
	WHERE  imp.rowid + = impreadorwrite.rowid) AS readorwrite,
	(SELECT Substring(resulttext,(Charindex(' for',resulttext)) + 4,
	(Charindex(' secs ',resulttext)) - (Charindex(' for',resulttext)) - 4)
	FROM   dbo.sqlio_import impdurationseconds
	WHERE  imp.rowid + = impdurationseconds.rowid) AS durationseconds,
	(SELECT Substring(resulttext,7,(Charindex('KB',resulttext)) - 7)
	FROM   dbo.sqlio_import impsectorsizekb
	WHERE  imp.rowid + = impsectorsizekb.rowid) AS sectorsizekb,
	(SELECT Substring(resulttext,(Charindex('KB ',resulttext)) + 3,
	(Charindex(' IOs',resulttext)) - (Charindex('KB ',resulttext)) - 3)
	FROM   dbo.sqlio_import impiopattern
	WHERE  imp.rowid + = impiopattern.rowid) AS iopattern,
	(SELECT Substring(resulttext,(Charindex('with ',resulttext)) + 5,
	(Charindex(' outstanding',resulttext)) - (Charindex('with ',resulttext)) - 5)
	FROM   dbo.sqlio_import impiosoutstanding
	WHERE  imp.rowid + = impiosoutstanding.rowid) AS iosoutstanding,
	(SELECT REPLACE(CAST(resulttext AS NVARCHAR(50)),'buffering set to ',
	'')
	FROM   dbo.sqlio_import impbuffering
	WHERE  imp.rowid + = impbuffering.rowid) AS buffering,
	(SELECT Substring(resulttext,(Charindex('size: ',resulttext)) + 6,
	(Charindex(' for ',resulttext)) - (Charindex('size: ',resulttext)) - 9)
	FROM   dbo.sqlio_import impfilesizemb
	WHERE  imp.rowid + = impfilesizemb.rowid) AS filesizemb,
	(SELECT RIGHT(resulttext,(Len(resulttext) - 10))
	FROM   dbo.sqlio_import impios_sec
	WHERE  imp.rowid + 11 = impios_sec.rowid) AS ios_sec,
	(SELECT RIGHT(resulttext,(Len(resulttext) - 10))
	FROM   dbo.sqlio_import impmbs_sec
	WHERE  imp.rowid + 12 = impmbs_sec.rowid) AS mbs_sec,
	(SELECT RIGHT(resulttext,(Len(resulttext) - 17))
	FROM   dbo.sqlio_import implatencyms_min
	WHERE  imp.rowid + 14 = implatencyms_min.rowid) AS latencyms_min,
	(SELECT RIGHT(resulttext,(Len(resulttext) - 17))
	FROM   dbo.sqlio_import implatencyms_avg
	WHERE  imp.rowid + 15 = implatencyms_avg.rowid) AS latencyms_avg,
	(SELECT RIGHT(resulttext,(Len(resulttext) - 17))
	FROM   dbo.sqlio_import implatencyms_max
	WHERE  imp.rowid + 16 = implatencyms_max.rowid) AS latencyms_max
FROM     dbo.sqlio_import imp
INNER JOIN dbo.sqlio_import impfulltest
	ON imp.rowid + 20 = impfulltest.rowid
	AND impfulltest.resulttext = ''
WHERE    imp.rowid = imp.parameterrowid
AND (SELECT Substring(resulttext,(Charindex('size: ',resulttext)) + 6,
	(Charindex(' for ',resulttext)) - (Charindex('size: ',resulttext)) - 9)
	FROM   dbo.sqlio_import impfilesizemb
	WHERE  imp.rowid + = impfilesizemb.rowid) > 0
ORDER BY imp.parameterrowid

/* Empty out the ETL table */
DELETE dbo.sqlio_import

SET nocount off
GO

The script creates three things:

  • A table called SQLIO_Import where our results will first be dumped from the text file before they're processed
  • A table called SQLIO_TestPass where our results will be permanently stored in a report-friendly format
  • A stored procedure called USP_Import_SQLIO_TestPass. The stored procedure takes the data from our imported text file, parses it, and inserts records into the SQLIO_TestPass table.

The stored procedure expects these parameters, which it uses when inserting into SQLIO_TestPass. None of these have to be formatted a specific way - as long as they fit the SQL Server field definitions, they're fine:

  • @ServerName NVARCHAR(50) - the name of the server you're using for testing. This is mostly useful for servers with locally attached storage as opposed to SAN storage.
  • @DriveQty INT - the number of drives in the array you're testing.
  • @DriveRPM INT - you might be testing 10k and 15k variations of the same setup. I suggest lumping drives into categories - don't try to differentiate between drives that report 10,080 RPM or other odd numbers - just stick with 5400, 10000, 15000, etc. For SSDs, I prefer to use a number for the generation of SSD, like 1 or 2.
  • @DriveRaidLevel NVARCHAR(10) - raid 5, raid 0, raid 10, raid DP, etc. (Yes, there are vendor-specific RAID implementations that use letters instead of numbers.)
  • @TestDate DATETIME - the date you're running the tests. I include this as a parameter because sometimes I've run the same tests on a quarterly basis and I want to track whether things are changing over time.
  • @SANmodel NVARCHAR(50) - the type of SAN, such as an IBM DS4800 or EMC CX300.
  • @SANfirmware NVARCHAR(50) - the version of firmware, which can impact SAN performance.
  • @PartitionOffset INT - Windows systems can use DISKPART to offset their partitions.
  • @Filesystem NVARCHAR(50) - usually NTFS. Can be used to track testing different filesystems.
  • @FSClusterSizeBytes INT - the file system cluster size.

Now that our framework is in place, let's import our first round of results.

Importing the Text File into SQL Server 2005

  • In SQL Server Management Studio, right-click on the database where you want to store the SQLIO performance data and click Tasks, Import Data.
  • For Data Source, choose "Flat File Source". Browse to your results.txt file, and set the Format to Delimited, Text Qualifier to None, Header row delimiter to {CR}{LF}, and Header Rows to Skip to 0.
  • Click on the Advanced tab on the left, and there should only be one column, Column 0. Set the DataType to text stream. Click Next.
  • Your database server and storage database should be shown. Click Next.
  • For the Destination Table, choose SQLIO_Import and click Edit Mappings. Set the Column 0 destination to be ResultText. Click OK, and click Next.
  • Click Next until the wizard finishes and imports the data, and then close the wizard.

Open a new query and verify that the data was successfully imported by typing:

SELECT * FROM dbo.SQLIO_Import

If there are no rows, something went wrong with the import process. Stop here and troubleshoot. Otherwise, execute the stored procedure to move the data into the reporting table, like this:

EXECUTE [dbo].[USP_Import_SQLIO_TestPass]
'MyServerName'
,10
,15000
,'RAID 10'
,'2008/5/6'
,'IBM DS4800'
,'6.62'
,1024
,'NTFS'
,'64000'

The data will be parsed and inserted into the reporting table, which we can query:

SELECT * FROM dbo.SQLIO_TestPass

Now, the data is arranged in a way that's easier for reporting.

Analyzing the SQLIO Results

Start by analyzing the data to find the fastest SAN throughput:

SELECT * FROM dbo.SQLIO_TestPass ORDER BY MBs_Sec DESC

That column is the SAN's throughput. Notice the LatencyMS_Avg column, though, which indicates the milliseconds of latency. A high throughput number is not necessarily good if the system takes a long time to respond. Look for the five or so results that represent a good balance of high throughput and low latency.

Then, look at the parameters that were used in order to achieve that throughput. Make a note of these, because if you need to do repeated throughput testing on your SAN to test different configurations, you can use these high-throughput configurations to run a much shorter testing pass. Instead of testing your SAN overnight, you can get a quick idea in a matter of minutes because you'll know what parameters tend to drive very high throughput on your SAN. You'll still want to test all parameters when possible, because changes to the SAN may affect how it handles other parameters, but this will give you a quick estimate of whether things are getting better or worse.
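That "high throughput, low latency" filter can be sketched outside SQL as well. In this hypothetical Python sketch, the function name, latency threshold, and sample rows are made up; only the MBs_Sec and LatencyMS_Avg column names come from the reporting table described above:

```python
# Hypothetical sketch: pick test passes that balance high throughput with
# acceptable latency. Column names (MBs_Sec, LatencyMS_Avg) mirror the
# SQLIO_TestPass reporting table; the sample rows are illustrative only.
def best_passes(rows, top_n=5, max_latency_ms=20):
    # Keep only passes whose average latency is acceptable, then take the
    # highest-throughput survivors.
    ok = [r for r in rows if r["LatencyMS_Avg"] <= max_latency_ms]
    return sorted(ok, key=lambda r: r["MBs_Sec"], reverse=True)[:top_n]

rows = [
    {"params": "-kR -frandom -o8 -b64",     "MBs_Sec": 310.0, "LatencyMS_Avg": 12},
    {"params": "-kR -frandom -o32 -b64",    "MBs_Sec": 340.0, "LatencyMS_Avg": 85},
    {"params": "-kR -fsequential -o4 -b64", "MBs_Sec": 290.0, "LatencyMS_Avg": 6},
]

print([r["params"] for r in best_passes(rows, top_n=2)])
```

Note that the highest raw throughput (the -o32 row) is excluded because its latency is out of bounds, which is exactly the trade-off the paragraph above describes.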

More Reading About SQLIO

Here are a few interesting links about other bloggers' experiences with SQLIO:

posted by LifeisSimple
2010. 11. 10. 14:41 Brain Trainning/Storage

SQLIO Tips and Tricks for LOB Disk Performance Testing


At the request of one of our blog readers, I wanted to share several tips, tricks, and other basic considerations useful to our technology community (for my earlier post on using SQLIO to baseline performance, see http://blogs.technet.com/b/lobapps/archive/2010/09/03/using-microsoft-sqlio-to-baseline-lob-application-disk-performance.aspx). I’ve grouped these into Use Cases and Runtime Practices.

   

SQLIO Use Cases

   

You might wonder if SQLIO is only supported on systems running Microsoft’s Operating Systems or SQL Server databases. In short:

  

  • SQLIO does indeed require a Windows-based host from which to execute. An inexpensive client device is fine in this regard, though. That is, any appropriately configured desktop, laptop, or server that is connected to the set of disks you’re testing is all that’s required. The client device simply needs to “see” the various LUNs configured on the disk subsystem, i.e. mapped drives.
  • Thus, SQLIO can be used to baseline a disk subsystem that will ultimately be deployed in a Unix, Linux, or Windows environment.
  • Of course, you can't expose the drive in such a way unless it's formatted for NTFS, which raises the question "why use SQLIO to test this disk subsystem?" when you're faced with re-presenting and re-formatting LUNs just for testing. In that case, there are other tools you might consider. But once you're comfortable using SQLIO - a fast, predictable, and very easy method of baselining disk performance - you might find the couple of extra steps not only worth your time but also a way to give you some extra peace of mind.

 

I also like using SQLIO to not only test but compare disk subsystems to one another. These disk subsystems could eventually host Windows-based Oracle or DB2 databases as well as SQL Server.

   

  • SQLIO can be configured to "emulate" (I'm using this term really loosely) any RDBMS at a low level, in that SQLIO can easily be configured to read or write in various block sizes. SQL Server 6.5, anyone? Configure SQLIO to test 2kb blocks. Basic SQL Server and Oracle testing? Go with 8kb blocks. Want to simulate big-block performance (for backup/restore activities, for example)? Configure for 64kb, 256kb, 1MB, you get the picture....

  • The idea is to quantify what the disks are capable of pushing or supporting at max, knowing that your Oracle, DB2, or SQL Server databases won't achieve this number (due to overhead and a whole slew of other factors).

  • Knowing these maximum numbers gives you the ability to compare disk subsystem changes in an apples-to-apples kind of way. Got a disk controller firmware upgrade coming up? Validate the impact the firmware change has (before upgrade vs. after upgrade) to get a sense of whether your SQL Server or Oracle database performance might be impacted.

  • Want to compare how disks hosted on Microsoft Azure or Amazon EC2 (S3, that is) compare to your current SAN configuration? Run SQLIO against them (keeping in mind that you ALSO really need to do other cloud performance and throughput testing out of the scope of this blog posting...).

  • Also - and this is hugely important - tools like SQLIO are naturally akin to "smoke testing" tools rather than tools for testing online/interactive user performance. Sure, you can change the queue depth related to threads (explained in my previous SQLIO post), but this may or may not emulate users well. So when you're looking at your SQLIO test results, think about how those results might reflect hard-running SAP batch processes rather than online SAP users.

 

I also find SQLIO useful for comparing different database designs against one another. Many of my older SAP-on-SQL customers have deployed R/3 or ECC across 3 LUNs or datafiles (the SAP/SQL installation default), for example, while others have chosen 4, 6, 8, 12, 16, and other numbers. While SQLIO can easily be set up to execute against a single data file (to make for really easy testing), such an approach does not reflect real-world LUN configurations. Thus, SQLIO's ability to execute tests against multiple data files makes for not only a better baseline test, but a great way to compare the maximum throughput of different configurations.

  

SQLIO Runtime Practices

   

Manage your RUNSQL.BAT files. A typical batch file used to execute SQLIO might look like this: sqlio -Fpfile.txt -b8 -o32 -kR -frandom -s60 (execute sqlio, use a parameter file named pfile.txt to point to your datafile or datafiles, use an 8kb block size, keep 32 outstanding I/Os queued per thread [by the way, you will want to reduce this to 4, 8, or 16 if you're running 16 or more datafiles], execute read tests, make these random read tests, and run the test for 60 seconds). Oftentimes I'll combine 4 of these lines into a single RUNSQL.BAT file (one to do random reads, another for sequential reads, one for random writes, and another for sequential writes), and create different versions based on other factors. For example, I often create a collection of RUNSQL.BAT files configured to execute 8kb reads and writes, and another collection for 64kb. And if I'm testing different SQL datafile configurations (i.e. 3 datafiles vs 6 vs 16), I'll create a BAT file for each use case.
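The four-line batch file described above can also be generated rather than hand-edited. This is a sketch only; the helper name and defaults are made up, mirroring the example command in the paragraph:

```python
# Sketch: build the four SQLIO invocations (read/write x random/sequential)
# that a RUNSQL.BAT as described above would contain. The parameter-file name
# (pfile.txt) and switch values mirror the example in the text.
def runsql_lines(block_kb=8, outstanding=32, seconds=60, pfile="pfile.txt"):
    lines = []
    for op in ("R", "W"):                      # -kR = read tests, -kW = write tests
        for pattern in ("random", "sequential"):
            lines.append(
                f"sqlio -F{pfile} -b{block_kb} -o{outstanding} "
                f"-k{op} -f{pattern} -s{seconds}"
            )
    return lines

for line in runsql_lines():
    print(line)
```

Writing these lines to a different .BAT file per block size (8kb, 64kb, ...) reproduces the collection-of-batch-files approach described above.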

 

Avoid busy times when testing. In the best of cases, you want to test your new disk subsystem (or configuration change) while nothing else is accessing it. In the real world, this isn't always possible. Running SQLIO is EXTREMELY disk intensive, though, so execute it with care when other users are on the system - they WILL notice. And keep in mind that if you're running SQLIO while another load is concurrently using the disk system, your results will be skewed (lower!), perhaps much lower than what you would have observed otherwise. So do what you can to test off-hours, preferably during a scheduled downtime window or on another non-production disk subsystem identically configured to production (smart customers deploy "staging" or "pre-production" systems to support this kind of promote-to-production change management and testing).

 

Standardize your data file size. I started using SQLIO back in 1997, when 100GB SAP databases were considered pretty sizable. During that time, I "standardized" my SQLIO testing around creating 5GB data files. Standardizing in this way helped me compare the test results from new customers with those I obtained from previous customers, making it easy for me to compare various disk configurations (hardware models, LUN/group layouts, RAID selections, e.g. RAID 1 vs 10 vs 5, and so on). If you're going to start using SQLIO and see yourself using it for a while, make sure you go with the same size (and number, if applicable!) of data files. Edit the pfile.txt configuration file to reflect the various data file locations and sizes - one line in the configuration file per data file to be used in the testing.

 

Pipe the output. Instead of running run.cmd from a command line and manually recording the output statistics from each run, pipe the results to an output file which is not only saved to disk but can be later dumped into Excel for some real analysis. For example, from the command line you might enter “run.cmd > output-run1.txt” to dump the results from the individual performance tests specified in your run.cmd into a file called output-run1.txt. On the other hand, you might dump all test results into the same output file (which is both possible and convenient, given that “piped” test run output automatically appends itself to any content already present in the output file).
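Once runs are piped to text files, the headline numbers can be scraped out programmatically instead of retyped into Excel. A minimal sketch, assuming output in the format SQLIO prints (the function name is made up, and it only handles the "IOs/sec:" and "MBs/sec:" lines from SQLIO's cumulative summary):

```python
# Sketch: pull the headline throughput numbers out of a piped SQLIO output
# file so they can be loaded into Excel or a database. Only the "IOs/sec:"
# and "MBs/sec:" lines are parsed; the sample text below is illustrative.
def parse_sqlio_output(text):
    results = []
    current = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("IOs/sec:"):
            current["ios_per_sec"] = float(line.split(":")[1])
        elif line.startswith("MBs/sec:"):
            current["mbs_per_sec"] = float(line.split(":")[1])
            results.append(current)   # MBs/sec closes one test's summary
            current = {}
    return results

sample = """CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3835.39
MBs/sec: 59.92
"""
print(parse_sqlio_output(sample))
```

Because piped output appends, a single output file accumulating many runs parses into one row per test with this approach.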

 

Test each use case twice or more. Don’t just test once and be done – execute each SQLIO test at least twice to ensure the results are pretty consistent (especially useful if you’re running on a disk subsystem currently being used by other users or processes). And don’t ever take the test results that immediately follow the datafile creation process – it’s naturally (and greatly) skewed towards lower performance.

   

Don’t forget to test your Log file LUN. Oftentimes we get so focused on verifying our datafiles are optimally configured for performance that we forget about testing the LUN(s) used to host the SQL logs (or Oracle redo and archive logs….). Test them all (using the appropriate block size, i.e. 4kb). And if you really want to do a thorough job, test the LUNs hosting your Windows pagefiles. 

  

Corroborate your results. As I've written in other posts, I'm a big believer in using multiple tools to help ensure my results are fairly consistent. In the past, if I was using a tool like SQLIO, I might also employ NTiogen, SQLIOStress, WinBench, Iometer, or another tool just to give me a bit more confidence in the overall numbers. The testing process then becomes something akin to executing a suite of tests. This seemed especially important when it came to testing completely different kinds of disk solutions (such as local drives vs SANs, or network appliances vs Amazon S3 vs Azure or other cloud hosting models). The more consistency in your output, the more sound your results and recommendations... which should ultimately translate into improved real-world application performance.

 

As tempting as it may be, don't jump to conclusions too quickly! Once you've started using SQLIO, you'll be tempted to quickly start comparing the results of one test with the results of others. If you've only changed ONE factor between tests, feel free to draw all the conclusions you like. But if you start testing different SANs or disk subsystems against one another, you'll need to keep in mind that things like cache size, cache configuration, the number and speed of HBAs, underlying RAID heuristics, subtle differences in disk controllers, switches, and disk drives, and so much more may (and do!) have potentially a very dramatic effect on your test's results.   

  

I’m sure there’s more….so stay tuned and check back on this posting every now and then. As I think of other SQLIO tips and tricks I’ll continue to append them here…

 (added content 10/3 and 10/7) 

Have a great week,

   

george


Source: http://blogs.technet.com/b/lobapps/archive/2010/09/27/using-sqlio-to-baseline-performance-tips-and-tricks.aspx


posted by LifeisSimple
2010. 11. 10. 14:39 Brain Trainning/Storage

http://www.apacheserver.net/Testing-Windows-IO-with-SQLIO-and-SysBench-i29230.htm

Testing Windows IO with SQLIO and SysBench

To benchmark IO on Linux and MySQL transaction processing, SysBench is a popular choice that can do both. After poking around at the source code, it seems PostgreSQL and Oracle are also included for transaction processing testing if you have the proper header files, but I didn't test those.

To benchmark IO on Windows and SQL Server transaction processing, Microsoft provides two tools, SQLIO and SQLIOSim. SQLIO is a misnomer in that it really doesn't have much to do with SQL Server. It is a general purpose disk IO benchmark tool.

So today I was playing with SysBench and noticed that I can compile and build it on Windows as well. I decided I should run IO benchmark on a single machine with both tools (SQLIO and SysBench), and see if I could reconcile the results.

To make things simple, I thought I would just benchmark random read of 3G (orders of magnitude bigger than disk controller cache) files for 5 minutes (300 seconds) with a single thread using 16Kb block size, without adding any additional queue. I tested this on both my laptop and an Amazon EC2 instance. The commands for both tools are listed below, and they should perform the same thing, as far as I can tell. Let me know if you have any comments/pointers or if I missed anything.

SysBench commands:

sysbench --test=fileio --file-total-size=3G prepare
sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run

For SQLIO, here are the line in param.txt and the command used:

c:\testfile.dat 1 0x0 3072
sqlio -kR -s300 -dc -b16 -frandom -Fparam.txt

As this is a quick test, I ran the same test twice and took the average value for comparison purposes. The detailed output is pasted at the end of this post.

On my Windows XP Pro Service Pack 2 laptop with Intel X-25 SSD:

           IO/Second   Throughput/Second
SQLIO      3833.5      59.90 MB
SysBench   3390.77     52.98 MB

So on my laptop, SQLIO's results are 13% higher than that of SysBench.

On Amazon EC2 ami-c3e40daa with EBS device running Windows Server 2008 Datacenter Edition Service Pack 2, whose results varied widely between my two runs:

           IO/Second   Throughput/Second
SQLIO      678.91      10.61 MB
SysBench   408.96      6.39 MB

On this machine, SQLIO results are 66% higher than that of SysBench.
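For the record, the percentages quoted in both comparisons are plain ratios of the averaged IOPS numbers from the tables above:

```python
# The quoted percentages are simple ratios of the averaged IOPS numbers
# from the two comparison tables.
def pct_higher(a, b):
    return round((a / b - 1) * 100)

laptop = pct_higher(3833.5, 3390.77)   # SQLIO vs SysBench on the laptop
ec2 = pct_higher(678.91, 408.96)       # SQLIO vs SysBench on EC2
print(laptop, ec2)
```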

Below are the gory details.

Here is the detailed output from my laptop:
SQLIO
C:\Program Files\SQLIO>sqlio -kR -s300 -dc -b16 -frandom -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3835.39
MBs/sec: 59.92

C:\Program Files\SQLIO>sqlio -kR -s300 -dc -b16 -frandom -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3832.00
MBs/sec: 59.87

SysBench
C:\MessAround\sysbench-0.4.12\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
WARNING: Operation time (18446744073709226000.000000) is greater than maximal counted value, counting as 10000000000000.000000
WARNING: Percentile statistics will be inaccurate
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (52.143Mb/sec)
3337.16 Requests/sec executed

Test execution summary:
total time: 2.9966s
total number of events: 10000
total time taken by event execution: 2.9343
per-request statistics:
min: 0.01ms
avg: 0.29ms
max: 18446744073709.47ms
approx. 95 percentile: 0.48ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 2.9343/0.00

C:\MessAround\sysbench-0.4.12\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
WARNING: Operation time (18446744073694841000.000000) is greater than maximal counted value, counting as 10000000000000.000000
WARNING: Percentile statistics will be inaccurate
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (53.818Mb/sec)
3444.38 Requests/sec executed

Test execution summary:
total time: 2.9033s
total number of events: 10000
total time taken by event execution: 2.8777
per-request statistics:
min: 0.01ms
avg: 0.29ms
max: 18446744073696.34ms
approx. 95 percentile: 15.39ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 2.8777/0.00

Here is the detailed output from Amazon EC2 ami-c3e40daa with the EBS device:
SQLIO
c:\Program Files\SQLIO>sqlio -kR -t1 -s300 -dC -frandom -b16 -Fparam.txt -BH -LS

sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
buffering set to use hardware disk cache (but not file cache)
size of file c:\testfile.dat needs to be: 3221225472 bytes
current file size: 0 bytes
need to expand by: 3221225472 bytes
expanding c:\testfile.dat … done.
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 1230.94
MBs/sec: 19.23
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 204
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 98 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

c:\Program Files\SQLIO>sqlio -kR -t1 -s300 -dC -frandom -b16 -Fparam.txt -BH -LS

sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
buffering set to use hardware disk cache (but not file cache)
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 126.88
MBs/sec: 1.98
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 7
Max_Latency(ms): 497
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 13 9 0 3 7 8 8 8 8 8 8 8 2 1 1 1 1 1 1 1 1 0 0 0 2

C:\Users\Administrator\Documents\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (10.64Mb/sec)
680.95 Requests/sec executed

Test execution summary:
total time: 14.6854s
total number of events: 10000
total time taken by event execution: 14.6048
per-request statistics:
min: 0.01ms
avg: 1.46ms
max: 150.29ms
approx. 95 percentile: 4.77ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 14.6048/0.00

C:\Users\Administrator\Documents\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (2.1371Mb/sec)
136.77 Requests/sec executed

Test execution summary:
total time: 73.1139s
total number of events: 10000
total time taken by event execution: 73.0284
per-request statistics:
min: 0.02ms
avg: 7.30ms
max: 728.84ms
approx. 95 percentile: 23.08ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 73.0284/0.00

posted by LifeisSimple
2010. 11. 10. 14:33 Brain Trainning/Storage
Source: see the link below

SAN Performance Tuning with SQLIO


SQLIO is a free utility from Microsoft that measures storage IO performance. The name "SQLIO" is horribly misleading, because it doesn't really have anything to do with SQL Server.


SQLIO Video Tutorial

In this ten-minute video, Brent Ozar explains how to get started with SQLIO, how to create a batch file to test your SAN, and how to analyze the results.

http://sqlserverpedia.com/wiki/SAN_Performance_Tuning_with_SQLIO

Benchmarking SQL Server IO with SQLIO

Written By: Andy Novick -- 10/4/2010


Problem
During a recent SQL Server install everything seemed to go fine.  SQL Server 2008 R2 went in with no problems.  There was a local drive with a RAID 1 set for the OS and system databases and two direct attached RAID 5 sets; one for transaction log files and one for data files.  Was everything okay?  Could it be better?  How could I tell?  In this tip we look at how to maximize your IO throughput.

Solution
There are many factors that go to make up the performance of SQL Server: CPU, RAM, disk I/O performance and, most importantly, the T-SQL code and queries.   The Control Panel Device Manager tells you how fast the CPU's are, how much memory is available and the memory speed.  The performance of queries is very important, but that's the subject of numerous other articles.  It's easy to tell the size of the disk, but not so easy to understand how many I/O operations the disks are capable of achieving.  The variety of technologies, such as controllers, HBA's, SAN's, virtualization and iSCSI, that can be employed between the Windows operating system (OS) and the physical drives is numerous and clouds any analysis based on the hardware specs.  

While disk drives are often purchased to provide the right amount of space, the I/O capacity of the drives is often neglected.  This leaves SQL Server in the unfortunate position of needing more I/O operations than the disks can provide.  The least that we can do is get the most out of the drives that we have and understand their capacity.  That's done with benchmarking.

http://www.mssqltips.com/tip.asp?tip=2127&sa=Search


SQL Server Best Practices Article

Published: June 5, 2007

Writer: Mike Ruthruff

Contributors: Michael Thomassy, Prem Mehra

Technical Reviewers: Robert Dorr, Stuart Ozer, Juergen Thomas, Lindsey Allen, Burzin Patel, Bill Emmert, Bruce Worthington, Sanjay Mishra

Applies To: SQL Server 2005

Summary: The I/O system is important to the performance of SQL Server. When configuring a new server for SQL Server or when adding or modifying the disk configuration of an existing system, it is good practice to determine the capacity of the I/O subsystem prior to deploying SQL Server. This white paper discusses validating and determining the capacity of an I/O subsystem. A number of tools are available for performing this type of testing. This white paper focuses on the SQLIO.exe tool, but also compares all available tools. It also covers basic I/O configuration best practices for SQL Server 2005.

On This Page

Overview 
Determining I/O Capacity 
Disk Configuration Best Practices & Common Pitfalls 
SQLIO 
Monitoring I/O Performance Using System Monitor 
Conclusion 
Resources 

http://msdn.microsoft.com/en-us/library/cc966412.aspx


SQLIO - tool to measure IO performance on Microsoft SQL Server


Attached is the performance testing tool from Microsoft I mentioned, which will test the number of IOs that the Fusion-io ioDrive can do on a SQL database. You can also use it as a tool for multiple IO patterns for other types of database data patterns.
 
Unzip the attached sqlio.zip file in a directory on your hard disk drive. 
 
It will create a directory called SQL. 

http://kb.pixel.com.au/kb/index.php?View=entry&EntryID=10

posted by LifeisSimple
2010. 11. 10. 11:11 Brain Trainning/DataBase

Source: mssqltips

Problem
During a recent SQL Server install everything seemed to go fine.  SQL Server 2008 R2 went in with no problems.  There was a local drive with a RAID 1 set for the OS and system databases and two direct attached RAID 5 sets; one for transaction log files and one for data files.  Was everything okay?  Could it be better?  How could I tell?  In this tip we look at how to maximize your IO throughput.

Solution
There are many factors that go to make up the performance of SQL Server: CPU, RAM, disk I/O performance and, most importantly, the T-SQL code and queries.   The Control Panel Device Manager tells you how fast the CPU's are, how much memory is available and the memory speed.  The performance of queries is very important, but that's the subject of numerous other articles.  It's easy to tell the size of the disk, but not so easy to understand how many I/O operations the disks are capable of achieving.  The variety of technologies, such as controllers, HBA's, SAN's, virtualization and iSCSI, that can be employed between the Windows operating system (OS) and the physical drives is numerous and clouds any analysis based on the hardware specs.  

While disk drives are often purchased to provide the right amount of space, the I/O capacity of the drives is often neglected.  This leaves SQL Server in the unfortunate position of needing more I/O operations than the disks can provide.  The least that we can do is get the most out of the drives that we have and understand their capacity.  That's done with benchmarking.

My favorite tool for benchmarking is the free tool SQLIO from Microsoft, which can be downloaded from Microsoft downloads here. There are alternatives, such as IOMeter. IOMeter was originally written by Intel, but is now open source and can be downloaded from www.IOMeter.org. It has the advantage that there are versions for Linux, Solaris and even Netware. Since SQL Server runs on Windows all we need is SQLIO, and becoming familiar with a single tool has a lot of advantages. There are others from Microsoft such as SQLIOSim and SQLIOStress that are tools for validating the robustness rather than the performance of disk systems.

After running setup, find the install directory, which is usually "c:\Program Files\SQLIO" on 32 bit systems or "c:\Program Files (x86)\SQLIO" on 64 bit systems. For Intel Windows, SQLIO is a 32 bit program. There's an advanced version for Alpha that's a 64 bit program.

Configure Param.txt

The first step is to modify the file param.txt to tell SQLIO where to find its test file which is named testfile.dat.  By default the file will be in the same directory as the program file, but you probably don't need to benchmark your C: drive. Instead I'm going to benchmark the L: drive, one of the direct attached RAID 5 sets.

Param.txt has 4 parameters for each line.

Parameter           Description                                                     Value
file name           Name of the test file.                                          L:\testfile.dat
number of threads   Size of the thread pool. This will be overridden by the
                    command line later.                                             8
mask                Affinity mask to bind operations against this file to
                    particular CPUs. I don't use this and just use 0x0 for
                    all CPUs.                                                       0x0
file size           The size in MB. This should be roughly the size of your
                    database and always larger than your cache, unless you
                    want to test just the cache. I usually use a 100 GB file,
                    but 20 GB (20480 MB) might be okay if you're short on
                    space.                                                          102400

I generally test only one drive at a time, so I only put one line into this file. You can put several lines, each for its own test file. Lines starting with # are treated as comments. My typical param.txt file for testing the L drive would be:

L:\testfile.dat 8 0x0 102400
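If you script your test setup, the param.txt line can be assembled from the four fields in the table above. A tiny sketch (the helper name is illustrative, not part of SQLIO):

```python
# Sketch: assemble a param.txt line from the four fields described above
# (file name, thread count, affinity mask, file size in MB). Defaults
# mirror the example line in the text.
def param_line(path, threads=8, mask="0x0", size_mb=102400):
    return f"{path} {threads} {mask} {size_mb}"

print(param_line(r"L:\testfile.dat"))
```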

Create Testfile

The next step is to run SQLIO once, so that it can create the testfile.dat.  This takes a long time to run and I'd rather keep this run separate from any test runs.  

sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt

You'll see output like this:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
 sqlio v1.5.SG
 parameter file used: param.txt
 file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
 2 threads writing for 5 secs to file l:\testfile.dat
 using 64KB sequential IOs
 enabling multiple I/Os per thread with 4 outstanding
 size of file l:\testfile.dat needs to be: 107374182400 bytes
 current file size: 0 bytes
 need to expand by: 107374182400 bytes
 expanding l:\testfile.dat ...

Expansion on my system took 15 minutes, so be patient while waiting for this to complete. 
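As a rough sanity check on that expansion time: SQLIO reported the file needs to be 107374182400 bytes, and writing that in about 15 minutes implies roughly 114 MB/s of sequential write throughput, plausible for a direct attached RAID 5 set. In code (assuming, as a simplification, that the full 15 minutes was spent writing):

```python
# Back-of-the-envelope check on the 15-minute expansion time quoted above:
# a 100 GB test file written in 15 minutes implies roughly 114 MB/s of
# sequential write throughput on that RAID 5 set.
file_bytes = 107374182400          # size reported by SQLIO for the 100 GB file
seconds = 15 * 60
mb_per_sec = file_bytes / (1024 * 1024) / seconds
print(round(mb_per_sec, 1))
```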

Let's run down the most important command line switches that we'll be using.

Switch  Description                                                           Example
-d      The drive or drives to work on. There should already be a
        testfile.dat on that drive as specified in param.txt. There can be
        multiple drive letters, such as in "-dEFG". I test only one drive
        at a time.                                                            -dL
-B      Controls buffering. N for no buffering, H for only hardware
        controller buffering, S for Windows software buffering, or Y for
        both hardware and software buffering. To match what SQL Server is
        doing, use -BH for just hardware buffering.                           -BH
-k      Operation to perform. W for write, R for read.                        -kW
-f      Random or sequential I/O. -frandom chooses the block randomly
        across the file; -fsequential reads or writes sequentially. There
        are additional choices for testing specific block sizes.              -frandom
-t      Threads to run simultaneously.                                        -t2
-o      Outstanding requests on each thread. SQL Server threads issue read
        requests and then wait, so their typical depth is 1. However, the
        Lazy Writer issues many write requests, so a switch of "-o8" would
        always keep 8 I/O operations queued on each thread. You'll have to
        balance the number of threads and the number of outstanding
        requests per thread to keep the disk subsystem busy. I usually
        increase this number until the disks become saturated.                -o8
-s      Seconds to run. This should be long enough to fill any cache and
        then build the queue to its maximum before processing settles into
        a steady state. 90 seconds is usually sufficient to get a good
        picture of what the disks are capable of.                             -s90
-b      Block size of each I/O in kilobytes. SQL Server reads and writes
        data files in 64K blocks.                                             -b64

There are additional switches available, and a detailed description of each switch is included in the readme.txt file that serves as the documentation for SQLIO.

Here's the complete output from the initial run of SQLIO used to create the test file:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 5 secs to file l:\testfile.dat
using 64KB sequential IOs
enabling multiple I/Os per thread with 4 outstanding
size of file l:\testfile.dat needs to be: 107374182400 bytes
current file size: 0 bytes
need to expand by: 107374182400 bytes
expanding l:\testfile.dat ... done.
using specified size: 102400 MB for file: l:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 2725.80
MBs/sec: 170.3

At the bottom we see the two most important metrics for analyzing the drives: Input/Output operations per second (IOs/sec) and Megabytes per second (MBs/Sec).  These two numbers are related by the block size used for the test. The L drive in my tests is a RAID 5 set of four 7200 RPM SATA drives. The allocation unit size is 4096 and the partition offset is 32256.  In the article Partition offset and allocation unit size of a disk for SQL Server I described how to determine these numbers and how these can be changed.
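The relationship between the two metrics is simple: megabytes per second equals IOs per second times the block size in KB, divided by 1024. A quick check against the creation run above (a sketch; the small discrepancy is just SQLIO's rounding):

```python
def mb_per_sec(ios_per_sec, block_kb):
    """MBs/sec = IOs/sec * block size in KB / 1024."""
    return ios_per_sec * block_kb / 1024

# The creation run reported 2725.80 IOs/sec with 64 KB blocks.
print(mb_per_sec(2725.80, 64))   # ~170.4, matching the reported 170.3
```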

Testing

Now, to test the L: drive, I create a series of tests in a command file (.cmd) and let them run. The number of I/O requests outstanding at any one time is the product of two switches: -t for threads and -o for outstanding requests per thread. The tests start with one outstanding request per thread and increase the number of threads until there is one per core in the server, then double the number of outstanding requests per thread after that.  Each line doubles the total number of outstanding requests from the previous line.  What I'm looking for is the maximum number of I/Os that the L: drive can handle.
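These test series can be generated rather than typed by hand. A minimal Python sketch that builds the doubling sequence just described (the helper name and defaults are my own, assuming an 8-core server):

```python
def sqlio_commands(drive="L", op="W", pattern="random", block_kb=64,
                   cores=8, max_outstanding=1024, secs=90):
    """Build a SQLIO test series: first scale threads up to the core
    count with one outstanding I/O each, then double the outstanding
    requests per thread until the target total queue depth is reached."""
    cmds = []
    t = 1
    while t <= cores:      # one outstanding request per thread
        cmds.append(f"sqlio -d{drive} -BH -k{op} -f{pattern} "
                    f"-t{t} -o1 -s{secs} -b{block_kb} testfile.dat")
        t *= 2
    o = 2
    while cores * o <= max_outstanding:   # now deepen the per-thread queue
        cmds.append(f"sqlio -d{drive} -BH -k{op} -f{pattern} "
                    f"-t{cores} -o{o} -s{secs} -b{block_kb} testfile.dat")
        o *= 2
    return cmds

for cmd in sqlio_commands(op="R"):
    print(cmd)
```

With the defaults above, `sqlio_commands(op="R")` reproduces the eleven random-read command lines shown below, from -t1 -o1 up to -t8 -o128.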

Random Writes Test

sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat

Here's the output from the run:

C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
1 thread writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 110.06
MBs/sec: 6.87
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
2 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.24
MBs/sec: 6.70
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
4 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 112.67
MBs/sec: 7.04
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
8 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.36
MBs/sec: 6.71

I've extracted the IOs per second and megabytes per second numbers into the table that follows.  It looks like one outstanding request is all it takes for the drive to reach its maximum capacity for random writes, because all tests after the first show very similar performance. Random writes are used by the Lazy Writer and by checkpoints to save pages to disk.  The L: drive can perform roughly 110 of these each second. Since the block size switch (-b) was set to 64K, that works out to around 7 megabytes per second.

Outstanding I/Os   IOs per Second   MB per Second
1                  110.06           6.87
2                  107.24           6.70
4                  112.67           7.04
8                  107.36           6.71
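Rather than copying these numbers out of the console by hand, the two metrics can be scraped from SQLIO's output. A minimal sketch using Python's re module, based on the output format shown above:

```python
import re

def parse_sqlio(output):
    """Pull the IOs/sec and MBs/sec metrics out of SQLIO console output."""
    ios = re.search(r"IOs/sec:\s*([\d.]+)", output)
    mbs = re.search(r"MBs/sec:\s*([\d.]+)", output)
    return float(ios.group(1)), float(mbs.group(1))

sample = """CUMULATIVE DATA:
throughput metrics:
IOs/sec: 110.06
MBs/sec: 6.87"""
print(parse_sqlio(sample))   # (110.06, 6.87)
```

Redirecting each SQLIO run's output to a file and feeding it through a parser like this makes building the result tables mechanical.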

Random Reads Test

The next set of tests is for random reads.  SQL Server does random reads when doing bookmark lookups or when reading from fragmented tables. Here are the tests that I ran:

sqlio -dL -BH -kR -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o128 -s90 -b64 testfile.dat

Here are the results extracted to a table:

Outstanding I/Os   IOs per Second   MB per Second
1                  100.83           6.30
2                  149.90           9.36
4                  204.07           12.75
8                  261.24           16.32
16                 315.76           19.73
32                 366.78           22.92
64                 420.82           26.30
128                453.46           28.34
256                486.76           30.42
512                488.14           30.50
1024               488.68           30.54

The L: drive plateaus at around 488 I/Os per second, which works out to 30.5 MB/sec of throughput.  That's an okay number, but we must remember that when there are a large number of bookmark lookups, that's all the physical I/Os the L: drive can deliver.  If the table is very large, the chance of a page being cached is small, and it may take one physical read for each lookup no matter how much memory or cache is available.
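Little's Law (outstanding I/Os = IOs/sec × average latency) gives a feel for the per-read latency behind these numbers. A sketch using two rows from the table above (my own arithmetic, not SQLIO output):

```python
def avg_latency_ms(outstanding, ios_per_sec):
    """Little's Law: average latency = outstanding requests / throughput."""
    return outstanding / ios_per_sec * 1000

# One outstanding read: ~10 ms per I/O, typical for a 7200 RPM seek.
print(avg_latency_ms(1, 100.83))    # ~9.9 ms
# At 64 outstanding, throughput is higher but each I/O waits in a
# deep queue, so per-request latency is far worse.
print(avg_latency_ms(64, 420.82))   # ~152 ms
```

This is why deep queues raise total throughput while individual query response times get worse.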

Sequential Reads Test

Performance is best when SQL Server is reading sequentially rather than randomly.  This is achieved when SQL Server is able to do seeks or scans on indexes or heaps that are not fragmented.  To test sequential reads I'll run these tests:

sqlio -dL -BH -kR -fsequential -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o128 -s90 -b64 testfile.dat

Here are the results.

Outstanding I/Os   IOs per Second   MB per Second
1                  3646             227
2                  5497             343
4                  5469             341
8                  5550             346
16                 5503             343
32                 5577             348
64                 5574             348
128                5637             352
256                5611             350
512                5673             354
1024               5629             351

Sequential reads are faster than random reads, because the disk heads often don't have to move to satisfy the request.  On the L: drive the number of reads climbs to around 5600 and plateaus there. 

Sequential Writes Test

There are a variety of times when SQL Server is able to write sequentially.  One of them is when it writes to the transaction log file.  If there is only one log file on the drive, all writes are to the end of the file and the disk heads don't have to move.  I've changed the parameters to use 8K blocks (-b8) to be closer to how log files are used in SQL Server.

sqlio -dL -BH -kW -fsequential -t1 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t2 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t4 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o2 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o4 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o8 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o16 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o32 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o64 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o128 -s90 -b8 testfile.dat

Here are the results:

Outstanding I/Os   IOs per Second   MB per Second
1                  11,510           90
2                  14,888           116
4                  15,030           117
8                  14,911           116
16                 15,006           117
32                 14,931           116
64                 15,062           117

Each of these I/Os is 8K, smaller than the 64K I/Os used in the tests further above.  The number of operations rises rapidly to around 15,000 per second and plateaus at 117 megabytes per second.

Conclusion

One key to achieving this performance is that the SQLIO test was the only activity on the drive while the test was run. Similar results will apply to SQL Server, so long as the drive isn't shared with other activity.  The numbers from these tests can be compared with tests run using other ways of formatting the drive, different RAID settings, or other drives to get an idea of expected performance and help optimize the drive.  I also keep these results on hand in case there is a suspicion later that something is wrong with a drive: the tests can be re-run and I can quickly verify whether or not performance has changed.
