
2011. 3. 28. 11:00 Brain Trainning/Storage

This is from a draft of doc I'm currently writing for my boss - you may find it useful.

Update Host Bus Adapter (HBA) Firmware
Rules
• The DBA must confirm with the SAN Administrator that the HBA firmware is current.

Notes
Generally, these updates can’t be done online while the server accesses data, so it’s better to get the code up to date before the box goes into production. 
Multipathing Software
Rules
• The DBA must confirm with the SAN Administrator that the multipathing SAN software is being used to ensure SQL Server can always communicate with its disk drives.

Notes
A dataserver has to be able to access its drives at all times. When hard drives were directly attached to servers, this wasn’t a big risk, but storage area networks bring in a lot of risky factors. Cables get unplugged or get bent beyond repair. Switches fail. Network configurations don’t go according to plan.
Multipathing software mitigates this risk by enabling the SAN admin to set up multiple routes between a server and its drives. The multipathing software handles all IO requests, passes them through the best possible path, and takes care of business if one of the paths dies.
In the event of a problem like an unplugged cable, the multipathing software will sense that IO has taken too long, then reset the connections and pass the request over an alternate path. The application (like SQL Server) won’t know anything went wrong, but the IO request will take longer than usual to perform. Sometimes in SQL Server, this shows up as an application-level alert that IO has taken more than 15 seconds to complete.
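The retry-and-reroute behavior described above can be sketched in a few lines (an illustrative Python model, not real multipathing code; the path names and failure model are invented for the example):

```python
def issue_io(path, dead_paths):
    """Simulate one IO attempt; a dead path 'times out' (hypothetical model)."""
    if path in dead_paths:
        raise TimeoutError(f"IO on path {path} exceeded threshold")
    return f"data via {path}"

def multipath_read(paths, dead_paths):
    """Try each path in order, failing over on timeout, the way multipathing
    software does transparently beneath the application."""
    for path in paths:
        try:
            return issue_io(path, dead_paths)
        except TimeoutError:
            continue  # reset the connection and route over the next path
    raise IOError("all paths to the storage are down")

# Path A is unplugged: the read still succeeds, just via path B
print(multipath_read(["A", "B"], dead_paths={"A"}))
```

The application, like SQL Server in the text, only sees a slower-than-usual IO, which is why the 15-second IO warning is often the first visible symptom of a path failure.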
Multipathing and Failover Testing 
Rules
• The DBA must conduct a basic multipathing failover test with the SAN Administrator to ensure that SQL Server operations can continue in the event of a network configuration error when SQL Server communicates with its disk drives.
• The DBA should conduct a SAN zone multipathing failover test with the SAN Administrator to ensure that SQL Server operations can continue in the event of a network configuration error when SQL Server communicates with its disk drives.

Notes
Start a huge file copy to each array, and do them all simultaneously. Go into the datacenter and pull one fiber cable out. Watch to make sure the file copy continues. Some SAN drivers will take 10-15 seconds to fail over, but the file copies should not be disrupted. If they’re disrupted, the multipathing didn’t work. Then plug the cable back in, and pull another cable. Again, the file copy should continue. Finally, while a file copy is running, ask the SAN admin to disable one of the SAN zones for the server – that way, the fiber cable will still be lit up, but the path to the storage will be gone. (This is a tougher failover method than just pulling the fiber.)
HBA Queue Depth Optimisation 
Rules
• The DBA must work with the SAN Administrator to conduct a SQL Server tuning exercise which will determine how close we can come to implementing the SQLCAT recommendations for SQL Server HBA Queue Depth without prohibitively impairing the performance of other SAN consumers.

Notes
The HBA is used to connect your server to the SAN. The HBA queue depth setting throttles the maximum number of I/O operations that can simultaneously flow to the SAN from the HBA port. How you modify this setting depends on the brand and model of your HBA. If the queue depth setting is too low, your SQL Server instance's I/O throughput can suffer. 
SQL Server DBAs should work with their company's SAN administrator to find a healthy balance between the SQL Server instance's overall I/O throughput and the SAN's overall capacity. 
The SQL Server Customer Advisory Team (SQLCAT) recommends increasing the queue depth from a typical default of 32 to 64 or even 128 for SQL Servers. The caveat is that this increases the load on the SAN, so on a shared SAN you might affect performance for the other consumers. Testing will therefore be required.
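Little's law gives a rough intuition for why queue depth caps throughput: sustained IOPS through a port cannot exceed the number of outstanding I/Os divided by the average service time. A back-of-the-envelope sketch (the 5 ms latency is an assumed figure for illustration, not from the text):

```python
def max_iops(queue_depth, avg_latency_ms):
    """Little's law: concurrency = throughput x latency, so the IOPS
    ceiling for one HBA port is queue_depth / average service time."""
    return queue_depth / (avg_latency_ms / 1000.0)

# Default of 32 vs the SQLCAT-recommended 64 and 128, at an assumed 5 ms
for depth in (32, 64, 128):
    print(depth, round(max_iops(depth, 5)))
```

At 5 ms average latency a queue depth of 32 caps a port around 6,400 IOPS, which is why raising it can help a busy instance, and why the extra concurrency has to come out of the SAN's shared capacity.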
LUN Considerations
Rules
• The DBA should work with the SAN Administrator to ensure that SQL Server disk drives reside on a dedicated LUN.

Notes
Also try to ensure that your log files are on a separate LUN consisting of a dedicated disk. Log files are typically written in sequential patterns, unlike data files. Having a LUN with drives shared with another application will not provide optimal IO performance. Your SAN administrator may not permit you to dedicate a separate disk or set of disks to your log.
SAN Caching
Rules
• The DBA must work with the SAN Administrator to establish that the SAN cache will be sufficient for the SQL Server deployment at hand.

Notes
It is also possible that some file or database operations (like a checkpoint) can saturate the cache, resulting in degraded read and write performance. Do benchmarking with your SAN vendor to ensure that your SAN cache will be adequate to provide optimal database performance. 
Drive Layout Configuration

Rules
• For medium usage instances, the following drive configuration should be used –
User Database Data F:\MSSQL\<Instance name>\Data
User Database Logs G:\MSSQL\<Instance name>\Logs
TempDB Data and Logs H:\MSSQL\<Instance name>\TempDB
• For heavy usage instances, the following drive configuration should be used –
User Database Data F:\MSSQL\<Instance name>\Data
User Database Logs G:\MSSQL\<Instance name>\Logs
TempDB Data H:\MSSQL\<Instance name>\TempDB\Data
TempDB Logs I:\MSSQL\<Instance name>\TempDB\Logs
• The DBA should work with the SAN Administrator to ensure that a RAID and file separation strategy is chosen that is appropriate for the size of SQL Server instance being deployed.

Notes
The small database server. This would be a database system having a small number of users and a low number of transactions per second. For a system of this type, putting the data files, transaction logs and tempdb database on a single RAID 5 array will be fine. With a load this low, there should be minimal to no performance loss because of having the files on the same array.

The medium database server. This would be a system that has a larger number of users and no more than 100 transactions per second. At this system size, you will begin to separate parts of the database. Start by separating the database files and log files onto different RAID arrays. The tempdb database can probably remain on the same array as the database files. But, if the tempdb database is heavily used for things like lots of temporary table usage, it should be moved to its own drive array.

The large database server. This system is for hundreds of users and hundreds or thousands of transactions per second. When designing systems of this size, break apart not only the database files, log files and tempdb database files, but also split the database itself into smaller data files on multiple drives. This includes moving the indexes to separate files on RAID 1 or RAID 5 arrays, and moving blob data (data stored in TEXT, NTEXT, IMAGE, VARCHAR(MAX) and NVARCHAR(MAX) data types) to a separate RAID 5 array. You can also place different types of tables on different drives by assigning them to different file groups.

It is recommended that for larger systems, each file group of the database should have between 0.25 and one physical file per physical CPU core in the server. The tempdb database should have one data file per physical CPU core in the server.
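Those ratios translate directly into file counts for a given box (a quick sketch; the 8-core server is just an assumed example):

```python
import math

def file_counts(physical_cores):
    """Apply the rule of thumb above: 0.25 to 1 data file per physical core
    for each user-database filegroup, and one tempdb data file per core."""
    per_filegroup = (max(1, math.ceil(physical_cores * 0.25)), physical_cores)
    tempdb_files = physical_cores
    return per_filegroup, tempdb_files

fg_range, tempdb = file_counts(8)
print(f"filegroup files: {fg_range[0]}-{fg_range[1]}, tempdb files: {tempdb}")
```

So an 8-core server would carry 2 to 8 files per filegroup and 8 tempdb data files under this guidance.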

Most databases are high read and low write and, usually, database files are placed on RAID 5 arrays, which give good disk performance at a reasonable cost. However, because the tempdb is a very high-write database, a RAID 5 array isn't the proper place for it. You should put the tempdb on either a RAID 1 or RAID 10 array as they're optimized for high-write applications. If you can afford additional RAID 1 or RAID 10 arrays for each physical database file for the tempdb, you'll get increased performance.

For best performance, put the tempdb (both data and logs) on its own RAID array or arrays that are not shared with any other parts of the database or system. This includes not placing other LUNs on the same physical spindles as the tempdb LUN, because those LUNs put additional load on the same disks. That extra load is extremely difficult to troubleshoot from the server side, because the server doesn't know about the other I/O hitting those disks.

Tiered Storage – Select Appropriately
Rules
• The DBA should work with the SAN Administrator to ensure that the tier of SAN selected for hosting SQL Server operational files is appropriate for the business requirements that the deployment aims to meet.

Notes
Tier 1 Storage
For business-critical 24x7 databases, file servers, email applications, and data warehouses, a redundant, cache-based tiered storage model, called tier 1 storage, is the best option. The tier 1 storage model offers quick response time and fast data transfer rates. As such, tier 1 storage is a great solution for companies that need to effectively store high performance data that demands high availability. 

Tier 2 Storage
For seldom-used, non-critical databases—historical data for instance—a tier 2 storage model is a great option since tier 2 data can generally be stored on less expensive media in storage area networks (SAN). Tier 2 storage is a good solution for organizations that have a large amount of data that doesn't require 24x7 availability or extensive backup. Tier 2 storage can also help reduce hardware costs and management overhead.

Tier 3 Storage
For rarely accessed data, a tier 3 storage model offers further economies of scale since data can be stored on even less expensive media such as recordable compact discs. Tier 3 storage is a convenient and simple way for IT administrators to protect large amounts of non-critical data from fire, theft, and computer malfunctions.
posted by LifeisSimple
2011. 3. 25. 16:51 Brain Trainning/Storage
Source: http://social.technet.microsoft.com/wiki/contents/articles/sqlio-disk-subsystem-benchmark-tool-troubleshooting-hangs-and-errors.aspx

SQLIO Disk Subsystem Benchmark Tool: Troubleshooting Hangs and Errors

The SQLIO Disk Subsystem Benchmark Tool is a tool provided by Microsoft (SQLIO from Microsoft Download), which can be used to determine the I/O capacity of a given configuration. However, it comes with the disclaimer that SQLIO is provided ‘as is’ and there is no support offered for any problems encountered when using the tool. Please refer to the EULA.doc for the license agreement prior to using this tool.

Tips on how to prevent or resolve issues when running the SQLIO tool, especially on Windows 7 and Windows Server 2008 R2:

  1. Ensure that you have created an exception for SQLIO in Data Execution Prevention (DEP). In Windows 7 or Windows Server 2008 R2, you can create exceptions in DEP by opening System Properties (sysdm.cpl) and then clicking Advanced. Then, in Performance, click Settings. Click Data Execution Prevention. If Turn on DEP for all programs and services except those I select is selected, click Add, then type the full path to SQLIO.EXE, including SQLIO.EXE. For example, C:\Program Files (x86)\SQLIO\SQLIO.EXE. If the other option, Turn on DEP for essential Windows programs and services only, is selected, you do not need to create an exception.

  • If your SQLIO commands contain -fsequential and SQLIO is crashing, ensure you define a param.txt file that sets the test file size before testing.
  • Ensure that all programs that are operating or could operate on the target drive for testing are off or suspended during the test. Two common examples include:
    1. Ensure that you configure exceptions in any virus scanning software that you have running or that could run during your test for both SQLIO.EXE and the target volume of the test.
    2. Ensure that you disable any drive indexing software. 

 To learn more about SQLIO, see the following related references:

2011. 3. 23. 13:50 Brain Trainning/Storage

Source: http://www.sqlteam.com/article/benchmarking-disk-io-performance-size-matters

Benchmarking Disk I/O Performance: Size Matters!

By Linchi Shea on 30 October 2007 | 9 Comments | Tags: Performance Tuning, Disk Tuning


This article discusses the impact of test file size when benchmarking SQL Server disk performance. (Editor's Note: This is our first article by Linchi Shea.  Linchi is a SQL Server MVP and an expert on SQL Server hard drive performance.  The article addresses the impact of test file size when using SQLIO.EXE to test hard drive performance and how an incorrect file size can give misleading results.)

If you use a tool such as IOMeter or sqlio.exe--both of which are freely downloadable--to benchmark the I/O performance of a disk, you typically would configure the tool to generate I/Os against a test file that is placed on the disk.

For instance, with sqlio.exe the test file is specified on the command line as follows:

sqlIO -kR -t1 -s30 -dE -o32 -frandom -b8 -BH -LS testfile.dat

In this case, you are benchmarking 8K random reads against the testfile.dat file on the E drive. The sqlio.exe utility will issue the 8K read requests from a single thread and will maintain an I/O queue depth of 32.

The question is: Does the size of the test file matter? If so, what size should you pick?

Benchmark results may lie

I was interested in comparing the 8K random reads performance between the E drive and the F drive on one of my machines. So, I first ran the above command line, and that gave me the following results:

E drive:

IOs/sec: 2,681
MBs/sec: 21
Min_Latency (ms): 0
Avg_Latency (ms): 11
Max_Latency (ms): 310

I then changed the -d parameter value from E to F, and re-ran the command line, and obtained the following results:

F drive:

IOs/sec: 19,871
MBs/sec: 155
Min_Latency (ms): 0
Avg_Latency (ms): 1
Max_Latency (ms): 16

From these results, it is more than obvious that the F drive was far more performant than the E drive, at least for 8K random reads.

Unfortunately, that conclusion would be terribly far from the truth!

"Wait a minute and stop right there! Are you telling me that a drive that can do ~20,000 reads per second is not significantly better than a drive that does only 2600 reads a second?"

Well, that's not what I'm telling you. What I failed to disclose in the little comparison tests described above was that the test file on the F drive was 256MB in size, whereas the test file on the E drive was 128GB in size. It turns out that the size of the test file has such a dramatic impact on the resulting I/O metrics that I was literally comparing apples and oranges.

Running the same sqlio.exe command line against a 256MB test file on the E drive produced the following results:

IOs/sec: 16,635
MBs/sec: 130
Min_Latency (ms): 0
Avg_Latency (ms): 1
Max_Latency (ms): 29

The F drive still performed better than did the E drive. But instead of eight times better, the F drive was about 20% better in I/Os per second.
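The revised comparison is simple arithmetic on the numbers above:

```python
# IOs/sec from the article, both drives measured with a 256MB test file
e_iops, f_iops = 16_635, 19_871

improvement = (f_iops - e_iops) / e_iops * 100
print(f"F drive is {improvement:.1f}% faster")  # ~19.5%, i.e. "about 20%"
```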

The impact of test file size

To drive home the fact that the size of the test file can have an overwhelming impact on the disk I/O benchmark results, I ran the following identical command line repeatedly with the testfile.dat set to different sizes:

sqlIO -kR -t1 -s30 -dE -o32 -frandom -b8 -BH -LS testfile.dat

The sizes of testfile.dat I tested included the following:

  • 256MB
  • 512MB
  • 2GB
  • 8GB
  • 32GB
  • 64GB
  • 128GB

The chart below illustrates the impact on I/Os per second when the size of the test file was the only parameter changed:

For different disk drives, the shape of the curve and the inflexion point will differ. But make no mistake: it is meaningless to state a disk I/O metric without also disclosing the size of the test file.

Why size matters

Among many factors that may help render the disk benchmark results sensitive to test file size, the following two are worth noting:

  • Disk seek area
  • Cache size

Before explaining why these two factors are significant, let me note that when you configure sqlio.exe to do random reads (or writes) there is no data locality. In other words, the I/O requests will not concentrate on a subset of the data, but will be distributed over the entire file.

Disk seek area. A larger file generally occupies a larger area on the disk platter than does a smaller file. For random access, this means that the disk arm has to cover more tracks, leading to a higher seek time on average. Since seek time is often the most significant time-consuming component in processing an I/O request, average I/O performance is therefore expected to drop when you increase the test file size.

Cache size. If an I/O request is satisfied with data already in the cache of a disk subsystem, seek time is no longer a factor. In addition, the more I/O requests that are satisfied only from the cache, the higher the average I/O performance. And the smaller the test file relative to the cache, the more likely the requested data will be found in the cache. It therefore follows that the smaller the test file the higher the average I/O performance.
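Both factors can be folded into a simple expected-latency model (illustrative only; the cache size and latency figures are assumptions, and it treats cache hits as uniform over the file):

```python
def avg_latency_ms(file_mb, cache_mb, hit_ms=0.2, seek_ms=8.0):
    """With uniformly random IO, the chance a request is served from cache
    is roughly cache_size / file_size (capped at 1); the average latency is
    the hit/miss-weighted mix of cache latency and seek-dominated latency."""
    p_hit = min(1.0, cache_mb / file_mb)
    return p_hit * hit_ms + (1 - p_hit) * seek_ms

# Assumed 1GB controller cache, test files of 256MB, 2GB, and 128GB
for size_mb in (256, 2048, 131072):
    print(size_mb, round(avg_latency_ms(size_mb, cache_mb=1024), 2))
```

A file that fits in cache sees only cache latency, while a 128GB file is dominated by seek time, which is exactly the apples-to-oranges gap in the earlier E-versus-F comparison.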

What size to use?

This is a frequently asked question, but unfortunately there is no one-size-fits-all answer. If you read the literature on this topic, you will no doubt find that the most common recommendation is to use a test file that is larger than the cache size of the disk subsystem being benchmarked.

This is generally a good recommendation, but not an absolute truth. It is generally good because as a SQL Server user, you benchmark a disk to make better use of SQL Server for your application databases, and your application databases in general rarely fit entirely into the cache of your disk subsystem.

Then, why would you ever want to use a small test file? The most obvious reason is when you are dealing with databases that are small enough to be cached, and you want to get disk benchmark numbers that are relevant to these databases. There are also specific benchmarking scenarios in which a small test file is a must. Consider the case where you want to measure the read latency between the host OS and the I/O subsystem cache. You should configure your tests to eliminate the influence of the disk media access time, and a test file that is small enough to fit into the cache will do the trick.

2011. 3. 23. 11:45 Brain Trainning/Storage
Source: http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/05/25/parsing-sqlio-output-to-excel-charts-using-regex-in-powershell.aspx

Parsing SQLIO Output to Excel Charts using Regex in PowerShell

Today Joe Webb (Blog|Twitter) blogged about The Power of Regex in Powershell, and in his post he shows how to parse the SQL Server Error Log for events of interest.  At the end of his blog post Joe asked about other places where Regular Expressions have been useful in PowerShell so I thought I’d blog my script for parsing SQLIO output using Regex in PowerShell, to populate an Excel worksheet and build charts based on the results automatically.

If you’ve never used SQLIO, Brent Ozar (Blog|Twitter) has an article on SQLServerPedia titled SAN Performance Tuning with SQLIO that includes a lot of information as well as a video tutorial on its use.  I have been using SQLIO quite a bit this year, mainly to gather performance information for systems before and after reconfiguring their storage to show percent impact information to the business for making changes in storage like migrating from RAID 5 to 10, and aligning disk partitions used for database and log file storage.  I have also used it in benchmarking new SAN configurations and the SAN configuration of our Virtual Machine environment to show team members why I have recommended that a number of our physical servers be migrated onto Virtual Machines.

I generally use a param.txt file similar to:

G:\MSSQL\testfile1.dat 4 0x0 32768

which creates a 32GB test file and then have a DOS batch file that runs the actual SQLIO tests using a script like:

sqlio -kW -t16 -s10 -o8 -fsequential -b8 -BH -LS -Fparam.txt

sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS G:\MSSQL\testfile1.dat > post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

sqlio -kR -t8 -s360 -o8 -frandom -b8 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -frandom -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

sqlio -kW -t8 -s360 -o8 -fsequential -b8 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kW -t8 -s360 -o8 -fsequential -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt

sqlio -kR -t8 -s360 -o8 -fsequential -b8 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b32 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b64 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b128 -BH -LS G:\MSSQL\testfile1.dat >> post.txt 
sqlio -kR -t8 -s360 -o8 -fsequential -b256 -BH -LS G:\MSSQL\testfile1.dat >> post.txt
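The test matrix above is mechanical enough to generate programmatically; a sketch that reproduces the same 20 command lines (paths and parameters copied from the batch file):

```python
# The four read/write x random/sequential combinations, in the batch file's order
ops = [("W", "random"), ("R", "random"), ("W", "sequential"), ("R", "sequential")]
blocks = [8, 32, 64, 128, 256]          # block sizes in KB
target = r"G:\MSSQL\testfile1.dat"

lines = [
    f"sqlio -k{op} -t8 -s360 -o8 -f{pattern} -b{b} -BH -LS {target}"
    for op, pattern in ops
    for b in blocks
]
print(len(lines))   # 20 test runs
print(lines[0])
```

Generating the commands this way makes it easy to widen the matrix later (more thread counts, more queue depths) without hand-editing the batch file.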

This batch file runs the gamut of tests using 8, 32, 64, 128, and 256K block sizes, with 8 threads and 8 outstanding IOs.  You can certainly expand the test set much further than this and do a very exhaustive testing regimen; I am by no means saying that my tests above are the best fit, but they have sufficiently met my needs for the tuning I have been doing.  Once the tests have been run, the post.txt file will contain an output entry similar to the following for every test that was run.

sqlio v1.5.SG 
using system counter for latency timings, 14318180 counts per second 
16 threads writing for 60 secs to file d:\MSSQL\testfile1.dat 
    using 8KB random IOs 
    enabling multiple I/Os per thread with 8 outstanding 
    buffering set to use hardware disk cache (but not file cache) 
using current size: 2048 MB for file: d:\MSSQL\testfile1.dat 
initialization done 
CUMULATIVE DATA: 
throughput metrics: 
IOs/sec: 18527.91 
MBs/sec:   144.74 
latency metrics: 
Min_Latency(ms): 0 
Avg_Latency(ms): 6 
Max_Latency(ms): 4900 
histogram: 
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ 
%: 78  6  8  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  2 

Linchi Shea (Blog) wrote a blog post three years ago that showed how to Parse the sqlio.exe Output using Perl.  I found this while searching for a smarter way to parse these long output files than doing it manually, but I don’t have Perl installed anywhere, and I really felt that I should be able to do this natively using the tools readily available to me on my Windows 7 laptop, so I looked to PowerShell for a solution. 

Reading the file data in is accomplished using Get-Content which returns a collection of strings.  At first I tried casting this to a [string] object directly, but soon found out that when I did this the resulting string was missing the carriage returns and line feeds that existed in the original file.  This posed a couple of problems for me in parsing the outputs initially, but doing a [string]::Join operation using [Environment]::NewLine as the concatenator allowed an appropriately CRLF delimited string to be returned. 

$filedata = [string]::Join([Environment]::NewLine,(Get-Content $FileName))

Now that I had my file data loaded into the $filedata object, the next step was to split this string based on the “sqlio v1.5.SG” header that is output at the beginning of each test run.  Since PowerShell is .NET based, this is easily accomplished by using the Split() method of the System.String object, which is the System.Type of the $filedata object.

$Results = $filedata.Split( [String[]]"sqlio v1.5.SG", [StringSplitOptions]::RemoveEmptyEntries )

At first I started to write a foreach loop with this object — let’s face it, old habits die hard — but instead I made use of piping, which allows a collection like the one returned by the Split() method to be worked on in a single operation.  Piping in PowerShell is similar to a manufacturing process where an object gets transformed and then passed on until the ultimate result is achieved.  Once an object is piped, it becomes accessible through the use of the $_ variable name, allowing the object to be transformed along the pipeline.  By piping the result of the Split() I was able to write a SELECT statement that utilized Regex to get the information of interest out of each test run.

     select @{Name="Threads"; Expression={[int]([regex]::Match($_, "(\d+)?\sthreads\s(reading|writing)").Groups[1].Value)}},` 
                @{Name="Operation"; Expression={switch ([regex]::Match($_, "(\d+)?\sthreads\s(reading|writing)").Groups[2].Value) 
                                                { 
                                                    "reading" {"Read"} 
                                                    "writing" {"Write"} 
                                                }    }},` 
                @{Name="Duration"; Expression={[int]([regex]::Match($_, "for\s(\d+)?\ssecs").Groups[1].Value)}},` 
                @{Name="IOSize"; Expression={[int]([regex]::Match($_, "\tusing\s(\d+)?KB\s(sequential|random)").Groups[1].Value)}},` 
                @{Name="IOType"; Expression={switch ([regex]::Match($_, "\tusing\s(\d+)?KB\s(sequential|random)").Groups[2].Value) 
                                                { 
                                                    "random" {"Random"} 
                                                    "sequential" {"Sequential"} 
                                                }  }},` 
                @{Name="PendingIO"; Expression={[int]([regex]::Match($_, "with\s(\d+)?\soutstanding").Groups[1].Value)}},` 
                @{Name="FileSize"; Expression={[int]([regex]::Match($_, "\s(\d+)?\sMB\sfor\sfile").Groups[1].Value)}},` 
                @{Name="IOPS"; Expression={[decimal]([regex]::Match($_, "IOs\/sec\:\s+(\d+\.\d+)?").Groups[1].Value)}},` 
                @{Name="MBs_Sec"; Expression={[decimal]([regex]::Match($_, "MBs\/sec\:\s+(\d+\.\d+)?").Groups[1].Value)}},` 
                @{Name="MinLat_ms"; Expression={[int]([regex]::Match($_, "Min.{0,}?\:\s(\d+)?").Groups[1].Value)}},` 
                @{Name="AvgLat_ms"; Expression={[int]([regex]::Match($_, "Avg.{0,}?\:\s(\d+)?").Groups[1].Value)}},` 
                @{Name="MaxLat_ms"; Expression={[int]([regex]::Match($_, "Max.{0,}?\:\s(\d+)?").Groups[1].Value)}}` 

The select returns a new type of object that has Properties named Threads, Operation, Duration, IOSize, IOType, PendingIO, FileSize, IOPS, MBs_sec, MinLat_ms, AvgLat_ms, and MaxLat_ms.  This alone is sufficient to proceed to creating an Excel file for the information, but I wanted the information sorted in the same manner every time I ran this, primarily because I change how I run my tests sometimes, and I like consistency in the output.  Since I am so thorough, I fixed this problem by piping the output from the select to the Sort-Object commandlet and sorted by the IOSize, IOType, Operation, and Threads.
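For readers outside PowerShell, the same regex extraction can be cross-checked in Python against the sample output shown earlier (illustrative only; the PowerShell select above is the actual script, and only a few of its fields are reproduced here):

```python
import re

# Abridged copy of the sample SQLIO output block from earlier in the post
sample = """16 threads writing for 60 secs to file d:\\MSSQL\\testfile1.dat
using 8KB random IOs
enabling multiple I/Os per thread with 8 outstanding
using current size: 2048 MB for file: d:\\MSSQL\\testfile1.dat
IOs/sec: 18527.91
MBs/sec:   144.74
Avg_Latency(ms): 6"""

# Each field is pulled out with a capture group, mirroring the PowerShell regexes
result = {
    "Threads":   int(re.search(r"(\d+)\sthreads\s(?:reading|writing)", sample).group(1)),
    "PendingIO": int(re.search(r"with\s(\d+)\soutstanding", sample).group(1)),
    "FileSize":  int(re.search(r"\s(\d+)\sMB\sfor\sfile", sample).group(1)),
    "IOPS":      float(re.search(r"IOs/sec:\s+(\d+\.\d+)", sample).group(1)),
}
print(result)
```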

Getting the data into Excel was actually fairly simple to do, thanks to Buck Woody (Blog|Twitter) and SQL Rockstar Tom LaRock (Blog|Twitter).  Buck wrote a blog post titled Create an Excel Graph of your Big Tables – With Powershell, and Tom wrote a Simple Talk Article, Monitoring SQL Server Virtual Log File Fragmentation, that use the Interop Excel objects to create an Excel workbook from PowerShell, populate the workbook with data, and build charts using the data.  My code for the Excel integration is based entirely on their examples.

$Excel = New-Object -ComObject Excel.Application 
$Excel.Visible = $true 
$WorkBook = $Excel.WorkBooks.Add() 
$WorkBook.WorkSheets.Item(1).Name = "RawData" 
$WorkBook.WorkSheets.Item(3).Delete() 
$WorkBook.WorkSheets.Item(2).Delete() 
$WorkSheet = $WorkBook.WorkSheets.Item("RawData") 
$x = 2 
$WorkSheet.Cells.Item(1,1) = "Threads" 
$WorkSheet.Cells.Item(1,2) = "Operation" 
$WorkSheet.Cells.Item(1,3) = "Duration" 
$WorkSheet.Cells.Item(1,4) = "IOSize" 
$WorkSheet.Cells.Item(1,5) = "IOType" 
$WorkSheet.Cells.Item(1,6) = "PendingIO" 
$WorkSheet.Cells.Item(1,7) = "FileSize" 
$WorkSheet.Cells.Item(1,8) = "IOPS" 
$WorkSheet.Cells.Item(1,9) = "MBs/Sec" 
$WorkSheet.Cells.Item(1,10) = "Min_Lat(ms)" 
$WorkSheet.Cells.Item(1,11) = "Avg_Lat(ms)" 
$WorkSheet.Cells.Item(1,12) = "Max_Lat(ms)" 
$WorkSheet.Cells.Item(1,13) = "Caption"

$Results | % { 
    $WorkSheet.Cells.Item($x,1) = $_.Threads 
    $WorkSheet.Cells.Item($x,2) = $_.Operation 
    $WorkSheet.Cells.Item($x,3) = $_.Duration 
    $WorkSheet.Cells.Item($x,4) = $_.IOSize 
    $WorkSheet.Cells.Item($x,5) = $_.IOType 
    $WorkSheet.Cells.Item($x,6) = $_.PendingIO 
    $WorkSheet.Cells.Item($x,7) = $_.FileSize 
    $WorkSheet.Cells.Item($x,8) = $_.IOPS 
    $WorkSheet.Cells.Item($x,9) = $_.MBs_Sec 
    $WorkSheet.Cells.Item($x,10) = $_.MinLat_ms 
    $WorkSheet.Cells.Item($x,11) = $_.AvgLat_ms 
    $WorkSheet.Cells.Item($x,12) = $_.MaxLat_ms 
    $WorkSheet.Cells.Item($x,13) = [string]$_.IOSize + "KB " + [string]$_.IOType + " " + ` 
                                [string]$_.Operation + " " + [string]$_.Threads + ` 
                                " Threads " + [string]$_.PendingIO + " pending" 
    $x++}

$WorkBook.Charts.Add() | Out-Null 
$Chart = $WorkBook.ActiveChart 
$Chart.SetSourceData($WorkSheet.Range("H1:H$x")) 
$Chart.SeriesCollection(1).xValues = $WorkSheet.Range("M2:M$x") 
$Chart.SetSourceData($WorkSheet.Range("H1:H$x")) 
$Chart.SeriesCollection(1).xValues = $WorkSheet.Range("M2:M$x") 
$Chart.Name = "IOPS"

$WorkBook.Charts.Add() | Out-Null 
$WorkBook.ActiveChart.SetSourceData($WorkSheet.Range("I1:I$x")) 
$Chart = $WorkBook.ActiveChart 
$Chart.SeriesCollection(1).xValues = $WorkSheet.Range("M2:M$x") 
$Chart.Name = "MBs Sec"

I don’t know why, but the first Chart doesn’t refresh properly unless I call SetSourceData() and set the SeriesCollection xValues properties twice.  Since the code block that loads the data into the worksheet tracks the rows with the $x variable, the Range for the chart is always row 1 to $x.  The H column happens to be the column that holds the IOPS values and the I column holds the MBs Sec values in the workbook.  The only thing missing at this point is a parameter declaration at the head of the script file to require an input parameter $FileName which is passed to Get-Content to read the information.

param(    [Parameter(Mandatory=$TRUE)] 
        [ValidateNotNullOrEmpty()] 
        [string] 
        $FileName)

The end result is a set of charts showing your IOPS and MBs Sec results from the test run.


The complete script is attached to this blog post.  If you use it let me know your thoughts.

Published Tuesday, May 25, 2010 10:12 PM by Jonathan Kehayias

 Attachment(s): SQLIOResults.zip

2011. 3. 21. 14:36 Brain Trainning/Storage
Reposted from: http://sqlserverpedia.com/wiki/SAN_Performance_Tuning_with_SQLIO

SAN Performance Tuning with SQLIO

SQLIO is a free utility from Microsoft that measures storage IO performance. The name "SQLIO" is horribly misleading, because it doesn't really have anything to do with SQL Server.


SQLIO Video Tutorial

In this ten-minute video, Brent Ozar explains how to get started with SQLIO, how to create a batch file to test your SAN, and how to analyze the results.

For more video tutorials like this, check out the SQL Server Tutorials page.


Downloading and Configuring SQLIO

[Download SQLIO from Microsoft]

Notice that I said SQLIO, not SQLIOSIM. Experienced database administrators will often direct you to SQLIOSIM because it's easier to use and mimics SQL Server's disk activity patterns. Here's the catch: it won't necessarily test your SAN to its maximum potential. Your SAN team may indicate that if your SQLIOSIM results aren't fast enough, it's a SQL-related problem, not a SAN-related problem. They may use testing utilities from vendors that mimic results closer to what SQLIO will give you. We want to push the SAN's pedal to the metal and find out how fast it'll go in any situation.

After installing SQLIO, edit the param.txt file and change these two parameters:

  • First parameter - the physical location of the testing file. Change the drive letter to point to the SAN drive you want to test, like T:\testfile.dat.
  • Last parameter - the size of the testing file in megabytes. Increase this to 20480 or larger. Ideally, you want it to be larger than your SAN's cache, because your real databases will be larger than the SAN's cache.
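For reference, each line of param.txt follows the pattern `<file path> <number of threads> <mask> <file size in MB>`. After the two edits above, the file might look like this (the drive letter and size here are just examples):

```text
T:\testfile.dat 2 0x0 20480
```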

After saving param.txt, run this at the command line in the same directory where SQLIO is installed in order to create the test file:

sqlio -kW -s10 -fsequential -o8 -b8 -LS -Fparam.txt
timeout /T 10

When it finishes, your test file is created, and it's time to run our real tests.

Testing Your SAN Performance

Instead of picking and choosing individual parameters to use, I like to take the shotgun approach: try every possible combination of random versus sequential, low and high numbers of threads, read versus write, etc. The below batch file will take all of the possibilities and run 'em all. Copy/paste this into a text file called SANTest.bat:

sqlio -kW -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t4 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t8 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t16 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t32 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t64 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t4 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t8 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t16 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t32 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kR -t64 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o2 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o4 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o8 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o16 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o32 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o64 -frandom -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o128 -frandom -b64 -BH -LS Testfile.dat

sqlio -kW -t2 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t2 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t4 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t4 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t8 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t8 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t16 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t16 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t32 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t32 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kW -t64 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kW -t64 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t2 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t2 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t4 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t4 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t8 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t8 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t16 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t16 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t32 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t32 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

sqlio -kR -t64 -s120 -dM -o1 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o2 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o4 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o8 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o16 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o32 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o64 -fsequential -b64 -BH -LS Testfile.dat
sqlio -kR -t64 -s120 -dM -o128 -fsequential -b64 -BH -LS Testfile.dat

Whew! And that's just one pass - if you want to do multiple passes of the same tests for consistency's sake, like to eliminate the chance that other servers are running on the same SAN and affecting your performance results, you would want to paste that same set of 200+ lines multiple times into your batch file.
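If you'd rather not paste (or duplicate) 200+ lines by hand, the same batch file can be generated programmatically. This is just a convenience sketch, not part of the original post - it reproduces the combinations above: write/read × random/sequential × six thread counts × eight queue depths.

```python
# Generate the same 192-line SANTest.bat shown above:
# {write, read} x {random, sequential} x 6 thread counts x 8 queue depths.
kinds = ["W", "R"]                           # -kW = write, -kR = read
patterns = ["random", "sequential"]          # -f access pattern
threads = [2, 4, 8, 16, 32, 64]              # -t worker threads
outstanding = [1, 2, 4, 8, 16, 32, 64, 128]  # -o queued IOs per thread

lines = []
for f in patterns:           # the batch file lists all random tests first
    for k in kinds:          # writes before reads within each pattern
        for t in threads:
            for o in outstanding:
                lines.append(
                    f"sqlio -k{k} -t{t} -s120 -dM -o{o} -f{f} -b64 -BH -LS Testfile.dat"
                )

with open("SANTest.bat", "w") as bat:
    bat.write("\n".join(lines) + "\n")

print(len(lines))  # 192 test combinations
```

Repeating the loop body N times (or wrapping it in an outer pass loop) gives you the multiple-pass variant mentioned above.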

Let's take the first line of the batch file and analyze what it's doing:

sqlio -kW -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat

The most important parameters are:

  • -kW means writes (as opposed to reads)
  • -t2 means two threads
  • -s120 means test for 120 seconds
  • -dM means drive letter M
  • -o1 means one outstanding request (not piling up requests)
  • -frandom means random access (as opposed to sequential)
  • -b64 means 64KB IOs

Do a find & replace in your text file and replace -dM with the drive letter of your choice. If you're testing on your S drive, for example, you would replace -dM with -dS.

Then go to the command prompt in the same directory as SQLIO is installed and type:

SANTEST.BAT > RESULTS.TXT

This will run our newly created batch file and dump the results into a text file. This will take a long time, like six hours or more, and it will be hammering your SAN. Don't run this on a production server, and don't even run it on the same SAN as a production server when the production server is under load because it may time out. I've had instances where this batch file has actually caused a SAN to restart itself, so use this with caution - preferably in a test lab or before your SAN goes live.

Hours later, when it finishes, you'll have a RESULTS.TXT file with lots of juicy metrics about your storage performance.

Importing SQLIO Results into SQL Server

Those text file results are cryptic - time to bring them into our favorite data analysis platform, Microsoft Access. Wait, I'm kidding, put the axe down - we'll import them into SQL Server.

Script to Create the Tables and an ETL Stored Procedure

Before we start, create a database that you'll use for SQLIO data storage or designate an existing utility database. This script requires SQL Server 2005 or newer, since it uses the varchar(max) data type.

In that database, run the below script to create the tables for results storage:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SQLIO_Import](
    [RowID] [int] IDENTITY(1,1) NOT NULL,
    [ParameterRowID] [int] NULL,
    [ResultText] [varchar](max) NULL,
    CONSTRAINT [PK_SQLIO_Import] PRIMARY KEY CLUSTERED
    (
        [RowID] ASC
    )
)
GO
CREATE TABLE [dbo].[SQLIO_TestPass](
    [TestPassID] [int] IDENTITY(1,1) NOT NULL,
    [ServerName] [nvarchar](50) NOT NULL,
    [DriveQty] [int] NOT NULL,
    [DriveRPM] [int] NOT NULL,
    [DriveRaidLevel] [nvarchar](10) NOT NULL,
    [TestDate] [datetime] NOT NULL,
    [SANmodel] [nvarchar](50) NOT NULL,
    [SANfirmware] [nvarchar](50) NULL,
    [PartitionOffset] [int] NULL,
    [Filesystem] [nvarchar](50) NULL,
    [FSClusterSizeBytes] [int] NULL,
    [SQLIO_Version] [nvarchar](20) NULL,
    [Threads] [int] NULL,
    [ReadOrWrite] [nchar](1) NULL,
    [DurationSeconds] [int] NULL,
    [SectorSizeKB] [int] NULL,
    [IOpattern] [nvarchar](50) NULL,
    [IOsOutstanding] [int] NULL,
    [Buffering] [nvarchar](50) NULL,
    [FileSizeMB] [int] NULL,
    [IOs_Sec] [decimal](18, 0) NULL,
    [MBs_Sec] [decimal](18, 0) NULL,
    [LatencyMS_Min] [int] NULL,
    [LatencyMS_Avg] [int] NULL,
    [LatencyMS_Max] [int] NULL,
    CONSTRAINT [PK_SQLIO_TestPass] PRIMARY KEY CLUSTERED
    (
        [TestPassID] ASC
    )
)
GO

CREATE PROCEDURE [dbo].[USP_Import_SQLIO_TestPass]
    @ServerName         NVARCHAR(50),
    @DriveQty           INT,
    @DriveRPM           INT,
    @DriveRaidLevel     NVARCHAR(10),
    @TestDate           DATETIME,
    @SANmodel           NVARCHAR(50),
    @SANfirmware        NVARCHAR(50),
    @PartitionOffset    INT,
    @Filesystem         NVARCHAR(50),
    @FSClusterSizeBytes INT
AS
SET NOCOUNT ON

IF @TestDate IS NULL
    SET @TestDate = GETDATE()

/* Add a blank record to the end so the last test result is captured */
INSERT INTO dbo.SQLIO_Import
    (ParameterRowID,
    ResultText)
VALUES
    (0,
    '');

/* Update the ParameterRowID field for easier querying */
UPDATE dbo.sqlio_import
SET    parameterrowid = (SELECT   TOP 1 rowid
                         FROM     dbo.sqlio_import parm
                         WHERE    parm.resulttext LIKE '%\%'
                                  AND parm.rowid <= upd.rowid
                         ORDER BY rowid DESC)
FROM   dbo.sqlio_import upd

/* Add new SQLIO_TestPass records from SQLIO_Import.
   Each metric is pulled from a fixed row offset below the echoed command line. */
INSERT INTO dbo.sqlio_testpass
    (servername,
    driveqty,
    driverpm,
    driveraidlevel,
    testdate,
    sanmodel,
    sanfirmware,
    partitionoffset,
    filesystem,
    fsclustersizebytes,
    sqlio_version,
    threads,
    readorwrite,
    durationseconds,
    sectorsizekb,
    iopattern,
    iosoutstanding,
    buffering,
    filesizemb,
    ios_sec,
    mbs_sec,
    latencyms_min,
    latencyms_avg,
    latencyms_max)
SELECT   @ServerName,
         @DriveQty,
         @DriveRPM,
         @DriveRaidLevel,
         @TestDate,
         @SANmodel,
         @SANfirmware,
         @PartitionOffset,
         @Filesystem,
         @FSClusterSizeBytes,
         (SELECT REPLACE(resulttext, 'sqlio ', '')
          FROM   dbo.sqlio_import impsqlio_version
          WHERE  imp.rowid + 1 = impsqlio_version.rowid) AS sqlio_version,
         (SELECT LEFT(resulttext, (CHARINDEX(' threads', resulttext)))
          FROM   dbo.sqlio_import impthreads
          WHERE  imp.rowid + 3 = impthreads.rowid) AS threads,
         (SELECT UPPER(SUBSTRING(resulttext, (CHARINDEX('threads ', resulttext)) + 8,
                 1))
          FROM   dbo.sqlio_import impreadorwrite
          WHERE  imp.rowid + 3 = impreadorwrite.rowid) AS readorwrite,
         (SELECT SUBSTRING(resulttext, (CHARINDEX(' for', resulttext)) + 4,
                 (CHARINDEX(' secs ', resulttext)) - (CHARINDEX(' for', resulttext)) - 4)
          FROM   dbo.sqlio_import impdurationseconds
          WHERE  imp.rowid + 3 = impdurationseconds.rowid) AS durationseconds,
         (SELECT SUBSTRING(resulttext, 7, (CHARINDEX('KB', resulttext)) - 7)
          FROM   dbo.sqlio_import impsectorsizekb
          WHERE  imp.rowid + 4 = impsectorsizekb.rowid) AS sectorsizekb,
         (SELECT SUBSTRING(resulttext, (CHARINDEX('KB ', resulttext)) + 3,
                 (CHARINDEX(' IOs', resulttext)) - (CHARINDEX('KB ', resulttext)) - 3)
          FROM   dbo.sqlio_import impiopattern
          WHERE  imp.rowid + 4 = impiopattern.rowid) AS iopattern,
         (SELECT SUBSTRING(resulttext, (CHARINDEX('with ', resulttext)) + 5,
                 (CHARINDEX(' outstanding', resulttext)) - (CHARINDEX('with ', resulttext)) - 5)
          FROM   dbo.sqlio_import impiosoutstanding
          WHERE  imp.rowid + 5 = impiosoutstanding.rowid) AS iosoutstanding,
         (SELECT REPLACE(CAST(resulttext AS NVARCHAR(50)), 'buffering set to ',
                 '')
          FROM   dbo.sqlio_import impbuffering
          WHERE  imp.rowid + 6 = impbuffering.rowid) AS buffering,
         (SELECT SUBSTRING(resulttext, (CHARINDEX('size: ', resulttext)) + 6,
                 (CHARINDEX(' for ', resulttext)) - (CHARINDEX('size: ', resulttext)) - 9)
          FROM   dbo.sqlio_import impfilesizemb
          WHERE  imp.rowid + 7 = impfilesizemb.rowid) AS filesizemb,
         (SELECT RIGHT(resulttext, (LEN(resulttext) - 10))
          FROM   dbo.sqlio_import impios_sec
          WHERE  imp.rowid + 11 = impios_sec.rowid) AS ios_sec,
         (SELECT RIGHT(resulttext, (LEN(resulttext) - 10))
          FROM   dbo.sqlio_import impmbs_sec
          WHERE  imp.rowid + 12 = impmbs_sec.rowid) AS mbs_sec,
         (SELECT RIGHT(resulttext, (LEN(resulttext) - 17))
          FROM   dbo.sqlio_import implatencyms_min
          WHERE  imp.rowid + 14 = implatencyms_min.rowid) AS latencyms_min,
         (SELECT RIGHT(resulttext, (LEN(resulttext) - 17))
          FROM   dbo.sqlio_import implatencyms_avg
          WHERE  imp.rowid + 15 = implatencyms_avg.rowid) AS latencyms_avg,
         (SELECT RIGHT(resulttext, (LEN(resulttext) - 17))
          FROM   dbo.sqlio_import implatencyms_max
          WHERE  imp.rowid + 16 = implatencyms_max.rowid) AS latencyms_max
FROM     dbo.sqlio_import imp
         INNER JOIN dbo.sqlio_import impfulltest
           ON imp.rowid + 20 = impfulltest.rowid
              AND impfulltest.resulttext = ''
WHERE    imp.rowid = imp.parameterrowid
         AND (SELECT SUBSTRING(resulttext, (CHARINDEX('size: ', resulttext)) + 6,
                     (CHARINDEX(' for ', resulttext)) - (CHARINDEX('size: ', resulttext)) - 9)
              FROM   dbo.sqlio_import impfilesizemb
              WHERE  imp.rowid + 7 = impfilesizemb.rowid) > 0
ORDER BY imp.parameterrowid

/* Empty out the ETL table */
DELETE dbo.sqlio_import

SET NOCOUNT OFF
GO

The script creates three things:

  • A table called SQLIO_Import where our results will first be dumped from the text file before they're processed
  • A table called SQLIO_TestPass where our results will be permanently stored in a report-friendly format
  • A stored procedure called USP_Import_SQLIO_TestPass. The stored procedure takes the data from our imported text file, parses it, and inserts records into the SQLIO_TestPass table.
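The stored procedure depends on SQLIO's output having a fixed layout, because it joins each result block to itself at fixed row offsets. As a rough sanity check on that parsing, here is a Python sketch (not part of the original post; the sample output below is abridged and illustrative) that pulls the same headline numbers from one test block using regexes instead of row offsets:

```python
import re

# Abridged, illustrative SQLIO output block for one test pass.
sample = """\
C:\\SQLIO>sqlio -kW -t2 -s120 -dM -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio v1.5.SG
2 threads writing for 120 secs to file Testfile.dat
	using 64KB random IOs
IOs/sec:   434.53
MBs/sec:    27.15
Min_Latency(ms): 0
Avg_Latency(ms): 4
Max_Latency(ms): 31
"""

def parse_block(text):
    """Extract headline metrics with regexes (more forgiving than fixed offsets)."""
    grab = lambda pat: re.search(pat, text).group(1)
    return {
        "threads": int(grab(r"(\d+) threads")),
        "read_or_write": grab(r"threads (\w)").upper(),  # W(riting) or R(eading)
        "ios_sec": float(grab(r"IOs/sec:\s+([\d.]+)")),
        "mbs_sec": float(grab(r"MBs/sec:\s+([\d.]+)")),
        "latency_avg_ms": int(grab(r"Avg_Latency\(ms\): (\d+)")),
    }

print(parse_block(sample))
```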

The stored procedure expects these parameters, which it uses when inserting into SQLIO_TestPass. None of these have to be formatted a specific way - as long as they fit the SQL Server field definitions, they're fine:

  • @ServerName NVARCHAR(50) - the name of the server you're using for testing. This is mostly useful for servers with locally attached storage as opposed to SAN storage.
  • @DriveQty INT - the number of drives in the array you're testing.
  • @DriveRPM INT - you might be testing 10k and 15k variations of the same setup. I suggest lumping drives into categories - don't try to differentiate between drives that report 10,080 RPM or other odd numbers - just stick with 5400, 10000, 15000, etc. For SSD, I prefer to use a number for the generation of SSD, like 1 or 2.
  • @DriveRaidLevel NVARCHAR(10) - raid 5, raid 0, raid 10, raid DP, etc. (Yes, there are vendor-specific RAID implementations that use letters instead of numbers.)
  • @TestDate DATETIME - the date you're running the tests. I include this as a parameter because sometimes I've run the same tests on a quarterly basis and I want to track whether things are changing over time.
  • @SANmodel NVARCHAR(50) - the type of SAN, such as an IBM DS4800 or EMC CX300.
  • @SANfirmware NVARCHAR(50) - the version of firmware, which can impact SAN performance.
  • @PartitionOffset INT - Windows systems can use DISKPART to offset their partitions.
  • @Filesystem NVARCHAR(50) - usually NTFS. Can be used to track testing different filesystems.
  • @FSClusterSizeBytes INT - the file system cluster size.

Now that our framework is in place, let's import our first round of results.

Importing the Text File into SQL Server 2005

  • In SQL Server Management Studio, right-click on the database where you want to store the SQLIO performance data and click Tasks, Import Data.
  • For Data Source, choose "Flat File Source". Browse to your results.txt file, and set the Format to Delimited, Text Qualifier to None, Header row delimiter to {CR}{LF}, and Header Rows to Skip to 0.
  • Click on the Advanced tab on the left, and there should only be one column, Column 0. Set the DataType to text stream. Click Next.
  • Your database server and storage database should be shown. Click Next.
  • For the Destination Table, choose SQLIO_Import and click Edit Mappings. Set the Column 0 destination to be ResultText. Click OK, and click Next.
  • Click Next until the wizard finishes and imports the data, and then close the wizard.

Open a new query and verify that the data was successfully imported by typing:

SELECT * FROM dbo.SQLIO_Import

If there are no rows, something went wrong with the import process. Stop here and troubleshoot. Otherwise, execute the stored procedure to move the data into the reporting table, like this:

EXECUTE [dbo].[USP_Import_SQLIO_TestPass]
    'MyServerName'
    ,10
    ,15000
    ,'RAID 10'
    ,'2008/5/6'
    ,'IBM DS4800'
    ,'6.62'
    ,1024
    ,'NTFS'
    ,'64000'

The data will be parsed and inserted into the reporting table, which we can query:

SELECT * FROM dbo.SQLIO_TestPass

Now, the data is arranged in a way that's easier for reporting.

Analyzing the SQLIO Results

Start by analyzing the data to find the fastest SAN throughput:

SELECT * FROM dbo.SQLIO_TestPass ORDER BY MBs_Sec DESC

The MBs_Sec column is the SAN's throughput. Notice the LatencyMS_Avg column, though, which indicates the milliseconds of latency. A high throughput number is not necessarily good if the system is taking a long time to respond. Look for five or so of the highest throughput numbers that represent a good balance of high throughput and low latency.

Then, look at the parameters that were used in order to achieve that throughput. Make a note of these, because if you need to do repeated throughput testing on your SAN to test different configurations, you can use these high-throughput configurations to run a much shorter testing pass. Instead of testing your SAN overnight, you can get a quick idea in a matter of minutes because you'll know what parameters tend to drive very high throughput on your SAN. You'll still want to test all parameters when possible, because changes to the SAN may affect how it handles other parameters, but this will give you a quick estimate of whether things are getting better or worse.
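One way to shortlist those candidate test passes is to sort by throughput while discarding rows whose average latency exceeds your tolerance. The sketch below (with made-up numbers, not real test results) shows the idea:

```python
# Shortlist test passes: highest throughput among rows with acceptable latency.
# Rows are (threads, outstanding, mbs_sec, latency_avg_ms) -- made-up numbers.
rows = [
    (2, 1, 110, 3),
    (2, 128, 430, 95),
    (8, 8, 390, 12),
    (16, 4, 360, 9),
    (32, 2, 340, 7),
    (64, 128, 445, 180),
    (4, 16, 370, 14),
]

MAX_LATENCY_MS = 20  # tolerance is workload-dependent; adjust to taste

candidates = [r for r in rows if r[3] <= MAX_LATENCY_MS]
top5 = sorted(candidates, key=lambda r: r[2], reverse=True)[:5]
for threads, outstanding, mbs, lat in top5:
    print(f"-t{threads} -o{outstanding}: {mbs} MB/s at {lat} ms avg")
```

The parameter pairs that survive the cut are the ones worth reusing for quick re-test passes.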

More Reading About SQLIO

Here's a few interesting links about other bloggers' experience with SQLIO:

posted by LifeisSimple
2010. 11. 17. 16:26 Brain Trainning/Storage

Capacity-Optimized Nearline SAS Drives Challenge SATA


Solid Investment Protection

Capacity-optimized SAS drives are backed by a strong roadmap, assuring organizations that they can keep pace with data growth. With data retention requirements growing exponentially, capacity-optimized SAS drives are set to offer increasing capacities over the coming years. Today’s capacities for 3.5-inch drives range from 250GB to 2TB. By the end of 2010, 3TB drives can be expected in the 3.5-inch form factor. With the growing popularity of 2.5-inch drives in the enterprise, these capacities are also increasing; in 2010, 2.5-inch drives are expected to offer capacities of up to 1TB. Contributing to this growth is the widespread adoption of SAS in the enterprise. IDC predicts that 70 percent of all enterprise drive shipments will be SAS by 2011 (IDC Worldwide Hard Disk Drive 2009-2012 Forecast: Navigating the Transitions for Enterprise Applications, IDC Doc #216394, February 2009). While the majority of these drives today are of the 6Gbps SAS generation, the SCSI Trade Association has already added 12Gbps solutions to the industry roadmap for the 2013 timeframe.


SAS Roadmap

Low Power Consumption

One of the reasons that 2.5-inch drives have exploded onto the scene is the concern for energy efficiency in the data center. Small Form-factor (SFF) drives conserve energy by reducing power and cooling on average up to 70 percent over traditional 3.5-inch drives. In addition to lowering the Watts/GB, the smaller footprint allows more drives per system for greater performance density and redundancy.

Conclusion

The need for capacity-optimized drives is clear, based on the market’s increasing need to store secondary data cost-effectively. While SATA technology has for years been the leading option as the only capacity-optimized drive in the market, the introduction of capacity-optimized SAS drives provides multiple benefits and compelling reasons to be considered as the capacity-optimized drive of choice. SAS is designed and engineered for high-availability, enterprise-class data storage where performance, reliability, and data integrity are paramount. With the introduction and adoption of SAS in the traditional capacity-optimized market segment, customers can now be confident that their secondary storage also meets the demanding criteria of the enterprise.

Gina Geisel is the product marketing manager for the SAS-based storage arrays of LSI’s Engenio Storage Group. In her current role, Gina is responsible for generating collateral material, positioning literature and sales tools which are leveraged by LSI as well its OEM and channel partners. Gina joined LSI in 2006 and has more than nine years of storage experience.

John Szantho has been in product management within LSI’s Engenio Storage Group for more than six years and has more than ten years of storage industry experience. As the product manager for hard disk drives, SSDs and storage enclosures, John is responsible for helping define product roadmaps, developing market and product requirements, and championing products through development.

Barbara J. Craig is a senior product marketing manager on Seagate’s Enterprise Storage team. With more than 30 years in technology, she has spent the last 13 years focused on storage and storage networking. She attended the State University of New York, Pepperdine University, and Vanderbilt University.



Tags: SATA, SAS, HDD, Hard Disk Drive, Secondary Storage


posted by LifeisSimple
2010. 11. 17. 14:32 Brain Trainning/Storage

Capacity-Optimized Nearline SAS Drives Challenge SATA

Data continues to be the lifeblood of many organizations. The unprecedented increasing demand to store that data is irrefutable. In a recent study, IDC predicted that hard disk drive (HDD) shipments for enterprise applications will increase from 40.5 million units in 2009 to 52.6 million units in 2014. In the same study, IDC also predicted the HDD industry “will ship more Petabytes for enterprise applications in the next two years than it did in the preceding 20 years.”

The demand for storing data spans different applications and environments as well, each with its own unique data requirements. This data includes business-critical data that must be quickly accessible; highly sensitive data that must be secured; and secondary data that must be retained but is infrequently accessed, where cost containment is key. The storage industry has responded with multiple drive types and tray-level drive intermixing to cost-effectively meet application, rack, and energy-efficiency requirements within a single storage system.

Demand

The demand for capacity-optimized drives, where capacity and cost rule over performance, has contributed much of the growth in overall drive sales. Between 2009 and the first half of 2010, capacity-optimized drives experienced 30.5 percent year-over-year growth (IDC Worldwide HDD Shipments and 4 Quarter Outlook by Segment – Final C2Q Results). IDC also forecasts that by 2011, more than 70 percent of the terabytes shipped into enterprise organizations will be in capacity-optimized drives. This increase in demand for capacity-optimized drives is due to multiple factors:

The continuing increase in digital content and media. From consumers to the enterprise, new media such as YouTube and Facebook are driving production of digital content, such as video and digital images, that requires large amounts of data to be stored on capacity-optimized drives.

Disk-to-disk backup. Disk-to-disk backup is cost-effective because it eliminates the need for tape media and reduces management costs. On the recovery side, disk-to-disk backup is very effective at enabling organizational productivity by reducing recovery times and application downtime.

Data retention. Organizations are retaining data longer than they may have in the past due to industry regulations as well as their own compliance standards. These organizations are finding that cost-per-GB is decreasing, so they retain their data longer because it is more cost-effective.

Virtualization. Virtualization is now considered a mainstream technology for improving efficiency and resource utilization. There is a strong correlation between virtualization and storage as growing server demands fuel storage growth. As virtualization grows, so does storage.

For the most part, Serial ATA (SATA) drives have been the answer for secondary data, offering high capacity at a low price. For years, organizations have relied on SATA drives to cost-effectively store the bulk of their non-business-critical data, primarily because it was the only drive type offering these characteristics. While SATA has been sufficient to date, another serial-based drive technology is making inroads that satisfies the same data requirements with improved performance, data integrity, and reliability – all without additional cost. That technology is Serial Attached SCSI (SAS). Already well established in high-performance drive markets, SAS is now penetrating secondary storage tiers with capacity-optimized drives.

Benefits

The market is ripe for the advantages and value that capacity-optimized SAS drives have introduced. These drives provide numerous benefits that strongly position them as the capacity-optimized enterprise drive of the future from the entry-level to the enterprise. Nearline SAS drives offer the following benefits:

Cost-Effective Drives

While capacity-optimized SAS drives offer multiple improvements and new functionality over SATA drives, a higher price tag might be expected; however, this is not the case. The adoption of SAS has driven costs down, and SATA drives and SAS drives now have a minimal cost difference. At the drive level itself, based on current street pricing, there is minimal price premium for capacity-optimized SAS.

Capacity-optimized SAS drives also contribute to lower costs at the system level. By removing the interposer hardware that storage systems require to house SATA drives, an additional cost is eliminated and reliability is improved, since there are fewer components that could potentially fail.

Storage Performance Council (SPC) Benchmark Results

SPC-2 Results. Although capacity and price are often the leading decision factors for capacity-optimized drives, higher performance and quicker access to data are very desirable. Capacity-optimized SAS drives offer a significant performance improvement over SATA drives in both throughput and IOPS, primarily due to their full-duplex, bi-directional I/O capabilities. Published Storage Performance Council (SPC) benchmark results demonstrate this with up to a 64 percent improvement in the SPC-2 benchmark (based on multiple workload testing).

SPC-1 Results. Capacity-optimized SAS drives perform 50 to 64 percent better than SATA drives and deliver up to a 37 percent improvement in the SPC-1 benchmark (which primarily reflects IOP-focused applications). Capacity-optimized SAS drives also offer the best price/performance (based on current list pricing) for large, random-access workloads. Where response time is important, capacity-optimized SAS drives also provide 18 to 37 percent better response times than SATA drives.

Reliable/High Data Integrity

Failure is no longer an option in today’s relentless storage environments, because organizations rely on information to remain productive and competitive. Like SATA drives, capacity-optimized SAS drives deliver a high Mean Time Between Failures (MTBF) of 1.2 million hours and an Annualized Failure Rate (AFR) under 0.73 percent, providing the reliable data access demanded of enterprise storage solutions today. Additionally, SAS drives protect data integrity with end-to-end initiator-target nexus checking (IOECC/IOEDC), which helps ensure that data traveling to or from the drive is not misdirected; SATA drives lack the native addressability required for this function. SAS drives also help decrease storage-system failure rates by reducing the number of physical interconnects and adding dual-port redundancy for failover, while eliminating the single point of failure posed by the extra interposer hardware that SATA requires but capacity-optimized SAS does not.

With future support for T10 PI (Protection Information), capacity-optimized SAS drives will be a key component of an industry standard for end-to-end data integrity. This improves reliability by reducing silent data corruption events, detecting corruption before data is written to disk, and isolating the device causing the corruption.

Secure

All drives inevitably leave an IT administrator’s control, whether through theft, off-site service or repair, or disposal of old drives. Offered with drive-level encryption, self-encrypting SAS drives (SEDs) provide comprehensive data security that protects data throughout the drive’s life cycle without sacrificing storage-system performance or ease of use.


2010. 11. 17. 11:44 Brain Trainning/Storage
Source: http://blog.naver.com/p1ngp1ng?Redirect=Log&logNo=120035225799

DAS [Direct Attached Storage]

DAS is external storage connected to a server by a dedicated cable. It is the simplest way to add storage capacity in a server/client environment: an external storage device is physically attached to the server itself. Because each server on the network gets its own external storage, data can be accessed physically close to where it is needed, and expansion is easy. However, continually adding external storage as data grows degrades server efficiency. Another problem is that when a server on the network goes down, the storage attached to that server also becomes unusable, a weakness it shares with centralized systems.
 

 

NAS [Network Attached Storage]

The file server began as a solution built around the server functions of file sharing and file service. In the days when networks were undeveloped and data volumes were small, departments used file servers for sharing files and for convenient file management. File sharing and file service relied on a subset of features (NFS or CIFS) provided by a general-purpose OS (Unix or Windows NT), and the storage was mostly the server's internal disks. It followed a server/client architecture: the file server acted as the server, and each end user's terminal (PC or workstation) acted as the client. Over time, the user environment changed: the data volume for file sharing and file service exploded, and file I/O came to demand higher bandwidth and speed.
NAS (Network Attached Storage) overcomes the limits of the file server. NAS emphasizes the storage function: both the hardware performance of the storage portion and its software capabilities are differentiated from the old file server. On the I/O side, NAS adopts a dedicated OS specialized for file service instead of a general-purpose OS, delivering better I/O performance. In terms of role, whereas the traditional file server mainly served files to end-user terminals, NAS does that and also acts as a storage device for application servers: application-server data is stored over the network (LAN), so the storage can be accessed from anywhere the network reaches.
Read literally, NAS is storage attached to a network (LAN). Can storage really be attached to a LAN? Put differently: storage communicates over the SCSI protocol, while a LAN communicates over TCP/IP.
 


 

The strength of NAS is file sharing. Multiple application servers connect over the LAN to a dedicated file server using a file-service protocol such as NFS or CIFS. Because a single file server handles every request, all files stored on the NAS are managed in one place, so multiple servers can share files without consistency or locking problems.
The weaknesses of NAS are performance and issues when used for databases. One performance factor is latency. With NAS, the application server connects to the dedicated file server over the network, and the file server connects to storage over a channel; compared with DAS or SAN, which connect over a channel only, there are more hops, so latency increases. NAS can offset its LAN latency with caching and with storage-side performance, and may well be faster than JBOD (Just a Bunch of Disks, i.e. low-end disk storage) or slow internal disks. But compared with the same class of disk storage, NAS is usually somewhat slower than DAS or SAN. This performance gap matters especially for large, I/O-heavy databases and for large batch jobs. For databases, the caching in the dedicated file server can also cause data-consistency problems if the file server fails.

SAN [Storage Area Network]

SAN stands for Storage Area Network. It is a special-purpose, high-speed network that connects heterogeneous storage devices and their data servers on a separate network, apart from the LAN, so that stored data can be managed for large numbers of network users.
As information technology developed rapidly, one of the biggest problems facing enterprises was how to store large amounts of data efficiently. The traditional approach was DAS (direct attached storage), with storage attached directly to the equipment, but because existing and growing data share one space, transfer speed suffers. SAN development began in the late 1990s to overcome this shortcoming, and within a few years it emerged as the new data storage approach. Because different kinds of storage devices are interconnected, all users can share them; backup, restore, archiving, and retrieval are possible; and data can be moved from one storage device to another. Besides SAN, there is also NAS (network attached storage), which manages stored data over a separate LAN or network, but as of 2002 SAN had become mainstream with more than 50 percent of the market, and it continues to grow as deployments get larger.
However, building a SAN requires far more investment in equipment than NAS does, and upgrades to existing systems are essential, so there are several constraints.
To share a single storage system among heterogeneous servers, separate SAN management software is required, and unlike NAS, a dedicated SAN network must be built.
A SAN introduces the network concept by inserting a Fibre Channel switch into the channel connection between server and storage. Why a Fibre Channel switch rather than a SCSI switch? SCSI is an open-system channel interface, but its maximum connection distance of 25 m is too restrictive for a network, and the interface was never designed with switching in mind, so Fibre Channel was chosen as the SAN standard.
With a Fibre Channel switch in the middle, one server port can reach multiple storage systems, and one storage port can serve multiple servers, which brings flexibility. From the standpoint of file sharing among servers, however, each server must manage the same file system itself, which creates locking and consistency problems; for that reason, files cannot be shared. So what does "sharing" mean in a SAN? Today it means only that servers share storage, or that storage shares servers. File-system sharing is indeed among SAN's ultimate goals and work toward it is under way, but reaching that goal will still take considerable time.
 

 

Comparison Tables

NAS vs. File Server

|                          | NAS                                             | File Server                                              |
| Perspective              | Storage                                         | Server                                                   |
| Role                     | File server plus storage                        | File server only                                         |
| Storage device form      | Dedicated OS                                    | General-purpose OS                                       |
| Availability             | Emphasizes uninterrupted access to stored data  | Emphasizes file service and sharing over data protection |
| File-server performance  | –                                               | Average                                                  |
| Data capacity            | 500GB or more                                   | 200GB or less                                            |
| File-service protocols   | NFS and CIFS supported simultaneously           | NFS or CIFS, depending on the OS                         |

NAS vs. SAN

|                                 | NAS                                                  | SAN                                             |
| Components                      | Application servers, dedicated file server, storage  | Application servers, storage                    |
| Connection device               | LAN switch                                           | Fibre Channel switch                            |
| Storage sharing                 | Possible                                             | Possible                                        |
| File-system sharing             | Possible                                             | Not possible                                    |
| File-system management          | File server                                          | Application servers                             |
| Connection speed determined by  | LAN and channel speed                                | Channel speed                                   |
| Notes                           | Traditional solution for file sharing                | Best flexibility, scalability, and convenience  |

DAS vs. NAS vs. SAN

|                                 | DAS                                          | NAS                                                     | SAN                                            |
| Components                      | Application servers, storage                 | Application servers, dedicated file server, storage     | Application servers, storage                   |
| Connection device               | None                                         | Ethernet switch                                         | Fibre Channel switch                           |
| Storage sharing                 | Possible                                     | Possible                                                | Possible                                       |
| File-system sharing             | Not possible                                 | Possible                                                | Not possible                                   |
| File-system management          | Application server                           | File server                                             | Application server                             |
| Connection speed determined by  | Channel speed                                | LAN and channel speed                                   | Channel speed                                  |
| Characteristics                 | Suited to small, independent configurations  | The most stable and reliable solution for file sharing  | Best flexibility, scalability, and convenience |

 

Glossary: Fibre Channel

Fibre Channel is the standardized channel used in SANs (Storage Area Networks) and the next-generation high-speed interface succeeding FWU (Fast Wide Ultra) SCSI. In 1992, Hewlett-Packard, Sun Microsystems, and IBM formed FCSI (Fibre Channel Systems Initiative); standardization proceeded with attention to interoperability with IP and SCSI, and the standard was finalized in 1998. As of 2003, it is managed by the FCA (Fibre Channel Association) and the FCLC (Fibre Channel Loop Community).
It transfers data at 1 Gbps, four to five times faster than FWU SCSI, and because its protocol is SCSI-compatible, existing devices can continue to be used. Distance limits are nearly eliminated: servers or JBODs (Just a Bunch Of Disks; disk drive sets without RAID) can be installed up to 8.6 km from the data center, and that distance is expected to grow.
Using this technology for a backbone LAN, one can build centralized storage farms holding hundreds of terabytes and mirror off-site for disaster recovery. It supports the distributed architecture demanded by client/server computing, provides fault tolerance and self-healing, and makes network connections flexible, scaling the network almost without limit while minimizing performance degradation. With these advantages, it was expected from late 2003 to replace SCSI-based interfaces and storage, with SCSI settling, like IDE (Integrated Drive Electronics), into low-cost desktop use.

[Source] DAS, NAS, SAN | Author: 피비티

2010. 11. 10. 14:41 Brain Trainning/Storage

SQLIO Tips and Tricks for LOB Disk Performance Testing


At the request of one of our blog readers, I wanted to share several tips, tricks, and other basic considerations useful to our technology community (for my earlier post on using SQLIO to baseline performance, see http://blogs.technet.com/b/lobapps/archive/2010/09/03/using-microsoft-sqlio-to-baseline-lob-application-disk-performance.aspx). I’ve grouped these into Use Cases and Runtime Practices.

   

SQLIO Use Cases

   

You might wonder if SQLIO is only supported on systems running Microsoft’s Operating Systems or SQL Server databases. In short:

  

  • SQLIO does indeed require a Windows-based host from which to execute. An inexpensive client device is fine in this regard, though. That is, any appropriately configured desktop, laptop, or server that is connected to the set of disks you’re testing is all that’s required. The client device simply needs to “see” the various LUNs configured on the disk subsystem, i.e. mapped drives.
  • Thus, SQLIO can be used to baseline a disk subsystem that will ultimately be deployed in a Unix, Linux, or Windows environment.
  • Of course, you can’t expose the drive in such a way unless it’s formatted for NTFS…begging the question “why use SQLIO to test this disk subsystem?” when you’re faced with re-presenting and re-formatting LUNs just for testing. In that case, there are other tools you might consider. But once you’re comfortable using SQLIO – a fast, predictable, and very easy method of baselining disk performance – you might find the couple of extra steps not only worth your time but also a way to give you some extra peace of mind. 

 

I also like using SQLIO to not only test but compare disk subsystems to one another. These disk subsystems could eventually host Windows-based Oracle or DB2 databases as well as SQL Server.

   

  • SQLIO can be configured to “emulate” (I’m using this term really loosely) any RDBMS at a low level, in that SQLIO can easily be configured to read or write in various block sizes. SQL Server 6.5, anyone? Configure SQLIO to test 2KB blocks. Basic SQL and Oracle testing? Go with 8KB blocks. Want to simulate big-block performance (for backup/restore activities, for example)? Configure for 64KB, 256KB, 1MB, you get the picture….

  • The idea is to quantify what the disks are capable of pushing or supporting at max, knowing that your Oracle, DB2, or SQL databases won’t achieve this number (due to overhead and a whole slew of other factors).

  • Knowing these maximum numbers gives you the ability to compare disk subsystem changes in an apples-to-apples way. Got a disk controller firmware upgrade coming up? Validate the impact of the FW change (before upgrade vs. after upgrade) to get a sense of whether your SQL or Oracle database performance might be affected.

  • Want to compare how disks hosted on Microsoft Azure or Amazon EC2 (S3, that is) compare to your current SAN configuration? Run SQLIO against them (keeping in mind that you ALSO really need to do other cloud performance and throughput testing out of the scope of this blog posting…).

  • Also - and this is hugely important - tests using tools like SQLIO are naturally akin to "smoke testing" tools rather than tools useful for testing online/interactive user performance. Sure, you can change queue depth related to threads (explained in my previous SQLIO post), but this may or may not emulate users well. So when you're looking at your SQLIO test results, think about how those results might reflect hard-running SAP batch processes rather than online SAP users..... 

 

I also find SQLIO useful for comparing different database designs against one another. Many of my older SAP-on-SQL customers have deployed R/3 or ECC across 3 LUNs or datafiles (the SAP/SQL installation default), for example, while others have chosen 4, 6, 8, 12, 16, and other numbers. While SQLIO can easily be set up to execute against a single data file (making for really easy testing), such an approach does not reflect real-world LUN configurations. Thus, SQLIO’s ability to execute tests against multiple data files makes for not only a better baseline test but also a great way to compare the maximum throughput of different configurations.

  

SQLIO Runtime Practices

   

Manage your RUNSQL.BAT files. A typical batch file used to execute SQLIO might look like this: sqlio -Fpfile.txt -b8 -o32 -kR -frandom -s60 (execute sqlio, use a parameter file named pfile.txt to point to your datafile or datafiles, use an 8KB block size, keep 32 outstanding I/O requests queued [by the way, you will want to reduce this to 4, 8, or 16 if you're running 16 or more datafiles], execute read tests, make these random read tests, and run the test for 60 seconds). Oftentimes I’ll combine four of these lines into a single RUNSQL.BAT file (one for random reads, another for sequential reads, one for random writes, and another for sequential writes), and create different versions based on other factors. For example, I often create a collection of RUNSQL.BAT files configured to execute 8KB reads and writes, and another collection for 64KB. And if I’m testing different SQL datafile configurations (i.e. 3 datafiles vs. 6 vs. 16), I’ll create a BAT file for each use case.
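As an illustration, a four-test RUNSQL.BAT along those lines might look like the following sketch (the flags mirror the example above; -kW selects write tests and -fsequential selects sequential access; treat it as a starting point rather than a drop-in script):

```bat
REM RUNSQL.BAT - sketch: four 60-second SQLIO passes over the datafiles in pfile.txt
REM 8KB random reads
sqlio -Fpfile.txt -b8 -o32 -kR -frandom -s60
REM 8KB sequential reads
sqlio -Fpfile.txt -b8 -o32 -kR -fsequential -s60
REM 8KB random writes
sqlio -Fpfile.txt -b8 -o32 -kW -frandom -s60
REM 8KB sequential writes
sqlio -Fpfile.txt -b8 -o32 -kW -fsequential -s60
```

A 64KB collection would be the same four lines with -b64.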

 

Avoid busy times when testing. In the best of cases, you want to test your new disk subsystem (or configuration change) while no one or nothing else is accessing it. In the real world, this isn’t always possible. Running SQLIO is EXTREMELY disk intensive, though, so execute it with care when other users are on the system – they WILL notice. And keep in mind that if you’re running SQLIO while another load is concurrently using the disk system, your results will be skewed (lower!), perhaps much lower than what you would have observed otherwise. So do what you can to test off-hours, preferably during a scheduled downtime window or on another non-production disk subsystem identically configured to production (smart customers deploy “staging” or “pre-production” systems to support this kind of promote-to-production change management and testing).

 

Standardize your data file size. I started using SQLIO back in 1997, when 100GB SAP databases were considered pretty sizable. During that time, I “standardized” my SQLIO testing around creating 5GB data files. Standardizing in this way helped me compare the test results from new customers with those I obtained from previous customers, making it easy for me to compare various disk configurations (hardware models, LUN/group layouts, RAID selections i.e. RAID 1 vs 10 vs 5, and so on). If you’re going to start using SQLIO and see yourself using it for a while, make sure you go with the same size (and number, if applicable!) of data files. Edit the pfile.txt configuration file to reflect the various data file locations and sizes – one line in the configuration file per data file to be used in the testing.
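Following that advice, a pfile.txt describing three 5GB data files might look like this sketch (drive letters and paths are hypothetical; each line follows the same <file> <threads> <mask> <size in MB> layout as the param.txt example shown in the SQLIO post further down this page):

```text
e:\sqldata\testfile1.dat 1 0x0 5120
f:\sqldata\testfile2.dat 1 0x0 5120
g:\sqldata\testfile3.dat 1 0x0 5120
```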

 

Pipe the output. Instead of running run.cmd from a command line and manually recording the output statistics from each run, pipe the results to an output file which is not only saved to disk but can be later dumped into Excel for some real analysis. For example, from the command line you might enter “run.cmd > output-run1.txt” to dump the results from the individual performance tests specified in your run.cmd into a file called output-run1.txt. On the other hand, you might dump all test results into the same output file (which is both possible and convenient, given that “piped” test run output automatically appends itself to any content already present in the output file).
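Once the output is captured to text files, scraping the cumulative numbers into something Excel-friendly takes only a few lines of scripting. A minimal sketch in Python (the "IOs/sec:" and "MBs/sec:" labels match the SQLIO output dumps shown later on this page; the output file name is hypothetical):

```python
import re

def parse_sqlio(text):
    """Pull (IOs/sec, MBs/sec) pairs out of piped SQLIO output text."""
    ios = [float(v) for v in re.findall(r"IOs/sec:\s*([\d.]+)", text)]
    mbs = [float(v) for v in re.findall(r"MBs/sec:\s*([\d.]+)", text)]
    return list(zip(ios, mbs))

# Real use would be: text = open("output-run1.txt").read()
sample = """CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3835.39
MBs/sec: 59.92
"""
print(parse_sqlio(sample))  # [(3835.39, 59.92)]
```

Because appended runs simply add more label lines to the same file, the function returns one pair per test run, ready to paste into a spreadsheet.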

 

Test each use case twice or more. Don’t just test once and be done – execute each SQLIO test at least twice to ensure the results are reasonably consistent (especially useful if you’re running on a disk subsystem currently being used by other users or processes). And don’t ever take the test results that immediately follow the datafile creation process – they’re naturally (and greatly) skewed toward lower performance.

   

Don’t forget to test your Log file LUN. Oftentimes we get so focused on verifying our datafiles are optimally configured for performance that we forget about testing the LUN(s) used to host the SQL logs (or Oracle redo and archive logs….). Test them all (using the appropriate block size, i.e. 4kb). And if you really want to do a thorough job, test the LUNs hosting your Windows pagefiles. 

  

Corroborate your results. As I’ve written in other posts, I’m a big believer in using multiple tools to help ensure my results are fairly consistent. In the past, if I was using a tool like SQLIO, I might also employ NTiogen, SQLIOstress, WinBench, Iometer, or another tool just to give me a bit more confidence in the overall numbers. The testing process then becomes something akin to executing a suite of tests. This is especially important when testing completely different kinds of disk solutions (such as local drives vs. SANs, or network appliances vs. Amazon S3 vs. Azure or other cloud hosting models). The more consistent your output, the more sound your results and recommendations... which should ultimately translate into improved real-world application performance.

 

As tempting as it may be, don't jump to conclusions too quickly! Once you've started using SQLIO, you'll be tempted to quickly start comparing the results of one test with the results of others. If you've only changed ONE factor between tests, feel free to draw all the conclusions you like. But if you start testing different SANs or disk subsystems against one another, you'll need to keep in mind that things like cache size, cache configuration, the number and speed of HBAs, underlying RAID heuristics, subtle differences in disk controllers, switches, and disk drives, and so much more may (and do!) have potentially a very dramatic effect on your test's results.   

  

I’m sure there’s more….so stay tuned and check back on this posting every now and then. As I think of other SQLIO tips and tricks I’ll continue to append them here…

 (added content 10/3 and 10/7) 

Have a great week,

   

george


Source: http://blogs.technet.com/b/lobapps/archive/2010/09/27/using-sqlio-to-baseline-performance-tips-and-tricks.aspx


2010. 11. 10. 14:39 Brain Trainning/Storage

http://www.apacheserver.net/Testing-Windows-IO-with-SQLIO-and-SysBench-i29230.htm

Testing Windows IO with SQLIO and SysBench

To benchmark IO on Linux and MySQL transaction processing, SysBench is a popular choice that can do both. After poking around at the source code, it seems PostgreSQL and Oracle are also included for transaction processing testing if you have the proper header files, but I didn't test those.

To benchmark IO on Windows and SQL Server transaction processing, Microsoft provides two tools, SQLIO and SQLIOSim. SQLIO is a misnomer in that it really doesn't have much to do with SQL Server. It is a general purpose disk IO benchmark tool.

So today I was playing with SysBench and noticed that I can compile and build it on Windows as well. I decided I should run IO benchmark on a single machine with both tools (SQLIO and SysBench), and see if I could reconcile the results.

To make things simple, I thought I would just benchmark random read of 3G (orders of magnitude bigger than disk controller cache) files for 5 minutes (300 seconds) with a single thread using 16Kb block size, without adding any additional queue. I tested this on both my laptop and an Amazon EC2 instance. The commands for both tools are listed below, and they should perform the same thing, as far as I can tell. Let me know if you have any comments/pointers or if I missed anything.

SysBench commands:

sysbench --test=fileio --file-total-size=3G prepare
sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run

For SQLIO, here are the param.txt line and the command used:

c:\testfile.dat 1 0x0 3072
sqlio -kR -s300 -dc -b16 -frandom -Fparam.txt

As this is a quick test, I ran the same test twice and took the average value for comparison purposes. The detailed output is pasted at the end of this post.

On my Windows XP Pro Service Pack 2 laptop with Intel X-25 SSD:

            IO/Second   Throughput/Second
SQLIO       3833.5      59.90 MB
SysBench    3390.77     52.98 MB

So on my laptop, SQLIO's results are 13% higher than SysBench's.

On Amazon EC2 ami-c3e40daa with EBS device running Windows Server 2008 Datacenter Edition Service Pack 2, whose results varied widely between my two runs:

            IO/Second   Throughput/Second
SQLIO       678.91      10.61 MB
SysBench    408.96      6.39 MB

On this machine, SQLIO's results are 66% higher than SysBench's.
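As a quick cross-check of both tables, throughput should come out to roughly IOPS times block size (16KB in these runs), and the quoted percentages are plain relative differences. A few lines of Python against the numbers reported above confirm both:

```python
def mb_per_sec(iops, block_kb=16):
    # throughput is approximately IOPS * block size (1024 KB per MB)
    return iops * block_kb / 1024

def pct_higher(a, b):
    # how much higher a is than b, in percent
    return (a - b) / b * 100

# Laptop (SSD) table
print(round(mb_per_sec(3833.5), 2))        # ~59.9 MB/s, matching the SQLIO row
print(round(pct_higher(3833.5, 3390.77)))  # 13 (percent)

# EC2/EBS table
print(round(mb_per_sec(678.91), 2))        # ~10.61 MB/s
print(round(pct_higher(678.91, 408.96)))   # 66 (percent)
```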

Below are the gory details.

Here is the detailed output from my laptop:
SQLIO
C:\Program Files\SQLIO>sqlio -kR -s300 -dc -b16 -frandom -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3835.39
MBs/sec: 59.92

C:\Program Files\SQLIO>sqlio -kR -s300 -dc -b16 -frandom -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 3832.00
MBs/sec: 59.87

SysBench
C:\MessAround\sysbench-0.4.12\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
WARNING: Operation time (18446744073709226000.000000) is greater than maximal counted value, counting as 10000000000000.000000
WARNING: Percentile statistics will be inaccurate
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (52.143Mb/sec)
3337.16 Requests/sec executed

Test execution summary:
total time: 2.9966s
total number of events: 10000
total time taken by event execution: 2.9343
per-request statistics:
min: 0.01ms
avg: 0.29ms
max: 18446744073709.47ms
approx. 95 percentile: 0.48ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 2.9343/0.00

C:\MessAround\sysbench-0.4.12\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
WARNING: Operation time (18446744073694841000.000000) is greater than maximal counted value, counting as 10000000000000.000000
WARNING: Percentile statistics will be inaccurate
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (53.818Mb/sec)
3444.38 Requests/sec executed

Test execution summary:
total time: 2.9033s
total number of events: 10000
total time taken by event execution: 2.8777
per-request statistics:
min: 0.01ms
avg: 0.29ms
max: 18446744073696.34ms
approx. 95 percentile: 15.39ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 2.8777/0.00

Here is the detailed output from the Amazon EC2 ami-c3e40daa instance with an EBS device:
SQLIO
c:\Program Files\SQLIO>sqlio -kR -t1 -s300 -dC -frandom -b16 -Fparam.txt -BH -LS

sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
buffering set to use hardware disk cache (but not file cache)
size of file c:\testfile.dat needs to be: 3221225472 bytes
current file size: 0 bytes
need to expand by: 3221225472 bytes
expanding c:\testfile.dat … done.
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 1230.94
MBs/sec: 19.23
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 204
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 98 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

c:\Program Files\SQLIO>sqlio -kR -t1 -s300 -dC -frandom -b16 -Fparam.txt -BH -LS

sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread reading for 300 secs from file c:\testfile.dat
using 16KB random IOs
buffering set to use hardware disk cache (but not file cache)
using specified size: 3072 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 126.88
MBs/sec: 1.98
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 7
Max_Latency(ms): 497
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 13 9 0 3 7 8 8 8 8 8 8 8 2 1 1 1 1 1 1 1 1 0 0 0 2

C:\Users\Administrator\Documents\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (10.64Mb/sec)
680.95 Requests/sec executed

Test execution summary:
total time: 14.6854s
total number of events: 10000
total time taken by event execution: 14.6048
per-request statistics:
min: 0.01ms
avg: 1.46ms
max: 150.29ms
approx. 95 percentile: 4.77ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 14.6048/0.00

C:\Users\Administrator\Documents\sysbench-0.4.12\sysbench\RelWithDebInfo>sysbench.exe --test=fileio --file-total-size=3G --file-test-mode=rndrd --max-time=300 run
sysbench 0.4: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Threads started!
Done.

Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (2.1371Mb/sec)
136.77 Requests/sec executed

Test execution summary:
total time: 73.1139s
total number of events: 10000
total time taken by event execution: 73.0284
per-request statistics:
min: 0.02ms
avg: 7.30ms
max: 728.84ms
approx. 95 percentile: 23.08ms

Threads fairness:
events (avg/stddev): 10000.0000/0.00
execution time (avg/stddev): 73.0284/0.00
