2012. 3. 27. 13:24 Brain Trainning/DataBase

This query summarizes the current wait statistics of the database system.

It is useful when you need to identify which wait types are degrading system performance and decide what corrective action to take.


code by Glenn Berry

WITH Waits AS
    (SELECT
        wait_type,
        wait_time_ms / 1000.0 AS WaitS,
        (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
        signal_wait_time_ms / 1000.0 AS SignalS,
        waiting_tasks_count AS WaitCount,
        100.0 * wait_time_ms / SUM (wait_time_ms) OVER() AS Percentage,
        ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS RowNum
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (
        'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE', 'SLEEP_TASK',
        'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE',
        'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
        'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT', 'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE',
        'FT_IFTS_SCHEDULER_IDLE_WAIT', 'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'BROKER_EVENTHANDLER',
        'TRACEWRITE', 'FT_IFTSHC_MUTEX', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
     )
SELECT
     W1.wait_type AS WaitType,
     CAST (W1.WaitS AS DECIMAL(14, 2)) AS Wait_S,
     CAST (W1.ResourceS AS DECIMAL(14, 2)) AS Resource_S,
     CAST (W1.SignalS AS DECIMAL(14, 2)) AS Signal_S,
     W1.WaitCount AS WaitCount,
     CAST (W1.Percentage AS DECIMAL(4, 2)) AS Percentage
FROM Waits AS W1
INNER JOIN Waits AS W2
     ON W2.RowNum <= W1.RowNum
GROUP BY W1.RowNum, W1.wait_type, W1.WaitS, W1.ResourceS, W1.SignalS, W1.WaitCount, W1.Percentage
HAVING SUM (W2.Percentage) - W1.Percentage < 95; -- percentage threshold
GO
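
If you want to measure waits over a specific window (for example, during a load test) rather than since the last instance restart, you can clear the accumulated wait statistics first and re-run the query afterwards. A minimal sketch, not part of Glenn Berry's script:

-- Clears the cumulative wait statistics in sys.dm_os_wait_stats,
-- so the query above then reports only waits accumulated after this point.
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
GO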

posted by LifeisSimple
2012. 3. 20. 23:17 Brain Trainning/DataBase

You need to operate your systems with the I/O characteristics of the workload in mind.

Broadly, the characteristics fall into two categories:
1. OLTP
 - Mostly random I/O; the transactions are simple and short.
 - From the disk's point of view, data has to be read from many scattered locations.

2. OLAP/DSS
 - Mostly sequential I/O; the transactions are long.
 - From the disk's point of view, range scans become much more common.

The distinction above is a generalization, but it gives you a rough feel for how the system should be configured.

To go one level deeper, however, day-to-day operations require an understanding of the following.

Measuring I/O

1. Systems already in operation
 - Check the PhysicalDisk: Disk Reads/sec and PhysicalDisk: Disk Writes/sec counters.
 - Checking these counters per drive shows the current disk I/O.
 - Check PhysicalDisk: Avg. Disk Queue Length; a value of 2 or more indicates an I/O bottleneck. Keep in mind that the threshold of 2 applies per spindle, so for a RAID array you should divide the observed queue length by the number of disks in the array before comparing it to 2.
 - If a bottleneck is confirmed, also check the Memory: Pages/sec counter. If it is high, adding memory can relieve some of the I/O load.

 If it really is a disk bottleneck, actions such as the following are needed:
 - move files to a different physical drive
 - switch to faster disks
 - add disks to the RAID array
 (You can also sanity-check I/O latency from inside SQL Server; see the sketch below.)
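
Besides the PerfMon counters, a rough per-file view of I/O latency is available from inside SQL Server. A minimal sketch (assuming SQL Server 2005 or later; it complements rather than replaces the counters above):

-- Average I/O stall per read/write for each database file since the instance started.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        vfs.file_id,
        vfs.num_of_reads,
        vfs.num_of_writes,
        CASE WHEN vfs.num_of_reads  = 0 THEN 0
             ELSE vfs.io_stall_read_ms  / vfs.num_of_reads  END AS avg_read_ms,
        CASE WHEN vfs.num_of_writes = 0 THEN 0
             ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;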

2. New systems
 Number of disks required
 Disk Transfers/sec (= Disk Reads/sec + Disk Writes/sec)
 - Estimate the IOPS from Disk Transfers/sec, then use that to calculate how many disks are needed.
 - Formula: Required # Disks = (Reads/sec + (Writes/sec * RAID adjuster)) / Disk IOPS
    RAID adjuster: RAID 0 = 1; RAID 1 and 10 = 2; RAID 5 = 4.

For example, assuming one disk delivers 125 IOPS, a system doing 2,000 transfers/sec (1,200 reads, 800 writes) on RAID 10 needs
- (1200 + 800 * 2) / 125 = 22.4 disks (see the small calculation sketch below).
So if the database size you have estimated is small, this gets expensive: you are likely to end up with a lot of unused space.
In that case it is worth considering high-performance storage such as Fusion-io or SSDs.
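
For reference, the same calculation expressed in T-SQL (a throwaway sketch using SQL Server 2008 syntax; 125 IOPS per disk and the RAID 10 adjuster of 2 are just the assumptions from the example above):

DECLARE @ReadsPerSec  decimal(10, 2) = 1200,
        @WritesPerSec decimal(10, 2) = 800,
        @RaidAdjuster decimal(10, 2) = 2,    -- RAID 10
        @DiskIOPS     decimal(10, 2) = 125;

-- Required # Disks = (Reads/sec + (Writes/sec * RAID adjuster)) / Disk IOPS
SELECT (@ReadsPerSec + (@WritesPerSec * @RaidAdjuster)) / @DiskIOPS AS RequiredDisks;  -- 22.4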

 Bandwidth
 2,000 transfers/sec * 8KB = roughly 16MB/sec.
 A typical 6Gb FC link provides about 750MB/sec, so bandwidth is more than sufficient.

You can only prevent and fix I/O bottlenecks if you know and understand the points above.
 
posted by LifeisSimple
2012. 3. 14. 22:49 Brain Trainning/DataBase
The DMVs below are the kind that, once you know them, make life(?) much easier.
Even a quick glance at what they return should make you think, "I really ought to memorize and use these"... and if it doesn't, you may not be a DBA after all.

You should know, as a baseline, how to use each of the following and what it returns (a short usage sketch follows the list).

1. sys.dm_os_sys_info
- Overall system information, such as the amount of physical memory, the number of CPUs, and the number of buffers in the buffer pool.

2. sys.dm_exec_sessions
- Login, host, and program information, status, environment settings, and per-session CPU, memory, and I/O request counts.

3. sys.dm_exec_requests
- Information on every currently active request (blocking information, query execution environment, CPU, memory, I/O request counts, and so on).

4. sys.dm_exec_connections
- Details about each connection: protocol, authentication method, IP address, etc.

5. sys.dm_exec_sql_text
- Database ID, object ID, and the SQL query text.

6. sys.dm_exec_query_stats
- Statistics for every cached query plan: physical/logical I/O counts, minimum, maximum, and total elapsed time, etc.

7. sys.dm_exec_procedure_stats
- Information about every cached stored procedure, including execution counts, I/O, elapsed time, and other execution statistics.

8. sys.dm_db_index_usage_stats
- Usage counts for each index and the last time it was used: numbers of seeks, scans, lookups, and updates.

9. sys.dm_db_partition_stats
- Used to determine table sizes and row counts.

10. sys.dm_db_index_physical_stats
- Physical information about each index and table partition: index and allocation unit type, fragment count, size, average fragmentation percentage, page count, row count, and so on.

11. sys.dm_sql_referenced_entities
- Replaces sysdepends.
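
As promised above, a short usage sketch that combines a few of these DMVs; the join shape is the point, the column list is only illustrative:

-- Currently executing user requests, with session information and the SQL text.
SELECT  r.session_id,
        s.login_name,
        s.host_name,
        r.status,
        r.wait_type,
        r.cpu_time,
        r.logical_reads,
        t.text AS sql_text
FROM    sys.dm_exec_requests AS r
        INNER JOIN sys.dm_exec_sessions AS s
                ON r.session_id = s.session_id
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   s.is_user_process = 1;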
posted by LifeisSimple
2012. 3. 6. 22:46 Brain Trainning/DataBase

A performance test of TDE has been posted. Hmm... using TDE calls for some careful thought.
Wherever data has to be decrypted, it eats a lot of CPU. Add Data Compression on top of that and the CPU usage would be almost a work of art; be careless and this could end up as an incident report.

I wish the tests below also included a run with table compression, just to see how much more overhead it adds...

Source:
http://www.mssqltips.com/sqlservertip/2641/sql-server-transparent-data-encryption-tde-performance-comparison/?utm_source=dailynewsletter&utm_medium=email&utm_content=headline&utm_campaign=201235  

SQL Server Transparent Data Encryption (TDE) Performance Comparison

Problem

Every new release of SQL Server gives us new features that will hopefully improve our database system in some way. Data security has been a hot topic over the last few years and one of the new features introduced in SQL Server 2008, Transparent Data Encryption (TDE), addresses this topic. This feature gives us a way to secure our data on the backend without any front end code changes. The only concern I had with this new feature was exactly how much overhead would it produce on my server. This tip will take a look at the performance impact of implementing this feature using a few basic database operations.

Solution

For those that aren't yet familiar with TDE, you can read the following links which give an overview of TDE and also some information on how to get started implementing it on one of your own databases. In this tip we will simply focus on the performance impact of having a TDE enabled database versus a database without TDE enabled, and will not get into any details on the configuration of this feature.

One thing to note is all the tests below were run on a server running SQL Server 2008 with a 3GB database. Prior to executing each statement I also ran DBCC DROPCLEANBUFFERS to empty the buffer cache. This ensured that for every operation data would have to be decrypted if required and not just read directly from the buffer pool as data in the buffer pool is not encrypted.
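
The tip doesn't include the test scripts themselves, but a typical way to take this kind of measurement looks roughly like the sketch below (my own illustration, not the author's script; dbo.SomeLargeTable is a hypothetical table):

DBCC DROPCLEANBUFFERS;    -- empty the buffer cache so data must be read (and decrypted) from disk
GO
SET STATISTICS TIME ON;   -- report CPU and elapsed time
SET STATISTICS IO ON;     -- report logical/physical reads
GO
SELECT COUNT(*) FROM dbo.SomeLargeTable;   -- hypothetical workload statement
GO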


Backup Performance

As it's an operation that we perform daily, let's first take a look at how this feature will affect our backup performance. For this test I ran scripts that performed both compressed and uncompressed full backups with TDE enabled and disabled. Each backup script was run 3 times and the averages are displayed below for each type of script execution.

                     TDE Enabled                              TDE not Enabled
                     CPU (ms)  READS  WRITES  DURATION (ms)   CPU (ms)  READS  WRITES  DURATION (ms)
With compression     234       319    14      57264           195       298    13      51632
No compression       203       319    14      70384           171       300    13      55714

We see from the results above that having TDE enabled results in about a 16% increase in the CPU used during the backup operation. As well, we see the total duration of the backup took anywhere from 10-20% longer with TDE enabled (depending on whether or not you were using backup compression). All in all this is a fairly small price to pay to have your database encrypted. An important thing to note with having TDE enabled on your database is that you get the extra benefit of having your backups encrypted as well, thus providing extra security for all of your offline data.


SELECT, INSERT, UPDATE and DELETE Performance

Now that we know the impact TDE is going to have on our backups let's take a look at what if any impact we can anticipate for the applications that access our database. For this test I created a script with 10 SELECT, 5 INSERT, 8 UPDATE and 5 DELETE statements. Each script was run 3 times and the averages are displayed below broken up by statement type.

         TDE Enabled                                TDE not Enabled
         CPU (ms)  READS   WRITES  DURATION (ms)    CPU (ms)  READS   WRITES  DURATION (ms)
SELECT   7390      121576  18      26037103         3062      121397  1       23409175
INSERT   125       784     16      1509177          32        798     16      1254881
UPDATE   500       2181    31      1376947          108       2215    24      1080074
DELETE   502       2380    30      1271479          63        2417    29      708979

Looking at these results we can see that we get approximately a 10% increase in the duration of our SQL statements with TDE enabled which isn't too bad. The real impact of using TDE is evident here with the performance hit we take in CPU. With TDE enabled we use approximately 60% more CPU for the same workload. Keep in mind these results are measuring the performance of the entire workload. Looking at each statement type individually we can see that we take an even bigger performance hit in CPU when doing inserts, updates and deletes as compared to select statements. Depending on the behaviour of your application this may be an important thing to consider when deciding on whether or not you want to use TDE.


Index Rebuild Performance

Another operation that we as DBAs perform routinely, whether it be weekly, monthly or some other schedule, is index maintenance. For this test I created a script that rebuilt every index in my database. The script was run 3 times and the averages are displayed below.

                 CPU (ms)  READS   WRITES  DURATION (ms)
TDE Enabled      14936     347345  46028   21898
TDE not Enabled  7420      347116  46190   17133

The results here are pretty consistent with what we had seen above with our other DML statements. For an index rebuild we see approximately a 50% increase in CPU usage and a 20% increase in duration with TDE enabled as compared to having it not enabled.


Summary

While this does not represent every type of operation executed on your database it gives a good indication of what you could expect the impact to be if you were to implement TDE on your database. One thing to keep in mind is that these numbers are a worst case scenario. In a real world application a lot of the data would already be in the buffer pool when a statement is run so it would not have to be decrypted and therefore the performance impact would be much less. I've read from many different sources that in real world applications you can see anywhere from a 3-10% degradation in performance with TDE enabled. As always you should test in your own environment as your results may vary based on the nature of the data in your database and the applications accessing it.

posted by LifeisSimple
2012. 3. 5. 23:26 Brain Trainning/DataBase
This is the database encryption feature that first shipped with SQL Server 2008.
It can be used without any special schema changes, of course, but...

It adds load to the CPU, and because Windows Instant File Initialization cannot be used, creating and growing database files takes longer (the files have to be zero-initialized). On top of that it reduces the effectiveness of Data Compression; Microsoft recommends not using TDE together with Data Compression. Sigh.

Personally, unless the feature is truly required, I would not use TDE, and that goes double for a large OLTP database.

Source: http://www.simple-talk.com/sql/database-administration/transparent-data-encryption/

Transparent Data Encryption
16 March 2010

 Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data when the physical files are restored to another server: with Transparent Data Encryption in place, that requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.

Keyless entry for automobiles was first introduced by American Motors Corporation in 1983 and, today, there are a variety of methods that are available that permit the user to gain access to their automobile without inserting a physical key into the door lock. These include pushing a button on a device that transmits a radio frequency, entering a code into a key pad located beneath the driver's side door handle, or possessing a device, called a fob, that is detected by the automobile's security system.

Of these devices, only the fob offers transparency to the owner of the vehicle. The fob is recognized by the security system and the door is automatically unlocked; in other words, the fob holder is granted access to the vehicle without any distinguishable action required on his or her part. If a person who does not possess the fob attempts to open the door of the automobile, the door remains locked, denying access into the vehicle.

The experience of the car owner in possession of a fob is similar to the experience of the user attempting to gain access to a database in which Transparent Data Encryption (TDE) has been enabled. TDE is distinct from other techniques in that it secures data by encrypting the physical files of the database, rather than the data itself. The data files for a given database are encrypted using a database encryption key in the user database. This key references a key hierarchy in the Master database, and this dependency prevents the data files from being viewed outside their instance.

Therefore, a valid user can access the decrypted contents of the database files without any distinguishable actions, and without even being aware that the underlying data files are encrypted. However, a would-be data thief, who has obtained access to the data files through a stolen backup file, will find he or she is unable to access the data it contains. Overall, this is a straightforward, low-impact feature that has great security benefits; the only caveat being that it requires SQL Server 2008 Enterprise Edition.

In this article we will explore the considerations of TDE that must be understood prior to its use. We will also walk through an example of implementing and validating TDE using the HomeLending database. Finally, we will cover the process of removing TDE, should the need arise.
 

How TDE Works

Transparent Data Encryption (TDE) was introduced in SQL Server 2008, as a feature of the Enterprise Edition of that product. The Developer Edition of SQL Server 2008 also offers TDE, but its license limits its use to development and testing only.

As noted above, TDE's specific purpose is to protect data at rest by encrypting the physical files of the database, rather than the data. These physical files include the database file (.mdf), the transaction log file (.ldf) and the backup files (.bak).

The protection of the database files is accomplished through an encryption key hierarchy that exists externally from the database in which TDE has been enabled. The exception to this is the database encryption key, which was introduced to the database encryption key hierarchy specifically to support the TDE feature, and is used to perform the encryption of the database files.

Figure 1 illustrates the key hierarchy and the required location of each key. The service master key exists at the instance level. The database master key and certificate at the Master database are used to protect the database encryption key that is located at the user database, which is the HomeLending database in our example. The database encryption key is then used to decrypt the database files of the user database.

Figure 1: TDE Encryption Key Hierarchy.

The dependency upon the encryption key hierarchy in the Master database, as well as the instance, prevents the database files from being restored to an instance of SQL Server that does not contain the referenced keys. This level of protection is a great comfort if a backup tape that contains your database backup files were to fall into the wrong hands.

Additionally, the encryption of the backup files prevents the plain text values that are contained within the database being disclosed by opening the backup files using a text editor and scanning its contents. The details regarding this scenario will be covered later in this article.

Benefits and Disadvantages of TDE

Comparing TDE to cell-level encryption is a bit like comparing apples to oranges. They are solutions for different challenges. TDE offers general protection to the database while cell-level encryption offers specific protection to data. I would encourage you to consider using TDE in conjunction with other encryption and obfuscation methods, for a layered approach to protection. To determine whether or not TDE should be part of your protection strategy for sensitive data, consider the following benefits and disadvantages.

Benefits

  • Implementation of TDE does not require any schema modifications.
  • Since the physical data files and not the data itself are encrypted, the primary keys and indexes on the data are unaffected, and so optimal query execution can be maintained.
  • The performance impact on the database is minimal. In their whitepaper titled "Database Encryption in SQL Server 2008 Enterprise Edition", Microsoft estimates the performance degradation for TDE to be 3-5%, while cell-level encryption is estimated to be 20-28%. Of course, the impact well may vary, depending upon your specific environment, and volume of data.
  • The decryption process is invisible to the end user.

Disadvantages

  • Use of TDE renders negligible any benefits to be gained from backup compression, as the backup files will be only minimally compressed. It is not recommended to use these two features together on the same database.
  • TDE does not provide the same granular control, specific to a user or database role, as is offered by cell-level encryption.
  • TDE is available only with SQL Server 2008, Enterprise Edition and so will probably not be available to all installations within your environment.

Considerations when Implementing TDE

Prior to implementing TDE, there are several issues to take into consideration, discussed over the following sections.

Master Key Interdependency

The process of implementing TDE involves the creation of a database master key and certificate, or asymmetric key, on the Master database. Only one database master key can be created for a given database so any other user databases that share the instance, and have TDE implemented, will share a dependency upon the Master database master key.

This interdependency increases the importance of performing a backup of the Master database master key to ensure the continued accessibility of the TDE-enabled databases.

Performance Impact on TempDB

When TDE is initially implemented, the physical file of the TempDB system database is also encrypted. Since the TempDB database contains temporary data from the TDE-enabled database, its encryption is required to maintain full protection by this feature; otherwise the information that is temporarily stored in the TempDB database from the TDE-enabled databases would be exposed through the physical files of TempDB.

The TempDB database is used by all user and system databases in the instance to store temporary objects, such as temporary tables, cursors and work tables for spooling. It also provides row versioning and the ability to rollback transactions.

Once the TempDB database is encrypted, any reference and use of this database by other databases, regardless of whether they have TDE enabled or not, will require encryption and decryption. While this encryption and decryption of the TempDB database files remains transparent to the user, it does have a minimal performance impact on the entire instance. Microsoft has estimated the entire impact of TDE on a SQL Server instance to be 3-5% depending on the server environment and data volume.

TDE and Decryption

TDE is designed to protect data at rest by encrypting the physical data files rather than the data itself. This level of protection prevents the data and backup files from being opened in a text editor to expose the file's contents.

TDE encryption occurs prior to writing data to disk, and the data is decrypted when it is queried and recalled into memory. This encryption and decryption occurs without any additional coding or data type modifications; hence its transparency. Once the data is recalled from disk, into memory, it is no longer considered to be at rest. It has become data in transit, which is beyond the scope of this feature. As such, alongside TDE, you should consider applying additional supporting layers of protection to your sensitive data, to ensure complete protection from unauthorized disclosure. For example, you may wish to implement, in addition to TDE, encrypted database connections, cell-level encryption, or one-way encryption. For any additional data-in-transit protection required outside the database, you may need to consult with, or defer to, your Network Administration team.

Backup and Recovery

As noted previously, TDE prevents the backup files from being opened by a plain text editor. It also limits the recovery of the database backup file to the instance that holds the encryption key hierarchy that was in existence at the time the backup was created.

As illustrated in Figure 1, backup files of databases with TDE enabled are encrypted using a key hierarchy that includes the service master key of the SQL Server instance, the database master key and certificate for the Master database.

Despite this dependency, none of these keys are included with the standard database backup, and must be backed up separately via the following commands (sketched below):

  • BACKUP SERVICE MASTER KEY to back up the service master key.
  • BACKUP MASTER KEY to back up a database master key.
  • BACKUP CERTIFICATE to back up the certificate.
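
Sketched out, those backup commands look roughly as follows (file paths and passwords are placeholders; the certificate backup itself is shown in Listing 6 later in this article):

USE master;
GO
BACKUP SERVICE MASTER KEY
    TO FILE = 'D:\HomeLending\Backup\ServiceMasterKey.bak'
    ENCRYPTION BY PASSWORD = 'MyB@ckUpP@ssw0rd';
GO
BACKUP MASTER KEY
    TO FILE = 'D:\HomeLending\Backup\MasterDBMasterKey.bak'
    ENCRYPTION BY PASSWORD = 'MyB@ckUpP@ssw0rd';
GO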

This behavior is one of the security benefits of TDE. In order to restore the encrypted data to another instance of SQL Server, a user needs to recover the service master key backup file, the Master database master key backup file and the Master database certificate private key, prior to recovering the database backup file.

The database encryption key that is created in the user database, in which TDE has been implemented, is included in the standard database backup. It is stored in the boot record of the database file so that it can be accessed and used to decrypt the user database.

When the service master key and database master key are backed up, it is recommended to store their backup files in a separate location from the database files. This separation will ensure continued protection of the encrypted data in the event that the database backup media is stolen or compromised.

TDE and Replication

If the TDE-enabled database is part of a replication setup, the subscribing database must also have TDE implemented. The data that is traveling between the databases will be in plain text and is vulnerable to unauthorized disclosure. A method of encrypting connections, such as secure socket layers (SSL) or Internet protocol security (IPSec), is recommended.

TDE and FileStream Data

The FILESTREAM data type stores large unstructured objects, such as documents and images, in an integrated physical file that is separate from the database file. When TDE is implemented on a user database that contains FILESTREAM data, the filestream files remain unencrypted.

Implementing TDE

In this section, we will implement TDE using the HomeLending database. Our TDE implementation, in comparison to cell-level encryption, will be very simple. There are no modifications to the schema required, there are no permissions that need to be granted to database users and roles in order to use TDE, and there are no additional database objects that must be created to perform the encryption and decryption methods.

On the other hand, the person performing the implementation of TDE does require specific permissions; namely CONTROL permissions on the Master and HomeLending databases. It is recommended to perform this process while the database is not in use by other users.

Backup before Proceeding

It is a general best practice to backup a database prior to making modifications. However, it is especially important when implementing TDE, in order to ensure that, should the TDE implementation need to be reversed, you can cleanly recover the database in its original form.

In addition, by performing a database backup, a new checkpoint will be established in the transaction log. The creation of a checkpoint truncates all inactive items in your transaction log prior to the new checkpoint. This will ensure that your transaction log is free from unencrypted items, prior to the TDE implementation. Listing 1 shows the backup command for the HomeLending database.

USE HomeLending;
GO

BACKUP DATABASE HomeLending
    TO DISK = 'D:\HomeLending\Backup\HomeLending.bak'
    WITH NOFORMAT,
    INIT,
    NAME = 'HomeLending-Full Database Backup',
    SKIP,
    NOREWIND,
    NOUNLOAD,
    STATS = 10
GO

Listing 1: Backing up the HomeLending database, prior to TDE.

With the backup successfully completed, we can begin the process of implementing TDE.

The Master Database

Our first step is to create a database master key for our Master database, using the CREATE MASTER KEY method, as shown in Listing 2.

USE master;
GO

CREATE MASTER KEY
   ENCRYPTION BY PASSWORD = 'MyStr0ngP@ssw0rd2009';
GO

Listing 2: Creating the database master key in the Master database.

Notice that, while ENCRYPTION BY PASSWORD is a required argument to the method, our intent, as in Chapter 5, is to instead protect the database master key with the service master key. This option is automatically available to us, upon creation of the database master key.

A search against the sys.key_encryptions catalog view for the ##MS_DatabaseMasterKey## key, as shown in Listing 3, returns ENCRYPTION BY MASTER KEY, in reference to the service master key.

USE master;
GO

SELECT
    b.name,
    a.crypt_type_desc
FROM
    sys.key_encryptions a
    INNER JOIN sys.symmetric_keys b
        ON a.key_id = b.symmetric_key_id
WHERE
    b.name = '##MS_DatabaseMasterKey##';
GO

Listing 3: Confirming protection of the database master key by the service master key.

The next step is to create a self-signed certificate that is protected by the database master key of our Master database. All certificates created within SQL Server, as opposed to being imported, are self-signed. This associates the certificate to the database.

Certificates are created using the CREATE CERTIFICATE method:

USE HomeLending;
GO

CREATE CERTIFICATE MyHighCert
    WITH SUBJECT = 'Cert used for sensitive class of high';
GO

Listing 4.

Since this certificate is located in the Master database and will be used to protect the database encryption key of our HomeLending database, we will name this certificate MasterCert, as shown in Listing 5.

USE master;
GO

CREATE CERTIFICATE MasterCert
    WITH SUBJECT = 'Cert used for TDE';
GO

Listing 5: Creating the MasterCert self-signed certificate.

As for Listing 4, by omitting the ENCRYPTION BY PASSWORD argument, we are specifying that the certificate is to be protected by the database master key.

At this point in the process you should perform a backup of the certificate with its private key, using the BACKUPCERTIFICATE command shown in Listing 6. In the event that the HomeLending database needs to be restored, this certificate and its private key will be required.

USE master;
GO

BACKUP CERTIFICATE MasterCert
    TO FILE = 'D:\HomeLending\Backup\MasterCert.bak'
    WITH PRIVATE KEY (
               FILE = 'D:\HomeLending\Backup\MasterCert.pvk',
               ENCRYPTION BY PASSWORD = 'MyB@ckUpP@ssw0rd');
GO

Listing 6: Backing up the MasterCert certificate.

Since our MasterCert certificate is protected by the Master database master key, the DECRYPTION BY PASSWORD argument is not included in the WITH PRIVATE KEY argument of this command.

The User Database

Having created the database master key and the MasterCert certificate in the Master database, we are ready to create the database encryption key for the HomeLending database which we will use to perform the cryptographic functions for the physical files of our database.

The database encryption key is created using the CREATE DATABASE ENCRYPTION KEY command. The arguments to this method include:

  • WITH ALGORITHM: Specifies the algorithm used, which in turn dictates the strength of the key.
  • ENCRYPTION BY: Defines the protection method of the key. The key used in the ENCRYPTION BY argument can be a certificate or an asymmetric key that is located in the Master database.

Listing 7 shows the exact command used for the HomeLending database's database encryption key.

USE HomeLending;
GO

CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE MasterCert;
GO

Listing 7: Creating the HomeLending database encryption key.

The AES_128 option specifies Advanced Encryption Standard (AES) with a 128 bit key length, and we protect the database encryption key with the MasterCert certificate that was created in the Master database.

The final step in the setup process of TDE is to enable it. This is accomplished by executing the ALTER DATABASE command with the SET ENCRYPTION ON argument.

USE HomeLending;
GO

ALTER DATABASE HomeLending
    SET ENCRYPTION ON;
GO

Listing 8: Enabling TDE.

At this point, an encryption scan occurs, which is the process by which the physical files of the database are scanned and encrypted. Included in this scan process are the database files, TempDB database files and transaction log files.

Transaction log files contain information that is used to maintain data integrity and are used in the restoration process. Within these files are a series of smaller units called virtual log files (VLFs). These VLFs contain records that pertain to transactions within the database file. Prior to the implementation of TDE, these VLFs contain unencrypted data. During the encryption scan any pages that have been in the buffer cache and modified, known as dirty pages, are written to disk, a new VLF is created and the prior inactive VLFs are truncated. This results in a transaction log that only contains encrypted data.

The duration of the encryption scan will vary depending upon the size of the database files. Once the process has completed, the encryption_state column in the sys.dm_database_encryption_keys dynamic management view will reflect the encryption state of "encrypted", and will show the value of "3" in this column, for our HomeLending database.

Verifying TDE

Once the implementation of TDE is complete there are a few ways you can verify that these steps indeed succeeded.

Using Dm_Database_Encryption_Keys

Dynamic management views (DMV) are built-in views that provide metadata regarding the settings, health and properties of SQL Server instances and databases. The sys.dm_database_encryption_keys DMV presents information about the database encryption keys used in a given database, as well as the encryption state of the database.

Note:
Database encryption keys are only utilized for the benefit of the TDE feature of SQL Server 2008; therefore this DMV is not available in SQL Server 2005.

Through the use of a query in which the sys.dm_database_encryption_keys DMV and the sys.databases catalog view are joined through the database_id column, we are able to determine the success of the TDE implementation, as demonstrated in Listing 9.

USE master;
GO

SELECT
    db.name,
    db.is_encrypted,
    dm.encryption_state,
    dm.percent_complete,
    dm.key_algorithm,
    dm.key_length
FROM
    sys.databases db
    LEFT OUTER JOIN sys.dm_database_encryption_keys dm
        ON db.database_id = dm.database_id;
GO

Listing 9: Verifying TDE using dm_database_encryption_keys.

A return value of "1" for the is_encrypted column of the sys.databases catalog view indicates that the database has been encrypted through TDE.

The value of the encryption_state column from the sys.dm_database_encryption_keys DMV reflects whether or not the encryption process is complete. A value of "3" indicates that the encryption process is complete. A value of "2" in this column indicates that the encryption process is in progress. The percent_complete column from the same DMV indicates the progress of the encryption process. This column only reflects a value other than "0" when the database encryption state is in the process of changing (being encrypted or decrypted).

In this sample query, I added the key_algorithm and key_length columns to illustrate an interesting dynamic in regard to the TempDB database, as shown in the results in Table 1.

Table 1: Results of TDE verification query.

As previously noted, the encryption of the TempDB is a byproduct of implementing TDE on any given database within an instance of SQL Server. The is_encrypted column for our HomeLending database contains the value of "1" which indicates that it has been successfully encrypted; but the TempDB contains the value of "0", while the values in the other columns indicate that encryption has taken place. This is because the TempDB database is encrypted outside of the established TDE key hierarchy.

This is further emphasized by the algorithm that is used to encrypt the TempDB database. As you will recall, the creation of the database encryption key for the HomeLending database was designated as AES_128, which uses a key length of 128 bits. The results of this query show that the TempDB database is actually using a key length of 256 bits.

The reason for the separate encryption process lies in the inherent behavior of the TempDB database; when the SQL Server instance is stopped and started the TempDB database is dropped and recreated. This can be verified by performing the following steps:

  1. Stop the SQL Server instance.
  2. Start the SQL Server instance.
  3. Execute SELECT * FROM SYS.DATABASES, using the Master database.

The result of the third step will reveal that the column titled CREATE_DATE for the TempDB database will be approximately the date and time that you restarted the SQL Server instance. When the sys.dm_database_encryption_keys DMV is executed, the database encryption key for the TempDB database will still be included in the results and the column titled CREATE_DATE will also reflect the time that the instance was restarted. This illustrates that when the TempDB database is recreated so is its database encryption key.

At first glance the comparison of the CREATE_DATE columns of the sys.databases and sys.dm_database_encryption_keys DMV may raise concern since they do not match; but consider that the sys.dm_database_encryption_keys DMV reflects the date and time in Greenwich Mean Time (GMT) while the sys.databases catalog view reflects the date and time according to your time zone. Depending on your location this may appear to be in the future or in the past. In my case, being on Eastern Standard Time (EST) in the United States the sys.dm_database_encryption_keys DMV CREATE_DATE is five hours into the future.
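
A quick way to see the two timestamps side by side (my own sketch, not from the article):

-- TempDB's create_date in sys.databases is local time; the key's create_date
-- in sys.dm_database_encryption_keys is reported in GMT.
SELECT  db.name,
        db.create_date  AS db_create_date_local,
        dek.create_date AS key_create_date_gmt
FROM    sys.databases AS db
        LEFT OUTER JOIN sys.dm_database_encryption_keys AS dek
                ON db.database_id = dek.database_id
WHERE   db.name = 'tempdb';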

Verification through Backup and Recovery

Another method of verifying the success of a TDE implementation is to perform a backup of the database, after TDE has been enabled, as shown in Listing 10. When doing so, make sure not to overwrite the backup file that was created prior to implementing TDE.

USE HomeLending;
GO

BACKUP DATABASE HomeLending
    TO DISK = 'D:\HomeLending\Backup\HomeLending_PostTDE.bak'
    WITH NOFORMAT,
    INIT,
    NAME = 'HomeLending-Full Database Backup',
    SKIP,
    NOREWIND,
    NOUNLOAD,
    STATS = 10
GO

Listing 10: Backing up the HomeLending database after TDE is implemented.

The next step is to compare the contents of the pre-TDE and post-TDE backup files, by opening both files up in a simple text editor such as Notepad, Wordpad or Textpad. We can perform a search within the pre-TDE backup file for the plain text of a known sensitive data value. For example, we will search for the value of "319726 Rocky Fabien Avenue" which is contained in the Borrower_Address table in the HomeLending database.

This search reveals the searched value in plain text, as shown in Figure 2. In addition, if you were to manually scan through the backup file, you would find that the metadata of our database objects, such as tables, views, stored procedures and user defined functions are revealed in plain text.

Figure 2: Backup File - Unencrypted.

The same search on our post-TDE backup file will result in the message box shown in Figure 3, stating that it cannot find the requested value. This is because the entire backup file, including the metadata of our database objects, has been encrypted and no longer contains any plain text values.

Figure 3: Search Results in Encrypted Backup File.

One final test in regard to the backup file is to attempt to restore the post-TDE backup file onto a different instance than the one in which the HomeLending database resides, using the RESTORE DATABASE command, as shown in Listing 11.

USE master;
GO

RESTORE DATABASE HomeLending
    FROM DISK = 'D:\HomeLending\Backup\HomeLending_PostTDE.bak'
    WITH FILE = 1,
    NOUNLOAD,
    REPLACE,
    STATS = 10
GO

Listing 11: Attempting to restore the HomeLending database.

This attempt will return an error message stating that the certificate at the Master database level, with which the HomeLending database encryption key is protected, does not exist; therefore the attempt will fail.

Msg 33111, Level 16, State 3, Line 2
Cannot find server certificate with thumbprint…

Msg 3013, Level 16, State 3, Line 2
RESTORE DATABASE is terminating abnormally

Using EXECUTE AS

Finally, we can perform a test to determine that the data that is contained within the encrypted HomeLending database files can be read by valid users of the database, as shown in Listing 12. We use EXECUTE AS USER to impersonate various users within the database and test their ability to access the encrypted data. The use of REVERT terminates the impersonation and returns us to our original user account.

USE HomeLending;
GO

-- execute as a user who is a member of Sensitive_high role
EXECUTE AS USER = 'WOLFBA';
GO
SELECT * FROM dbo.Borrower;
GO
REVERT;
GO

-- execute as a user who is a member of Sensitive_medium role
EXECUTE AS USER = 'KELLEYWB';
GO
SELECT * FROM dbo.Borrower;
GO
REVERT;
GO

-- execute as a user who is a member of Sensitive_low role
EXECUTE AS USER = 'JONESBF';
GO
SELECT * FROM dbo.Borrower;
GO
REVERT;
GO

Listing 12: Verifying TDE using EXECUTE AS queries.

Each query in the above script successfully returns the contents of the Borrower table within the HomeLending database. This demonstrates that the automatic decryption is functioning as expected, and verifies that permissions to the appropriate database objects are not affected.

Reversing the Implementation of TDE

It was once said that the only things certain in life are death and taxes. It could be argued that change is another certainty. You may find yourself in a situation where TDE has been implemented, you have validated that it works, are ready for users to begin using the newly-encrypted database and then, lo-and-behold, a request to reverse TDE comes your way.

Boss: "I would like you to proceed with implementing TDE immediately …"

DBA: "Cool, I'll get right on it."
(DBA Implements TDE)

Boss: "Hey, as I was saying yesterday: I would like you to proceed with implementing TDE next week after our presentation to the Technology Committee."

DBA: "No problem … ah …"
(Begin reversal process)

If, at the time this request comes your way, no transactions have been performed on the encrypted database then you can reverse the TDE implementation using the following steps:

  1. Restore the backup file of the HomeLending database that was created before TDE was implemented.
  2. Drop the certificate that was created in the Master database. This should only be done if there are no other user databases in the instance that have been TDE-enabled. If there are other user databases in the instance that have TDE enabled, you will want to leave the Master database items untouched.
  3. Drop the database master key that was created in the Master database. This should only be done if there are no other user databases in the instance that have TDE enabled. Otherwise, you will want to leave the Master database items untouched.
  4. Restart the instance in which the HomeLending database resides. If there are not any other user databases on the instance that have TDE implemented, this action will force the recreation of the TempDB database in an unencrypted format.

Listing 13 shows the code to implement these steps.

USE master;
GO

RESTORE DATABASE HomeLending
    FROM DISK = 'D:\HomeLending\Backup\HomeLending.bak'
    WITH FILE = 1,
    NOUNLOAD,
    REPLACE,
    STATS = 10;
GO

DROP CERTIFICATE MasterCert;
GO

DROP MASTER KEY;
GO

-- Restart Instance Through SQL Server Management Studio:
-- Right-Click instance and click on "Restart" option.

Listing 13: Reversing TDE when no transactions have occurred.

If the request to reverse the implementation of TDE comes after transactions have occurred, or a copy of the pre-TDE backup file is no longer available, the following steps can be performed:

  1. Alter the HomeLending database to have the ENCRYPTION option set to the value of OFF.
  2. Wait until the decryption process is complete. Use the sys.dm_database_encryption_keys DMV to determine its status. A value of "1" returned in the encryption_state column indicates that the decryption is complete.
  3. Drop the database encryption key for the HomeLending database.
  4. Restart the instance in which the HomeLending database resides. If there are not any other user databases on the instance that have TDE implemented, this action will force the recreation of the TempDB database in an unencrypted format.

Listing 14 shows the code to implement these steps.

USE HomeLending;
GO

ALTER DATABASE HomeLending
    SET ENCRYPTION OFF;
GO

-- Before proceeding, wait until the decryption process is complete.
-- Use the sys.dm_database_encryption_keys DMV to determine this.

DROP DATABASE ENCRYPTION KEY;
GO

-- Restart Instance Through SQL Server Management Studio:
-- Right-Click instance and click on "Restart" option.

Listing 14: Reversing TDE after transactions have occurred.

We will need to keep the certificate and database master key that were set up in the Master database, since there will remain some dependencies upon these keys that affect the recoverability of the database.

It is recommended to perform either of these removal options while the database is not in use by other users. In addition, it is always recommended that a backup be made of the database, the database master key and certificate prior to reversing TDE.

Summary

Through this demonstration we have successfully implemented TDE for the HomeLending database and validated that it is enabled. In addition, the steps to reverse TDE have been provided in the event that it is requested.

While TDE does not offer the granularity of protection that cell-level encryption offers, it does provide a level of protection that cannot be achieved through cell-level encryption. This feature provides a means to render unreadable the plain text that would otherwise be disclosed in the database backup files.

Thanks to the encryption key hierarchy that is external to the database, additional protection is offered by restricting restoration of the encrypted database to the instance from which the backup files were created.

Implementing TDE in conjunction with cell-level encryption provides a layered approach to data security, which enhances its effectiveness. Another option for protecting sensitive data that is available is one-way encryption, also referred to as hashing. One-way encryption can be applied alongside TDE, cell-level encryption and other obfuscation methods.



posted by LifeisSimple
2012. 2. 26. 08:08 Brain Trainning/DataBase
A stress-testing tool with a wide range of options.
The UI is simple and the option settings are flexible; it can even be used for HA testing, which is a real strength.

I used to use the Microsoft tool for this... time to switch over to this one.

http://www.sqlstress.com/

SQLStress is a tool to stress test a Microsoft SQL Server installation. Its main purpose is to find infrastructure problems within an installation. It can also be used for hardware sizing, system tuning or benchmarking. The focus is on simplicity. You should be up and running within minutes.

Prerequisite is an installed version of the .Net Framework 2.0. The tool will run on x86, x64 and Itanium. It is tested on Microsoft SQL Server 2008 and Microsoft SQL Server 2005, however most of the functionality will work on Microsoft SQL Server 2000 as well.

Features

  • Load testing: Statement mix can be defined to remodel actual workloads
  • Implements performance counters that include statement response times
  • Can combine statements from different users to a distributed transaction exercising MSDTC
  • Can use databases on multiple instances
  • Can be used for cluster failover tests
  • Can be used for mirroring failover tests
  • Very efficient, can simulate thousands of virtual users
  • Transaction scheduler can sustain a specified number of transactions per second
  • Can be used to run stress tests against an SMTP server

The current version of the tool is free, but I reserve the right to create a commercial version with additional functionality in the future.


posted by LifeisSimple
2012. 2. 25. 16:33 Brain Trainning/DataBase
If a database goes into SUSPECT mode, you can recover it with the following steps (SQL Server 2005).
(Especially handy when a fast recovery time is required.)

Source: http://www.MSSQLTips.com/tip.asp?id=1914

1. Reset the status (turn the suspect flag off on the database)
Exec sp_resetstatus 'DB_Name'

2. Switch to EMERGENCY mode to work on the problem (keeps other users out while you do)
Alter Database DB_Name SET EMERGENCY

3. Check consistency
DBCC CheckDB ('DB_Name')

4. If no serious consistency problems are found, switch to SINGLE_USER mode
Alter Database DB_Name Set Single_User with Rollback Immediate

5. Repair data errors (allowing some data loss)
DBCC CheckDB ('DB_Name', REPAIR_ALLOW_DATA_LOSS)

6. Switch back to MULTI_USER mode and resume normal service
Alter Database DB_Name Set Multi_User


That is how you recover the database.
 
posted by LifeisSimple
2012. 2. 18. 08:59 Brain Trainning/DataBase
When you have no choice but to use dynamic SQL, there are quite a few things that tend to get glossed over.
If you think through the points below, you can write dynamic queries that perform well and remain easy to maintain.

1. SQL injection
 - This is the classic concern when building dynamic queries. Use explicit parameter types, give char-type parameters only the length they actually need, and finally, filter and validate the values that end up in the variables. (A small sketch illustrating these points follows.)
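
A small illustration of those points (my own sketch; it mirrors the full example at the end of this post): pass values as typed parameters through sp_executesql, and if an object name really must be dynamic, wrap it with QUOTENAME.

DECLARE @tableName sysname       = N'mt_song_tmp',   -- known-good / validated object name
        @likeParam varchar(100)  = 'thing%',
        @strSQL    nvarchar(2000);

-- The object name is quoted with QUOTENAME; the search value stays a real parameter,
-- so it is never concatenated into the statement text.
SET @strSQL = N'select top (10) * from ' + QUOTENAME(@tableName)
            + N' where song_name like @likeParam';

EXEC sp_executesql @strSQL,
                   N'@likeParam varchar(100)',
                   @likeParam = @likeParam;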

2. LIKE clauses
 - LIKE predicates are often built by string concatenation:

 set @strSQL = ' select top 10 * from mt_song_tmp where song_name like ''' + @likeParam + '%'' '

   This can be rewritten as:
   set @strSQL = ' select top 10 * from mt_song_tmp where song_name like @likeParam ' <-- must be executed with sp_executesql

   The two queries differ only in the parameter value you pass: if the first version would normally receive 'thing', then with the rewritten statement you simply pass 'thing%' as the parameter value.

3. Debugging
 - Dynamic SQL is a bit of a pain to debug, but adding a single parameter that toggles debugging makes maintenance much easier; think of it as a #debug flag.
  create proc dbo.UP_GetProductInfo
    @likeParam varchar(100),
    @debug bit = 0
  as
  .......

 if @debug = 1
   print @likeParam -- or print the statement that would be executed
 else
   exec sp_executesql -- execute it for real

4. Maintainability
 - Every so often you need to change a table and track down every query that references it. Normally you would use sp_depends, but dynamically built queries don't show up there, and queries that assemble the table name dynamically are impossible to trace (sometimes even the DMV-based queries can't find them).
To prepare for that, dropping in a lightweight(?) reference like the one below pays off in future maintainability.

if 1 = 2
begin
  select top(0) 0 from mt_song_tmp;
end 


Those are a few points worth keeping in mind.

Dynamic SQL is as tedious as the reasons that force you to write it. With a little extra care you won't be producing queries of such "excellent" quality that they push the system into an outage, and future maintenance will be much easier.

Roughly, the full query looks like this:

create proc dbo.UP_GetSongINFO
           @likeParam varchar(100),
           @debug bit = 0
as

set nocount on

declare @strSQL nvarchar(2000)
declare @paramDefinition nvarchar(1000)

set @strSQL = 'select * from mt_song_tmp where song_nm like @likeParam'

set @paramDefinition = '@likeParam varchar(100)'

if @debug = 1
begin
           print @likeParam
end
else
           exec sp_executesql @strSQL, @paramDefinition, @likeParam

if 1 = 2
begin
           select top (0) 0 from mt_song_tmp with (nolock)
end

 

  
 
posted by LifeisSimple
2012. 2. 17. 17:55 Brain Trainning/DataBase
A roundup of articles on indexes...

Source: http://www.sqlservercentral.com/stairway/72399/

Stairway to SQL Server Indexes: Level 1, Introduction to Indexes

Indexes are the database objects that enable SQL Server to satisfy each data access request from a client application with the minimum amount of effort, resulting in the maximum performance of individual requests while also reducing the impact of one request upon another. Prerequisites: Familiarity with the following relational database concepts: Table, row, primary key, foreign key  Read more...
By David Durant 2011/03/16 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 2, Deeper into Nonclustered Indexes

By this stage, you should be familiar with the basics of SQL Server indexes. We've discussed what an Index actually is, as well as some of the most common types you're likely to encounter. Now that we've seen some simple examples of how Indexes can be useful, we're going to delve deeper into nonclustered indexes, as we'll see how they can improve the performance of more complex queries.  Read more...

Stairway to SQL Server Indexes: Level 3, Clustered Indexes

Now that we've seen the basics of indexing, and taken a deeper dive into Nonclustered Indexes, this Level will focus on searching the table, which will, in turn, will lead us to a discussion of clustered indexes.  Read more...
By David Durant 2011/06/22 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 4, Pages and Extents

We've now seen how indexed and non-indexed tables perform in queries, and established "logical reads" as the metric for query performance. Now it is time to explain why logical reads are an excellent metric and also explain what is actually being read.   Read more...

Stairway to SQL Server Indexes: Level 5, Included Columns

Included columns enable nonclustered indexes to become covering indexes for a variety of queries, improving the performance of those queries and with the only overhead being an increase the size of an index.  Read more...
By David Durant 2011/07/13 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 6, Bookmarks

A nonclustered index entry consists of search key columns, included columns, and the bookmark. The bookmark value will be either a RID or the clustered index's key, depending upon whether the table is a heap or a clustered index. Choosing the best clustered index for a table requires that you follow three guidelines to ensure that the index key will make a good bookmark.  Read more...
By David Durant 2011/08/03 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 7, Filtered Indexes

A filtered index eliminates unusable entries from the index, producing an index that is as beneficial as, but much smaller than, an unfiltered index. If a certain subset of a table's data is frequently requested, a filtered index can also be a covering index; resulting in a considerable reduction in IO.  Read more...
By David Durant 2011/08/24 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 8, Unique Indexes

Indexes that ensure data integrity in addition to locating rows.  Read more...
By David Durant 2011/09/14 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 9, Reading Query Plans

Determining how, and if, SQL Server is using your indexes.  Read more...
By David Durant 2011/10/05 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 10, Index Internal Structure

A detailed look the B-Tree structure – Leaf level and non-leaf level.  Read more...
By David Durant 2012/01/20 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 11, Index Fragmentation

Internal and external fragmentation – Causes, cures, and when not to care  Read more...
By David Durant 2012/02/01 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 12, Create Alter Drop

Options and impacts when creating, altering and dropping an index  Read more...
By David Durant 2012/02/17 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 13, Insert Update Delete

What happens to your indexes during normal database activity?  Read more...
By David Durant 2012/02/29 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 14, Index Statistics

A look at the information that helps SQL Server make better decisions.  Read more...
By David Durant 2012/03/16 | Source: SQLServerCentral.com | Category: stairway series

Stairway to SQL Server Indexes: Level 15, Indexing Best Practices

Dos and don’ts that lead to a healthy, well-performing database.  Read more...
By David Durant 2012/03/28 | Source: SQLServerCentral.com | Category: stairway series
posted by LifeisSimple