LifeisSimple
2017. 3. 20. 11:39 Brain Trainning/DataBase

Source: https://blogs.msdn.microsoft.com/psssql/2016/10/04/default-auto-statistics-update-threshold-change-for-sql-server-2016/

Default auto statistics update threshold change for SQL Server 2016



Lately, we had a customer who contacted us about a performance issue where their server performed much worse on SQL Server 2016 following an upgrade.  To show us an example, he even captured a video.  In the video, he showed that the session compiling the query had multiple threads waiting on LATCH_EX of ACCESS_METHODS_DATASET_PARENT.  This type of latch is used to synchronize dataset access among parallel threads, and in general it deals with large amounts of data.   Below is a screenshot from the video. Note that I didn't include all of the columns because I don't want to reveal the customer's database and user names.   This was very puzzling, because we should not see parallel threads during the true phases of compilation.

 

[screenshot]

 

 

After staring at it for a moment, we started to realize that this must have something to do with auto update statistics.  Fortunately, we had a copy of pssdiag captured that included trace data.  To prove that auto update statistics could have caused the issue, we needed to find evidence of long-running auto update stats events.  After importing the data, we were able to find that some auto update stats took more than 2 minutes.  These stats updates occurred on the queries the customer pointed out.  Below is an example of an auto update in a profiler trace extracted from the customer's data collection.

 

 

[screenshot]

 

 

Root cause & SQL Server 2016 change

This turned out to be the default auto stats threshold change in SQL 2016.

The KB article "Controlling Autostat (AUTO_UPDATE_STATISTICS) behavior in SQL Server" documents two thresholds.  I will call them the old threshold and the new threshold.

Old threshold: 20% of rows must change before auto update stats kicks in (there are some tweaks for small tables; for large tables, a 20% change is needed).  For a table with 100 million rows, 20 million rows must change for auto stats to kick in. For the vast majority of large tables, auto stats therefore does very little.

New threshold: starting with SQL Server 2008 R2 SP1, we introduced trace flag 2371 to control auto update statistics better (the new threshold).  Under trace flag 2371, the percentage of changes required is dramatically reduced for large tables.  In other words, trace flag 2371 can cause more frequent updates.  This new threshold is off by default and is enabled by the trace flag.  But in SQL Server 2016, the new threshold is enabled by default for a database with compatibility level 130.
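
For reference, on versions where the new threshold is not the default, it can be opted into with the documented trace flag; a minimal sketch:

DBCC TRACEON (2371, -1)   -- -1 applies the trace flag server-wide
-- or add -T2371 as a startup parameter so it survives restarts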

In short:

SQL Server 2014 or below: the default is the old threshold.  You can use trace flag 2371 to activate the new threshold.

SQL Server 2016: the default is the new threshold if the database compatibility level is 130.  If the database compatibility level is below 130, the old threshold is used (unless you use trace flag 2371).

The customer very frequently 'merges' data into some big tables, some of which had 300 million rows.  Because of the threshold change for large tables, the process now triggered much more frequent stats updates.

 

Solution

The solution is to enable asynchronous statistics update.  After the customer implemented this approach, their server performance went back to its old level.
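
Asynchronous statistics update is a database-level setting; a minimal sketch, assuming the affected database is named YourDatabase:

ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON
-- queries no longer wait for the stats update; it runs in the background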

 

[screenshot]

 

Demo of auto stats threshold change


--setup a table and insert 100 million rows
drop database testautostats
go
create database testautostats
go
use testautostats
go
create table t (c1 int)
go
set nocount on
declare @i int
set @i = 0
begin tran
while @i < 100000000
begin
    declare @rand int = rand() * 1000000000
    if (@i % 100000 = 0)
    begin
        while @@trancount > 0 commit tran
        begin tran
    end
    insert into t values (@rand)
    set @i = @i + 1
end
commit tran
go
create index ix on t (c1)
go

 

 

--run this query and query the stats properties
--note the last_updated column
select count (*) from t join sys.objects o on t.c1=o.object_id
go
select * from sys.stats st cross apply sys.dm_db_stats_properties (object_id, stats_id)
where st.object_id = object_id ('t')

[screenshot]

 

--delete 1 million rows
--run the same query and query the stats properties
--note that the last_updated column changed
delete top (1000000) from t
go
select count (*) from t join sys.objects o on t.c1=o.object_id
go
select * from sys.stats st cross apply sys.dm_db_stats_properties (object_id, stats_id)
where st.object_id = object_id ('t')

 

[screenshot]

 

--now switch the DB compatibility level to 120
--delete 1 million rows
--note that stats weren't updated (the last_updated column stays the same)
alter database testautostats SET COMPATIBILITY_LEVEL=120
go
delete top (1000000) from t
go
select * from sys.stats st cross apply sys.dm_db_stats_properties (object_id, stats_id)
where st.object_id = object_id ('t')

 

[screenshot]


See also: http://www.sqlservergeeks.com/sql-server-trace-flag-2371-to-control-auto-update-statistics-threshold-and-behavior-in-sql-server/


posted by LifeisSimple
2016. 7. 6. 16:17 Brain Trainning/DataBase

Understanding how SQL Server stores data in data files




Source: https://www.mssqltips.com/sqlservertip/4345/understanding-how-sql-server-stores-data-in-data-files/


Problem

Have you ever thought about how SQL Server stores data in its data files? As you know, data in tables is stored in row and column format at the logical level, but physically SQL Server stores data in data pages that are allocated from the data files of the database. In this tip I will show how pages are allocated to data files and what happens when there are multiple data files for a SQL Server database.

Solution

Every SQL Server database has at least two operating system files: a data file and a log file. Data files can be of two types: primary or secondary.  The primary data file contains startup information for the database and points to the other files in the database. User data and objects can be stored in this file, and every database has exactly one primary data file. Secondary data files are optional and can be used to spread data across multiple files/disks by putting each file on a different disk drive. SQL Server databases can have multiple data and log files, but only one primary data file. On top of these operating system files sit filegroups. Filegroups work as a logical container for the data files, and a filegroup can have multiple data files.

The disk space allocated to a data file is logically divided into pages, which are the fundamental unit of data storage in SQL Server. A database page is an 8 KB chunk of data. When you insert data into a SQL Server database, it saves the data to a series of 8 KB pages inside the data file. If multiple data files exist within a filegroup, SQL Server allocates pages across all of the data files in a round-robin fashion: if we insert data into a table, SQL Server allocates pages first from data file 1, then from data file 2, and so on, then back to data file 1 again. SQL Server achieves this with an algorithm known as proportional fill.

The proportional fill algorithm is used when allocating pages, so all data files fill up at around the same time. It determines the amount of data that should be written to each of the data files in a multi-file filegroup based on the proportion of free space within each file, which allows the files to become full at approximately the same time. For example, if one file has twice as much free space as another, it will receive roughly twice as many new pages.

Analyzing How SQL Server Data is Stored

Step 1: First we will create a database named "Manvendra" with three data files (1 primary and 2 secondary data files) and one log file by running the below T-SQL code. You can change the name of the database, the file paths, file names, sizes and file growth settings according to your needs.

CREATE DATABASE [Manvendra]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'Manvendra', FILENAME = N'C:\MSSQL\DATA\Manvendra.mdf',SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB ),
( NAME = N'Manvendra_1', FILENAME = N'C:\MSSQL\DATA\Manvendra_1.ndf',SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB ),
( NAME = N'Manvendra_2', FILENAME = N'C:\MSSQL\DATA\Manvendra_2.ndf' ,SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB )
 LOG ON
( NAME = N'Manvendra_log', FILENAME = N'C:\MSSQL\DATA\Manvendra_log.ldf',SIZE = 10MB , MAXSIZE = 1GB , FILEGROWTH = 10%)
GO

Step 2: Now we can check the available free space in each data file of this database to track the sequence of page allocations to the data files. There are multiple ways to check such information and below is one option. Run the below command to check free space in each data file.

USE Manvendra
GO
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

You can see the data file names, file IDs, physical name, total size and available free space in each of the database files.

data file free spaces post SQL Server database creation

We can also check how many extents are allocated for this database. We will run the below DBCC command to get this information. Although this is an undocumented DBCC command, it returns very useful information.

USE Manvendra
GO
DBCC showfilestats

With this command we can see the number of extents for each data file. As you may know, each data page is 8 KB and eight contiguous pages equal one extent, so the size of an extent is 64 KB. We created each data file with a size of 5 MB, so the total number of available extents per file is 80, which is shown in the TotalExtents column; we can get this by (5*1024)/64.
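
To sanity-check the arithmetic:

SELECT (5 * 1024) / 64 AS TotalExtents   -- 5 MB file / 64 KB per extent = 80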

UsedExtents is the number of extents allocated with data. As I mentioned above, the primary data file includes system information about the database, which is why this file has a higher number of UsedExtents.

used extents post SQL Server database creation

Step 3: The next step is to create a table in which we will insert data. Run the below command to create a table. Once the table is created, we will run both commands from step 2 again to get the details of free space and used/allocated extents.

USE Manvendra;
GO
CREATE TABLE [Test_Data] (
    [Sr.No] INT IDENTITY,
    [Date] DATETIME DEFAULT GETDATE (),
    [City] CHAR (25) DEFAULT 'Bangalore',
 [Name] CHAR (25) DEFAULT 'Manvendra Deo Singh');

Step 4: Check the allocated pages and the free space available in each data file by running the same commands from step 2.

USE Manvendra
Go
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

You can see that there is almost no difference between this screenshot and the one above, except for a small difference in the FreeSpace of the transaction log file.

SQL Server database file space post table creation

Now run the below DBCC command to check the allocated extents for each data file.

DBCC showfilestats

You can see the allocated extents for each data file have not changed.

used extents post SQL Server table creation

Step 5: Now we will insert some data into this table to fill each of the data files. Run the below command to insert 10,000 rows into table Test_Data.

USE Manvendra
go
INSERT INTO Test_DATA DEFAULT VALUES;
GO 10000

Step 6: Once the data is inserted, we will check the available free space and the total allocated extents for each data file.

USE Manvendra
Go
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

You can see the difference between the screenshot below and the one above. Free space in each data file has been reduced, and the same amount of space has been allocated from both of the secondary data files, because both files have the same amount of free space and proportional fill works based on the free space within each file.

SQL Server Database File space post data insert

Now run the below DBCC command to check the allocated extents for each data file.

DBCC showfilestats

You can see a few more extents have been allocated for each data file. Now the primary data file has 41 extents and the secondary data files have a total of 10 extents, so the data saved so far totals 51 extents. Both secondary data files have the same number of extents allocated, which demonstrates the proportional fill algorithm at work.

used extents post SQL Server data insert

Step 7: We can also see which data files hold the data of table "Test_Data" by running the below DBCC command. This will show us that the data is stored in all of the data files.

DBCC IND ('Manvendra', 'Test_data', -1);

I attached two screenshots because there were too many rows to show all of the data file IDs in one image. File IDs are shown in each screenshot, so we can see each data page and its respective file ID. From this we can say that table Test_Data is spread across all three data files, as shown in the following screenshots.

SQL Server data table saved on which data files



Data table saved on particular SQL Server data files
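
As a side note, on SQL Server 2012 and later the same page-to-file mapping can be pulled with the (also undocumented) sys.dm_db_database_page_allocations function instead of DBCC IND; a minimal sketch:

SELECT allocated_page_file_id, allocated_page_page_id, page_type_desc
FROM sys.dm_db_database_page_allocations(DB_ID('Manvendra'), OBJECT_ID('Test_Data'), NULL, NULL, 'DETAILED')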

Step 8: We will repeat the same exercise to check the space allocation for each data file. Insert an additional 10,000 rows into the same table Test_Data to check and validate the page allocation for each data file. Run the same command from step 5 to insert 10,000 more rows, then check the free space and allocated extents for each data file.

USE Manvendra
GO
INSERT INTO Test_DATA DEFAULT VALUES;
GO 10000
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

We can see that again both secondary data files have the same amount of free space, and a similar amount of space has been allocated from the primary data file as well. This shows that SQL Server uses the proportional fill algorithm to fill data into the data files.

SQL Server database file space post data insert

We can get the extent information again for the data files.

DBCC showfilestats

Again we can see an increase in the UsedExtents for all three of the data files.

used extents post SQL Server data insert
Next Steps
  • Create a test database and follow these steps, so you can better understand how SQL Server stores data at a physical and logical level. 
  • Explore more knowledge with SQL Server Database Administration Tips


posted by LifeisSimple
2016. 6. 6. 22:11 Brain Trainning/DataBase

50 Important Queries in SQL Server


출처 : http://www.c-sharpcorner.com/article/50-important-queries-in-sql-server/


In this article I will explain some general purpose queries. I think every developer should know these queries. They are not related to any specific topic of SQL, but knowing them can solve some complex tasks and they can be used in many scenarios, so I decided to write an article on them.

Query 1: Retrieve List of All Databases

EXEC sp_helpdb

Query 2: Display Text of Stored Procedure, Trigger, View 

exec sp_helptext @objname = 'Object_Name'

Query 3: Get All Stored Procedures Related to a Database

SELECT DISTINCT o.name, o.xtype
FROM syscomments c
INNER JOIN sysobjects o ON c.id = o.id
WHERE o.xtype = 'P'

To retrieve views use 'V' instead of 'P', and for functions use 'FN'.

Query 4: Get All Stored Procedures Related to a Table

SELECT DISTINCT o.name, o.xtype
FROM syscomments c
INNER JOIN sysobjects o ON c.id = o.id
WHERE c.TEXT LIKE '%Table_Name%' AND o.xtype = 'P'

To retrieve views use 'V' instead of 'P', and for functions use 'FN'.

Query 5: Rebuild All Indexes of a Database

EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO

Query 6: Retrieve All Dependencies of a Stored Procedure

This query returns the names of all objects used by a stored procedure, such as tables, user-defined functions, and other stored procedures.

Query:

;WITH stored_procedures AS (
SELECT
oo.name AS table_name,
ROW_NUMBER() OVER (PARTITION BY o.name, oo.name ORDER BY o.name, oo.name) AS row
FROM sysdepends d
INNER JOIN sysobjects o ON o.id = d.id
INNER JOIN sysobjects oo ON oo.id = d.depid
WHERE o.xtype = 'P' AND o.name LIKE '%SP_Name%' )
SELECT table_name FROM stored_procedures
WHERE row = 1

Query 7: Find Byte Size of All Tables in a Database

SELECT sob.name AS Table_Name,
SUM(sys.length) AS [Size_Table(Bytes)]
FROM sysobjects sob, syscolumns sys
WHERE sob.xtype = 'u' AND sys.id = sob.id
GROUP BY sob.name

Query 8: Get All Tables That Don't Have an Identity Column

Query:

SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME NOT IN
(
SELECT DISTINCT c.TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS c
INNER JOIN sys.identity_columns ic ON (c.COLUMN_NAME = ic.NAME)
)
AND TABLE_TYPE = 'BASE TABLE'

Query 9: List of Primary Keys and Foreign Keys for the Whole Database

SELECT DISTINCT
Constraint_Name AS [Constraint],
Table_Schema AS [Schema],
Table_Name AS [TableName]
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
GO

Query 10: List of Primary Keys and Foreign Keys for a Particular Table

SELECT DISTINCT
Constraint_Name AS [Constraint],
Table_Schema AS [Schema],
Table_Name AS [TableName]
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE TABLE_NAME = 'Table_Name'
GO

Query 11: RESEED the Identity of All Tables

EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
DBCC CHECKIDENT (''?'', RESEED, 0)'

Query 12: List of Tables with Number of Records

CREATE TABLE #Tab
(
Table_Name [varchar](max),
Total_Records int
);
EXEC sp_MSForEachTable @command1 = 'Insert Into #Tab(Table_Name, Total_Records) SELECT ''?'', COUNT(*) FROM ?'
SELECT * FROM #Tab t ORDER BY t.Total_Records DESC;
DROP TABLE #Tab;

Query 13: Get the Version of SQL Server

SELECT @@VERSION AS Version_Name

Query 14: Get the Current Language of SQL Server

SELECT @@LANGUAGE AS Current_Language;

Query 15: Disable All Constraints of a Table

ALTER TABLE Table_Name NOCHECK CONSTRAINT ALL

Query 16: Disable All Constraints of All Tables

EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

Query 17: Get the Current Language ID

SELECT @@LANGID AS 'Language ID'

Query 18: Get the Precision Level Used by decimal and numeric as Currently Set on the Server

SELECT @@MAX_PRECISION AS 'MAX_PRECISION'

Query 19: Return the Server Name of SQL Server

SELECT @@SERVERNAME AS 'Server_Name'

Query 20: Get the Name of the Registry Key Under Which SQL Server Is Running

SELECT @@SERVICENAME AS 'Service_Name'

Query 21: Get the Session ID of the Current User Process

SELECT @@SPID AS 'Session_Id'

Query 22: Get the Current Value of the TEXTSIZE Option

SELECT @@TEXTSIZE AS 'Text_Size'

Query 23: Retrieve the Free Space of the Hard Disks

EXEC master..xp_fixeddrives

Query 24: Disable a Particular Trigger

Syntax:

ALTER TABLE Table_Name DISABLE TRIGGER Trigger_Name

Example:

ALTER TABLE Employee DISABLE TRIGGER TR_Insert_Salary

Query 25: Enable a Particular Trigger

Syntax:

ALTER TABLE Table_Name ENABLE TRIGGER Trigger_Name

Example:

ALTER TABLE Employee ENABLE TRIGGER TR_Insert_Salary

Query 26: Disable All Triggers of a Table

We can disable and enable all triggers of a table using the previous query, specifying ALL in place of the trigger name.

Syntax:

ALTER TABLE Table_Name DISABLE TRIGGER ALL

Example:

ALTER TABLE Demo DISABLE TRIGGER ALL

Query 27: Enable All Triggers of a Table

ALTER TABLE Table_Name ENABLE TRIGGER ALL

Example:

ALTER TABLE Demo ENABLE TRIGGER ALL

Query 28: Disable All Triggers for a Database

Using the sp_MSforeachtable system stored procedure we can enable and disable all triggers for a database.

Syntax:

Use Database_Name
Exec sp_msforeachtable "ALTER TABLE ? DISABLE TRIGGER all"

Query 29: Enable All Triggers for a Database

Use Demo
Exec sp_msforeachtable "ALTER TABLE ? ENABLE TRIGGER all"

Query 30: List of Stored Procedures Modified in the Last N Days

SELECT name, modify_date
FROM sys.objects
WHERE type = 'P'
AND DATEDIFF(D, modify_date, GETDATE()) < N

Query 31: List of Stored Procedures Created in the Last N Days

SELECT name, sys.objects.create_date
FROM sys.objects
WHERE type = 'P'
AND DATEDIFF(D, sys.objects.create_date, GETDATE()) < N

Query 32: Recompile a Stored Procedure

EXEC sp_recompile N'Procedure_Name';
GO

Query 33: Recompile All Stored Procedures on a Table

EXEC sp_recompile N'Table_Name';
GO

Query 34: Get All Columns of a Specific Data Type

Query:

SELECT OBJECT_NAME(c.OBJECT_ID) as Table_Name, c.name as Column_Name
FROM sys.columns AS c
JOIN sys.types AS t ON c.user_type_id = t.user_type_id
WHERE t.name = 'Data_Type'

Query 35: Get All Nullable Columns of a Table

SELECT OBJECT_NAME(c.OBJECT_ID) as Table_Name, c.name as Column_Name
FROM sys.columns AS c
JOIN sys.types AS t ON c.user_type_id = t.user_type_id
WHERE c.is_nullable = 1 AND OBJECT_NAME(c.OBJECT_ID) = 'Table_Name'

Query 36: Get All Tables That Don't Have a Primary Key

SELECT name AS Table_Name
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID, 'TableHasPrimaryKey') = 0
ORDER BY Table_Name;

Query 37: Get All Tables That Don't Have a Foreign Key

SELECT name AS Table_Name
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID, 'TableHasForeignKey') = 0
ORDER BY Table_Name;

Query 38: Get All Tables That Don't Have an Identity Column

SELECT name AS Table_Name
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID, 'TableHasIdentity') = 0
ORDER BY Table_Name;

Query 39: Get the First Date of the Current Month

SELECT CONVERT(VARCHAR(25), DATEADD(DAY, -(DAY(GETDATE())) + 1, GETDATE()), 105) First_Date_Current_Month;

Query 40: Get the Last Date of the Previous Month

SELECT CONVERT(VARCHAR(25), DATEADD(DAY, -(DAY(GETDATE())), GETDATE()), 105) Last_Date_Previous_Month;

Query 41: Get the Last Date of the Current Month

SELECT CONVERT(VARCHAR(25), DATEADD(DAY, -(DAY(GETDATE())), DATEADD(MONTH, 1, GETDATE())), 105) Last_Date_Current_Month;

Query 42: Get the First Date of the Next Month

SELECT CONVERT(VARCHAR(25), DATEADD(DAY, -(DAY(GETDATE())), DATEADD(MONTH, 1, GETDATE()) + 1), 105) First_Date_Next_Month;

Query 43: Swap the Values of Two Columns

UPDATE Table_Name SET Column1 = Column2, Column2 = Column1

(This works in SQL Server because the UPDATE reads the pre-update values of both columns.)

Query 44: Remove All Stored Procedures from a Database

Declare @Drop_SP Nvarchar(MAX)
Declare My_Cursor Cursor For Select [name] From sys.objects where type = 'p'
Open My_Cursor
Fetch Next From My_Cursor Into @Drop_SP
While @@FETCH_STATUS = 0
Begin
Exec('DROP PROCEDURE ' + @Drop_SP)
Fetch Next From My_Cursor Into @Drop_SP
End
Close My_Cursor
Deallocate My_Cursor

Query 45: Remove All Views from a Database

Declare @Drop_View Nvarchar(MAX)
Declare My_Cursor Cursor For Select [name] From sys.objects where type = 'v'
Open My_Cursor
Fetch Next From My_Cursor Into @Drop_View
While @@FETCH_STATUS = 0
Begin
Exec('DROP VIEW ' + @Drop_View)
Fetch Next From My_Cursor Into @Drop_View
End
Close My_Cursor
Deallocate My_Cursor

Query 46: Drop All Tables

EXEC sys.sp_MSforeachtable @command1 = 'Drop Table ?'

Query 47: Get Information About a Table's Columns

SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Table_Name'

Query 48: Get All Columns That Have Any Constraints

SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE

Query 49: Get All Tables That Are Used in a View

SELECT * FROM INFORMATION_SCHEMA.VIEW_TABLE_USAGE

Query 50: Get All Table Columns That Are Used in Views

SELECT * FROM INFORMATION_SCHEMA.VIEW_COLUMN_USAGE



posted by LifeisSimple
2013. 2. 20. 14:09 Brain Trainning/DataBase

A simple procedure for restoring an entire DB server

  • Install MSSQL in the same location as on the original server (same drives and folders).
    - The master DB holds each database's original file information, so when the service comes up after the restore, it will look for the databases in their original locations.
    - The newly installed instance must be the same version as the original server, or master cannot be restored.
  • Start the SQL Server service in single-user mode (the -m option).
  • Restore the master DB with the RESTORE command; a sketch of this step follows the list. (If the MDF and LDF files are already on the same drives and in the same locations, the databases can be brought into service right after master is restored.)
  • Start SQL Server normally.
  • Restore msdb.
  • If the user DB files are in the same locations as on the original server, no further work is needed.
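
For reference, a minimal sketch of the single-user start and master restore (the service name and backup path are hypothetical):

REM start the default instance in single-user mode
net start MSSQLSERVER /m

-- then, from sqlcmd, restore master; the instance shuts down automatically afterwards
RESTORE DATABASE master FROM DISK = N'D:\Backup\master.bak' WITH REPLACE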


If you see an error like this after restoring msdb...

Service Broker is disabled in MSDB or MSDB failed to start

If you run into this error, refer to the content below (the contents of the link are attached in the collapsed section).

Source: http://www.lifeasbob.com/code/kb_articles.aspx?article_Id=60&CatId=39


posted by LifeisSimple
2013. 2. 5. 11:28 Brain Trainning/DataBase

sysfiles information in SQL Server 2000


Contains one row for each file in a database. This system table is a virtual table; it cannot be updated or modified directly.

Column name | Data type | Description
fileid | smallint | File identification number, unique for each database.
groupid | smallint | Filegroup identification number.
size | int | Size of the file (in 8-KB pages).
maxsize | int | Maximum file size (in 8-KB pages). A value of 0 indicates no growth, and a value of -1 indicates that the file should grow until the disk is full.
growth | int | Growth increment of the file. A value of 0 indicates no growth. Can be either a number of pages or a percentage of file size, depending on the value of status. If status contains 0x100000, growth is a percentage of file size; otherwise, it is a number of pages.
status | int | Status bits for the growth value in either megabytes (MB) or kilobytes (KB):
0x1 = Default device.
0x2 = Disk file.
0x40 = Log device.
0x80 = File has been written to since last backup.
0x4000 = Device created implicitly by the CREATE DATABASE statement.
0x8000 = Device created during database creation.
0x100000 = Growth is in percentage, not pages.
perf | int | Reserved.
name | nchar(128) | Logical name of the file.
filename | nchar(260) | Name of the physical device, including the full path of the file.
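
For example, to list the file rows of the current database on SQL Server 2000:

USE Northwind  -- any database; sysfiles is scoped to the current database
SELECT fileid, groupid, size, maxsize, growth, status, name, filename
FROM sysfiles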

posted by LifeisSimple
2013. 1. 22. 17:32 Brain Trainning/DataBase


General SQL Server Performance Tuning Tips

BY BRAD MCGEHEE

When your transaction log grows large and you want a quick way to shrink it, try this option. Change the recovery model of the database you want to shrink from "full" to "simple," truncate the log file by performing a full backup of the database, then switch back to the "full" recovery model. By temporarily changing from the Full recovery model to the Simple recovery model and back, SQL Server will keep only the "active" portion of the log, which is very small.  Contributed by Tom Kitta.
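
A minimal sketch of the sequence, assuming a database named MyDB with a log file named MyDB_log (both hypothetical):

ALTER DATABASE MyDB SET RECOVERY SIMPLE
DBCC SHRINKFILE (MyDB_log, 10)    -- shrink the log file to about 10 MB
ALTER DATABASE MyDB SET RECOVERY FULL
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB.bak'   -- start a fresh log chain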

*****

If you need to delete all the rows in a table, don't use DELETE, as the DELETE statement is a fully logged operation and can take a significant amount of time, especially if the table is large. To perform the same task much faster, use TRUNCATE TABLE instead, which logs only the page deallocations. Besides deleting all of the records in a table, this command will also reset the seed of any IDENTITY column back to its original value.

After you have run the TRUNCATE TABLE statement, it is important to manually update the statistics on the table using UPDATE STATISTICS. This is because TRUNCATE TABLE does not reset the statistics for the table, which means that as you add data to the table, the statistics will be incorrect for a period of time. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, the statistics will eventually catch up. But this may not happen quickly, resulting in slow queries because the Query Optimizer is using out-of-date statistics.
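
Putting the two statements together (the table name is hypothetical):

TRUNCATE TABLE dbo.OrderStaging
UPDATE STATISTICS dbo.OrderStaging   -- refresh statistics right away instead of waiting for auto update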

*****

If you use TRUNCATE TABLE instead of DELETE to remove all of the rows of a table, note that TRUNCATE TABLE will not work when there are foreign key references to that table. A workaround is to DROP the constraints before running the TRUNCATE. Here's a generic script that will drop all existing foreign key constraints on a specific table:

CREATE TABLE dropping_constraints
(
cmd VARCHAR(8000)
)

INSERT INTO dropping_constraints
SELECT
'ALTER TABLE [' +
t2.Table_Name +
'] DROP CONSTRAINT ' +
t1.Constraint_Name
FROM
INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS t1
INNER JOIN
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE t2
ON
t1.CONSTRAINT_NAME = t2.CONSTRAINT_NAME
WHERE t2.TABLE_NAME = 'your_tablename_goes_here'

DECLARE @stmt VARCHAR(8000)
DECLARE @rowcnt INT
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
WHILE @rowcnt <> 0
BEGIN
EXEC (@stmt)
SET @stmt = 'DELETE FROM dropping_constraints WHERE cmd = ' + QUOTENAME(@stmt, '''')
EXEC (@stmt)
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
END
DROP TABLE dropping_constraints

The above code can also be extended to drop all FK constraints in the current database. To achieve this, just comment out the WHERE clause.

*****

Don't run a screensaver on your production SQL Servers; it can unnecessarily use CPU cycles that should be going to your application. The only exception to this is the "blank screen" screensaver, which is OK to use.

*****

Don’t run SQL Server on the same physical server that you are running Terminal Services, or Citrix software. Both Terminal Services and Citrix are huge resource hogs, and will significantly affect the performance of SQL Server. Running the administrative version of Terminal Services on a SQL Server physical server, on the other hand, is OK, and a good idea from a convenience point of view. As is mentioned in other parts of this website, ideally, SQL Server should run on a dedicated physical server. But if you have to share a SQL Server with another application, make sure it is not Terminal Services or Citrix.  

*****

Use sp_who or sp_who2 (sp_who2 is not documented in the SQL Server Books Online, but offers more details than sp_who) to provide locking and performance-related information about current connections to SQL Server. Sometimes, when SQL Server is very busy, you can’t use Enterprise Manager or Management Studio to view current connection activity via the GUI, but you can always use these two commands from Query Analyzer or Management Studio, even when SQL Server is very busy.  

*****

SQL Server uses its own internal thread scheduler (called the UMS) when running in either native thread mode or in fiber mode. By examining the UMS's Scheduler Queue Length, you can help determine whether the CPU or CPUs on the server are presenting a bottleneck.

This is similar to checking the System Object: Processor Queue Length in Performance Monitor. If you are not familiar with this counter, what this counter tells you is how many threads are waiting to be executed on the server. Generally, if there are more than two threads waiting to be executed on a server, then that server can be assumed to have a CPU bottleneck.

The advantage of using the UMS's Scheduler Queue Length over the System Object: Processor Queue Length is that it focuses strictly on SQL Server threads, not all of the threads running on the server.

To view what is going on inside the UMS, you can run the following undocumented command:

DBCC SQLPERF(UMSSTATS)

For every CPU in your server, you will get one Scheduler. Each Scheduler is identified with a number, starting with 0. So if you have four CPUs in your server, there will be four Schedulers listed after running the above command, with IDs 0 through 3.

The "num users" value tells you the number of SQL Server threads there are for a specific scheduler.

The "num runnable" value, better known as the "Scheduler Queue Length," is the key indicator to watch. Generally, this number will be 0, which indicates that there are no SQL Server threads waiting to run. If this number is 2 or more, it indicates a possible CPU bottleneck on the server. Keep in mind that the values presented by this command are point-in-time data, which means that they are only accurate for the split second when they were captured and are always changing. But if you run this command when the server is very busy, the results should be indicative of what is going on at that time. You may want to run this command multiple times to see what is going on over time.

The “num workers” refers to the actual number of worker threads there are in the thread pool.

The “idle workers” refers to the number of idle worker threads.

The “cntxt switches” refers to the number of context switches between runnable threads.

The “cntxt switches(idle)” refers to the number of context switches to “idle” threads.

As you can see, this command is for advanced users, and is just one of many tools that can be used to see internally how SQL Server is performing.  

Continues…

SQL Server offers a system table-valued function that provides statistical information on the I/O activity of specific database files: fn_virtualfilestats. What is interesting about this function is that it is specific to a database file. For example, if you want to see the I/O activity of a particular user database, you can. Keep in mind that all SQL Server databases have at least two physical files (for the data and the log), and can have many more, depending on how the database was created. When using this function, you have to specify not only the database, but also the file within the database. These are very specific I/O statistics indeed. To run this function, use the syntax found below:

SELECT * FROM :: fn_virtualfilestats(dbid, fileid)

Where:

dbid: The database ID of the database you want to examine.

fileid: The file ID of the physical files that make up your database. At a minimum, each database has at least two files: the database file (MDF file) and a log file (LDF). A database can have many files, and the file number refers to the number of the physical file that you want to examine.

To identify the dbid and fileid you want to use in the above statement, run the following SELECT statement. It will provide you with a list of all of the database names, database ids, and file ids (for each database) on your server.

SELECT sysdatabases.name AS Database_Name, 
     sysdatabases.dbid AS Database_ID, 
     sysaltfiles.fileid AS File_ID 
FROM sysaltfiles INNER JOIN sysdatabases 
     ON sysaltfiles.dbid = sysdatabases.dbid 
ORDER BY sysdatabases.name, sysaltfiles.fileid

Once you run this function, a wide variety of information is available, including:

NumberReads: The number of physical reads made against this file since the last time SQL Server was restarted.

NumberWrites: The number of physical writes made against this file since the last time SQL Server was restarted.

BytesRead: The number of bytes read from this file since the last time SQL Server was restarted.

BytesWritten: The number of bytes written to this file since the last time SQL Server was restarted.

IoStallMS: The total amount of time that users have waited for I/Os to complete for this file (in milliseconds).

The first four statistics can give you a feel for how busy a particular file is. This can come in handy when comparing multiple filegroups in a database and to see how balanced the I/O is to each file. To make the most of filegroups, I/O should be spread out among the various files for best overall performance. The last statistic, IoStallMS, is best used to find out if you have a bottleneck in your transaction log, as demonstrated below:

SELECT IoStallMS / (NumberReads+NumberWrites) as IsStall 
FROM :: fn_virtualfilestats(dbid, fileid)

Where:

dbid: The database ID of the database you want to examine.

fileid: The file ID of the transaction log of the database being examined.

Essentially, if IsStall is > 20 ms, this indicates that the I/O to the transaction log is becoming a bottleneck, which in turn can lead to major concurrency problems in the database.

To help optimize the transaction log, assuming it has become a bottleneck, consider doing the following:

• Place the transaction log on a faster disk array. 
• Place the transaction log on a dedicated disk array (no other files other than the transaction log). This allows sequential writes to occur as fast as possible, helping to boost I/O performance. 
• Turn on write-back caching on your disk controller, but only if it is backed up with a battery and has been tested to work with SQL Server. Normally, write-back caching is turned off because it can cause database corruption should the server crash. 
 

*****

Sometimes, a user thread has to wait until the resources it needs are available. Hopefully, this won’t happen often, but it is a fact of life. But sometimes, long waits can indicate potential performance problems that can be corrected, if you know where to look. Long waits can be caused by blocking locks, slow I/O, and other factors.

Fortunately, you can see the amount of time a user thread has been waiting, which can tell you which user thread, if any, is taking more time than it should. For example, the query below can be run to identify any user threads that have been waiting for more than one second:

SELECT spid, waittime, lastwaittype 
FROM master..sysprocesses 
WHERE waittime > 1000

When you run the above query, all of the processes that have been waiting for more than 1 second (1000 milliseconds), both system and user, will be displayed. You will want to ignore system processes, focusing your efforts on the spids that represent specific user threads. The spid identifies the session (you will have to match the spid to the appropriate user), waittime is the number of milliseconds that the user thread has been waiting, and lastwaittype gives you a clue as to what it is waiting on.

In the above query, I have used 1000 milliseconds as a cutoff point, but you can use any amount of time you want. Generally speaking, any user thread that has been waiting for five seconds or more should definitely be evaluated for potential performance issues. Of course, you can choose any threshold you want.  

Continues…

By default, you cannot use a UNC (Universal Naming Convention) name to specify the location of a SQL Server database or log file. Instead, you must specify a drive letter that refers to a local physical drive or array. But what if you want to store your database or log file on another Windows server or a Network Appliance, Inc. storage system? You can, but you will have to set trace flag 1807 on your SQL Server to allow the use of UNC names.
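
A sketch; the flag is usually set server-wide so file creation anywhere on the instance is affected:

DBCC TRACEON (1807, -1)   -- allow UNC paths in CREATE/ALTER DATABASE (test carefully; network storage adds risk)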

*****

For a quick and dirty way to check to see if your SQL Server has maxed out its memory (and is causing your server to page), try this. Bring up the Task Manager and go to the “Performance” tab.

Here, check out two numbers: the “Total” under “Commit Charge (k)” and the “Total” under “Physical Memory (k)”. If the “Total” under “Commit Charge (k)” is greater than the “Total” under “Physical Memory (k)”, then your server does not have enough physical memory to run efficiently as it is currently configured and is most likely causing your server to page unnecessarily. Excess paging will slow down your server’s performance.

Another number to make note of is the "Available Physical Memory (K)" value. This number should be 4 MB or higher. If it is not, then your SQL Server is most likely suffering from a lack of physical RAM, hurting performance, and more RAM needs to be added.

If you notice this problem, you will probably want to use System Monitor to further investigate the cause of this problem. You will also want to check to see how much physical memory has been allocated to SQL Server. Most likely, this setting has been set incorrectly, and SQL Server has been set to use too much physical memory. Ideally, SQL Server should be set to allocate physical RAM dynamically.  

*****

When performance tuning a SQL Server, it is often handy to know if the disk I/O of your servers (and the databases on them) is mostly reads or mostly writes. This information can be used to calculate the ratio of writes to reads of your server, and this ratio can affect how you might want to tune your SQL Server. For example, if you find that your server is heavy on the writes, then you will want to avoid RAID 5 if you can, and use RAID 10 instead. This is because RAID 5 is much less efficient than RAID 10 at writes. But if your server has a much greater number of reads than writes, then perhaps a RAID 5 system is more than adequate.

One of the quickest ways to find out the ratio of reads to writes on your SQL Servers is to run Task Manager and look at the sqlservr.exe process (this is the mssqlserver or sqlserv service) and view the total number of I/O Read Bytes and I/O Write Bytes. If you don’t see this in Task Manager, go to View|Select Column, and add these two columns to Task Manager.

The results you see tell you how many bytes of data have been written and read from the SQL Server service since it was last restarted. Because of this, you don’t want to read this figure immediately after starting the SQL Server service, but after several days of typical use.

In one particular case I looked at, the SQL Server had 415,006,801,908 I/O bytes read and 204,669,746,458 bytes written. This server had about one write for every two reads. In this case, RAID 5 is probably a good compromise in performance, assuming that RAID 10 is not available from a budget perspective. But if the reverse were true, and there were two writes for every one read, then RAID 10 would be needed for best overall performance of SQL Server.  

*****

Internet Information Server (IIS) has the ability to send its log files directly to SQL Server for storage. Busy IIS servers can actually get bogged down trying to write log information directly to SQL Server. Because of this, it is generally not recommended to write web logging information to SQL Server. Instead, logs should be written to text files, and later imported into SQL Server using BCP, DTS, or SSIS.  

*****

SQL Server has a database compatibility mode that allows applications written for previous versions of SQL Server to run under the current version. If you want maximum performance for your database, you don't want to run it in compatibility mode (not all new performance-related features are supported there).

Instead, your databases should be running at the current native SQL Server level. Of course, this may require you to modify your application to make it compliant with the version of SQL Server you are running, but in most cases the additional work required to update your application will be more than paid for with improved performance.
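
On newer versions, the level can be checked and raised with T-SQL; a sketch with a hypothetical database name:

SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDB'
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 130   -- use your server's native level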

*****

When experimenting with the tuning of your SQL Server, you may want to run the DBCC DROPCLEANBUFFERS command to remove all of the test data from SQL Server's data cache (buffer) between tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you should first run the CHECKPOINT command, which writes all dirty buffers to disk. Then, when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones.

If you want to clear out the stored procedure cache, use DBCC FREEPROCCACHE. If you only want to clear the stored procedure cache for a single database (not the entire server), use DBCC FLUSHPROCINDB. All of these commands are for testing purposes and should not be run on a production SQL Server.
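
The typical test-server sequence looks like this:

CHECKPOINT                -- write dirty buffers to disk first
DBCC DROPCLEANBUFFERS     -- then empty the data cache
DBCC FREEPROCCACHE        -- optionally clear the plan cache as well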

*****

Orphan SQL Server sessions can negatively affect SQL Server’s performance. An orphan SQL Server session can occur when a client improperly disconnects from SQL Server, such as when the client loses power. When this happens, the client cannot tell SQL Server to properly close the connection, so the SQL Server connection remains open, even though it is not being used.

This can affect SQL Server’s performance two ways. First, orphaned connections use up SQL Server connections, which takes up server resources. Secondly, it is possible that the orphan connections may be holding locks that block other users; or temp tables or cursors may be held open that also take up unnecessary server resources.

The OS periodically checks for inactive SQL Server sessions, and if it finds any, it will notify SQL Server so that the connection can be removed. Unfortunately, this only occurs every 1-2 hours, depending on the protocol used. If orphaned SQL Server sessions become a problem, Windows Server’s registry can be modified so that it checks more often for orphaned connections.

Identifying an orphaned connection from SQL Server is very difficult, but if you can identify one, you can remove it by KILLing the connection from SSMS or by running the KILL statement.
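
The statement itself is trivial once you have the spid (53 here is a hypothetical example):

KILL 53   -- forcibly closes the orphaned connection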

Continues…

For best performance, don’t mix production databases and development (test or staging) databases on the same physical server. This not only serves to better separate the two functions (production and development), but prevents developers from using up server resources that could be better used by production users.  

*****

When we think of performance, we usually think about speeding up our application’s performance. But another way to look at performance is to look at our performance as DBAs or Transact-SQL developers.

For example, one of the easiest ways to speed up our Transact-SQL coding, in addition to making it easier to maintain and troubleshoot our code once it is written, is to format it in an easy to read format.

While there are many different code formatting guidelines available, here are some basic ones you should consider following, if you aren’t doing so already:

Begin each line of your Transact-SQL code with a SQL verb, and capitalize all Transact-SQL statements and clauses, such as:

SELECT customer_number, customer_name 
FROM customer 
WHERE customer_number > 1000 
ORDER BY customer_number

If a line of Transact-SQL code is too long to fit onto one line, indent the following line(s), such as:

SELECT customer_number, customer_name, customer_address, 
     customer_state, customer_zip, customer_phonenumber

Separate logical groupings of Transact-SQL code by using appropriate comments and documentation explaining what each grouping does.

These are just a few of the many possible guidelines you can follow when writing your Transact-SQL code to make it more readable by you and others. You just need to decide on some standard, and then always follow it in your coding. If you do this, you will definitely boost your coding performance.

*****

Be wary of allowing users to directly access your databases (especially OLTP databases) with third-party database access tools, such as Microsoft Excel or Access. Many of these tools can wreak havoc with your database's performance. Here are some reasons why:

• Often these users aren't experienced with these tools, and create overly complex queries that eat up server resources. At the other extreme, their queries may not be complex enough (such as lacking effective WHERE clauses) and return thousands, if not millions, of unnecessary rows of data. 
• This reporting activity can often lock rows, pages or tables, creating user contention for data and reducing the database’s performance. 
• These tools are often file-based. This means that even if an effective query is written, the entire table (or multiple tables in the case of joins) has to be returned to the client software where the query is actually performed, not at the server. Not only can this lead to excess server activity, it can also play havoc with your network.

If you have no choice but to allow users access to your data, try to avoid them hitting your production OLTP databases. Instead, point them to a “reporting” server that has been replicated, or is in the form of a datamart or data warehouse.  

*****

SQL Server offers support for SSL encryption between clients and the server. While selecting this option prevents the data from being viewed in transit, it also adds overhead and reduces performance. Only use SSL encryption if absolutely required. If you need to use SSL encryption, consider purchasing an SSL encryption processor for the server to improve performance.

*****

SQL Server supports named instances: you can run up to 16 concurrent instances of SQL Server on the same server.

As you might imagine, each running instance of SQL Server takes up server resources. Although some resources are shared by multiple running instances, such as MSDTC and the Microsoft Search services, most are not. Because of this, each additional instance of SQL Server running on the same server has to fight for available resources, hurting performance.

For best performance, run only a single instance (usually the default) on a single physical server. The main reasons for using named instances are upgrading older versions of SQL Server to newer versions, transition periods where you need to test your applications on multiple versions of SQL Server, and use on development servers.

*****

If you run the ALTER TABLE DROP COLUMN statement to drop a variable-length or text column, did you know that SQL Server will not automatically reclaim this space afterwards? To reclaim this space, which will help to reduce unnecessary I/O due to the wasted space, you can run the following command:

DBCC CLEANTABLE (database_name, table_name)

Before running this command, you will want to read about it in Books Online to learn about some of its options that may be important to you.
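
A usage sketch with hypothetical names; the optional third argument processes the work in batches to keep transactions short:

DBCC CLEANTABLE (N'MyDB', N'dbo.MyTable', 10000)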

*****

Trace flags, which are used to temporarily enable or disable certain special database functions, can sometimes chew up CPU utilization and other resources on your SQL Server unnecessarily. If you just use them for a short time to help diagnose a problem, for example, and then turn them off as soon as you are done, the performance hit you experience is small and temporary.

What sometimes happens is that you, or another DBA, turns on a trace flag but forgets to turn it off. This, of course, can negatively affect your SQL Server's performance. If you want to check whether any trace flags are turned on on a SQL Server, run this command in Query Analyzer:

DBCC TRACESTATUS(-1)

If there are any trace flags on, you will see them listed on the screen after running this command. DBCC TRACESTATUS only finds traces enabled at the client (connection) level; if a trace flag has been turned on for the entire server, it will not show up here.

If you find any, you can turn them off using this command:

DBCC TRACEOFF(trace number)

*****

SQL Server offers a feature called the black box. When enabled, the black box creates a trace file of the last 128 KB worth of queries and exception errors. This can be a great tool for troubleshooting some SQL Server problems, such as crashes.

Unfortunately, this feature uses up SQL Server resources to maintain the trace file, which can negatively affect performance. Generally, you will only want to turn the black box on when troubleshooting, and turn it off during normal production. This way, your SQL Server will be minimally affected.

Continues…

If you have ever performed a SELECT COUNT(*) on a very large table, you know how long it can take. For example, when I ran the following command on a large table I manage:

SELECT COUNT(*) from <table_name>

It took 1:09 to count 10,725,948 rows in the table. At the same time, SQL Server had to perform a lot of logical and physical I/O in order to perform the count, chewing up important SQL Server resources.

A much faster, and more efficient, way of counting rows in a table is to run the following query:

SELECT rows 
FROM sysindexes 
WHERE id = OBJECT_ID('<table_name>') AND indid < 2

When I ran the query against the same table, it took less than a second and gave me the same results. Not a bad improvement, and it took virtually no I/O. This is because the row count of your tables is stored in the sysindexes system table of your database. So instead of counting rows when you need to, just look up the row count in sysindexes.

There is one potential downside to using the sysindexes table: this system table is not updated in real time, so it might underestimate the number of rows you actually have. Assuming you have the database options "Auto Create Statistics" and "Auto Update Statistics" turned on, the value you get should be very close to correct, if not exactly correct. If you can live with a very close estimate, then this is the best way to count rows in your tables.
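
On SQL Server 2005 and later, the supported equivalent reads sys.partitions instead (the table name is hypothetical):

SELECT SUM(rows) AS [RowCount]
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.MyTable') AND index_id IN (0, 1)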

*****

Do you use Enterprise Manager to access remote servers, possibly over a slow WAN link? If you do, have you ever had any problems getting Enterprise Manager to connect to the remote server?

If so, the problem may lie in the fact that if Enterprise Manager cannot make a connection within 4 seconds, the connection attempt fails. To overcome slow network connections, you can change the default timeout value from 4 seconds to any amount of time you like.

To change the default timeout value, select Tools > Options from Enterprise Manager, select the "Advanced" tab, and change the "Login time-out (seconds)" option to a higher number.

*****

SQLDIAG.exe is a command line tool that collects information about SQL Server and writes it to a text file. It can be useful for documenting or troubleshooting your SQL Server. When you run this command while SQL Server is running, the following information is collected and stored in a text file called sqldiag.txt, which is stored in the MSSQL\LOG folder.

The text of all error logs 
SQL Server registry information 
SQL Server dll version information 
The output from these system stored procedures:

sp_configure 
sp_who 
sp_lock 
sp_helpdb 
xp_msver 
sp_helpextendedproc 
sysprocesses 
Input buffer SPIDs/deadlock information 
Microsoft diagnostics report for the server 
The last 100 queries and exceptions (if the query history trace was running) 
 

*****

Memory leaks can steal valuable memory from your SQL Server, reducing performance, and perhaps even forcing you to reboot your server. A memory leak occurs when a poorly-written or buggy program requests memory from the operating system, but does not release the memory when it is done with it. Because of this, the application can use up more and more memory in a server, greatly slowing it down, and even perhaps crashing the server.

Some memory leaks come from the operating system itself, device drivers, MDAC components, and even SQL Server. And of course, virtually any application can cause a memory leak, which is another good reason to dedicate a single server to SQL Server instead of sharing it among multiple applications.

Memory leaks are often hard to identify, especially if they leak memory slowly. Generally, memory leaks become apparent when you notice that your server is running out of available memory and paging becomes a big problem. A symptom of this is a SQL Server that runs quickly after being rebooted, but begins to run more and more slowly as time passes, and when the system is rebooted again, it speeds up again.

One way to help get rid of many memory leaks is to ensure that you always have the latest service packs or updates for your server’s software. But a memory leak you find may not have an immediate fix. If this is the case, you may be forced to reboot your server periodically in order to free up memory.

Identifying what is causing a memory leak is often difficult. One method involves using System Monitor to monitor all of the counters in the Memory object over time, to see what is happening internally in your computer. Another method is to use Task Manager to view how much memory is used by each process. A process that seems to be using an unusual amount of memory may be the culprit.

*****

While SSMS can make some DBA and developer tasks a little bit easier, in other cases it can cause performance problems. For example, if you decide to change the schema of a table from within SSMS, you may find that doing so takes much longer than performing the same schema change with the Transact-SQL ALTER command. This is because SSMS may not use ALTER, but instead recreate an entirely new table based on your new schema and then move the data from the old table to the new table, which can take a long time for large tables. Generally, using ALTER is very efficient, and schema changes occur quickly, even on large tables.

Because of this, consider using ALTER to modify the schemas of very large tables instead of SSMS.  
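
For example, adding a nullable column with ALTER TABLE is a metadata-only change and completes almost instantly, even on a large table; dbo.BigTable and the column name below are hypothetical:

ALTER TABLE dbo.BigTable
ADD LastModified datetime NULL -- nullable with no default, so no data movement is required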

*****

If you want to see a detailed accounting of how memory is being used in your SQL Server, run this undocumented DBCC command:

DBCC MEMORYSTATUS

The results look similar to this:

Buffer Distribution            Value
------------------------------ -----------
Stolen                         2914
Free                           3049
Procedures                     43521
Inram                          0
Dirty                          8366
Kept                           1209
I/O                            0
Latched                        0
Other                          163981


Buffer Counts                  Value
------------------------------ -----------
Committed                      223040
Target                         223040
Hashed                         173556
InternalReservation            202
ExternalReservation            0
Min Free                       201


Procedure Cache                Value
------------------------------ -----------
TotalProcs                     28301
TotalPages                     43521
InUsePages                     19676


Dynamic Memory Manager         Value
------------------------------ -----------
Stolen                         46436
OS                             507
General                        7137
QueryPlan                      39310
Optimizer                      0
Utilities                      13
Connection                     483


Global Memory Objects          Value
------------------------------ -----------
Resource                       932
Locks                          78
XDES                           1
SQLCache                       1195
Replication                    2
LockBytes                      2
ServerGlobal                   71


Query Memory Manager           Value
------------------------------ -----------
Grants                         0
Waiting                        0
Available                      150336
Maximum                        150336


Optimization Queue             Value
------------------------------ -----------
Optimizing                     0
Waiting                        0
Available                      16
Maximum                        16

The value refers to the number of 8K buffers. [7.0, 2000] Updated 5-7-2007

Continues…

If you would like to see what is stored in SQL Server's stack, you can do so by running this undocumented SQL Server DBCC command:

DBCC STACKDUMP

When you run this command, a stack dump will be performed, writing a file named SQL00001.dmp in the SQL Server log folder. This is a text file you can view with Notepad.

*****

Many times it is useful to compare the performance of a query or stored procedure as you are testing it. This is especially true if the time differences are so small that it is hard for you to know for sure which query was faster or slower.

One of the easiest ways to measure the time it takes for a query or stored procedure to run is to find the starting and ending times, then find the difference between the two. This can be easily accomplished using a few extra lines of Transact-SQL code. Here's how:

DECLARE @starttime datetime
SELECT @starttime = GETDATE()

-- Your code here ...

SELECT DATEDIFF(ms, @starttime, GETDATE()) AS 'Total Time (ms)'

The result will show how long, in milliseconds, it took your code to run. SQL Server's time function is only accurate to +/- 3 milliseconds, so keep this in mind.
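
If you would rather not add timing code at all, SQL Server can also report the elapsed and CPU time of each statement for you; the parse, compile, and execution times appear in the Messages pane. A small sketch, assuming dbo.BigTable is the object under test:

SET STATISTICS TIME ON
GO
SELECT COUNT(*) FROM dbo.BigTable -- the statement being measured
GO
SET STATISTICS TIME OFF
GO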

*****

In Enterprise Manager, the most common way to script existing objects is to right-click on the database in question and select "All Tasks," then choose "Generate SQL Script," and so on, until you script the object you want. This works fine, but takes a lot of keystrokes.

Another way, from within Enterprise Manager, to script existing objects is to follow these steps instead:

• Open the database and right-click on the object you want to script, then click “Copy.” Or if you like keyboard shortcuts, press CTRL-C. 
• Open a copy of Query Analyzer, Notepad, or any text editor. 
• Now on the “Edit” menu from the application, click “Paste” (or press CTRL-V), and the scripted code appears, including any appropriate constraints.

This is a quick and easy way to script database objects you may find handy.

*****

Have you ever wondered how people find out about “undocumented” stored procedures? Have you ever wondered about how SQL Server works behind the scenes? And have you ever wanted to see some examples of how top-notch Transact-SQL developers write SQL code?

All of the above is at your fingertips, although you won't see many references to it in Microsoft's documentation. To accomplish all of the above, all you have to do is go to your master SQL Server database in Enterprise Manager, open the Stored Procedures folder, and take a look. All SQL Server stored procedures can be seen here, even the ones not documented by Microsoft. To view the Transact-SQL code in a stored procedure, right-click on the stored procedure you are interested in and click Properties.

If you want to use Query Analyzer instead, all you have to do is use this command:

USE master 
sp_helptext [system_stored_procedure_name]

In many cases, the Transact-SQL code you see is documented (although generally not in great detail), and if you know Transact-SQL, you should be able to follow the code and understand how it works. In some cases, you may need to experiment a little to better understand what is going on.

For example, here’s the Transact-SQL code for the sp_updatestats system stored procedure:

CREATE PROCEDURE sp_updatestats
AS

DECLARE @dbsid varbinary(85)

SELECT @dbsid = sid
FROM master.dbo.sysdatabases
WHERE name = db_name()

/* Check that the user is sysadmin or dbo */
IF NOT is_srvrolemember('sysadmin') = 1 AND suser_sid() <> @dbsid
BEGIN
    RAISERROR(15288,-1,-1,'SA or DBO','sp_updatestats')
    RETURN (1)
END

DECLARE @exec_stmt nvarchar(540)
DECLARE @tablename sysname
DECLARE @uid smallint
DECLARE @user_name sysname
DECLARE @tablename_header varchar(267)
DECLARE tnames_cursor CURSOR FOR SELECT name, uid FROM sysobjects WHERE type = 'U'
OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @tablename, @uid
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        SELECT @user_name = user_name(@uid)
        SELECT @tablename_header = 'Updating ' + @user_name + '.' + RTRIM(LOWER(@tablename))
        PRINT @tablename_header
        SELECT @exec_stmt = 'UPDATE STATISTICS ' + quotename(@user_name, '[') + '.' + quotename(@tablename, '[')
        EXEC (@exec_stmt)
    END
    FETCH NEXT FROM tnames_cursor INTO @tablename, @uid
END
PRINT ' '
PRINT ' '
raiserror(15005,-1,-1)
CLOSE tnames_cursor
DEALLOCATE tnames_cursor
RETURN(0) -- sp_updatestats

GO

Besides learning how system stored procedures work and getting an opportunity to see professionally written code, you can also (if you really like to get your hands dirty) modify the code to do something slightly different. By this, I don't mean modifying the code you see (that wouldn't be prudent); rather, you can copy the code to a new stored procedure, modify it, and then save it under a different name.

Continues…

On very rare occasions, you may get an error similar to this from SQL Server:

[Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 845: [Microsoft][ODBC SQL Server Driver][SQL Server]Time-out occurred while waiting for buffer latch type 3 for page (2:0), database ID 15.

This error message may also be accompanied by additional error messages, such as a file log or database backup that failed.

This error indicates that your disk I/O system is being overloaded and that SQL Server timed out when trying to write data to disk (via the OS). If this error is a rare event, then there is no need to worry.

But if this event becomes frequent, it could mean that your disk I/O subsystem has hit a bottleneck that it can’t overcome, or that your disk I/O subsystem itself is having problems. You need to find out which one of these is the most likely issue, and then deal with it accordingly.

One of the things that you might want to look for is what I/O activity was happening at the time of the error. For example, it is possible that a disk backup, a DBCC command, and other database activity were all happening at the same time, overwhelming the disk I/O. Perhaps the solution to the problem is as simple as rescheduling some scheduled tasks.

*****

According to Microsoft, the way to shrink a log file is to use DBCC SHRINKFILE. This process is painfully slow and inefficient. Even after running the procedure numerous times, you may not get much free space.

Here's another way to shrink your logs that is guaranteed to work every time. First, back up the database and then detach it with sp_detach_db (you will have to bring the database down to do this). Next, delete the log file and then re-attach the database with sp_attach_db, without providing the old log file location in the sp_attach_db command. This will create a new log file with the old log file name in the old location, at the default size of 512 KB.

To make sure there is no problem during the operation, the old log file can be renamed and kept until the database is reattached successfully. This provides a backup plan if for some reason SQL Server fails to attach the database without the old log file.

This trick won't work if the database has more than one log file, but if you need to, you can alter the database so that it has only a single log file, which will allow you to perform the above steps. After carrying out the detach and attach activity, the database can be altered again to add more log files. Contributed by Gaurav Bindlish.
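
A minimal sketch of the sequence, assuming a database named MyDB whose data file lives at C:\Data\MyDB.mdf (both names are placeholders):

-- 1. Detach the database (requires exclusive access)
EXEC sp_detach_db 'MyDB'
-- 2. At the OS level, rename the old log file (e.g. MyDB_log.ldf to MyDB_log.ldf.old)
--    rather than deleting it, so it can be put back if the attach fails
-- 3. Re-attach using only the data file; SQL Server creates a new log at the default size
EXEC sp_attach_single_file_db 'MyDB', 'C:\Data\MyDB.mdf'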

*****

In SQL Server's Query Analyzer, a somewhat hidden but useful feature exists that you may not be aware of: the Transact-SQL Debugger. The reason you may not have seen it before in Query Analyzer is that this option doesn't appear in any standard menu, making it somewhat hard to find.

To use the Transact-SQL Debugger, you must first ensure that the Object Browser is displayed from within Query Analyzer. Next, using the Object Browser, open up a database that has one or more stored procedures, then right-click on the stored procedure’s name, and then select Debug. This is how you turn on debugging for a stored procedure.

Once you click on Debug, if the stored procedure you selected has one or more input parameters, you will be asked to enter them, and then you can begin the debugging process. The Transact-SQL Debugger has the typical features you find in most any debugging tool, including:

• Go: Puts the stored procedure into the debugging mode. 
• Toggle Breakpoint: Sets or removes a breakpoint at the current line in your code. You cannot set a breakpoint on lines containing non-executable code. 
• Remove All Breakpoints: Clears all the breakpoints in your code. 
• Step Into: Step Into executes the statement at the current execution point, and then executes one code statement at a time. If the statement is a call to another stored procedure, then the next statement displayed is the first statement in the stored procedure. 
• Step Over: Executes one code statement at a time. If the current statement contains a call to another stored procedure, Step Over executes the stored procedure as a unit, and then steps to the next statement in the current stored procedure. 
• Step Out: Executes the remaining lines of code in which the current execution point lies. The next statement displayed is the statement following the procedure call. All of the code is executed between the current and the final execution points. 
• Run to Cursor: Specifies a statement further down in your code where you want execution to end. 
• Restart: Restarts execution at the beginning of the stored procedure. 
• Stop Debugging: Stops debugging. 
• Watch: Displays the current watch expressions. 
• Callstack: Lists the procedure calls that have started, but have yet to be completed. 
• Auto Rollback: Automatically rolls back all work performed during execution of the stored procedure.

*****

Have you ever wanted to test the latest beta software, but didn’t want to put it on your “production” desktop because you were afraid it might crash it, and didn’t have a separate test box available to run it on? Or have you wanted more than one test environment, but didn’t have the necessary hardware?

In both of these cases, there is a new option that allows you to do both, while not requiring any more hardware. How? By using Microsoft Virtual PC 2007. This software, which is designed to run on most desktops and Microsoft operating systems, allows you to install virtual servers on your current desktop (assuming your hardware is big enough).

For example, if you are running Windows XP Workstation on your desktop, you can create one or more virtual servers that run almost any other Microsoft operating system, along with your choice of application software, such as SQL Server running under Windows Server, or any combination you like.

Virtual servers don't have to be active all the time; you can bring them up only when you want to, helping to preserve hardware resources. You can create as many virtual servers as you like, plus you can have as many running as you like, assuming your hardware will handle them.

If you decide to do this, there are two things you may want to keep in mind. First, when you create a virtual server, you must specify how the networking will work. There are several options. You may have to experiment with these options to select the one that works best for you. Also, you will want to install the optional Virtual Machine Additions into your virtual server, after it is built. This will greatly enhance the performance of the virtual software. Information on how to do this is available in the instructions.

Microsoft also offers a “server” version.  

*****

To quickly identify which, if any, of your procedures are executed when SQL Server starts, you can use the following query:

USE master
GO
SELECT name
FROM sysobjects
WHERE OBJECTPROPERTY(id, 'ExecIsStartup') = 1

However, you can also get the same result via the INFORMATION_SCHEMA views, which are generally preferable over directly querying the system tables. Here’s the query:

USE master
GO
SELECT ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE OBJECTPROPERTY(OBJECT_ID(ROUTINE_NAME), 'ExecIsStartUp') = 1
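
For completeness, a procedure is marked (or unmarked) as a startup procedure with sp_procoption; dbo.MyStartupProc below is a hypothetical procedure in master:

USE master
GO
-- mark the procedure to run automatically whenever SQL Server starts
EXEC sp_procoption 'MyStartupProc', 'startup', 'true'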

Source: http://www.sql-server-performance.com/2007/gen-tips/7/

posted by LifeisSimple
2012. 9. 11. 13:55 Brain Trainning/DataBase


Source: http://www.sqlservercentral.com/articles/Security/65169/

Configuring Kerberos Authentication

By Brian Kelley, 2011/03/25 (first published: 2008/12/11)

In my experience, configuring a SQL Server for Kerberos authentication, especially a SQL Server named instance, can be one of the most confusing things to do for a DBA or system administrator the first time around. The reason it can be so confusing is there are several "moving parts" that must all be in sync for Kerberos authentication to work. And what can make things all the more confusing is that in general, if we don't touch a thing, people and applications can connect to our database servers but as soon as we start down the road of configuring Kerberos authentication, they suddenly can't. And it can be rather frustrating to figure out why. In this article we'll look at both the hows and the whys.

If I Don't Do Anything, Why Does it Usually Work?

When it comes to authenticating a login (checking to see if you are who you say you are), SQL Server only does authentication when the login is a SQL Server based one. I've chosen my words carefully here, because it is important to understand that when it's a Windows-based login, SQL Server passes off the authentication to an operating system component, the Windows Security Support Provider Interface (SSPI). That's why when you have Kerberos authentication errors, you usually get some message about SSPI context. Basically, SQL Server realizes it's a Windows login, gets the information it'll need to pass on so SSPI can do its checks, and then it waits to see what SSPI says. If SSPI says the login is good, SQL Server allows the login to complete the connection. If SSPI says the login is bad, SQL Server rejects the login and returns whatever error information SSPI provides. Now, there is one exception to SQL Server farming out Windows authentication to SSPI, but that occurs in Named Pipes and so we won't get into it because hopefully you're not using Named Pipes as your protocol.

Once we understand that SQL Server is handing off responsibility for authentication to SSPI, it's time to understand what SSPI is going to do. SSPI is going to first try and authenticate using Kerberos. This is the preferred protocol for Windows 2000 and above. In order to do this, there needs to be a Service Principal Name (SPN) in place. We'll talk more about that later. If there's no SPN, Kerberos can't happen. If Kerberos can't happen whether due to no SPN or another reason (across forests with no forest level trust), SSPI will drop back to the old security protocol, NT LAN Manager, or NTLM. So if we don't do anything, authentication will drop back to NTLM and everything tends to work. That is, until we have to do multiple "hops," like through SQL Server Reporting Services set up on a separate server or when we want to do Windows authentication across a linked server connection (see Figure 1).

Figure 1:

In Figure 1, the same set of credentials (Domain\User) is being passed from the client to a server and then from that server to a second server. Each time the credentials are passed, we call that a hop. Since we're not changing the credentials (for instance, we're not switching to a second Windows account, such as a service account, or to a SQL Server login), we say that there have been two hops, or what we call a double hop situation. NTLM doesn't permit double hop situations (or triple or quadruple…); it is prevented by design. So in either of these particular scenarios, if we don't have Kerberos authentication set up, we can't make the second hop. We'll see errors logging in attributed to login (null) or NT AUTHORITY\ANONYMOUS LOGON. By default, Kerberos authentication only permits a single hop, but using a feature called Kerberos delegation, multiple hops can be configured and these double hop scenarios can be allowed. While Kerberos delegation is beyond the scope of this article, it is important to note that Kerberos delegation cannot happen without Kerberos authentication, and that's how DBAs usually get pulled into the fray.

What's So Bad About NTLM?

In general, NTLM (or at least its revised versions) does a good job of authenticating the user and is basically secure. However, NTLM suffers from the following drawbacks:

  • It is susceptible to "replay" attacks.
  • It assumes the server is trustworthy.
  • It requires more authentication traffic than Kerberos.
  • It doesn't provide for a means of going past that first hop.

Let's look at each one of these to understand why they are drawbacks, starting with a replay attack. A replay attack is when an attacker is able to capture network traffic and re-use it. For instance, imagine I'm logging on to your SQL Server. An attacker has a packet sniffer and is able to capture that logon sequence. If, at a later point, that attacker could put that traffic back on the network and have it work, that would be a replay attack. The classic example given is an attacker captures a bank transaction for some amount of money. Let's say you pay Mr. Attacker US$500 for services rendered. If the attacker can capture the network traffic and replay it multiple times, the bank will deduct US$500 from your account each time and deposit it into his. To the bank, the repeated transactions looked legitimate (although admittedly, with everyone worried about fraud nowadays, we would hope this kind of thing gets flagged and checked out). If this is the case, then the protocol for that transaction we're using is to blame because it provided us no protection from such an attack. Such is the case with NTLM. It provides no protection. Kerberos, on the other hand, includes a time stamp of when the network traffic was sent. If you're outside the window of the acceptable time range (by default this is 5 minutes), Kerberos rejects that network traffic. So in the case above, imagine if the bank put a timestamp on the transaction and had an acceptable time range within 10 seconds. If Mr. Attacker tried to replay the transaction after that 10 second window was up, the bank would know something was going on.

The second drawback with NTLM is that the server isn't verified. The client connects to MySQLServer. Or at least, it thinks it is connecting to MySQLServer. The NTLM protocol may have the ability to validate that Domain\User is connecting, but it doesn't allow Domain\User to verify that he or she is really talking to MySQLServer. This is where the Service Principal Name (SPN) comes into play. When the client attempts to connect via Kerberos, the SPN for the service being connected to is checked. In a Windows 2000 or higher domain, the SPN is stored within Active Directory, and the Active Directory domain controller is trusted by the client. Therefore, if the service, such as a SQL Server service, checks out based on the SPN the client finds for that service within Active Directory, it knows that it can trust the server is truly MySQLServer.

The third drawback is the amount of authentication traffic used by NTLM versus Kerberos. In NTLM, every time authentication happens, a check has to be made back to a domain controller (DC). With Kerberos, tickets are issued to both the client and the server containing the information each needs to validate the other. Therefore, the client and the server only have to check in with a domain controller once during the lifespan of those tickets (default is 600 minutes or 10 hours) to get the tickets in the first place. After that, they both have the information they need without checking back with a DC.

The final drawback is one we've already discussed, and that is situations where we want to make multiple hops. Quite frankly, NTLM leaves us with no options. We have to make each hop different from the previous one, whether we like it or not. Kerberos delegation ensures we can pass the credentials through all the hops until we reach the final destination.

What Is an SPN, Why Do I Need to Configure It, and How Do I Do So?

A Service Principal Name (SPN) provides the information to the client about the service. Basically, each SPN consists of 3 or 4 pieces of information:

  • The type of service (for SQL Server it is called MSSQLSvc)
  • The name of the server
  • The port (if this needs to be specified)
  • The service account running the service.

All of these need to match up for the client to be able to validate the service. If any of these are wrong, Kerberos authentication won't happen. In some cases, we'll get that SSPI context error and in fact, SSPI won't even drop back to using NTLM, meaning we don't connect at all. Therefore, the key is to get everything correct when we set the SPN.

In order to set an SPN, you must either be a Domain Admin level user or you must be the computer System account (or an account that talks on the network as the System account, such as the Network Service account). Typically, we advise that SQL Server should be run as a local or domain user, so that rules out the second case. We also advise that SQL Server shouldn't be a domain admin level account, and that rules out the first case. What this means is a domain admin level account will need to set the SPN manually. Thankfully, Microsoft provides a nice utility called SETSPN in the Support Tools on the OS CD/DVD to do so. It can also be downloaded from the Microsoft site.

Using SETSPN

SETSPN has three flags we're interested in:

  • -L : This lists the SPNs for a particular account
  • -A : This adds a new SPN
  • -D : This deletes an existing SPN

The key to understanding SPNs is to realize they are tied to an account, whether that be a user or computer account. If we want to see what SPNs are listed for a particular account, here is the syntax:

SETSPN -L <Account>

For instance, if I have a server called MyWebServer, I can list the SPNs assigned to that computer account by:

SETSPN -L MyWebServer

If, instead, I am running my SQL Server under the MyDomain\MyServiceAccount user account, I can check the SPNs listed for that account by:

SETSPN -L MyDomain\MyServiceAccount

To add an SPN, it's important that we know the service account SQL Server is running under. Also, it is important to know the TCP port SQL Server is listening on. If it's a default instance, the port by default is 1433, although this can be changed. If it's a named instance, unless we have gone in and manually set a static port, SQL Server could change the port at any time. Therefore, it's important to set a port statically. I've described how to do so in a blog post. Once we have those bits of information, we can add an SPN via the following syntax:

SETSPN -A MSSQLSvc/<SQL Server Name>:<port> <account>

If we're dealing with a default instance listening on port 1433, we can leave off the :<port> (but it is still a good idea to have an entry both with and without the port). One other thing to remember is it is important to specify SPNs for both the NetBIOS name (e.g. MySQLServer) as well as the fully qualified domain name (e.g. MySQLServer.mydomain.com). So applying this to a default instance on MyDBServer.mydomain.com running under the service account MyDomain\SQLServerService, we'd execute the following commands:

SETSPN -A MSSQLSvc/MyDBServer MyDomain\SQLServerService
SETSPN -A MSSQLSvc/MyDBServer:1433 MyDomain\SQLServerService
SETSPN -A MSSQLSvc/MyDBServer.mydomain.com MyDomain\SQLServerService
SETSPN -A MSSQLSvc/MyDBServer.mydomain.com:1433 MyDomain\SQLServerService

For a named instance, we typically only require two commands, because there isn't a case where a client is just connecting to the name of the server. For instance, let's assume we have a named instance called Instance2 listening on port 4444 on that same server using that same service account. In that case we'd execute the following commands:

SETSPN -A MSSQLSvc/MyDBServer:4444 MyDomain\SQLServerService
SETSPN -A MSSQLSvc/MyDBServer.mydomain.com:4444 MyDomain\SQLServerService

And in those rare cases where we need to delete an SPN (for instance, we change the service account or switch ports), we can use the -D switch. Its syntax is parallel to the -A switch:

SETSPN -D MSSQLSvc/<SQL Server Name>:<port> <account>

I've Done All of That. How Can I Verify Logins Are Connecting Via Kerberos?

Within SQL Server there is a very simple query we can execute to determine what type of authentication was performed on each connection. Here's the query:

SELECT
    s.session_id,
    c.connect_time,
    s.login_time,
    s.login_name,
    c.protocol_type,
    c.auth_scheme,
    s.host_name,
    s.program_name
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c
    ON s.session_id = c.session_id

The query returns a lot of information to help you identify the connections. The connect_time and login_time should be pretty close together and it gives you a window of when the initial connection was made. The login_name, along with host_name and program_name, help you identify the exact login. From there the protocol_type helps you narrow down the connection if you have different endpoints for your SQL Server other than just TSQL (for instance, mirroring or HTTP). And finally, the auth_scheme will reveal, for a Windows account, what security protocol was used. If Kerberos authentication was successful, you should see the auth_scheme reflect Kerberos instead of NTLM.
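
For a quick check of which Windows connections fell back to NTLM, the same DMVs can be filtered on auth_scheme:

SELECT s.session_id, s.login_name, s.host_name, s.program_name
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c
    ON s.session_id = c.session_id
WHERE c.auth_scheme = 'NTLM' -- Windows logins that did not authenticate via Kerberos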

posted by LifeisSimple
2012. 5. 2. 22:01 Brain Trainning/DataBase


This is a query for looking up PAGEIOLATCH resources.

If you change the wait type, you can query other kinds of resources in the same way.

/*
    The resource_description column is formatted as database_id:file_id:page_id.
*/

-- Find the current PAGELATCH waits
select session_id, wait_type, resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGELATCH%'

-- Get detailed information about the PAGELATCH resources
select wt.session_id, wt.wait_type, wt.wait_duration_ms
    , s.name as schema_name
    , o.name as object_name
    , i.name as index_name
from sys.dm_os_buffer_descriptors bd
join (
    select *
        -- positions of the first and second colons in database_id:file_id:page_id
        , CHARINDEX(':', resource_description) as file_index
        , CHARINDEX(':', resource_description, CHARINDEX(':', resource_description) + 1) as page_index
        , resource_description as rd
    from sys.dm_os_waiting_tasks wt
    where wait_type like 'PAGELATCH%'
) as wt
    on bd.database_id = SUBSTRING(wt.rd, 1, wt.file_index - 1)
    and bd.file_id = SUBSTRING(wt.rd, wt.file_index + 1, wt.page_index - wt.file_index - 1)
    and bd.page_id = SUBSTRING(wt.rd, wt.page_index + 1, LEN(wt.rd) - wt.page_index)
join sys.allocation_units au on bd.allocation_unit_id = au.allocation_unit_id
join sys.partitions p on au.container_id = p.partition_id
join sys.indexes i on p.index_id = i.index_id and p.object_id = i.object_id
join sys.objects o on i.object_id = o.object_id
join sys.schemas s on o.schema_id = s.schema_id

posted by LifeisSimple
2012. 4. 5. 19:19 Brain Trainning/DataBase

This is an article about wait states. 

Even a quick read-through is very helpful. 


Source: http://www.sqlskills.com/BLOGS/PAUL/post/Wait-statistics-or-please-tell-me-where-it-hurts.aspx


How many times have you walked up to a SQL Server that has a performance problem and wondered where to start looking? 

One of the most under-utilized performance troubleshooting methodologies in the SQL Server world is one called "waits and queues" (also known simply as "wait stats"). The basic premise is that SQL Server is permanently tracking why execution threads have to wait. You can ask SQL Server for this information and then use the results to narrow down where to start digging to unearth the cause of performance issues. The "waits" are what SQL Server tracks. The "queues" are the resources that the threads are waiting for. There are a myriad of waits in the system and they all indicate different resources being waited for. For example, a PAGEIOLATCH_EX wait means a thread is waiting for a data page to be read into the buffer pool from disk. A LCK_M_X wait means a thread is waiting to be granted an exclusive lock on something.

The great thing about all of this is that SQL Server *knows* where the performance issues are, and you just need to ask it... and then interpret what it tells you, which can be a little tricky.

Now - where people sometimes get hung up is trying to track down every last wait and figure out what's causing it. Waits *always* occur. It's the way SQL Server's scheduling system works.

A thread is using the CPU (called RUNNING) until it needs to wait for a resource. It then moves to an unordered list of threads that are SUSPENDED. In the meantime, the next thread on the FIFO (first-in-first-out) queue of threads waiting for the CPU (called being RUNNABLE) is given the CPU and becomes RUNNING. If a thread on the SUSPENDED list is notified that its resource is available, it becomes RUNNABLE and is put on the bottom of the RUNNABLE queue. Threads continue this clockwise movement from RUNNING to SUSPENDED to RUNNABLE to RUNNING again until the task is completed. You can see processes in these states using the sys.dm_exec_requests DMV. 

SQL Server keeps track of the time that elapses between leaving the RUNNING state and becoming RUNNING again (called the "wait time") and the time spent on the RUNNABLE queue (called the "signal wait time" - i.e. how long the thread needs to wait for the CPU after being signaled that its resource is available). We need to work out the time spent waiting on the SUSPENDED list (called the "resource wait time") by subtracting the signal wait time from the overall wait time.

Tom Davidson of Microsoft wrote a fantastic and very accessible whitepaper on "waits and queues" which I encourage you to read: Performance Tuning Using Waits and Queues. My good friend Joe Sack (blog|twitter) who runs the MCM program also published an excellent slide deck on the subject that you can download here. And of course Books Online has a section on the sys.dm_os_wait_stats DMV that gives info on some of the newer wait types. PSS is putting together a repository of info on all the wait types but not much progress has been made. And there's a free video+demo recording as part of the MCM online training we recorded for Microsoft - see here.

You can ask SQL Server for the cumulative wait statistics using the sys.dm_os_wait_stats DMV, and many people prefer to wrap the DMV call in some aggregation code. I use code based on a query that I got from fellow-MVP Glenn Berry (blog|twitter) and then modified quite a bit. See below for the version updated to take account of the results discussed below:

WITH Waits AS
    (SELECT
        wait_type,
        wait_time_ms / 1000.0 AS WaitS,
        (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
        signal_wait_time_ms / 1000.0 AS SignalS,
        waiting_tasks_count AS WaitCount,
        100.0 * wait_time_ms / SUM (wait_time_ms) OVER() AS Percentage,
        ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS RowNum
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (
        'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE', 'SLEEP_TASK',
        'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE',
        'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
        'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT', 'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE',
        'FT_IFTS_SCHEDULER_IDLE_WAIT', 'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'BROKER_EVENTHANDLER',
        'TRACEWRITE', 'FT_IFTSHC_MUTEX', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        'BROKER_RECEIVE_WAITFOR', 'ONDEMAND_TASK_QUEUE', 'DBMIRROR_EVENTS_QUEUE',
        'DBMIRRORING_CMD', 'BROKER_TRANSMITTER', 'SQLTRACE_WAIT_ENTRIES',
        'SLEEP_BPOOL_FLUSH', 'SQLTRACE_LOCK')
    )
SELECT
    W1.wait_type AS WaitType, 
    CAST (W1.WaitS AS DECIMAL(14, 2)) AS Wait_S,
    CAST (W1.ResourceS AS DECIMAL(14, 2)) AS Resource_S,
    CAST (W1.SignalS AS DECIMAL(14, 2)) AS Signal_S,
    W1.WaitCount AS WaitCount,
    CAST (W1.Percentage AS DECIMAL(4, 2)) AS Percentage,
    CAST ((W1.WaitS / W1.WaitCount) AS DECIMAL (14, 4)) AS AvgWait_S,
    CAST ((W1.ResourceS / W1.WaitCount) AS DECIMAL (14, 4)) AS AvgRes_S,
    CAST ((W1.SignalS / W1.WaitCount) AS DECIMAL (14, 4)) AS AvgSig_S
FROM Waits AS W1
    INNER JOIN Waits AS W2 ON W2.RowNum <= W1.RowNum
GROUP BY W1.RowNum, W1.wait_type, W1.WaitS, W1.ResourceS, W1.SignalS, W1.WaitCount, W1.Percentage
HAVING SUM (W2.Percentage) - W1.Percentage < 95; -- percentage threshold
GO

This will show the waits grouped together as a percentage of all waits on the system, in decreasing order. The waits to be concerned about (potentially) are those at the top of the list, as this represents the majority of where SQL Server is spending its time waiting. You can see that a bunch of waits are being filtered out of consideration - as I said above, waits happen all the time, and these are the benign ones we can usually ignore.

You can also reset the aggregated statistics using this code:

DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

And of course you can very easily come up with a way to persist the results every few hours or every day and do some time-series analysis to figure out trends or automatically spot problems as they start to happen. You can also use Performance Dashboard to see these graphically in 2005 and Data Collector in 2008. On SQL Server 2000 you can use DBCC SQLPERF ('waitstats').
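
A minimal sketch of such a persistence job (the table name and layout here are illustrative assumptions, not from the article):

-- One-time setup: a table to hold periodic snapshots
CREATE TABLE dbo.WaitStatsHistory (
    capture_time        datetime     NOT NULL,
    wait_type           nvarchar(60) NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    signal_wait_time_ms bigint       NOT NULL
)
GO

-- Run on a schedule (e.g. a SQL Agent job); diff successive snapshots
-- of the cumulative counters to get per-interval wait behavior
INSERT INTO dbo.WaitStatsHistory
SELECT GETDATE(), wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
GO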

Once you get the results, you then start figuring out how to interpret them and where to go looking. The whitepaper I referenced above has a ton of good info on most of the wait types (except those added in 2008). There are various ways you can dig in deeper to this information that I'll go into in later posts.

I'm going to start blogging about wait stats analysis, either as standalone posts or as part of other things - and I've already done so in the last post (at time of writing this) in my benchmarking series.

For now, I want to report on the results of the wait stats survey I posted a couple of months back. I asked people to run the code above and let me know the results. I received results for a whopping 1823 SQL Servers out there - thank you!

Here's a graphical view of the results:

 

I'm not surprised at all by the top four results as I see these over and over on client systems.

For the remainder of this post, I'm going to list all the top wait types reported by survey respondents, in descending order, and give a few words about what they might mean if they are the most prevalent wait on your system. The list format shows the number of systems with that wait type as the most prevalent, and then the wait type.

  • 505: CXPACKET
    • This is commonly where a query is parallelized and the parallel threads are not given equal amounts of work to do, or one thread blocks. One thread may have a lot more to do than the others, and so the whole query is blocked while the long-running thread completes. If this is combined with a high number of PAGEIOLATCH_XX waits, it could be large parallel table scans going on because of incorrect non-clustered indexes, or out-of-date statistics causing a bad query plan. If neither of these are the issue, you might want to try setting MAXDOP to 4, 2, or 1 for the offending queries (or possibly the whole instance). Make sure that if you have a NUMA system that you try setting MAXDOP to the number of cores in a single NUMA node first to see if that helps the problem. You also need to consider the MAXDOP effect on a mixed-load system.
  • 304: PAGEIOLATCH_XX
    • This is where SQL Server is waiting for a data page to be read from disk into memory. It commonly indicates a bottleneck at the IO subsystem level, but could also indicate buffer pool pressure (i.e. not enough memory for the workload).
  • 275: ASYNC_NETWORK_IO
    • This is commonly where SQL Server is waiting for a client to finish consuming data. It could be that the client has asked for a very large amount of data or just that it's consuming it reeeeeally slowly because of poor programming.
  • 112: WRITELOG
    • This is the log management system waiting for a log flush to disk. It commonly indicates a problem with the IO subsystem where the log is, but on very high-volume systems it could also be caused by waiting for the LOGCACHE_ACCESS spinlock (which you can't do anything about). To be sure it's the IO subsystem, use the DMV sys.dm_io_virtual_file_stats to examine the IO latency for the log file.
  • 109: BROKER_RECEIVE_WAITFOR
    • This is just Service Broker waiting around for new messages to receive. I would add this to the list of waits to filter out and re-run the wait stats query. 
  • 086: MSQL_XP
    • This is SQL Server waiting for an extended stored-proc to finish. This could indicate a problem in your XP code. 
  • 074: OLEDB
    • As its name suggests, this is a wait for something communicating using OLEDB - e.g. a linked server. It could be that the linked server has a performance issue.
  • 054: BACKUPIO
    • This shows up when you're backing up directly to tape, which is slooooow. I'd be tempted to filter this out.
  • 041: LCK_M_XX
    • This is simply the thread waiting for a lock to be granted and indicates blocking problems. These could be caused by unwanted lock escalation or bad programming, but could also be from IOs taking a long time causing locks to be held for longer than usual. Look at the resource associated with the lock using the DMV sys.dm_os_waiting_tasks.
  • 032: ONDEMAND_TASK_QUEUE
    • This is normal and is part of the background task system (e.g. deferred drop, ghost cleanup).  I would add this to the list of waits to filter out and re-run the wait stats query.
  • 031: BACKUPBUFFER
    • This shows up when you're backing up directly to tape, which is slooooow. I'd be tempted to filter this out.
  • 027: IO_COMPLETION
    • This is SQL Server waiting for IOs to complete and is a sure indication of IO subsystem problems.
  • 024: SOS_SCHEDULER_YIELD
    • If this is a very high percentage of all waits (hard to say, but Joe suggests 80%) then this is likely indicative of CPU pressure.
  • 022: DBMIRROR_EVENTS_QUEUE
  • 022: DBMIRRORING_CMD
    •  These two are database mirroring just sitting around waiting for something to do. I would add these to the list of waits to filter out and re-run the wait stats query.
  • 018: PAGELATCH_XX
    • This is contention for access to in-memory copies of pages. The most well-known cases of these are the PFS, SGAM, and GAM contention that can occur in tempdb under certain workloads. To find out what page the contention is on, you'll need to use the DMV sys.dm_os_waiting_tasks to figure out what page the latch is for. For tempdb issues, my friend Robert Davis (blog|twitter) has a good post showing how to do this. Another common cause I've seen is an index hot-spot with concurrent inserts into an index with an identity value key.
  • 016: LATCH_XX
    • This is contention for some non-page structure inside SQL Server - so not related to IO or data at all. These can be hard to figure out and you're going to be using the DMV sys.dm_os_latch_stats. More on this in future posts.
  • 013: PREEMPTIVE_OS_PIPEOPS
    • This is SQL Server switching to pre-emptive scheduling mode to call out to Windows for something. These were added for 2008 and haven't been documented yet (anywhere) so I don't know exactly what it means.
  • 013: THREADPOOL
    • This says that there aren't enough worker threads on the system to satisfy demand. You might consider raising the max worker threads setting.
  • 009: BROKER_TRANSMITTER
    • This is just Service Broker waiting around for new messages to send. I would add this to the list of waits to filter out and re-run the wait stats query. 
  • 006: SQLTRACE_WAIT_ENTRIES
    • Part of SQL Trace. I would add this to the list of waits to filter out and re-run the wait stats query.
  • 005: DBMIRROR_DBM_MUTEX
    •  This one is undocumented and is contention for the send buffer that database mirroring shares between all the mirroring sessions on a server. It could indicate that you've got too many mirroring sessions.
  • 005: RESOURCE_SEMAPHORE
    • This is queries waiting for execution memory (the memory used to process the query operators - like a sort). This could be memory pressure or a very high concurrent workload. 
  • 003: PREEMPTIVE_OS_AUTHENTICATIONOPS
  • 003: PREEMPTIVE_OS_GENERICOPS
    • These are SQL Server switching to pre-emptive scheduling mode to call out to Windows for something. These were added for 2008 and haven't been documented yet (anywhere) so I don't know exactly what it means.
  • 003: SLEEP_BPOOL_FLUSH
    • This is normal to see and indicates that checkpoint is throttling itself to avoid overloading the IO subsystem. I would add this to the list of waits to filter out and re-run the wait stats query.
  • 002: MSQL_DQ
    • This is SQL Server waiting for a distributed query to finish. This could indicate a problem with the distributed query, or it could just be normal.
  • 002: RESOURCE_SEMAPHORE_QUERY_COMPILE
    • When there are too many concurrent query compilations going on, SQL Server will throttle them. I don't remember the threshold, but this can indicate excessive recompilation, or maybe single-use plans.
  • 001: DAC_INIT
    • I've never seen this one before and BOL says it's because the dedicated admin connection is initializing. I can't see how this is the most common wait on someone's system... 
  • 001: MSSEARCH
    • This is normal to see for full-text operations.  If this is the highest wait, it could mean your system is spending most of its time doing full-text queries. You might want to consider adding this to the filter list. 
  • 001: PREEMPTIVE_OS_FILEOPS
  • 001: PREEMPTIVE_OS_LIBRARYOPS
  • 001: PREEMPTIVE_OS_LOOKUPACCOUNTSID
  • 001: PREEMPTIVE_OS_QUERYREGISTRY
    • These are SQL Server switching to pre-emptive scheduling mode to call out to Windows for something. These were added for 2008 and haven't been documented yet (anywhere) so I don't know exactly what it means. 
  • 001: SQLTRACE_LOCK
    • Part of SQL Trace. I would add this to the list of waits to filter out and re-run the wait stats query.

I hope you found this interesting! Let me know if there's anything in particular you're interested in seeing or just that you're following along and enjoying the ride!

posted by LifeisSimple
2012. 4. 4. 12:53 Brain Trainning/DataBase

These are the best reference pages for analyzing wait information.

Performance_Tuning_Waits_Queues.doc


1. SQL Server Best Practices Article 

 - http://msdn.microsoft.com/en-us/library/cc966413.aspx

2. Performance Tuning With Wait Statistics 

 - http://blogs.msdn.com/joesack/archive/2009/04/22/presentation-deck-for-performance-tuning-with-wait-statistics.aspx

3. Wait statistics, or please tell me where it hurts

 - http://www.sqlskills.com/BLOGS/PAUL/post/Wait-statistics-or-please-tell-me-where-it-hurts.aspx

4. The SQL Server Wait Type Repository ..

 - http://blogs.msdn.com/b/passsql/archive/2009/11/03/the-sql-server-wait-type-repository.aspx

5. Great Resource on SQL Server Wait Types

 - http://sqlserverperformance.wordpress.com/2009/12/21/great-resource-on-sql-server-wait-types

6. Tracking Session and Statement Level Waits

 - http://sqlblog.com/blogs/jonathankehayias/archive/2010/12/30/an-xevent-a-day-30-of-31-tracking-session-and-statement-levelwaits.aspx

7. Wait Stats Introductory References

 - http://blogs.msdn.com/b/jimmymay/archive/2009/04/26/wait-stats-introductory-references.aspx

8. Performance Blog

 - http://sqldoctor.idera.com/tag/wait-stats/



posted by LifeisSimple