2014. 10. 15. 17:39 Brain Trainning/DataBase

 

Decrypting MSSQL Database Link Server Passwords

 

Source: https://www.netspi.com/blog/entryid/221/decrypting-mssql-database-link-server-passwords

 

Extracting cleartext credentials from critical systems is always fun. While MSSQL server hashes local SQL credentials in the database, linked server credentials are stored encrypted. And if MSSQL can decrypt them, so can you using the PowerShell script released along with this blog. From the offensive point of view, this is pretty far into post exploitation as sysadmin privileges are needed on the SQL server and local administrator privileges are needed on the Windows server. From the defensive point of view, this is just another reminder that unnecessary database links, database links with excessive privileges, and the use of SQL server authentication rather than integrated authentication can result in unnecessary risk. This blog should be interesting to database hackers and admins interested in learning more.

Linked Servers

Microsoft SQL Server allows users to create links to external data sources, typically to other MSSQL servers. When these links are created, they can be configured to use the current security context or static SQL Server credentials. If SQL Server credentials are used, the user account and password are saved to the database encrypted, and thus stored in a reversible format. A one-way hash cannot be used, because the SQL server has to be able to access the cleartext credentials to authenticate to other servers. So, if the credentials are encrypted and not hashed, there must be a way for the SQL server to decrypt them prior to use. The remainder of this blog will focus on how that happens.
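For context, this is roughly how such a link with stored ("static") SQL credentials gets created in the first place (a minimal sketch; the server name REMOTESQL, the login linkuser, and the password are made-up examples):

```powershell
# Create a linked server, then map a stored SQL login to it.
# @useself = 'FALSE' means "use these static credentials", the case this post targets.
sqlcmd -S "." -E -Q "EXEC sp_addlinkedserver @server = N'REMOTESQL', @srvproduct = N'SQL Server'"
sqlcmd -S "." -E -Q "EXEC sp_addlinkedsrvlogin @rmtsrvname = N'REMOTESQL', @useself = 'FALSE', @locallogin = NULL, @rmtuser = N'linkuser', @rmtpassword = N'Password123!'"
```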

Linked Server Password Storage

MSSQL stores linked server information, including the encrypted password, in the master.sys.syslnklgns table. Specifically, the encrypted password is stored in the "pwdhash" column (even though it's not a hash).

The master.sys.syslnklgns table cannot be accessed using a normal SQL connection; a Dedicated Administrative Connection (DAC) is needed (more information about DAC at http://technet.microsoft.com/en-us/library/ms178068%28v=sql.105%29.aspx). Sysadmin privileges are needed to start a DAC connection, but as local administrator privileges are needed anyway, that shouldn't be a problem. If local administrators don't have sysadmin privileges, you'll just have to impersonate the MSSQL server account or the local SYSTEM account. More details on this can be found on Scott's blog at https://www.netspi.com/blog/entryid/133/sql-server-local-authorization-bypass.
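As a rough illustration, the encrypted values can be pulled over a DAC connection with a few lines of PowerShell (a minimal sketch; the ADMIN: prefix forces a DAC, and the default-instance server name and the pwdhash filter are assumptions to adjust for your environment):

```powershell
# Sketch: read encrypted linked server credentials over a DAC connection
$conn = New-Object System.Data.SqlClient.SqlConnection(
    "Server=ADMIN:localhost; Database=master; Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT name, pwdhash FROM master.sys.syslnklgns WHERE pwdhash IS NOT NULL"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    # pwdhash comes back as varbinary, i.e. a byte array, not a string
    "{0} : {1} bytes of encrypted password" -f $reader["name"], $reader["pwdhash"].Length
}
$conn.Close()
```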

MSSQL Encryption

Time to introduce some MSSQL encryption basics. To move ahead, access to the Service Master Key (SMK) is required (more information about the SMK at http://technet.microsoft.com/en-us/library/ms189060.aspx). According to microsoft.com, "The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is needed to encrypt another key." The SMK is stored in the master.sys.key_encryptions table and can be identified by key_id 102. The SMK is encrypted using the Windows Data Protection API (DPAPI), and there are two versions of it in the database: one encrypted as LocalMachine and the other in the context of CurrentUser (meaning the SQL Server service account here). We'll choose the former to extract the key, as LocalMachine encryption uses the machine key and can be decrypted without impersonating the service account.

Additional entropy is added to strengthen the encryption, but the entropy bytes can be found in the registry at HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\[instancename]\Security\Entropy. Once again, local administrator privileges are needed to access the registry key. The entropy is stored in the registry separately for each MSSQL instance.

After that (and removing some padding / metadata from the encrypted value) we can decrypt the SMK using DPAPI.
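A minimal sketch of that DPAPI step (the instance key name MSSQL11.MSSQLSERVER is an example, and the exact byte offsets for stripping the padding/metadata vary by version and are deliberately omitted):

```powershell
# Sketch: decrypt the Service Master Key with DPAPI in the LocalMachine scope
Add-Type -AssemblyName System.Security

# Entropy bytes are stored per instance; adjust the instance key name for your server
$regPath = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\Security"
[byte[]]$entropy = (Get-ItemProperty $regPath).Entropy

# $encryptedSmk is assumed to hold the key_id 102 / LocalMachine row pulled from
# master.sys.key_encryptions, with the leading padding/metadata already removed
[byte[]]$smk = [System.Security.Cryptography.ProtectedData]::Unprotect(
    $encryptedSmk, $entropy,
    [System.Security.Cryptography.DataProtectionScope]::LocalMachine)
```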

Decrypting Linked Server Passwords

Based on the length of the SMK (or the MSSQL version) we can determine the encryption algorithm: MSSQL 2012 uses AES, while earlier versions use 3DES. In addition, the pwdhash value has to be parsed a bit to find the encrypted password. The first answer at http://stackoverflow.com/questions/2822592/how-to-get-compatibility-between-c-sharp-and-sql2k8-aes-encryption, which references the Pro T-SQL Programmer's Guide, got me on the right track; even though the byte format didn't seem to match exactly what is detailed on the page, it wasn't too hard to find the right bytes to decrypt. So now, using the SMK, it is possible to extract all of the link credentials (when a SQL Server account is used, not Windows authentication) in cleartext.
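In script form the algorithm choice then reduces to something like this (a sketch; the assumed key lengths, 32 bytes for AES and 16 for 3DES, are my reading of the description above, not confirmed):

```powershell
# Pick the cipher from the decrypted SMK: MSSQL 2012 stores an AES key (assumed
# 32 bytes), earlier versions a 3DES key (assumed 16 bytes)
$cipher = if ($smk.Length -eq 32) {
    New-Object System.Security.Cryptography.AesCryptoServiceProvider
} else {
    New-Object System.Security.Cryptography.TripleDESCryptoServiceProvider
}
$cipher.Key = $smk   # the bytes parsed out of pwdhash are then decrypted with this key
```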

Decrypting Linked Server Passwords with PowerShell - Get-MSSQLLinkPasswords.psm1

To automate the decryption of linked server credentials, I wrote a PowerShell script called "Get-MSSQLLinkPasswords.psm1". It can be downloaded from GitHub here:
https://github.com/NetSPI/Powershell-Modules/blob/master/Get-MSSQLLinkPasswords.psm1

The script must be run locally on the MSSQL server (as DPAPI requires access to the local machine key). The user executing the script must also have sysadmin access to all the database instances (for the DAC connection) and local admin privileges on the Windows server (to access the entropy bytes in the registry). In addition, if UAC is enabled, the script must be run as an administrator. Below is a summary of the process used by the script.

  1. Identify all of the MSSQL instances on the server.
  2. Attempt to create a DAC connection to each instance.
  3. Select the encrypted linked server credentials from the "pwdhash" column of the "master.sys.syslnklgns" table for each instance.
  4. Select the encrypted Service Master Key (SMK) from the "master.sys.key_encryptions" table of each instance where the "key_id" column is equal to 102. Select the version that has been encrypted as LocalMachine based on the "thumbprint" column.
  5. Extract the entropy value from the registry location HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\[instancename]\Security\Entropy.
  6. Use the information to decrypt the SMK.
  7. Determine the encryption algorithm (AES or 3DES) used to encrypt the SMK, based on the SQL Server version and the SMK key length.
  8. Use the SMK to decrypt the linked server credentials.
  9. If successful, display the cleartext linked server credentials. Below is an example of the end result:
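A typical run looks something like this (a hypothetical session; the output columns and credential values are illustrative, not the script's exact format):

```powershell
PS C:\> Import-Module .\Get-MSSQLLinkPasswords.psm1
PS C:\> Get-MSSQLLinkPasswords

Instance      Linkserver   User       Password
--------      ----------   ----       --------
MSSQLSERVER   REMOTESQL    linkuser   Password123!
```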

I've tested the script with MSSQL 2005, 2008, 2012, 2008 Express, and 2012 Express. There might be some bugs, but it appears to work reliably. Please let me know if you notice any errors or situations I did not account for.

posted by LifeisSimple
2013. 9. 23. 10:31 Brain Trainning/DataBase

 

Source: http://msdn.microsoft.com/en-us/library/cc645993(v=SQL.110).aspx

Features Supported by the Editions of SQL Server 2012

SQL Server 2012

This topic provides details of the features supported by the different editions of SQL Server 2012.

Note: SQL Server is available in an Evaluation edition for a 180-day trial period. For more information, see the SQL Server Trial Software Web Site.

Note: For features supported by the Evaluation and Developer editions, see the SQL Server Enterprise feature set.

The feature tables below are organized by SQL Server technology:

Cross-Box Scale Limits

High Availability

Scalability and Performance

Security

Replication

Management Tools

RDBMS Manageability

Development Tools

Programmability

Integration Services

Integration Services-Advanced Adapters

Integration Services-Advanced Transforms

Master Data Services

Data Warehouse

Analysis Services

BI Semantic Model (Multidimensional)

BI Semantic Model (Tabular)

PowerPivot for SharePoint

Data Mining

Reporting Services

Business Intelligence Clients

Spatial and Location Services

Additional Database Services

Other Components

Cross-Box Scale Limits

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Maximum compute capacity used by a single instance (SQL Server Database Engine) 1 | Operating system maximum | Lesser of 4 sockets or 16 cores | Lesser of 4 sockets or 16 cores | Lesser of 4 sockets or 16 cores | Lesser of 1 socket or 4 cores | Lesser of 1 socket or 4 cores | Lesser of 1 socket or 4 cores |
| Maximum compute capacity used by a single instance (Analysis Services, Reporting Services) 1 | Operating system maximum | Operating system maximum | Lesser of 4 sockets or 16 cores | Lesser of 4 sockets or 16 cores | Lesser of 1 socket or 4 cores | Lesser of 1 socket or 4 cores | Lesser of 1 socket or 4 cores |
| Maximum memory utilized (per instance of SQL Server Database Engine) | Operating system maximum | 64 GB | 64 GB | 64 GB | 1 GB | 1 GB | 1 GB |
| Maximum memory utilized (per instance of Analysis Services) | Operating system maximum | Operating system maximum | 64 GB | N/A | N/A | N/A | N/A |
| Maximum memory utilized (per instance of Reporting Services) | Operating system maximum | Operating system maximum | 64 GB | 64 GB | 4 GB | N/A | N/A |
| Maximum relational database size | 524 PB | 524 PB | 524 PB | 524 PB | 10 GB | 10 GB | 10 GB |

1 Enterprise Edition with Server + Client Access License (CAL) based licensing is limited to a maximum of 20 cores per SQL Server instance. For more information, see Compute Capacity Limits by Edition of SQL Server.

High Availability

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Server Core support 1 | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Log Shipping | Yes | Yes | Yes | Yes | | | |
| Database mirroring | Yes | Yes (Safety Full only) | Yes (Safety Full only) | Witness only | Witness only | Witness only | Witness only |
| AlwaysOn Failover Cluster Instances | Yes (node support: operating system maximum) | Yes (node support: 2) | Yes (node support: 2) | | | | |
| Backup compression | Yes | Yes | Yes | | | | |
| Database snapshot | Yes | | | | | | |
| AlwaysOn Availability Groups | Yes | | | | | | |
| Connection Director | Yes | | | | | | |
| Online page and file restore | Yes | | | | | | |
| Online indexing | Yes | | | | | | |
| Online schema change | Yes | | | | | | |
| Fast recovery | Yes | | | | | | |
| Mirrored backups | Yes | | | | | | |
| Hot Add Memory and CPU 2 | Yes | | | | | | |
| Database Recovery Advisor | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

1 For more information on installing SQL Server 2012 on Server Core, see Install SQL Server 2012 on Server Core.

2 This feature is only available for 64-bit SQL Server.

Scalability and Performance

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Multi-instance support | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
| Table and index partitioning | Yes | | | | | | |
| Data compression | Yes | | | | | | |
| Resource Governor | Yes | | | | | | |
| Partition Table Parallelism | Yes | | | | | | |
| Multiple Filestream containers | Yes | | | | | | |

Security

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Basic Auditing | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Fine Grained Auditing | Yes | | | | | | |
| Transparent database encryption | Yes | | | | | | |
| Extensible Key Management | Yes | | | | | | |
| User-Defined Roles | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Contained Databases | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

Replication

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SQL Server change tracking | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Merge replication | Yes | Yes | Yes | Yes (Subscriber only) | Yes (Subscriber only) | Yes (Subscriber only) | Yes (Subscriber only) |
| Transactional replication | Yes | Yes | Yes | Yes (Subscriber only) | Yes (Subscriber only) | Yes (Subscriber only) | Yes (Subscriber only) |
| Snapshot replication | Yes | Yes | Yes | Yes (Subscriber only) | Yes (Subscriber only) | Yes (Subscriber only) | Yes (Subscriber only) |
| Heterogeneous subscribers | Yes | Yes | Yes | | | | |
| Oracle publishing | Yes | | | | | | |
| Peer to Peer transactional replication | Yes | | | | | | |

Management Tools

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SQL Management Objects (SMO) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SQL Configuration Manager | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SQL CMD (Command Prompt tool) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SQL Server Management Studio | Yes | Yes | Yes | Yes | Yes | Yes | |
| Distributed Replay – Admin Tool | Yes | Yes | Yes | Yes | Yes | Yes | |
| Distributed Replay – Client | Yes | Yes | Yes | Yes | | | |
| Distributed Replay – Controller | Yes (Enterprise supports up to 16 clients, Developer supports only 1 client) | Yes (1 client support only) | Yes (1 client support only) | Yes (1 client support only) | | | |
| SQL Profiler | Yes | Yes | Yes | No 2 | No 2 | No 2 | No 2 |
| SQL Server Agent | Yes | Yes | Yes | Yes | | | |
| Microsoft System Center Operations Manager Management Pack | Yes | Yes | Yes | Yes | | | |
| Database Tuning Advisor (DTA) | Yes | Yes | Yes 3 | Yes 3 | | | |

2 SQL Server Web, SQL Server Express, SQL Server Express with Tools, and SQL Server Express with Advanced Services can be profiled using SQL Server Standard and SQL Server Enterprise editions.

3 Tuning enabled only on Standard edition features.

RDBMS Manageability

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| User Instances | | | | | Yes | Yes | Yes |
| LocalDB | | | | | Yes | Yes | |
| Dedicated admin connection | Yes | Yes | Yes | Yes | Yes (under trace flag) | Yes (under trace flag) | Yes (under trace flag) |
| PowerShell scripting support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SysPrep support 1 | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Support for data-tier application component operations – extract, deploy, upgrade, delete | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Policy automation (check on schedule and change) | Yes | Yes | Yes | Yes | | | |
| Performance data collector | Yes | Yes | Yes | Yes | | | |
| Able to enroll as a managed instance in multi-instance management | Yes | Yes | Yes | Yes | | | |
| Standard performance reports | Yes | Yes | Yes | Yes | | | |
| Plan guides and plan freezing for plan guides | Yes | Yes | Yes | Yes | | | |
| Direct query of indexed views (using NOEXPAND hint) | Yes | Yes | Yes | Yes | | | |
| Automatic indexed view maintenance | Yes | Yes | Yes | Yes | | | |
| Distributed partitioned views | Yes | Partial (not updatable) | Partial (not updatable) | Partial (not updatable) | Partial (not updatable) | Partial (not updatable) | Partial (not updatable) |
| Parallel indexed operations | Yes | | | | | | |
| Automatic use of indexed view by query optimizer | Yes | | | | | | |
| Parallel consistency check | Yes | | | | | | |
| SQL Server Utility control point | Yes | | | | | | |
| Contained Databases | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

1 SysPrep is supported for stand-alone instances of Database Engine and Reporting Services. For more information, see Considerations for Installing SQL Server Using SysPrep.

Development Tools

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Microsoft Visual Studio Integration | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| IntelliSense (Transact-SQL and MDX) | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SQL Server Data Tools (SSDT) | Yes | Yes | Yes | Yes | Yes | | |
| SQL query edit and design tools 1 | Yes | Yes | Yes | | | | |
| Version control support 1 | Yes | Yes | Yes | | | | |
| MDX edit, debug, and design tools 1 | Yes | Yes | Yes | | | | |

1 This feature is not available for the 64-bit version of Standard edition.

Programmability

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Common Language Runtime (CLR) Integration | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Native XML support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| XML indexing | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| MERGE & UPSERT capabilities | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| FILESTREAM support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| FileTable | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Date and Time datatypes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Internationalization support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Full-text and semantic search | Yes | Yes | Yes | Yes | Yes | | |
| Specification of language in query | Yes | Yes | Yes | Yes | Yes | | |
| Service Broker (messaging) | Yes | Yes | Yes | No (client only) | No (client only) | No (client only) | No (client only) |
| Web services (HTTP/SOAP endpoints) | Yes | Yes | Yes | Yes | | | |
| TSQL endpoints | Yes | Yes | Yes | Yes | | | |

Integration Services

| Feature | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SQL Server Import and Export Wizard | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Built-in data source connectors | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SSIS designer and runtime | Yes | Yes | Yes | | | | |
| Basic transforms | Yes | Yes | Yes | | | | |
| Basic data profiling tools | Yes | Yes | Yes | | | | |
| Change Data Capture Service for Oracle by Attunity | Yes | | | | | | |
| Change Data Capture Designer for Oracle by Attunity | Yes | | | | | | |

Integration Services - Advanced Adapters

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| High performance Oracle destination | Yes | | | | | | |
| High performance Teradata destination | Yes | | | | | | |
| SAP BW source and destination | Yes | | | | | | |
| Data mining model training destination adapter | Yes | | | | | | |
| Dimension processing destination adapter | Yes | | | | | | |
| Partition processing destination adapter | Yes | | | | | | |
| Change Data Capture components by Attunity | Yes | | | | | | |
| Connector for Open Database Connectivity (ODBC) by Attunity | Yes | | | | | | |

Integration Services - Advanced Transforms

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Persistent (high performance) lookups | Yes | | | | | | |
| Data mining query transformation | Yes | | | | | | |
| Fuzzy grouping and lookup transformations | Yes | | | | | | |
| Term extraction and lookup transformations | Yes | | | | | | |

Master Data Services

Note: Master Data Services is available on the 64-bit editions of Business Intelligence and Enterprise only.

| Feature | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Master Data Services database | Yes | Yes | | | | | |
| Master Data Manager web application | Yes | Yes | | | | | |

Data Warehouse

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Create cubes without a database | Yes | Yes | Yes | | | | |
| Auto-generate staging and data warehouse schema | Yes | Yes | Yes | | | | |
| Change data capture | Yes | | | | | | |
| Star join query optimizations | Yes | | | | | | |
| Scalable read-only Analysis Services configuration | Yes | | | | | | |
| Parallel query processing on partitioned tables and indices | Yes | | | | | | |
| xVelocity memory optimized columnstore indexes | Yes | | | | | | |

Analysis Services

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Scalable Shared Databases (attach/detach, read-only databases) | Yes | Yes | | | | | |
| Backup/restore, attach/detach databases | Yes | Yes | Yes | Yes | | | |
| Synchronize databases | Yes | Yes | | | | | |
| High Availability | Yes | Yes | Yes | | | | |
| Programmability (AMO, ADOMD.Net, OLEDB, XML/A, ASSL) | Yes | Yes | Yes | | | | |

BI Semantic Model (Multidimensional)

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Semi-additive Measures | Yes | Yes | No 1 | | | | |
| Hierarchies | Yes | Yes | Yes | | | | |
| KPIs | Yes | Yes | Yes | | | | |
| Perspectives | Yes | Yes | | | | | |
| Actions | Yes | Yes | Yes | | | | |
| Account intelligence | Yes | Yes | Yes | | | | |
| Time Intelligence | Yes | Yes | Yes | | | | |
| Custom rollups | Yes | Yes | Yes | | | | |
| Writeback cube | Yes | Yes | Yes | | | | |
| Writeback dimensions | Yes | Yes | | | | | |
| Writeback cells | Yes | Yes | Yes | | | | |
| Drillthrough | Yes | Yes | Yes | | | | |
| Advanced hierarchy types (parent-child, ragged hierarchies) | Yes | Yes | Yes | | | | |
| Advanced dimensions (reference dimensions, many-to-many dimensions) | Yes | Yes | Yes | | | | |
| Linked measures and dimensions | Yes | Yes | | | | | |
| Translations | Yes | Yes | Yes | | | | |
| Aggregations | Yes | Yes | Yes | | | | |
| Multiple Partitions | Yes | Yes | Yes, up to 3 | | | | |
| Proactive Caching | Yes | Yes | | | | | |
| Custom Assemblies (stored procs) | Yes | Yes | Yes | | | | |
| MDX queries and scripts | Yes | Yes | Yes | | | | |
| Role-based security model | Yes | Yes | Yes | | | | |
| Dimension and Cell-level Security | Yes | Yes | Yes | | | | |
| Scalable string storage | Yes | Yes | Yes | | | | |
| MOLAP, ROLAP, HOLAP storage modes | Yes | Yes | Yes | | | | |
| Binary and compressed XML transport | Yes | Yes | Yes | | | | |
| Push-mode processing | Yes | Yes | | | | | |
| Measure Expressions | Yes | Yes | | | | | |

1 The LastChild semi-additive measure is supported in Standard edition, but other semi-additive measures, such as None, FirstChild, FirstNonEmpty, LastNonEmpty, AverageOfChildren, and ByAccount, are not. Additive measures, such as Sum, Count, Min, and Max, and non-additive measures (DistinctCount) are supported on all editions.

BI Semantic Model (Tabular)

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hierarchies | Yes | Yes | | | | | |
| KPIs | Yes | Yes | | | | | |
| Semi-additive Measures | Yes | Yes | | | | | |
| Perspectives | Yes | Yes | | | | | |
| DAX calculations, DAX queries, MDX queries | Yes | Yes | | | | | |
| Row-level Security | Yes | Yes | | | | | |
| Partitions | Yes | Yes | | | | | |
| In-Memory and DirectQuery storage modes (Tabular only) | Yes | Yes | | | | | |

PowerPivot for SharePoint

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SharePoint farm integration based on shared service architecture | Yes | Yes | | | | | |
| Usage reporting | Yes | Yes | | | | | |
| Health monitoring rules | Yes | Yes | | | | | |
| PowerPivot Gallery | Yes | Yes | | | | | |
| PowerPivot Data Refresh | Yes | Yes | | | | | |
| PowerPivot Data Feeds | Yes | Yes | | | | | |

Data Mining

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard Algorithms | Yes | Yes | Yes | | | | |
| Data Mining Tools (wizards, editors, query builders) | Yes | Yes | Yes | | | | |
| Cross Validation | Yes | Yes | | | | | |
| Models on Filtered Subsets of Mining Structure Data | Yes | Yes | | | | | |
| Time Series: Custom Blending Between ARTXP and ARIMA Methods | Yes | Yes | | | | | |
| Time Series: Prediction with New Data | Yes | Yes | | | | | |
| Unlimited Concurrent DM Queries | Yes | Yes | | | | | |
| Advanced Configuration & Tuning Options for Data Mining Algorithms | Yes | Yes | | | | | |
| Support for plug-in algorithms | Yes | Yes | | | | | |
| Parallel Model Processing | Yes | Yes | | | | | |
| Time Series: Cross-Series Prediction | Yes | Yes | | | | | |
| Unlimited attributes for Association Rules | Yes | Yes | | | | | |
| Sequence Prediction | Yes | Yes | | | | | |
| Multiple Prediction Targets for Naïve Bayes, Neural Network and Logistic Regression | Yes | Yes | | | | | |

 

Reporting Services Features

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Supported catalog DB SQL Server edition | Standard or higher | Standard or higher | Standard or higher | Web | Express | | |
| Supported data source SQL Server edition | All SQL Server editions | All SQL Server editions | All SQL Server editions | Web | Express | | |
| Report server | Yes | Yes | Yes | Yes | Yes | | |
| Report Designer | Yes | Yes | Yes | Yes | Yes | | |
| Report Manager | Yes | Yes | Yes | Yes | Yes | | |
| Role based security | Yes | Yes | Yes | Yes | Yes | | |
| Word Export and Rich Text Support 1 | Yes | Yes | Yes | Yes | Yes | | |
| Enhanced gauges and charting | Yes | Yes | Yes | Yes | Yes | | |
| Export to Excel, PDF, Images, and Atom compliant data feeds 2 | Yes | Yes | Yes | Yes | Yes | | |
| Client-side printing using the RSClientPrint control | Yes | Yes | Yes | Yes | Yes | | |
| Rendering to interactive HTML | Yes | Yes | Yes | Yes | Yes | | |
| Custom authentication | Yes | Yes | Yes | Yes | Yes | | |
| Report as data feeds | Yes | Yes | Yes | Yes | Yes | | |
| Model support | Yes | Yes | Yes | Yes | | | |
| Create custom roles for role-based security | Yes | Yes | Yes | | | | |
| Model Item security | Yes | Yes | Yes | | | | |
| Infinite click through | Yes | Yes | Yes | | | | |
| Shared component library | Yes | Yes | Yes | | | | |
| Email and file share subscriptions and scheduling | Yes | Yes | Yes | | | | |
| Report history, execution snapshots and caching | Yes | Yes | Yes | | | | |
| SharePoint Integration | Yes | Yes | Yes | | | | |
| Remote and non-SQL data source support 3 | Yes | Yes | Yes | | | | |
| Data source, delivery and rendering, RDCE extensibility | Yes | Yes | Yes | | | | |
| Data driven report subscription | Yes | Yes | | | | | |
| Scale out deployment (Web farms) | Yes | Yes | | | | | |
| Alerting 4 | Yes | Yes | | | | | |
| Power View 4 | Yes | Yes | | | | | |

1 Exporting to XML is not supported in SQL Server Express with Advanced Services.

2 Exporting to images and CSV is not supported in SQL Server Express with Advanced Services.

3 For more information on the supported data sources in SQL Server 2012 Reporting Services (SSRS), see Data Sources Supported by Reporting Services (SSRS).

4 Requires Reporting Services in SharePoint mode. For more information, see Installing Reporting Services SharePoint Mode Report Server for Power View and Data Alerting.

Report Server Database Server Edition Requirements

When creating a report server database, not all editions of SQL Server can be used to host the database. The following table shows you which editions of the Database Engine you can use for specific editions of Reporting Services.

| For this edition of SQL Server Reporting Services | Use this edition of the Database Engine instance to host the database |
| --- | --- |
| Enterprise | Standard, Business Intelligence, Enterprise editions (local or remote) |
| Business Intelligence | Standard, Business Intelligence, Enterprise editions (local or remote) |
| Standard | Standard, Enterprise editions (local or remote) |
| Web | Web edition (local only) |
| Express with Advanced Services | Express with Advanced Services (local only) |
| Evaluation | Evaluation |

The following software client applications are available on the Microsoft Downloads center and are provided to assist you with creating business intelligence documents that run on a SQL Server instance. When you host these documents in a server environment, use an edition of SQL Server that is supported for that document type. The following table identifies which SQL Server edition contains the server features required to host the documents created in these client applications.

Business Intelligence Clients

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Report Builder | Yes | Yes | Yes | | | | |
| Data Mining Add-ins for Excel and Visio 2010 | Yes | Yes | Yes | | | | |
| PowerPivot for Excel 2010 | Yes | Yes | | | | | |
| Master Data Services Add-in for Excel | Yes | Yes | | | | | |

Notes:

  1. PowerPivot for Excel is an Excel add-in and does not depend on SQL Server. However, PowerPivot for SharePoint is required for sharing and collaborating with PowerPivot workbooks in SharePoint, and this capability is available as part of the SQL Server Enterprise and Business Intelligence editions.

  2. The above table identifies the SQL Server editions that are required to enable these client tools; however, these features can access data hosted on any edition of SQL Server.

Spatial and Location Services

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Spatial indexes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Planar and Geodetic datatypes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Advanced spatial libraries | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Import/export of industry-standard spatial data formats | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

Additional Database Services

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SQL Server Migration Assistant | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Database mail | Yes | Yes | Yes | Yes | | | |

Other Components

| Feature Name | Enterprise | Business Intelligence | Standard | Web | Express with Advanced Services | Express with Tools | Express |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Data Quality Services | Yes | Yes | | | | | |
| StreamInsight | StreamInsight Premium Edition | StreamInsight Standard Edition | StreamInsight Standard Edition | StreamInsight Standard Edition | | | |
| StreamInsight HA | StreamInsight Premium Edition | | | | | | |

posted by LifeisSimple
2013. 8. 5. 11:30 Brain Trainning/DataBase

 

Source: http://msdn.microsoft.com/ko-kr/library/ms187997(v=sql.90).aspx

Mapping SQL Server 2000 System Tables to SQL Server 2005 System Views

SQL Server 2005

This topic shows the mappings between SQL Server 2000 system tables and functions and SQL Server 2005 system views and functions.

The following table maps the system tables in the SQL Server 2000 master database to their corresponding system views or functions in SQL Server 2005.

| System table | System view or function | View or function type |
| --- | --- | --- |
| sysaltfiles | sys.master_files | Catalog view |
| syscacheobjects | sys.dm_exec_cached_plans, sys.dm_exec_plan_attributes, sys.dm_exec_sql_text, sys.dm_exec_cached_plan_dependent_objects | Dynamic management views |
| syscharsets | sys.syscharsets | Compatibility view |
| sysconfigures | sys.configurations | Catalog view |
| syscurconfigs | sys.configurations | Catalog view |
| sysdatabases | sys.databases | Catalog view |
| sysdevices | sys.backup_devices | Catalog view |
| syslanguages | sys.syslanguages | Compatibility view |
| syslockinfo | sys.dm_tran_locks | Dynamic management view |
| syslocks | sys.dm_tran_locks | Dynamic management view |
| syslogins | sys.server_principals, sys.sql_logins | Catalog views |
| sysmessages | sys.messages | Catalog view |
| sysoledbusers | sys.linked_logins | Catalog view |
| sysopentapes | sys.dm_io_backup_tapes | Dynamic management view |
| sysperfinfo | sys.dm_os_performance_counters | Dynamic management view |
| sysprocesses | sys.dm_exec_connections, sys.dm_exec_sessions, sys.dm_exec_requests | Dynamic management views |
| sysremotelogins | sys.remote_logins | Catalog view |
| sysservers | sys.servers | Catalog view |

The following table maps the system tables and functions that exist in every SQL Server 2000 database to their corresponding system views or functions in SQL Server 2005.

| System table or function | System view or function | View or function type |
| --- | --- | --- |
| fn_virtualfilestats | sys.dm_io_virtual_file_stats | Dynamic management view |
| syscolumns | sys.columns | Catalog view |
| syscomments | sys.sql_modules | Catalog view |
| sysconstraints | sys.check_constraints, sys.default_constraints, sys.key_constraints, sys.foreign_keys | Catalog views |
| sysdepends | sys.sql_dependencies | Catalog view |
| sysfilegroups | sys.filegroups | Catalog view |
| sysfiles | sys.database_files | Catalog view |
| sysforeignkeys | sys.foreign_key_columns | Catalog view |
| sysindexes | sys.indexes, sys.partitions, sys.allocation_units, sys.dm_db_partition_stats | Catalog views (sys.dm_db_partition_stats is a dynamic management view) |
| sysindexkeys | sys.index_columns | Catalog view |
| sysmembers | sys.database_role_members | Catalog view |
| sysobjects | sys.objects | Catalog view |
| syspermissions | sys.database_permissions, sys.server_permissions | Catalog views |
| sysprotects | sys.database_permissions, sys.server_permissions | Catalog views |
| sysreferences | sys.foreign_keys | Catalog view |
| systypes | sys.types | Catalog view |
| sysusers | sys.database_principals | Catalog view |
| sysfulltextcatalogs | sys.fulltext_catalogs | Catalog view |
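For example, a session list that used to come from sysprocesses is served by the new DMVs; both queries still run on SQL Server 2005 (a minimal sketch through sqlcmd; the local default instance and Windows authentication are assumptions):

```powershell
# SQL Server 2000 style (kept only for backward compatibility):
sqlcmd -S "." -E -Q "SELECT spid, status, loginame FROM master..sysprocesses"

# SQL Server 2005 equivalent via the new dynamic management views:
sqlcmd -S "." -E -Q "SELECT session_id, status, login_name FROM sys.dm_exec_sessions"
```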

posted by LifeisSimple
2013. 7. 10. 15:51 Brain Trainning/Server

On Windows 8 or 2012


 

With each new release of Windows, it's a challenge to balance keeping the OS footprint small with maintaining forward compatibility for applications and devices. The .Net Framework is at the heart of this challenge. In Windows 8, Microsoft decided to take a new approach to the way that such features are installed. In this post, I will describe the new approach, how the changes might affect you, and how you can prepare for them to ensure a smooth transition to Windows 8.

 

About the .Net Framework

The .Net Framework provides a class library and a Common Language Runtime (CLR) that allow software developers to create rich, secure applications. Additionally, the Framework provides the functionality required for such applications to run. A program developed using a particular version of the Framework typically requires that version, or a compatible one, to be installed on computers where it will run.

Windows 7 and Windows Server 2008 R2 included .Net Framework 3.5. Additionally, many applications have been written using .Net 4.0, so that version is often installed using a redistributable package from Microsoft. Windows 8 and Windows Server 2012 include .Net 4.5, which supports building and running the next generation of applications and web services, including Metro-style apps. .Net 4.5 supports applications written for 4.0, so there is no need to install .Net 4.0 on Windows 8.

Features on Demand (FoD)

"Features on Demand (FoD)" is a new concept in Windows 8 that allows administrators and image builders to reduce the amount of space used by the component store by adding only the payload for optional components they need to a system image. "Payload" refers to the binaries and other files associated with a feature. Features on Demand also allows for the addition of roles and features to an image at any time they are needed .

In Windows 8, .Net Framework 3.5 is now a Feature on Demand. And to simplify the installation of common legacy versions of the .Net Framework, .Net 3.0 and 2.0 have been included in the same feature package as 3.5. That means if any of those three versions needs to be installed, all the administrator needs to do is enable the single .Net Framework 3.5 feature in Windows 8.

While we encourage developers to create or upgrade their applications using .Net 4.5, we realize that many commonly-used apps exist that depend on older versions of .Net, and that it takes time for developers to upgrade their code and for customers to upgrade to the new apps. For these reasons we have provided a variety of methods by which customers can enable the legacy versions of .Net in Windows 8.

Installation Sources

The .Net Framework 3.5 payload can be obtained from any of the following sources:

· Windows Update (WU)

· A Windows Image file (.wim) to which the payload has been added

· The \sources\sxs folder on the installation media

There are unique advantages to using each. The source can be specified for the environment using a new Group Policy setting. It can also be specified when installing .Net 3.5 manually on an individual machine or image.

The simplest scenario is one in which WU is accessible to both the machine and the user, and the machine is not configured to obtain updates from Windows Server Update Services (WSUS). In this case, when the feature is enabled, the user will be prompted for permission to download the update. If permitted, Windows will download the payload directly from Windows Update and install the feature. Done!

In more controlled environments, administrators might want to redirect such download requests to an alternate source, such as a Windows Image file (.wim) to which the payload was added, or the \sources\sxs folder from the installation media. There might also be network, proxy, or security configurations that prevent users from directly accessing Windows Update. Additionally, WSUS does not currently support the payloads for Features on Demand, although it does support the subsequent patching of the features. So in environments where machines are configured to obtain updates from WSUS, administrators will need to configure the source for initial FoD installations.

To allow administrators to manage these scenarios, a new Group Policy setting was introduced in Windows 8 / Windows Server 2012: "Specify settings for optional component installation and component repair", located under Computer Configuration\Administrative Templates\System.


This policy allows the administrator to configure the installation of Features on Demand and feature store repair operations to use only authorized locations.

When this policy is enabled, a network location (for example, a file server) can be specified both for repair of the feature store and for enabling features whose payloads were not originally added. The alternate source file path can point to a \sources\sxs folder or a Windows image (WIM) file using the WIM: prefix. The repair WIM can be different from the initial WIM file used for installation. You can specify multiple paths by using ";" between the paths. Valid syntax is "wim:<path to wim>:<index>" or "<path to sxs folder>".

Examples:

\\server\Win8Media\sources\sxs

wim:\\server\sourcewim\install.wim:3

If you select Never attempt to download payload from Windows Update, WU is not contacted during an installation or repair operation.

If you select Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS), attempts to add features (for example, .NET Framework 3.5) or repair the feature file store use Windows Update to download files. Target computers require Internet and WU access for this option. Normal servicing operations continue to use WSUS if it has been configured as a source.

The advantage of a WIM file is that it can be kept current with updates. This may be important because files associated with features that have post-RTM updates will be repaired to their current versions if the specified source contains the current updates. It is important to note, however, that the initial installation of FoDs from an updated WIM will still use RTM versions of files, with subsequent updates being applied using standard patching sources such as WU or WSUS. I'm not going to cover how to create and maintain a WIM file here. For more information on how to do that, see Deployment Image Servicing and Management (DISM) Technical Reference.

The \sources\sxs folder on the installation media can provide a quick and easy local source for initial installations. In addition to referencing the sxs folder in the policy setting above, you can specify it as the source for manual installation on individual machines. To make the source files available on the internal network, use the XCOPY command to copy the source files to a network share, and then reference the share as a mapped drive or a UNC path where applicable:

xcopy d:\sources\sxs\*.* x:\Win8Media\sources\sxs\ /s

Installation Methods

Now that we have covered the various sources, let's talk about how to enable the feature. There are three basic ways .Net 3.5 can be enabled, in order of preference: proactively, manually, or automatically.

By far, the most hassle-free way to install any FoD to more than a very few machines is to add the payload and enable the feature on images you will be deploying in your environment. Couple that with configuring the Group Policy setting mentioned earlier to point to the right source for one-off scenarios, and about the only administrative work you will have beyond that will be maintaining an updated WIM file for feature repair. Again, I am not going into details here about how to create images for broad deployment. For more information on how to do that see Deployment Image Servicing and Management (DISM) Technical Reference.

The simplest way to enable .Net 3.5 on a small number of Windows 8 client machine is through Turn Windows features on or off:

1. On the Start Screen begin typing “turn windows features on or off”, select Settings in the search pane, and click on Turn Windows features on or off.

2. Check the box next to .Net Framework 3.5 (includes .NET 2.0 and 3.0)

3. The wizard will search for required files and then prompt you to download files from Windows Update.

4. Select Download files from Windows Update.


5. After the wizard completes, click Finish.

The process on Windows Server 2012 is similar but is accomplished using the Add Roles and Features Wizard:

1. In Server Manager, click Manage, then select Add Roles and Features.

2. Click Next at the Before you begin screen.

3. At the Select installation type screen, select Role-based or feature-based installation and click Next.

4. On the Select destination server screen, select the target server and click Next.

5. On the Select server roles screen, click Next.

6. On the Select features screen, check the box next to .Net Framework 3.5 Features and click Next.

7. On the Confirm installation selections screen, a warning will be displayed asking "Do you need to specify an alternate source path?". If the target machine does not have access to Windows Update, click the Specify an alternate source path link, specify the path to the \sources\sxs folder, and click OK.

8. After you have specified the alternate source, or if the target machine has access to Windows Update, click Install.

Note that the Add Roles and Features Wizard gives the administrator the option to specify an alternate source, whereas the Turn Windows features on or off dialog does not. In most environments, administrators will deploy standard images of client machines that include the necessary features, or they will configure Group Policy to specify the installation source for Features on Demand.

Administrators can also use Deployment Image Servicing and Management (DISM) or Powershell cmdlets to enable the feature. DISM can be used to install the feature on individual computers, or to install it on an image that will be deployed to multiple computers.

Here are some sample DISM commands to enable and get the status of the .Net Framework 3.5 feature:

Dism /online /get-featureinfo /featurename:NetFx3

Dism /online /enable-feature /featurename:NetFx3 /All

Dism /online /enable-feature /featurename:NetFx3 /All /LimitAccess /Source:x:\sources\sxs

Use /All to enable all parent features of the specified feature

Use /LimitAccess to prevent DISM from contacting WU/WSUS

Use /Source to specify the location of the files needed to restore the feature.

(x: is the drive letter of the installation media or mapped network share that contains a copy of the installation files)

To install using Powershell, use Server Manager cmdlets on servers to install features. (It should be noted that the Get-WindowsFeature cmdlet may not show a feature as installed unless all of its parent features are also installed):

Get-WindowsFeature –name NET-Framework-Core

Install-WindowsFeature –name NET-Framework-Core

Install-WindowsFeature –name NET-Framework-Core –source x:\sources\sxs

Similar DISM PowerShell cmdlets can be used on either client or server, but note again that the Server Manager cmdlets above are recommended on servers:

Get-WindowsOptionalFeature –Online –FeatureName NetFx3

Enable-WindowsOptionalFeature –Online –FeatureName NetFx3 –All

Enable-WindowsOptionalFeature –Online –FeatureName NetFx3 –All -LimitAccess -Source x:\sources\sxs

Windows features that require .Net 3.5

As of this writing, the only Windows feature I am aware of that requires .Net 3.5 (beyond those that are specific to .Net 3.5 and ASP.Net 3.5) is PowerShell 2.0. If you need it for script compatibility, you will need to install .Net 3.5 to get PowerShell 2.0 functionality. On the Windows 8 client, Windows PowerShell 2.0 is already enabled and you simply need to enable .Net 3.5 to use it. On the server, you will need to install the Windows PowerShell 2.0 Engine, which will prompt you to also install .Net 3.5.
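For example, once .Net 3.5 is enabled, the legacy engine can be started explicitly:

```powershell
# Runs a command under the PowerShell 2.0 engine and prints the engine version
powershell.exe -Version 2 -Command { $PSVersionTable.PSVersion }
```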

Applications

The automatic installation trigger I alluded to earlier requires a discussion about installing applications.

As mentioned before, many applications rely on older versions of the .Net Framework. In fact, some current versions of Microsoft products, including Microsoft SQL Server and some System Center products, use older versions.

Many applications perform checks during their installation to verify that the required version of the .Net Framework is installed. If not, they install the necessary version, often using a redistributable package available from Microsoft. However, the methods that application installers use to verify and install .Net can vary. The experience of installing apps in Windows 8 might have unexpected results because of how external attempts to install .Net are now intercepted and handled by Windows.

Installation of the new .Net Framework 3.5 FoD package can be triggered automatically in Windows 8 in the following scenarios:

· You attempt to install .Net 2.0, 3.0, or 3.5 using a redistributable package available for download from Microsoft.

· An application attempts to install one of the redistributable packages for a required version during its own installation process.

· An application that requires a legacy version executes without the required version preinstalled.

In each of these cases, an application shim in Windows 8 intercepts the attempt and invokes the installation of the new .Net 3.5 feature. Once triggered, the installation should proceed as if it was initiated from the UI, DISM, or Powershell.

The classic "Your mileage may vary" disclaimer applies here because we really don't know how all applications will react to the shim intercepting the installation attempt. Also, some apps look for the existence of certain files to verify the installation of the desired .Net version. Such app installations may fail if .Net 3.5 was preinstalled because some files previously present on the older versions have been deprecated in Windows 8. We are however testing many frequently used apps, and in some cases introducing pre-app shims if they are detecting .NET inappropriately.

To avoid problems with applications that need it, it is best to enable the new .Net 3.5 feature before installing your app.

Recommendations

As with anything else, a little planning can go a long way. Here are the steps I recommend you consider in order of importance:

A. Understand and evaluate the requirements of your applications. Don't just assume that every computer will require .Net 3.5 and enable it on all of your images. Doing this will defeat the potential benefits of having the payload removed. By the same token, if you deploy Windows 8 with the intention of dealing with the users and computers that need .Net 3.5 as they come up, you are setting yourself and your helpdesk up for a lot of unnecessary hassle. Install all of your common LOB and infrastructure apps one at a time in a test environment without .Net 3.5 and observe whether they attempt to install an older version of the Framework, and in general how they function. Then enable .Net 3.5 and make sure all apps function as expected.

B. Include .Net 3.5 in images that will be deployed to users that will likely be running apps that require it.

C. Copy the files from the installation media to a network share so they can be referenced by manual installation attempts and, optionally, by the new Group Policy setting.

D. Regardless of whether your environment includes WSUS servers, treat configuring the new Group Policy setting as a requirement. You should determine how you want to handle attempts to download not only .Net 3.5, but also other Features on Demand. Additionally, as the name implies, the policy also tells Windows where to obtain files to repair the feature store.

E. Create and maintain a Windows Image that can be referenced by the Group Policy setting for ongoing FoD and feature store repair downloads.

F. Experiment with the DISM Powershell cmdlets to be prepared to deploy Features on Demand remotely to machines as needed.

Misc. Considerations

Upgrading from Windows 7 / Windows Server 2008 R2

When a PC running Windows 7 (which includes .NET Framework 3.5.1 by default) is upgraded to Windows 8, or when a server running Windows Server 2008 R2 (which has the .NET Framework 3.5.1 feature installed) is upgraded to Windows Server 2012, .NET Framework 3.5 is enabled automatically using the files on the installation media. The purpose of this is to increase the chances that all apps will work as they did before the upgrade.

Multilingual images

For images that will support more than one language, it is best to add the .NET Framework 3.5 binaries before adding any language packs. This order ensures that .NET Framework 3.5 language resources are installed correctly within the reference image and available to users and applications.

Common Problems

Here, I've included three issues that you might run into when installing .Net 3.5. The hex error code is the most recognizable part and uniquely identifies each error. I've also included variations on text that might accompany the error code to increase the discoverability of this post. The text will vary, depending on whether it occurs on Windows 8 or Windows Server 2012, whether you tried to install it using the UI, DISM, or PowerShell, and on the build (Developer Preview, Consumer Preview, Release Preview, etc.):

0x800F0906 - CBS_E_DOWNLOAD_FAILURE

Windows couldn't complete the requested changes.

Windows couldn't connect to the Internet to download necessary files. Make sure you are connected to the Internet, and press 'Retry' to try again.

Windows couldn't connect to the Internet to download necessary files. Make sure you're connected to the Internet, and press 'Retry' to try again.

Error code: 0x800F0906

 

The request to add or remove features on the specified server failed.

Installation of one or more roles, roles services, or features failed.

The source files could not be downloaded.

The source files could not be found.

 

Use the /source option to specify the location of the files that are required to restore the feature. The file location should be either the root directory of a mounted image or a component store that has the Windows Side-by-Side directory as an immediate subfolder.

 

Use the "source" option to specify the location of the files that are required to restore the feature. For more information on specifying a source location, see http://go.microsoft.com/fwlink/?LinkId=243077.

 

Try installing the roles, role services, or features again in a new Add Roles and Features Wizard session, and on the Confirmation page of the wizard, click "Specify an alternate source path" to specify a valid location of the source files that are required for the installation. The location must be accessible by the computer account of the destination server.

This is the most common issue that occurs when attempting to install .Net 3.5. It stems from the machine or user not having access to a configured payload source.

To resolve this issue, make sure the machine and user have access to Windows Update, or that you have configured an alternate source to which the user and machine have access in the Group Policy setting. If machines in your environment are configured to use WSUS, make sure you have configured the Group Policy setting to direct download requests to WU or an alternate source. If it continues to fail, use the DISM command or Powershell cmdlet to install the feature, pointing to a local installation source.

0x800F081F - CBS_E_SOURCE_MISSING

 

(The text for this message is often similar to that of the 0x800F0906 error above.)

This can occur when trying to install from an installation source that is corrupt, incomplete, or invalid for the installed build (for example, trying to use a Windows 8 Developer Preview source for a Consumer Preview or RTM installation). Also, make sure the full path is correct (x:\sources\sxs) and that the user has at least READ access to the location. Try to access the source directly as the installing user from the affected machine.

0x800F0907 - CBS_E_GROUPPOLICY_DISALLOWED

 

DISM failed. No operation was performed. For more information, review the log file. The DISM log file can be found at %WINDIR%\logs\DISM\dism.log.

Due to network policy settings, Windows couldn't connect to the Internet to download files required to complete the requested changes. Please contact your network administrator for more information.

This might occur if Group Policy has been configured to prevent installing .NET Framework 3.5 from Windows Update. To resolve this, configure an alternate source in the new Group Policy setting.

References

How to Enable or Disable Windows Features

(How to) Configure a Windows Repair Source

Microsoft .Net

.Net Framework Developer Center

.NET Framework 3.5.1 Features Overview

Deployment Image Servicing and Management (DISM) Technical Reference

How to Mount and Modify an Image

How the Windows Update client determines which proxy server to use to connect to the Windows Update Web site

 

Jim Martin

Senior Support Escalation Engineer

Microsoft Platforms Core Support

posted by LifeisSimple
2013. 2. 20. 14:09 Brain Trainning/DataBase

A quick way to restore an entire DB server

  • Install MSSQL in the same locations (all drives and folders) as on the original server.
    - The master DB holds the location information for every other database, so when the service comes up after the restore, SQL Server will look for the databases in their original locations.
    - The newly installed instance must be the same version as the original server, or master cannot be restored.
  • Start the SQL Server service in single-user mode with the -m option.
  • Restore the master DB with a RESTORE command (if the MDF and LDF files are already present on the same drives and paths, the databases can be brought into service right after master is restored).
  • Start SQL Server normally.
  • Restore msdb.
  • If the user database files are in the same locations as on the original server, no further work is needed. A command-line sketch of this sequence follows below.
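A minimal sketch of the sequence (the default instance name MSSQLSERVER and the backup paths are examples only):

```powershell
# 1. Stop the instance, then restart it in single-user mode (-m)
net stop MSSQLSERVER
net start MSSQLSERVER /m

# 2. Restore master; the instance shuts itself down when this restore completes
sqlcmd -S "." -E -Q "RESTORE DATABASE master FROM DISK = N'D:\Backup\master.bak' WITH REPLACE"

# 3. Start the service normally, then restore msdb
net start MSSQLSERVER
sqlcmd -S "." -E -Q "RESTORE DATABASE msdb FROM DISK = N'D:\Backup\msdb.bak' WITH REPLACE"
```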


If an error like the following appears after restoring msdb:

Service Broker is disabled in MSDB or MSDB failed to start

If you run into this error, see the page below for the fix. (In the original post, the linked content was attached in a collapsed section.)

Source: http://www.lifeasbob.com/code/kb_articles.aspx?article_Id=60&CatId=39


posted by LifeisSimple
2013. 2. 5. 11:28 Brain Trainning/DataBase

sysfiles information in SQL Server 2000


Contains one row for each file in a database. This system table is a virtual table; it cannot be updated or modified directly.

Column name    Data type     Description
fileid         smallint      File identification number, unique for each database.
groupid        smallint      Filegroup identification number.
size           int           Size of the file (in 8-KB pages).
maxsize        int           Maximum file size (in 8-KB pages). A value of 0 indicates no growth, and a value of -1 indicates that the file should grow until the disk is full.
growth         int           Growth size of the database. A value of 0 indicates no growth. Can be either the number of pages or the percentage of file size, depending on the value of status. If status contains 0x100000, then growth is the percentage of file size; otherwise, it is the number of pages.
status         int           Status bits for the growth value in either megabytes (MB) or kilobytes (K):
                             0x1 = Default device.
                             0x2 = Disk file.
                             0x40 = Log device.
                             0x80 = File has been written to since last backup.
                             0x4000 = Device created implicitly by the CREATE DATABASE statement.
                             0x8000 = Device created during database creation.
                             0x100000 = Growth is in percentage, not pages.
perf           int           Reserved.
name           nchar(128)    Logical name of the file.
filename       nchar(260)    Name of the physical device, including the full path of the file.
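
A quick way to read these columns, converting the 8-KB page counts into megabytes (a minimal sketch; run it in the database you want to inspect):

SELECT name,
       filename,
       size * 8 / 1024 AS size_mb,
       growth,
       status
FROM sysfiles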

posted by LifeisSimple
2013. 1. 23. 14:28 Brain Trainning/DataBase


sp_BLITZ™ – SQL Server Takeover Script

When people hand us a SQL Server and ask us to take it over, we need to do a fast health check.  We need to put on our latex gloves, do a little poking around, and discover the dirty secrets before they catch us by surprise.  That’s where our sp_Blitz™ script comes in: it helps you rapidly assess configuration, security, health, and performance issues.  We organize the results in a prioritized list and give you simple explanations of each risk, plus a column of links that you can copy/paste into your browser for more information about each issue.

Download the sp_Blitz™ code here, and then you can copy/paste it into SSMS.  Run it to create the stored procedure, and then just EXEC sp_Blitz to run the checks.  On larger servers, it may take a couple of minutes for some of the more detailed tests, like untrusted foreign keys, but this shouldn’t cause any blocking.


WHAT THE PARAMETERS DO

@CheckUserDatabaseObjects – if 1, we review the user databases looking for things like heaps and untrusted foreign keys.  If your databases have more than a few thousand objects, this may require additional processing time.

@CheckProcedureCache – if 1, we grab the top 20-50 resource-intensive plans from the cache and analyze them for common design issues.  We’re looking for missing indexes, implicit conversions, user-defined functions, and more.  This fast scan isn’t incredibly detailed – we’re just looking for queries that might surprise you and require some performance tuning.

@OutputProcedureCache – if 1, we output a second result set that includes the queries, plans, and metrics we analyzed.  You can do your own analysis on these queries too looking for more problems.

@CheckProcedureCacheFilter – can be CPU, Reads, Duration, ExecCount, or null.  If you specify one, we’ll focus the analysis on those types of resource-intensive queries (like the top 20 by CPU use.)  If you don’t, we analyze the top 20 for all four (CPU, logical reads, total runtime, and execution count).  Typically we find that it’s not 80 different queries – it’s usually 25-40 queries that dominate all of the metrics.

@OutputType – if ‘TABLE’, we dump out the findings in, well, a table.  If ‘COUNT’, a vampire comes to your door and – wait, I’m hearing that’s not quite right. If ‘COUNT’, we output a single result row with the number of problems found.  You can use this in conjunction with monitoring software to find out when somebody’s been sleeping in your bed.
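
Putting the parameters together, a typical call might look like this (a sketch only; the values shown are choices, not defaults):

EXEC sp_Blitz
    @CheckUserDatabaseObjects = 1,
    @CheckProcedureCache = 1,
    @OutputProcedureCache = 0,
    @CheckProcedureCacheFilter = 'CPU',
    @OutputType = 'TABLE'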


Source: http://www.brentozar.com/blitz/

posted by LifeisSimple
2013. 1. 22. 17:32 Brain Trainning/DataBase


General SQL Server Performance Tuning Tips

BY BRAD MCGEHEE

When your transaction log grows large and you want a quick way to shrink it, try this option. Change the recovery model of the database you want to shrink from “full” to “simple” (which allows the log to be truncated), shrink the log file, and then switch back to the “full” recovery model, taking a full backup to restart the log backup chain. By temporarily changing from the Full recovery model to the Simple recovery model and back, SQL Server will only keep the “active” portion of the log, which is very small.  Contributed by Tom Kitta.  
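
A minimal sketch of that sequence (database name, logical log file name, target size, and backup path are all assumptions):

ALTER DATABASE YourDB SET RECOVERY SIMPLE;
DBCC SHRINKFILE (YourDB_log, 512);   -- target size in MB
ALTER DATABASE YourDB SET RECOVERY FULL;
-- Take a full backup afterwards so the log backup chain starts cleanly:
BACKUP DATABASE YourDB TO DISK = N'D:\Backup\YourDB.bak';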

*****

If you need to delete all the rows in a table, don’t use DELETE to delete them, as the DELETE statement is a fully logged operation and can take a significant amount of time, especially if the table is large. To perform the same task much faster, use the TRUNCATE TABLE statement instead, which is only minimally logged. Besides deleting all of the records in a table, this command will also reset the seed of any IDENTITY column back to its original value.

After you have run the TRUNCATE TABLE statement, it is important then to manually update the statistics on this table using UPDATE STATISTICS. This is because running TRUNCATE TABLE will not reset the statistics for the table, which means that as you add data to the table, the statistics for that table will be incorrect for a time period. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, then the statistics will eventually catch up with themselves. But this may not happen quickly, resulting in slowly performing queries because the Query Optimizer is using out-of-date statistics.  
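
For example (the table name is hypothetical):

TRUNCATE TABLE dbo.staging_orders;
-- TRUNCATE TABLE does not refresh the table's statistics, so update them manually:
UPDATE STATISTICS dbo.staging_orders;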

*****

If you use TRUNCATE TABLE instead of DELETE to remove all of the rows of a table, TRUNCATE TABLE will not work when there are Foreign Key references present for that table. A workaround is to DROP the constraints before firing the TRUNCATE. Here’s a generic script that will drop all existing Foreign Key constraints on a specific table:

CREATE TABLE dropping_constraints
(
cmd VARCHAR(8000)
)

INSERT INTO dropping_constraints
SELECT
'ALTER TABLE [' +
t2.TABLE_NAME +
'] DROP CONSTRAINT ' +
t1.CONSTRAINT_NAME
FROM
INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS t1
INNER JOIN
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE t2
ON
t1.CONSTRAINT_NAME = t2.CONSTRAINT_NAME
WHERE t2.TABLE_NAME = 'your_tablename_goes_here'
DECLARE @stmt VARCHAR(8000)
DECLARE @rowcnt INT
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
WHILE @rowcnt <> 0
BEGIN
EXEC (@stmt)
SET @stmt = 'DELETE FROM dropping_constraints WHERE cmd = ' + QUOTENAME(@stmt, '''')
EXEC (@stmt)
SELECT TOP 1 @stmt = cmd FROM dropping_constraints
SET @rowcnt = @@ROWCOUNT
END
DROP TABLE dropping_constraints

The above code can also be extended to drop all FK constraints in the current database. To achieve this, just comment out the WHERE clause.  

*****

Don’t run a screensaver on your production SQL Servers; it can unnecessarily use CPU cycles that should be going to your application. The only exception to this is the “blank screen” screensaver, which is OK to use.  

*****

Don’t run SQL Server on the same physical server that you are running Terminal Services, or Citrix software. Both Terminal Services and Citrix are huge resource hogs, and will significantly affect the performance of SQL Server. Running the administrative version of Terminal Services on a SQL Server physical server, on the other hand, is OK, and a good idea from a convenience point of view. As is mentioned in other parts of this website, ideally, SQL Server should run on a dedicated physical server. But if you have to share a SQL Server with another application, make sure it is not Terminal Services or Citrix.  

*****

Use sp_who or sp_who2 (sp_who2 is not documented in the SQL Server Books Online, but offers more details than sp_who) to provide locking and performance-related information about current connections to SQL Server. Sometimes, when SQL Server is very busy, you can’t use Enterprise Manager or Management Studio to view current connection activity via the GUI, but you can always use these two commands from Query Analyzer or Management Studio, even when SQL Server is very busy.  
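
For example, both forms below are common; the 'ACTIVE' argument filters out idle connections:

EXEC sp_who2             -- all current connections
EXEC sp_who2 'ACTIVE'    -- only connections currently doing work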

*****

SQL Server uses its own internal thread scheduler (called the UMS) when running in either native thread mode or in fiber mode. By examining the UMS’s Scheduler Queue Length, you can help determine whether or not that the CPU or CPUs on the server are presenting a bottleneck.

This is similar to checking the System Object: Processor Queue Length in Performance Monitor. If you are not familiar with this counter, what this counter tells you is how many threads are waiting to be executed on the server. Generally, if there are more than two threads waiting to be executed on a server, then that server can be assumed to have a CPU bottleneck.

The advantage of using the UMS’s Schedule Queue Length over the System Object: Processor Queue Length is that it focuses strictly on SQL Server threads, not all of the threads running on a server.

To view what is going on inside the UMS, you can run the following undocumented command:

DBCC SQLPERF(UMSSTATS)

For every CPU in your server, you will get a Scheduler. Each Scheduler is identified with a number, starting with 0. So if you have four CPUs in your server, there will be four Schedulers listed after running the above command, Scheduler IDs 0 through 3.

The “num users” tells you the number of SQL threads there are for a specific scheduler.

The “num runnable,” better known as the “Scheduler Queue Length,” is the key indicator to watch. Generally, this number will be 0, which indicates that there are no SQL Server threads waiting to run. If this number is 2 or more, it indicates a possible CPU bottleneck on the server. Keep in mind that the values presented by this command are point data, which means that the values are only accurate for the split second when they were captured and are always changing. But if you run this command when the server is very busy, the results should be indicative of what is going on at that time. You may want to run this command multiple times to see what is going on over time.

The “num workers” refers to the actual number of worker threads there are in the thread pool.

The “idle workers” refers to the number of idle worker threads.

The “cntxt switches” refers to the number of context switches between runnable threads.

The “cntxt switches(idle)” refers to the number of context switches to “idle” threads.

As you can see, this command is for advanced users, and is just one of many tools that can be used to see internally how SQL Server is performing.  

*****

SQL Server offers a system table-valued function that provides statistical information on the I/O activity of specific database files. It is called fn_virtualfilestats. What is interesting about this function is that it is specific to a database file. For example, if you want to see the I/O activity of a particular user database, you can. Keep in mind that all SQL Server databases have at least two physical files (for the database and log), and can have many more, depending on how the database was created. When using this function, you have to specify not only the database, but also the file within the database. These are very specific I/O statistics indeed. To run this function, use the syntax found below:

SELECT * FROM :: fn_virtualfilestats(dbid, fileid)

Where:

dbid: The database ID of the database you want to examine.

fileid: The file ID of the physical files that make up your database. At a minimum, each database has at least two files: the database file (MDF file) and a log file (LDF). A database can have many files, and the file number refers to the number of the physical file that you want to examine.

To identify the dbid and fileid you want to use in the above statement, run the following SELECT statement. It will provide you with a list of all of the database names, database ids, and file ids (for each database) on your server.

SELECT sysdatabases.name AS Database_Name, 
     sysdatabases.dbid AS Database_ID, 
     sysaltfiles.fileid AS File_ID 
FROM sysaltfiles INNER JOIN sysdatabases 
     ON sysaltfiles.dbid = sysdatabases.dbid 
ORDER BY sysdatabases.name, sysaltfiles.fileid

Once you run this function, a wide variety of information is available, including:

NumberReads: The number of physical reads made against this file since the last time SQL Server was restarted.

NumberWrites: The number of physical writes made against this file since the last time SQL Server was restarted.

BytesRead: The number of bytes read from this file since the last time SQL Server was restarted.

BytesWritten: The number of bytes written to this file since the last time SQL Server was restarted.

IoStallMS: The total amount of time that users have waited for I/Os to complete for this file (in milliseconds).

The first four statistics can give you a feel for how busy a particular file is. This can come in handy when comparing multiple filegroups in a database and to see how balanced the I/O is to each file. To make the most of filegroups, I/O should be spread out among the various files for best overall performance. The last statistic, IoStallMS, is best used to find out if you have a bottleneck in your transaction log, as demonstrated below:

SELECT IoStallMS / (NumberReads+NumberWrites) as IsStall 
FROM :: fn_virtualfilestats(dbid, fileid)

Where:

dbid: The database ID of the database you want to examine.

fileid: The file ID of the transaction log of the database being examined.

Essentially, if IsStall is > 20 ms, then this indicates that the I/O to the transaction log is becoming a bottleneck, which in turn can lead to major concurrency problems in the database.

To help optimize the transaction log, assuming it has become a bottleneck, consider doing the following:

• Place the transaction log on a faster disk array. 
• Place the transaction log on a dedicated disk array (no other files other than the transaction log). This allows sequential writes to occur as fast as possible, helping to boost I/O performance. 
• Turn on write-back caching on your disk controller, but only if it is backed up with a battery and has been tested to work with SQL Server. Normally, write-back caching is turned off because it can cause database corruption should the server crash. 
 

*****

Sometimes, a user thread has to wait until the resources it needs are available. Hopefully, this won’t happen often, but it is a fact of life. But sometimes, long waits can indicate potential performance problems that can be corrected, if you know where to look. Long waits can be caused by blocking locks, slow I/O, and other factors.

Fortunately, you can access the amount of time a user thread has to wait, which can tell you which user thread, if any, is taking more time than it should. For example, the query below can be run to identify any user threads that have to wait more than one second:

SELECT spid, waittime, lastwaittype 
FROM master..sysprocesses 
WHERE waittime > 1000

When you run the above query, all of the processes that have been waiting for greater than 1 second (1000 milliseconds) — both system and user — will be displayed. You will want to ignore system processes, focusing your efforts on spids that represent specific user threads. Spid will give you the user ID (you will have to match the spid to the appropriate user), waittime is the number of milliseconds that this user thread has been waiting, and lastwaittype will give you a clue as to what is waiting to occur.

In the above query, I have used 1000 milliseconds as the cutoff point, but you can use any threshold you want. Generally speaking, any user thread that has been waiting for five seconds or more should definitely be evaluated for potential performance issues.  

*****

By default, you cannot use a UNC (Universal Naming Convention) name to specify a location of where to store a SQL Server database or log file. Instead, you must specify a drive letter that refers to a local physical drive or array. But what if you want to store your database or log file on another Windows Server or a Network Appliance, Inc. storage system? You can, but you will have to set Trace Flag 1807 on your SQL Server to allow the use of UNC names.
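
Trace flag 1807 is enabled like any other trace flag; a minimal sketch (the -1 applies it server-wide):

DBCC TRACEON (1807, -1)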

*****

For a quick and dirty way to check to see if your SQL Server has maxed out its memory (and is causing your server to page), try this. Bring up the Task Manager and go to the “Performance” tab.

Here, check out two numbers: the “Total” under “Commit Charge (k)” and the “Total” under “Physical Memory (k)”. If the “Total” under “Commit Charge (k)” is greater than the “Total” under “Physical Memory (k)”, then your server does not have enough physical memory to run efficiently as it is currently configured and is most likely causing your server to page unnecessarily. Excess paging will slow down your server’s performance.

Another number to make note of is the “Available Physical Memory (K)” value. This number should be 4MB or higher. If it is not, then your SQL Server is most likely suffering from a lack of physical RAM, hurting performance, and more RAM needs to be added.

If you notice this problem, you will probably want to use System Monitor to further investigate the cause of this problem. You will also want to check to see how much physical memory has been allocated to SQL Server. Most likely, this setting has been set incorrectly, and SQL Server has been set to use too much physical memory. Ideally, SQL Server should be set to allocate physical RAM dynamically.  

*****

When performance tuning a SQL Server, it is often handy to know if the disk I/O of your servers (and the databases on it) is mostly reads or mostly writes. This information can be used to calculate the ratio of writes to reads of your server, and this ratio can affect how you might want to tune your SQL Server. For example, if you find that your server is heavy on the writes, then you will want to avoid RAID 5 if you can, and use RAID 10 instead. This is because RAID 5 is much less efficient than RAID 10 at writes. But if your server has a much greater number of reads than writes, then perhaps a RAID 5 system is more than adequate.

One of the quickest ways to find out the ratio of reads to writes on your SQL Servers is to run Task Manager and look at the sqlservr.exe process (this is the mssqlserver or sqlserv service) and view the total number of I/O Read Bytes and I/O Write Bytes. If you don’t see this in Task Manager, go to View|Select Column, and add these two columns to Task Manager.

The results you see tell you how many bytes of data have been written and read from the SQL Server service since it was last restarted. Because of this, you don’t want to read this figure immediately after starting the SQL Server service, but after several days of typical use.

In one particular case I looked at, the SQL Server had 415,006,801,908 I/O bytes read and 204,669,746,458 bytes written. This server had about one write for every two reads. In this case, RAID 5 is probably a good compromise in performance, assuming that RAID 10 is not available from a budget perspective. But if the reverse were true, and there were two writes for every one read, then RAID 10 would be needed for best overall performance of SQL Server.  

*****

Internet Information Server (IIS) has the ability to send its log files directly to SQL Server for storage. Busy IIS servers can actually get bogged down trying to write log information directly to SQL Server. Because of this, it is generally not recommended to write web logging information to SQL Server. Instead, logs should be written to text files, and later imported into SQL Server using BCP, DTS, or SSIS.  

*****

SQL Server has a database compatibility mode that allows applications written for previous versions of SQL Server to run under the current version. If you want maximum performance from your database, you don’t want to run it in compatibility mode (not all new performance-related features are supported there).

Instead, your databases should be running in the current native SQL Server mode. Of course, this may require you to modify your application to make it compliant to the version of SQL Server you are running, but in most cases, the additional work required to update your application will be more than paid for with improved performance.  

*****

When experimenting with the tuning of your SQL Server, you may want to run the DBCC DROPCLEANBUFFERS command to remove all the test data from SQL Server’s data cache (buffer) between tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, before running the DBCC DROPCLEANBUFFERS command, you may first want to run the CHECKPOINT command. Running CHECKPOINT will write all dirty buffers to disk. Then, when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones.

If you want to clear out all of the stored procedure cache, use this command, DBCC FREEPROCCACHE. If you only want to clear out the stored procedure cache for a single database (not the entire server) use DBCC FLUSHPROCINDB. All of these commands are for testing purposes and should not be run on a production SQL Server.  
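
A minimal sketch of the full reset sequence described above (for test servers only):

CHECKPOINT                  -- write dirty buffers to disk
DBCC DROPCLEANBUFFERS       -- empty the data cache
DBCC FREEPROCCACHE          -- empty the procedure cache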

*****

Orphan SQL Server sessions can negatively affect SQL Server’s performance. An orphan SQL Server session can occur when a client improperly disconnects from SQL Server, such as when the client loses power. When this happens, the client cannot tell SQL Server to properly close the connection, so the SQL Server connection remains open, even though it is not being used.

This can affect SQL Server’s performance two ways. First, orphaned connections use up SQL Server connections, which takes up server resources. Secondly, it is possible that the orphan connections may be holding locks that block other users; or temp tables or cursors may be held open that also take up unnecessary server resources.

The OS periodically checks for inactive SQL Server sessions, and if it finds any, it will notify SQL Server so that the connection can be removed. Unfortunately, this only occurs every 1-2 hours, depending on the protocol used. If orphaned SQL Server sessions become a problem, Windows Server’s registry can be modified so that it checks more often for orphaned connections.

Identifying an orphaned connection from SQL Server is very difficult, but if you can identify it, it can be removed by KILLing the connection using SSMS or by running the KILL statement.  
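
If you do manage to identify the spid of an orphaned connection, removing it is a one-liner (the spid shown is hypothetical):

KILL 53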

*****

For best performance, don’t mix production databases and development (test or staging) databases on the same physical server. This not only serves to better separate the two functions (production and development), but prevents developers from using up server resources that could be better used by production users.  

*****

When we think of performance, we usually think about speeding up our application’s performance. But another way to look at performance is to look at our performance as DBAs or Transact-SQL developers.

For example, one of the easiest ways to speed up our Transact-SQL coding, in addition to making our code easier to maintain and troubleshoot once it is written, is to format it in an easy-to-read style.

While there are many different code formatting guidelines available, here are some basic ones you should consider following, if you aren’t doing so already:

Begin each line of your Transact-SQL code with a SQL verb, and capitalize all Transact-SQL statements and clauses, such as:

SELECT customer_number, customer_name 
FROM customer 
WHERE customer_number > 1000 
ORDER BY customer_number

If a line of Transact-SQL code is too long to fit onto one line, indent the following line(s), such as:

SELECT customer_number, customer_name, customer_address, 
     customer_state, customer_zip, customer_phonenumber

Separate logical groupings of Transact-SQL code by using appropriate comments and documentation explaining what each grouping does.

These are just a few of the many possible guidelines you can follow when writing your Transact-SQL code to make it more readable by you and others. You just need to decide on some standard, and then always follow it in your coding. If you do this, you will definitely boost your coding performance.

*****

Be wary of allowing users to directly access your databases (especially OLTP databases) with third-party database access tools, such as Microsoft Excel or Access. Many of these tools can wreak havoc with your database’s performance. Here are some reasons why:

• Often these users aren’t experienced with these tools, and create overly complex queries that eat up server resources. At the other extreme, their queries may not be complex enough (such as lacking effective WHERE clauses) and return thousands, if not millions, of unnecessary rows of data. 
• This reporting activity can often lock rows, pages or tables, creating user contention for data and reducing the database’s performance. 
• These tools are often file-based. This means that even if an effective query is written, the entire table (or multiple tables in the case of joins) has to be returned to the client software where the query is actually performed, not at the server. Not only can this lead to excess server activity, it can also play havoc with your network.

If you have no choice but to allow users access to your data, try to avoid them hitting your production OLTP databases. Instead, point them to a “reporting” server that has been replicated, or is in the form of a datamart or data warehouse.  

*****

SQL Server offers support for SSL encryption between clients and the server. While selecting this option prevents the data from being viewed, it also adds additional overhead and reduces performance. Only use SSL encryption if absolutely required. If you need to use SSL encryption, consider purchasing an SSL encryption processor for the server to speed performance.  

*****

SQL Server supports named instances of SQL Server. You can run up to 16 concurrent instances of SQL Server on the same server.

As you might imagine, each running instance of SQL Server takes up server resources. Although some resources are shared by multiple running instances, such as MSDTC and the Microsoft Search services, most are not. Because of this, each additional instance of SQL Server running on the same server has to fight for available resources, hurting performance.

For best performance, run only a single instance (usually the default) on a single physical server. The main reasons for using named instances are upgrading older versions of SQL Server to newer versions, transition periods where you need to test your applications on multiple versions of SQL Server, and use on development servers.  

*****

If you run the ALTER TABLE DROP COLUMN statement to drop a variable length or text column, did you know that SQL Server will not automatically reclaim this space after performing this action? To reclaim this space, which will help to reduce unnecessary I/O due to the wasted space, you can run the following command:

DBCC CLEANTABLE (database_name, table_name)

Before running this command, you will want to read about it in Books Online to learn about some of its options that may be important to you.

*****

Trace flags, which are used to enable and disable some special database functions temporarily, can sometimes chew up CPU utilization and other resources on your SQL Server unnecessarily. If you just use them for a short time to help diagnose a problem, for example, and then turn them off as soon as you are done using them, then the performance hit you experience is small and temporary.

What happens sometimes is that you, or another DBA, turns on a trace flag, but forgets to turn it off. This of course, can negatively affect your SQL Server’s performance. If you want to check to see if there are any trace flags turned on a SQL Server, run this command in Query Analyzer:

DBCC TRACESTATUS(-1)

If there are any trace flags on, you will see them listed on the screen after running this command. DBCC TRACESTATUS only finds traces created at the client (connection) level. If a trace has been turned on for an entire server, this will not show up.

If you find any, you can turn them off using this command:

DBCC TRACEOFF(<trace flag number>)

*****

SQL Server offers a feature called the black box. When enabled, the black box creates a trace file of the last 128K worth of queries and exception errors. This can be a great tool for troubleshooting some SQL Server problems, such as crashes.

Unfortunately, this feature uses up SQL Server resources to maintain the trace file, which can negatively affect performance. Generally, you will only want to turn the black box on when troubleshooting, and turn it off during normal production. This way, your SQL Server will be minimally affected.  

*****

If you have ever performed a SELECT COUNT(*) on a very large table, you know how long it can take. For example, when I ran the following command on a large table I manage:

SELECT COUNT(*) from <table_name>

It took 1:09 to count 10,725,948 rows in the table. At the same time, SQL Server had to perform a lot of logical and physical I/O in order to perform the count, chewing up important SQL Server resources.

A much faster, and more efficient, way of counting rows in a table is to run the following query:

SELECT rows 
FROM sysindexes 
WHERE id = OBJECT_ID('<table_name>') AND indid < 2

When I ran the query against the same table, it took less than a second to run, and it gave me the same results. Not a bad improvement, and it took virtually no I/O. This is because the row count of your tables is stored in the sysindexes system table of your database. So instead of counting rows when you need to, just look up the row count in the sysindexes table.

There is one potential downside to using the sysindexes table: this system table is not updated in real time, so it might underestimate the number of rows you actually have. Assuming you have the database options “Auto Create Statistics” and “Auto Update Statistics” turned on, the value you get should be very close to correct, if not exact. If you can live with a very close estimate, then this is the best way to count rows in your tables.

*****

Do you use Enterprise Manager to access remote servers, possibly over a slow WAN link? If you do, have you ever had any problems getting Enterprise Manager to connect to the remote server?

If so, the problem may lie in the fact that if Enterprise Manager cannot make a connection within 4 seconds, the connection attempt fails. To overcome slow network connections, you can change the default timeout value from 4 seconds to any amount of time you like.

To change the default timeout value, select Tools > Options from Enterprise Manager, and then select the “Advanced” tab. Here, change the “Login time-out (seconds)” option to a higher number.  

*****

SQLDIAG.exe is a command line tool that collects information about SQL Server and writes it to a text file. It can be useful for documenting or troubleshooting your SQL Server. When you run this command while SQL Server is running, the following information is collected and stored in a text file called sqldiag.txt, which is stored in the \mssql\log folder.

The text of all error logs 
SQL Server registry information 
SQL Server dll version information 
The output from these system stored procedures:

sp_configure 
sp_who 
sp_lock 
sp_helpdb 
xp_msver 
sp_helpextendedproc 
sysprocesses 
Input buffer SPIDs/deadlock information 
Microsoft diagnostics report for the server 
The last 100 queries and exceptions (if the query history trace was running) 
 

*****

Memory leaks can steal valuable memory from your SQL Server, reducing performance, and perhaps even forcing you to reboot your server. A memory leak occurs when a poorly-written or buggy program requests memory from the operating system, but does not release the memory when it is done with it. Because of this, the application can use up more and more memory in a server, greatly slowing it down, and even perhaps crashing the server.

Some memory leaks come from the operating system itself, device drivers, MDAC components, and even SQL Server. And of course, virtually any application can cause a memory leak, which is another good reason to dedicate a single server to SQL Server instead of sharing it among multiple applications.

Memory leaks are often hard to identify, especially if they leak memory slowly. Generally, memory leaks become apparent when you notice that your server is running out of available memory and paging becomes a big problem. A symptom of this is a SQL Server that runs quickly after being rebooted, but begins to run more and more slowly as time passes, and when the system is rebooted again, it speeds up again.

One way to help get rid of many memory leaks is to ensure that you always have the latest service packs or updates for your server’s software. But a memory leak you find may not have an immediate fix. If this is the case, you may be forced to reboot your server periodically in order to free up memory.

Identifying what is causing a memory leak is often difficult. One method involves using System Monitor to monitor all of the counters in the Memory object over time, to see what is happening internally in your computer. Another method is to use Task Manager to view how much memory is used by each process. A process that seems to be using an unusual amount of memory may be the culprit.

*****

While SSMS can make some DBA and developer tasks a little easier, in other cases it can cause performance problems. For example, if you decide to change the schema of a table from within SSMS, you may find that doing so takes much longer than performing the same schema change with the Transact-SQL ALTER command. This is because SSMS may not use ALTER, but instead re-create an entirely new table based on your new schema and then move the data from the old table to the new one, which can take a long time for large tables. Generally, using ALTER is very efficient and schema changes occur quickly, even on large tables.

Because of this, consider using ALTER to modify the schemas of very large tables instead of SSMS.  

*****

If you want to see a detailed accounting of how memory is being used in your SQL Server, run this undocumented DBCC command:

DBCC MEMORYSTATUS

The results look similar to this:

Buffer Distribution            Value
------------------------------ -----------
Stolen                         2914
Free                           3049
Procedures                     43521
Inram                          0
Dirty                          8366
Kept                           1209
I/O                            0
Latched                        0
Other                          163981

Buffer Counts                  Value
------------------------------ -----------
Commited                       223040
Target                         223040
Hashed                         173556
InternalReservation            202
ExternalReservation            0
Min Free                       201

Procedure Cache                Value
------------------------------ -----------
TotalProcs                     28301
TotalPages                     43521
InUsePages                     19676

Dynamic Memory Manager         Value
------------------------------ -----------
Stolen                         46436
OS                             507
General                        7137
QueryPlan                      39310
Optimizer                      0
Utilities                      13
Connection                     483

Global Memory Objects          Value
------------------------------ -----------
Resource                       932
Locks                          78
XDES                           1
SQLCache                       1195
Replication                    2
LockBytes                      2
ServerGlobal                   71

Query Memory Manager           Value
------------------------------ -----------
Grants                         0
Waiting                        0
Available                      150336
Maximum                        150336

Optimization Queue             Value
------------------------------ -----------
Optimizing                     0
Waiting                        0
Available                      16
Maximum                        16

Each value refers to a number of 8-KB buffers. [7.0, 2000] Updated 5-7-2007

*****

If you would like to see what is stored in SQL Server’s stack, you can do so by running this undocumented SQL Server DBCC command:

DBCC STACKDUMP

When you run this command, a stack dump will be performed, writing a file named SQL00001.dmp in the SQL Server log folder. This is a text file you can view with Notepad.

*****

Many times it is useful to compare the performance of a query or stored procedure as you are testing it. This is especially true if the time differences are so small that it is hard for you to know for sure which query was faster or slower.

One of the easiest ways to measure the time it takes for a query or stored procedure to run is to find the starting and ending times, and then find the difference between the two. This can be easily accomplished using a few extra lines of Transact-SQL code. Here’s how:

DECLARE @starttime datetime 
SELECT @starttime = GETDATE()

-- Your code here ...

SELECT DATEDIFF(ms, @starttime, GETDATE()) AS 'Total Time (ms)'

The result will be how long, in milliseconds, it took your code to run. SQL Server’s datetime functions are only accurate to about +/- 3 milliseconds, so keep this in mind.

*****

In SSMS, the most common way to script currently existing objects is to right-click on the database in question and select “All Tasks,” then choose “Generate SQL Script,” and so on, until you script the object you want. This works fine, but takes a lot of keystrokes.

Another way, from within SSMS, to script existing objects is to follow these steps instead:

• Open the database and right-click on the object you want to script, then click “Copy.” Or if you like keyboard shortcuts, press CTRL-C. 
• Open a copy of Query Analyzer, Notepad, or any text editor. 
• Now on the “Edit” menu from the application, click “Paste” (or press CTRL-V), and the scripted code appears, including any appropriate constraints.

This is a quick and easy way to script database objects you may find handy.

*****

Have you ever wondered how people find out about “undocumented” stored procedures? Have you ever wondered about how SQL Server works behind the scenes? And have you ever wanted to see some examples of how top-notch Transact-SQL developers write SQL code?

All of the above is at your fingertips, although you won’t see many references to it in Microsoft’s documentation. To accomplish all of the above, all you have to do is go to your master SQL Server database in SSMS, open the Stored Procedures Folder, and take a look. All SQL Server stored procedures can be seen here, even the ones not documented by Microsoft. To view the Transact-SQL code in a stored procedure, right-click on the stored procedure you are interested in and click Properties.

If you want to use Query Analyzer instead, all you have to do is to use this command:

USE master 
sp_helptext [system_stored_procedure_name]

In many cases, the Transact-SQL code you see is documented (although generally not in great detail), and if you know Transact-SQL, you should be able to follow the code and understand how it works. In some cases, you may need to experiment a little to better understand what is going on.

For example, here’s the Transact-SQL code for the sp_updatestats system stored procedure:

CREATE PROCEDURE sp_updatestats
AS

DECLARE @dbsid varbinary(85)

SELECT @dbsid = sid
FROM master.dbo.sysdatabases
WHERE name = db_name()

/* Check the user is sysadmin */
IF NOT is_srvrolemember('sysadmin') = 1 AND suser_sid() <> @dbsid
BEGIN
RAISERROR(15288,-1,-1,'SA or DBO','sp_updatestats')
RETURN (1)
END

DECLARE @exec_stmt nvarchar(540)
DECLARE @tablename sysname
DECLARE @uid smallint
DECLARE @user_name sysname
DECLARE @tablename_header varchar(267)
DECLARE tnames_cursor CURSOR FOR SELECT name, uid FROM sysobjects WHERE type = 'U'
OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @tablename, @uid
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
SELECT @user_name = user_name(@uid)
SELECT @tablename_header = 'Updating ' + @user_name + '.' + RTRIM(LOWER(@tablename))
PRINT @tablename_header
SELECT @exec_stmt = 'UPDATE STATISTICS ' + quotename(@user_name, '[') + '.' + quotename(@tablename, '[')
EXEC (@exec_stmt)
END
FETCH NEXT FROM tnames_cursor INTO @tablename, @uid
END
PRINT ' '
PRINT ' '
raiserror(15005,-1,-1)
DEALLOCATE tnames_cursor
RETURN(0) -- sp_updatestats

GO

Besides learning how system stored procedures work and getting an opportunity to see professionally written code, you can also (if you really like to get your hands dirty) modify the code to do something slightly different. By this, I don’t mean modifying the code you see (that wouldn’t be prudent), but you can copy the code to a new stored procedure, modify it, and then save it with a different name.  

*****

On very rare occasions, you may get an error similar to this from SQL Server:

[Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 845: [Microsoft][ODBC SQL Server Driver][SQL Server]Time-out occurred while waiting for buffer latch type 3 for page (2:0), database ID 15.

This error message may also be accompanied by additional error messages, such as a file log or database backup that failed.

This error indicates that your disk I/O system is being overloaded and that SQL Server timed out when trying to write data to disk (via the OS). If this error is a rare event, then there is no need to worry.

But if this event becomes frequent, it could mean that your disk I/O subsystem has hit a bottleneck that it can’t overcome, or that your disk I/O subsystem itself is having problems. You need to find out which one of these is the most likely issue, and then deal with it accordingly.

One of the things that you might want to look for is what I/O activity was happening at the time of the error. For example, it is possible that a disk backup, a DBCC command, or other database activity was all happening at the same time, overwhelming the disk I/O. Perhaps the solution to the problem is as simple as rescheduling some scheduled tasks.  

*****

According to Microsoft, the way to shrink a log file is to use DBCC SHRINKFILE. This process is painfully slow and inefficient. Even after running the procedure numerous times, you may not get much free space.

Here’s another way to shrink your logs that is guaranteed to work every time. First, back up the database and then detach it with sp_detach_db (you will have to bring the database down to do this). Next, delete the log file and then re-attach the database with sp_attach_db, not providing the old log file location in the sp_attach_db command. This will create a new log file with the old log file name, in the old location, with the default size, i.e. 512 KB.

To make sure there is no problem during the operation, the old log file can be renamed and kept until the database is reattached successfully. This provides a backup plan if for some reason SQL server fails to attach the database without the old log file.

This trick won’t work if the database has more than one log file, but if you need to, you can alter the database so that it only has a single log file, which will allow you to perform the above steps. After carrying out the detach and attach activity, the database can be altered again to add more log files.   Contributed by Gaurav Bindlish.  
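
A minimal sketch of the detach/attach sequence (the database name and file path are assumptions):

EXEC sp_detach_db 'YourDB'
-- Rename (don't delete) the old LDF file at the OS level, then re-attach
-- without specifying it; SQL Server creates a fresh log file:
EXEC sp_attach_db @dbname = 'YourDB', @filename1 = N'D:\Data\YourDB.mdf'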

*****

In SQL Server’s Query Analyzer, a somewhat hidden but useful feature exists that you may not be aware of: the Transact-SQL Debugger. The reason you may not have seen it before in Query Analyzer is that it does not appear in any standard menu, making it somewhat hard to find.

To use the Transact-SQL Debugger, you must first ensure that the Object Browser is displayed from within Query Analyzer. Next, using the Object Browser, open up a database that has one or more stored procedures, then right-click on the stored procedure’s name, and then select Debug. This is how you turn on debugging for a stored procedure.

Once you click on Debug, if the stored procedure you selected has one or more input parameters, you will be asked to enter them, and then you can begin the debugging process. The Transact-SQL Debugger has the typical features you find in most any debugging tool, including:

• Go: Puts the stored procedure into the debugging mode. 
• Toggle Breakpoint: Sets or removes a breakpoint at the current line in your code. You cannot set a breakpoint on lines containing non-executable code. 
• Remove All Breakpoints: Clears all the breakpoints in your code. 
• Step Into: Step Into executes the statement at the current execution point, and then executes one code statement at a time. If the statement is a call to another stored procedure, then the next statement displayed is the first statement in the stored procedure. 
• Step Over: Executes one code statement at a time. If the current statement contains a call to another stored procedure, Step Over executes the stored procedure as a unit, and then steps to the next statement in the current stored procedure. 
• Step Out: Executes the remaining lines of code in which the current execution point lies. The next statement displayed is the statement following the procedure call. All of the code is executed between the current and the final execution points. 
• Run to Cursor: Specifies a statement further down in your code where you want execution to end. 
• Restart: Restarts execution at the beginning of the stored procedure. 
• Stop Debugging: Stops debugging. 
• Watch: Displays the current watch expressions. 
• Callstack: Lists the procedure calls that have started, but have yet to be completed. 
• Auto Rollback: Automatically rolls back all work performed during execution of the stored procedure.

*****

Have you ever wanted to test the latest beta software, but didn’t want to put it on your “production” desktop because you were afraid it might crash it, and didn’t have a separate test box available to run it on? Or have you wanted more than one test environment, but didn’t have the necessary hardware?

In both of these cases, there is a new option that allows you to do both, while not requiring any more hardware. How? By using Microsoft Virtual PC 2007. This software, which is designed to run on most desktops and Microsoft operating systems, allows you to install virtual servers on your current desktop (assuming your hardware is big enough).

For example, if you are running Windows XP Workstation on your desktop, you can create one or more multiple virtual servers that run almost any other Microsoft operating system, along with your choice of application software, such as SQL Server running under Windows Server, or any combination you like.

Virtual servers don’t have to be active all the time; you can bring them up only when you want to, helping to preserve hardware resources. You can create as many virtual servers as you like, plus you can have as many running as you like, assuming your hardware will handle them.

If you decide to do this, there are two things you may want to keep in mind. First, when you create a virtual server, you must specify how the networking will work. There are several options. You may have to experiment with these options to select the one that works best for you. Also, you will want to install the optional Virtual Machine Additions into your virtual server, after it is built. This will greatly enhance the performance of the virtual software. Information on how to do this is available in the instructions.

Microsoft also offers a “server” version.  

*****

To quickly identify which, if any, of your stored procedures are executed when SQL Server starts, you can use the following query:

USE master 
GO 
SELECT name 
FROM sysobjects 
WHERE OBJECTPROPERTY(id, 'ExecIsStartup') = 1

However, you can also get the same result via the INFORMATION_SCHEMA views, which are generally preferable to querying the system tables directly. Here’s the query:

USE master 
GO 
SELECT ROUTINE_NAME 
FROM INFORMATION_SCHEMA.ROUTINES 
WHERE OBJECTPROPERTY(OBJECT_ID(ROUTINE_NAME), 'ExecIsStartUp') = 1

Source: http://www.sql-server-performance.com/2007/gen-tips/7/

posted by LifeisSimple
2013. 1. 18. 17:40 Brain Trainning/DataBase

Source: http://www.simple-talk.com/sql/learn-sql-server/how-to-track-down-deadlocks-using-sql-server-2005-profiler/


Brad McGehee

How to Track Down Deadlocks Using SQL Server 2005 Profiler
20 June 2008

It is irritating, sometimes alarming, for the user to be confronted by the 'deadlock message' when a deadlock happens. It can be a tiresome business to prevent them from occurring in applications. Fortunately, the profiler can be used to help DBAs identify how deadlocking problems happen, and indicate the best way of minimising the likelihood of their reappearance.

A Quick Introduction to Deadlocks

Deadlocking occurs when two or more SQL Server processes have locks on separate database objects and each process is trying to acquire a lock on an object that the other processes have previously locked.  For example, process one has an exclusive lock on object one, process two has an exclusive lock on object two, process one also wants an exclusive lock on object two, and process two wants an exclusive lock on object one. Because two processes can’t have an exclusive lock on the same object at the same time, the two processes become entangled in a deadlock, with neither process willing to yield of its own accord.

 

Since a deadlock is not a good thing for an application, SQL Server is smart enough to identify the problem and ends the deadlock by choosing one process over another. It does this by killing one of the processes (usually the process that has used the least amount of server resources up to this point) and letting the other one continue to run. The aborted transaction is rolled back and an error message is sent to the application. If the application is deadlock aware, it will resubmit the killed transaction automatically and the user may never know the deadlock happened. If the application is not deadlock aware, then most likely an error message appears on the application’s screen and you get a call from a disgruntled user. Besides irritating users, deadlocks can use up SQL Server's resources unnecessarily as transactions are killed, rolled back, and resubmitted again.

 

Deadlocks have been the bane of many a DBA. While rare for a well-designed and well-written application, deadlocks can be a major problem for—how can I say this delicately?—“less efficient” application code. What is even more frustrating is that there is not much the DBA can do to prevent deadlocks, as the burden of preventing them in the first place is on the developers of the application. Once an application is designed and written, it is hard for the DBA to do anything other than identify the offending code and report it back to the developers so it can be fixed.

 

In SQL Server 2000 and earlier, the most common way to track down deadlock issues was to use a trace flag. In SQL Server 2005, trace flags can still be used (1204 or 1222), but they aren’t always easy to use. When SQL Server 2005 was introduced, new events were added to the SQL Server 2005 Profiler (they are also in SQL Server 2008) that make identifying deadlocks very easy. In this article, we will learn how to use SQL Server 2005 Profiler to capture and analyze deadlocks.
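
For reference, the trace-flag approach looks like this (a sketch; -1 makes the flag global, and the output goes to the SQL Server error log):

DBCC TRACEON (1222, -1)   -- detailed deadlock report; 1204 produces the older format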

 

A Brief Profiler Primer

If you are not already well versed in using SQL Server Profiler, I want to do a quick overview of how Profiler works. If you are an experienced Profiler user, you can skip this section and go right on to the next section.

 

SQL Server Profiler is a GUI front end for a feature in SQL Server called SQL Trace. Essentially, SQL Trace has the ability to trace internal SQL Server activity, allowing you to see what is happening inside your SQL Server, including deadlocks. In this article, we will be using the SQL Server Profiler GUI, although you can use Transact-SQL code to accomplish the same thing using SQL Trace.

 

To capture a SQL Server trace using SQL Server Profiler, you need to create a trace, which includes several basic steps:

 

1)      You first need to select the events you want to collect. Events are an occurrence of some activity inside SQL Server that Profiler can track, such as a deadlock or the execution of a Transact-SQL statement.

2)      Once you have selected the events you want to capture, the next step is to select which data columns you want to return. Each event has multiple data columns that can return data about the event. To minimize the impact of running Profiler against a production server, it is always a good idea to minimize the number of data columns returned.

3)      Because most SQL Servers have many different users running many different applications hitting many different databases on the same SQL Server instance, filters can be added to a trace to reduce the amount of trace data returned. For example, if you are only interested in finding deadlocks in one particular database, you can set a filter so that only deadlock events from that database are returned.

4)      If you like, you can choose to order the data columns you are returning, and you can even group or aggregate events to make it easier to analyze your trace results. While I do this for many of my traces, I usually don’t bother with this step when tracking down deadlock events.

5)      Once you have created the trace using the above steps, you are ready to run it. If you are using the SQL Server Profiler GUI, trace results are displayed in the GUI as they are captured. In addition, you can save the events you collect for later analysis.

 

Now that you know the basics, let’s begin creating a trace that will enable us to collect and analyze deadlocks.

 

Selecting Events

While there is only one event required to diagnose most deadlock problems, I like to include additional context events in my trace so that I have a better understanding of what is happening with the code. Context events are events that help put other events into perspective. The events I suggest you collect include:

·         Deadlock graph

·         Lock: Deadlock

·         Lock: Deadlock Chain

·         RPC:Completed

·         SP:StmtCompleted

·         SQL:BatchCompleted

·         SQL:BatchStarting

 

 

Figure 1: I like to include extra context events to help me better understand what is happening with the code.

 

Here’s a brief explanation of each of these events.

 

Deadlock Graph

Of the seven events I have listed above, the only event you must have is the Deadlock Graph event. It captures, in both XML format and graphically, a drawing that shows you exactly the cause of the deadlock. We will examine how to interpret this drawing later in this article.

 

Lock:Deadlock

This event is fired whenever a deadlock occurs, and because of this, it is also fired every time the Deadlock Graph event is fired, producing redundant data. I have included it here because it makes it a little easier to see what is happening, but if you like, you can drop this event from your trace.

 

Lock:Deadlock Chain

This event is fired once for every process involved in a deadlock. In most cases, a deadlock only affects two processes at a time, and because of this, you will see this event fired twice just before the Deadlock Graph and the Lock:Deadlock events fire. In rare cases, more than two processes are involved in a deadlock, and if this is the case, an event will be fired for every process involved in the deadlock.

 

RPC: Completed
The RPC: Completed event fires after a stored procedure is executed as a remote procedure call. It includes useful information about the execution of the stored procedure, including the CPU time used to execute the stored procedure, the total length of time the stored procedure ran, logical reads and writes that occurred during its execution, along with the name of the stored procedure itself.

SP: StmtCompleted

Stored procedures are made up of one or more statements. In SQL Server 2005, each statement within a stored procedure is traced. The SP: StmtCompleted event indicates when a statement within a stored procedure has ended. The StmtCompleted event’s data columns provide lots of useful information about the statement, including the actual code in the statement, the duration the statement ran, the amount of CPU time used, the number of logical reads and writes, the number of rows returned by the statement, among others.

 

SQL: BatchStarting

The SQL: BatchStarting event is fired whenever a new batch begins. Once a batch begins, then one or more individual Transact-SQL statements occur. The SQL: BatchStarting event is a good event to easily see where a batch begins, but other than this, it is not particularly useful.

 

SQL: BatchCompleted

The SQL: BatchCompleted event occurs when a batch completes. This means that one or more Transact-SQL statements have completed for the batch. The SQL: BatchCompleted event is more useful than the SQL: BatchStarting event because it includes useful information like the duration of the entire batch, the logical number of reads and writes caused by all the statements inside the batch, the total number of rows returned by the batch, and other useful information.

 

Selecting Data Columns

You don’t need to select many data columns to capture the data required to analyze deadlocks, though you can add any you find useful. At a minimum, I select the following data columns, ordered as shown below.

 

·         Events

·         TextData

·         ApplicationName

·         DatabaseName

·         ServerName

·         SPID

·         LoginName

·         BinaryData

 

Selecting Column Organization

I don’t perform any grouping or aggregation when tracing Profiler events, but I generally order the data columns in a way that works best for me.

 

Running the Trace

One of the problems with troubleshooting deadlocks is that they are often hard to predict. Because of this, you may have to run your deadlock trace for a substantial amount of time (like 24 hours or more) in order to capture deadlocks when they occur. Ideally, you will only perform the trace during time periods where you know deadlocks are likely to occur, in order to minimize the impact of the trace on your server.

 

If you run a trace for 24 hours, many events may be captured, especially on a very busy production server. If this is the case, you may only want to capture the Deadlock Graph event, and no others, in order to reduce the load on the production server. As I mentioned earlier, the other events I list are context events and are not required to troubleshoot most deadlock problems.
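If you need to leave such a minimal trace running for a long time, one low-overhead option is to script it as a server-side trace instead of running the Profiler GUI. Below is a minimal sketch that captures only the Deadlock Graph event (event class 148); the trace file path and size values are placeholders you will need to adjust:

-- A minimal server-side trace capturing only the Deadlock Graph event.
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
DECLARE @on bit;
SET @maxfilesize = 5;  -- MB per rollover file (placeholder value)
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc to the file name.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\DeadlockTrace', @maxfilesize, NULL;

-- Event class 148 = Deadlock graph; column 1 = TextData (the deadlock XML),
-- column 12 = SPID, column 14 = StartTime.
EXEC sp_trace_setevent @TraceID, 148, 1, @on;
EXEC sp_trace_setevent @TraceID, 148, 12, @on;
EXEC sp_trace_setevent @TraceID, 148, 14, @on;

-- Start the trace, and keep the ID so you can stop (status 0) and close (status 2) it later.
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;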

 

Analyzing the Trace

Now that we know how to set up a trace to analyze deadlocking behavior, let’s look at an example to see what information is collected, and how we can best use it to identify the cause of a deadlock.

 

Figure 2: These are the results of capturing a deadlock using the events I have recommended.

 

To create a deadlock for demonstration purposes, I ran two separate transactions in two different sessions that I knew would deadlock with each other. These are represented by the eight SQL:BatchStarting and SQL:BatchCompleted events at the beginning of the above trace.
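The exact statements aren't shown above, but a minimal sketch of the kind of paired transactions that produce this deadlock might look like the following, written against the AdventureWorks sample database (the source of the index names that appear later in the graph). The key values and delays are illustrative; run each batch in a separate query window:

-- Session 1: touch SalesTaxRate first, then SalesTerritory.
BEGIN TRAN;
UPDATE Sales.SalesTaxRate SET TaxRate = TaxRate WHERE SalesTaxRateID = 1;
WAITFOR DELAY '00:00:05';  -- give the other session time to take its first lock
UPDATE Sales.SalesTerritory SET SalesYTD = SalesYTD WHERE TerritoryID = 1;
COMMIT;

-- Session 2: touch the same rows in the opposite order.
BEGIN TRAN;
UPDATE Sales.SalesTerritory SET SalesYTD = SalesYTD WHERE TerritoryID = 1;
WAITFOR DELAY '00:00:05';
UPDATE Sales.SalesTaxRate SET TaxRate = TaxRate WHERE SalesTaxRateID = 1;
COMMIT;

Because each session acquires its first lock and then asks for a resource the other already holds, neither can proceed, and SQL Server kills one of them.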

 

When SQL Server determines that a deadlock has occurred, the first event that denotes this is the Lock:Deadlock Chain event. There are two of these in the above trace, for SPID 55 and SPID 54. Next, the Deadlock Graph event is fired, and last, the Lock:Deadlock event is fired.

 

Once SQL Server detects a deadlock, it picks a winner and a loser. The SQL:BatchCompleted event that immediately follows the Lock:Deadlock event is the transaction that was killed and rolled back, and the next SQL:BatchCompleted event after it is the transaction that was picked as the winner and ran successfully.

 

If you have trouble following the above example, don’t worry, as it will all make more sense when we take a close look at the Deadlock Graph event.

 

When I click on the Deadlock Graph event in Profiler, a deadlock graph appears at the bottom of the Profiler screen, as shown below.

 

Figure 3: The Deadlock Graph summarizes all the activity that caused the deadlock to occur.

 

Yes, I know you can’t read the graph just yet, but I wanted you to see the relationship between the top and bottom portions of the Profiler screen. There is more to this graph than is obvious at first glance.

 

The left oval on the graph, with the blue cross, represents the transaction that was chosen as the deadlock victim by SQL Server. If you move the mouse pointer over the oval, a tooltip appears. This oval is also known as a Process Node as it represents a process that performs a specific task, such as an INSERT, UPDATE, or DELETE.

 

The right oval on the graph represents the transaction that was successful. If you move the mouse pointer over this oval, a tooltip appears as well. This oval is also a Process Node.

The two rectangular boxes in the middle are called Resource Nodes, and they represent a database object, such as a table, row, or an index. These represent the two resources that the two processes were fighting over. In this case, both of these Resource Nodes represent indexes that each process was trying to get an exclusive lock on.

 

The arrows you see pointing from and to the ovals and rectangles are called Edges. An Edge represents a relationship between processes and resources. In this case, they represent types of locks each process has on each Resource Node.

 

Now that you have a basic understanding of the “big” picture, let’s drill down into the details. Let’s start by looking at each of the Process Nodes, beginning with the successful one, on the right side of our Deadlock Graph.

 

 

 

Figure 4: This transaction was selected as the winner

 

Before we discuss what this Process Node is telling us, we first need to learn a few new terms, as a number of them are listed inside the Process Node.

 

·         Server Process ID: This is the SPID of the process.

·         Server Batch ID: This is the internal reference number for the batch this code is running in.

·         Execution Context ID: This is the internal reference number of the thread for the above SPID. A value of 0 represents the main, or parent thread.

·         Deadlock Priority: By default, neither transaction has a greater or smaller chance of becoming the deadlock victim than the other. However, if you use the SET DEADLOCK_PRIORITY command for a particular session, that session’s transaction can be assigned a priority of Low, Normal, or High, which lets the DBA or developer control which session is more important than another when it comes to choosing a victim. A value of 0 indicates the default (Normal) priority. A short usage sketch follows this list.

·         Log Used: This is the amount of log space used by the transaction up to the point the deadlock occurs. SQL Server uses this information to help it determine which transaction has used up the most resources so far, so that the transaction that has used the least resources is killed and rolled back, helping to minimize the amount of resources used to deal with the deadlock.

·         Owner ID: This is the internal reference number for the transaction that is occurring.

·         Transaction Descriptor: This is an internal reference number that indicates the state of the transaction.
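As a quick illustration of the Deadlock Priority item above, here is a minimal sketch of how a less important session can volunteer itself as the preferred deadlock victim before starting its transaction:

-- Run in the less important session before its transaction begins.
SET DEADLOCK_PRIORITY LOW;

-- SQL Server 2005 also accepts an integer from -10 (most likely victim) to 10:
SET DEADLOCK_PRIORITY -5;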

 

As you can see, there is a lot of data provided, but it is not all that useful unless you have an intimate knowledge of the internal workings of SQL Server. What is more useful is the tooltip, which lists the exact Transact-SQL code that was executed to cause the deadlock.

 

Now that we have the Process Node definitions down, let’s take a more detailed look at what Figure 4 is telling us. First, we know it was the winning transaction because it does not have a blue cross through it. Second, it provides the exact Transact-SQL code that was running when the deadlock occurred, which is extremely useful because it lets us trace the event to specific problematic code. Third, it tells us that this Process Node has an exclusive lock on the top Resource Node (the X represents an exclusive lock). And fourth, it tells us that it has requested another exclusive lock on the bottom Resource Node. When you look at this Process Node in isolation, this is not a big deal. The problem occurs when this transaction bumps heads with another transaction, as we find out next.

 

Figure 5: This transaction is the deadlock victim.

 

On the left side of the Deadlock Graph (Figure 5 above) is the other Process Node. Like the winning Process Node, this node tells us the following: First, this was the losing transaction. Second, it provides the Transact-SQL code that contributed to the deadlock. Third, it tells us that it had an exclusive lock on the bottom Resource Node. Fourth, it tells us that it requested an update lock on the top Resource Node. We’ll talk more about the locking conflicts shortly, but for now, let’s look at the two Resource Nodes.

 

Figure 6: The resource nodes tell us what resources the transactions were fighting over.

 

Both of these Resource Nodes represent indexes, which the two transactions needed access to in order to perform their requested work. Like Process Nodes, Resource Nodes have some definitions we need to learn.

 

·         HoBt ID: This number refers to a subset of data/index pages within a single partition. These may be in the form of a heap or a B-Tree. In SQL Server 2005, the HoBt ID is identical to the Partition ID found in the sys.partitions table.

·         Associated Objid: This is the object ID of the table associated with this index.

·         Index Name:  The name of the index.

 

The most useful piece of information here is the name of the index, which can help when deciding how best to reduce or eliminate the deadlocks in question.
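If you only have the HoBt ID (for example, from the XML form of the deadlock graph), you can resolve it to the table and index names yourself. A minimal sketch, using a hypothetical partition ID value:

-- Translate a HoBt ID from a deadlock graph into table and index names.
-- The partition_id value below is a hypothetical example; substitute your own.
SELECT o.name AS TableName,
       i.name AS IndexName
FROM sys.partitions AS p
JOIN sys.objects AS o ON o.object_id = p.object_id
JOIN sys.indexes AS i ON i.object_id = p.object_id
                     AND i.index_id = p.index_id
WHERE p.partition_id = 72057594042974208;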

 

The top Resource Node represents the PK_SalesTaxRate_SalesTaxRateID index and the bottom Resource Node represents the PK_SalesTerritory_TerritoryID index.

 

Now that we have discussed all the details of this Deadlock graph, let’s bring all the pieces together.

 

1.       SPID 54 started a transaction, then requested and received an Exclusive lock on the PK_SalesTaxRate_SalesTaxRateID index.

2.       SPID 55 started a transaction, and then requested an Exclusive lock on the PK_SalesTerritory_TerritoryID index.

3.       SPID 55, as part of the same transaction, then requested an Update lock on the PK_SalesTaxRate_SalesTaxRateID index. However, this lock was not granted because SPID 54 already had an Exclusive lock on the index. In most cases, this means that SPID 55 has to wait its turn before it can get an Update lock on PK_SalesTaxRate_SalesTaxRateID. At this point, SPID 54 is blocking SPID 55.

4.       While this blocking continues, SPID 54 wants to complete its transaction. In step 1 above, it had only started the transaction; it had not completed it. To complete it, SPID 54 must get an Exclusive lock on PK_SalesTerritory_TerritoryID. The problem is that it can’t, because SPID 55 already has an Exclusive lock on that index. Now we have a deadlock. Neither SPID can continue, because each transaction is preventing the other from finishing. Because this is not a good thing, SQL Server looks at the two transactions and decides to kill the one that has used the least amount of resources so far. In this case, SPID 55 has used 220 units of log space and SPID 54 has used 1612 units, so SPID 55 is chosen as the victim because it has used fewer resources.

5.       SQL Server kills SPID 55 and its transaction is rolled back, which releases the Exclusive lock on PK_SalesTerritory_TerritoryID, allowing SPID 54 to get an Exclusive lock on it and complete its transaction. From the application’s point of view, the victim’s batch fails with error 1205; see the sketch after this list.
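Because the victim’s batch fails with error 1205, an application can recover gracefully by retrying the transaction. Below is a minimal retry sketch using SQL Server 2005 TRY/CATCH; the statements inside the transaction are the illustrative AdventureWorks updates from earlier:

-- Retry a transaction a few times if it is chosen as a deadlock victim (error 1205).
DECLARE @retries int;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- The work that sometimes deadlocks (illustrative statements):
        UPDATE Sales.SalesTaxRate SET TaxRate = TaxRate WHERE SalesTaxRateID = 1;
        UPDATE Sales.SalesTerritory SET SalesYTD = SalesYTD WHERE TerritoryID = 1;
        COMMIT;
        BREAK;  -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;  -- deadlock victim: try again
        ELSE
            SET @retries = 0;  -- some other error: give up (rethrow omitted in this sketch)
    END CATCH
END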

 

You may have to read this section several times to grasp all the activity I have just described; it is not particularly easy to follow. However, once you grasp what the Deadlock Graph is telling you, you are in a much better position to identify the code and/or objects that are contributing to the deadlocking problem, giving you an opportunity to fix it. In most cases, this will require developers to get involved. Fortunately, you now have the information you need to share with the developers so they can remedy the problem.

 

Reducing Deadlocking Problems

Many different things can contribute to the cause of a deadlock in SQL Server. Below are some suggestions on how to eliminate, or at least mitigate, deadlocking problems in your applications. This list is only a starting point and should not be considered a complete list of your options for preventing or reducing deadlocking. You may want to share it with your developers, along with the information you identified during your trace; a short sketch of two of these options follows the list.

 

•              Ensure the database design is properly normalized.

•              Have the application access database objects in the same order every time.

•              Keep transactions as short as possible.

•              During transactions, don't allow any user input.

•              Avoid cursors.

•              Consider reducing lock escalation by using the ROWLOCK or PAGLOCK hint.

•              Consider using the NOLOCK hint, where dirty reads are acceptable, to prevent locking.

•              Use as low a level of isolation as possible for user connections.
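To make a couple of these suggestions concrete, here is a minimal sketch of two of the options above: lowering the locking impact of read-committed isolation with row versioning, and hinting row-level locks on a specific statement. Both are illustrative only (the database name and statement are assumptions) and should be tested before use:

-- Option 1: let readers use row versioning instead of shared locks.
-- This is a database-wide change and requires no other active connections
-- in the database at the moment you run it.
ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON;

-- Option 2: ask for row-level locks on a specific hot statement to reduce
-- lock escalation (illustrative statement):
UPDATE Sales.SalesTerritory WITH (ROWLOCK)
SET SalesYTD = SalesYTD
WHERE TerritoryID = 1;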

 

Summary

As you can see, Profiler can be a very powerful tool for helping DBAs identify deadlocking problems. A deadlock Profiler trace is simple to create and run, so if you are currently experiencing deadlocking issues in your databases, take the time now to apply what you have learned in this article. The sooner you give this a try, the closer you will be to getting your deadlocking problems resolved.

2013. 1. 14. 14:23 BookStory

SQL Server Tacklebox

By Rodney Landrum

DBA Handbooks

SQL Server Tacklebox - Essential Tools and Scripts for the day-to-day DBA by Rodney Landrum


Download eBook (PDF): Free
Printed Book: $29.99
Download the ePub version: Free

For the day-to-day DBA, there are dangers around every corner; monsters waiting to strike down the unsuspecting database with a resource-hungry query, an inefficient data load, or even outright data corruption. When the worst happens, and SQL Server is down or performance is suffering, customers are quick to complain. During such difficult periods, you, the DBA, are the centre of attention. Suddenly, three or four people crowd your desk, laser-beam focused on your every action, fully expecting you to solve the problem quickly.

The success of your career as a DBA depends largely on how well-prepared you are to deal with such situations. Without verified backups, sound server documentation, and a ready supply of tried-and-tested troubleshooting tools, there is no safe place to hide when the monster rears up. All you can do is buy some time, patch things up as best you can and then duck away into your cubicle, which, if you lost any data in the process, may soon be empty.

However, with the tools and techniques provided in the SQL Server Tacklebox, you will be fully-equipped to fight back. Inside, you'll find scripts that will allow you to automate and standardize SQL Server installation, document and report on your servers, migrate data and manage data growth, troubleshoot performance issues, receive notifications of impending danger, secure access to your servers and fight off data corruption.

In short, you'll be able to respond quickly and professionally to the many issues that blight the waking, and often sleeping, hours of the DBA. Then, one day, who knows? That cubicle may turn into an executive office with wall-to-wall tinted windows revealing a flowing brook outside, where no monsters live.


About the Author

Rodney Landrum has been working with SQL Server technologies for longer than he can remember (he turned 40 in May of 2009, so his memory is going). He writes regularly about many SQL Server technologies, including Integration Services, Analysis Services, and Reporting Services, and has authored three books on Reporting Services. He is a regular contributor to SQL Server Magazine and Simple-Talk, on the latter of which he sporadically blogs about SQL and his plethora of geek tattoos. His day job finds him overseeing the health and well-being of a large SQL Server infrastructure in Pensacola, Florida. He swears he owns the expression "Working with Databases on a Day to Day Basis," and anyone who disagrees is itching to arm wrestle. Rodney is also a SQL Server MVP.
