2013. 1. 23. 14:28 Brain Trainning/DataBase


sp_BLITZ™ – SQL Server Takeover Script

When people hand us a SQL Server and ask us to take it over, we need to do a fast health check.  We need to put on our latex gloves, do a little poking around, and discover the dirty secrets before they catch us by surprise.  That’s where our sp_Blitz™ script comes in: it helps you rapidly assess configuration, security, health, and performance issues.  We organize the results in a prioritized list and give you simple explanations of each risk, plus a column of links that you can copy/paste into your browser for more information about each issue.

Download the sp_Blitz™ code here, and then you can copy/paste it into SSMS.  Run it to create the stored procedure, and then just EXEC sp_Blitz to run the checks.  On larger servers, it may take a couple of minutes for some of the more detailed tests, like untrusted foreign keys, but this shouldn’t cause any blocking.

WHAT THE PARAMETERS DO

@CheckUserDatabaseObjects – if 1, we review the user databases looking for things like heaps and untrusted foreign keys.  If your databases have more than a few thousand objects, this may require additional processing time.

@CheckProcedureCache – if 1, we grab the top 20-50 resource-intensive plans from the cache and analyze them for common design issues.  We’re looking for missing indexes, implicit conversions, user-defined functions, and more.  This fast scan isn’t incredibly detailed – we’re just looking for queries that might surprise you and require some performance tuning.

@OutputProcedureCache – if 1, we output a second result set that includes the queries, plans, and metrics we analyzed.  You can do your own analysis on these queries too looking for more problems.

@CheckProcedureCacheFilter – can be CPU, Reads, Duration, ExecCount, or null.  If you specify one, we’ll focus the analysis on those types of resource-intensive queries (like the top 20 by CPU use.)  If you don’t, we analyze the top 20 for all four (CPU, logical reads, total runtime, and execution count).  Typically we find that it’s not 80 different queries – it’s usually 25-40 queries that dominate all of the metrics.

@OutputType – if ‘TABLE’, we dump out the findings in, well, a table.  If ‘COUNT’, a vampire comes to your door and – wait, I’m hearing that’s not quite right. If ‘COUNT’, we output a single result row with the number of problems found.  You can use this in conjunction with monitoring software to find out when somebody’s been sleeping in your bed.
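For reference, here is a sketch of one possible invocation once the stored procedure has been created; the parameter values below are illustrative, not recommendations:

EXEC sp_Blitz 
    @CheckUserDatabaseObjects = 1,       -- scan user databases for heaps, untrusted keys, etc. 
    @CheckProcedureCache = 1,            -- analyze the top resource-intensive plans 
    @OutputProcedureCache = 1,           -- return the analyzed queries and plans as a second result set 
    @CheckProcedureCacheFilter = 'CPU',  -- focus the plan analysis on the top CPU consumers 
    @OutputType = 'TABLE'                -- return the findings as a table rather than a single count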


Source: http://www.brentozar.com/blitz/

posted by LifeisSimple
2013. 1. 22. 17:32 Brain Trainning/DataBase


General SQL Server Performance Tuning Tips

BY BRAD MCGEHEE

When your transaction log grows large and you want a quick way to shrink it, try this option. Change the recovery model of the database you want to shrink from “full” to “simple,” truncate the log file by performing a full backup of the database, then switch back to the “full” recovery model. By temporarily changing from the Full recovery model to the Simple recovery model, and then back, SQL Server will only keep the “active” portion of the log, which is very small.  Contributed by Tom Kitta.  
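As a rough sketch of the sequence (YourDB, the logical log file name YourDB_log, and the backup path are all placeholders; the DBCC SHRINKFILE step is an optional extra, not part of the tip itself):

USE master 
GO 
ALTER DATABASE YourDB SET RECOVERY SIMPLE      -- inactive log records are now truncated at checkpoint 
GO 
USE YourDB 
DBCC SHRINKFILE (YourDB_log, 512)              -- optionally shrink the physical file to a target size in MB 
GO 
USE master 
ALTER DATABASE YourDB SET RECOVERY FULL        -- switch back to the full recovery model 
GO 
BACKUP DATABASE YourDB TO DISK = N'D:\Backup\YourDB_full.bak'   -- restart the log backup chain 
GO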

*****

If you need to delete all the rows in a table, don’t use DELETE to delete them, as the DELETE statement is a logged operation and can take a significant amount of time, especially if the table is large. To perform the same task much faster, use the TRUNCATE TABLE statement instead, which is not a logged operation. Besides deleting all of the records in a table, this command will also reset the seed of any IDENTITY column back to its original value.

After you have run the TRUNCATE TABLE statement, it is important then to manually update the statistics on this table using UPDATE STATISTICS. This is because running TRUNCATE TABLE will not reset the statistics for the table, which means that as you add data to the table, the statistics for that table will be incorrect for a time period. Of course, if you wait long enough, and if you have Auto Update Statistics turned on for the database, then the statistics will eventually catch up with themselves. But this may not happen quickly, resulting in slowly performing queries because the Query Optimizer is using out-of-date statistics.  
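A minimal sketch, assuming a table named dbo.YourTable:

TRUNCATE TABLE dbo.YourTable       -- fast, minimally logged, and resets any IDENTITY seed 
UPDATE STATISTICS dbo.YourTable    -- refresh the statistics so the optimizer is not left with stale data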

*****

If you use TRUNCATE TABLE instead of DELETE to remove all of the rows of a table, TRUNCATE TABLE will not work when there are Foreign Key references present for that table. A workaround is to DROP the constraints before firing the TRUNCATE. Here’s a generic script that will drop all existing Foreign Key constraints on a specific table:

CREATE TABLE dropping_constraints 
( 
cmd VARCHAR(8000) 
)

INSERT INTO dropping_constraints 
SELECT 
'ALTER TABLE [' + 
t2.Table_Name + 
'] DROP CONSTRAINT ' + 
t1.Constraint_Name 
FROM 
INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS t1 
INNER JOIN 
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE t2 
ON 
t1.CONSTRAINT_NAME = t2.CONSTRAINT_NAME 
WHERE t2.TABLE_NAME = 'your_tablename_goes_here' 
DECLARE @stmt VARCHAR(8000) 
DECLARE @rowcnt INT 
SELECT TOP 1 @stmt = cmd FROM dropping_constraints 
SET @rowcnt = @@ROWCOUNT 
WHILE @rowcnt <> 0 
BEGIN 
EXEC (@stmt) 
SET @stmt = 'DELETE FROM dropping_constraints WHERE cmd = ' + QUOTENAME(@stmt, '''') 
EXEC (@stmt) 
SELECT TOP 1 @stmt = cmd FROM dropping_constraints 
SET @rowcnt = @@ROWCOUNT 
END 
DROP TABLE dropping_constraints

The above code can also be extended to drop all FK constraints in the current database. To achieve this, just comment out the WHERE clause.  

*****

Don’t run a screensaver on your production SQL Servers, it can unnecessarily use CPU cycles that should be going to your application. The only exception to this is the “blank screen” screensaver, which is OK to use.  

*****

Don’t run SQL Server on the same physical server that you are running Terminal Services, or Citrix software. Both Terminal Services and Citrix are huge resource hogs, and will significantly affect the performance of SQL Server. Running the administrative version of Terminal Services on a SQL Server physical server, on the other hand, is OK, and a good idea from a convenience point of view. As is mentioned in other parts of this website, ideally, SQL Server should run on a dedicated physical server. But if you have to share a SQL Server with another application, make sure it is not Terminal Services or Citrix.  

*****

Use sp_who or sp_who2 (sp_who2 is not documented in the SQL Server Books Online, but offers more details than sp_who) to provide locking and performance-related information about current connections to SQL Server. Sometimes, when SQL Server is very busy, you can’t use Enterprise Manager or Management Studio to view current connection activity via the GUI, but you can always use these two commands from Query Analyzer or Management Studio, even when SQL Server is very busy.  
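For example (the 'active' argument simply filters out idle sessions):

EXEC sp_who2              -- more detail than sp_who: CPU, disk I/O, blocking spid, and more 
EXEC sp_who 'active'      -- show only sessions that are currently doing work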

*****

SQL Server uses its own internal thread scheduler (called the UMS) when running in either native thread mode or in fiber mode. By examining the UMS’s Scheduler Queue Length, you can help determine whether or not the CPU or CPUs on the server are a bottleneck.

This is similar to checking the System Object: Processor Queue Length in Performance Monitor. If you are not familiar with this counter, what this counter tells you is how many threads are waiting to be executed on the server. Generally, if there are more than two threads waiting to be executed on a server, then that server can be assumed to have a CPU bottleneck.

The advantage of using the UMS’s Schedule Queue Length over the System Object: Processor Queue Length is that it focuses strictly on SQL Server threads, not all of the threads running on a server.

To view what is going on inside the UMS, you can run the following undocumented command:

DBCC SQLPERF(UMSSTATS)

For every CPU in your server, you will get one Scheduler. Each Scheduler will be identified with a number, starting with 0. So if you have four CPUs in your server, there will be four Schedulers listed after running the above command, Scheduler IDs 0 through 3.

The “num users” tells you the number of SQL threads there are for a specific scheduler.

The “num runnable,” better known as the “Scheduler Queue Length,” is the key indicator to watch. Generally, this number will be 0, which indicates that there are no SQL Server threads waiting to run. If this number is 2 or more, it indicates a possible CPU bottleneck on the server. Keep in mind that the values presented by this command are point-in-time data, which means that they are only accurate for the split second when they were captured and are constantly changing. But if you run this command when the server is very busy, the results should be indicative of what is going on at that time. You may want to run this command multiple times to see what is happening over time.

The “num workers” refers to the actual number of worker threads there are in the thread pool.

The “idle workers” refers to the number of idle worker threads.

The “cntxt switches” refers to the number of context switches between runnable threads.

The “cntxt switches(idle)” refers to the number of context switches to “idle” threads.

As you can see, this command is for advanced users, and is just one of many tools that can be used to see internally how SQL Server is performing.  

*****

SQL Server offers a system table-valued function that provides statistical information on the I/O activity of specific database files. It is called fn_virtualfilestats. What is interesting about this function is that it is specific to a database file. For example, if you want to see the I/O activity of a particular user database file, you can. Keep in mind that all SQL Server databases have at least two physical files (one for the data and one for the log), and can have many more, depending on how the database was created. When using this function, you have to specify not only the database, but also the file within the database. These are very specific I/O statistics indeed. To run this function, use the syntax found below:

SELECT * FROM :: fn_virtualfilestats(dbid, fileid)

Where:

dbid: The database ID of the database you want to examine.

fileid: The file ID of the physical files that make up your database. At a minimum, each database has at least two files: the database file (MDF file) and a log file (LDF). A database can have many files, and the file number refers to the number of the physical file that you want to examine.

To identify the dbid and fileid you want to use in the above statement, run the following SELECT statement. It will provide you with a list of all of the database names, database ids, and file ids (for each database) on your server.

SELECT sysdatabases.name AS Database_Name, 
     sysdatabases.dbid AS Database_ID, 
     sysaltfiles.fileid AS File_ID 
FROM sysaltfiles INNER JOIN sysdatabases 
     ON sysaltfiles.dbid = sysdatabases.dbid 
ORDER BY sysdatabases.name, sysaltfiles.fileid

Once you run this function, a wide variety of information is available, including:

NumberReads: The number of physical reads made against this file since the last time SQL Server was restarted.

NumberWrites: The number of physical writes made against this file since the last time SQL Server was restarted.

BytesRead: The number of bytes read from this file since the last time SQL Server was restarted.

BytesWritten: The number of bytes written to this file since the last time SQL Server was restarted.

IoStallMS: The total amount of time that users have waited for I/Os to complete for this file (in milliseconds).

The first four statistics can give you a feel for how busy a particular file is. This can come in handy when comparing multiple filegroups in a database and to see how balanced the I/O is to each file. To make the most of filegroups, I/O should be spread out among the various files for best overall performance. The last statistic, IoStallMS, is best used to find out if you have a bottleneck in your transaction log, as demonstrated below:

SELECT IoStallMS / (NumberReads+NumberWrites) as IsStall 
FROM :: fn_virtualfilestats(dbid, fileid)

Where:

dbid: The database ID of the database you want to examine.

fileid: The file ID of the transaction log of the database being examined.

Essentially, if IsStall is > 20ms, then this indicates that the I/O to the transaction log is becoming a bottleneck, which in turn can lead to major concurrency problems in the database.

To help optimize the transaction log, assuming it has become a bottleneck, consider doing the following:

• Place the transaction log on a faster disk array. 
• Place the transaction log on a dedicated disk array (no other files other than the transaction log). This allows sequential writes to occur as fast as possible, helping to boost I/O performance. 
• Turn on write-back caching on your disk controller, but only if it is backed up with a battery and has been tested to work with SQL Server. Normally, write-back caching is turned off because it can cause database corruption should the server crash. 
 

*****

Sometimes, a user thread has to wait until the resources it needs are available. Hopefully, this won’t happen often, but it is a fact of life. But sometimes, long waits can indicate potential performance problems that can be corrected, if you know where to look. Long waits can be caused by blocking locks, slow I/O, and other factors.

Fortunately, you can access the amount of time a user thread has had to wait, which can tell you which user thread, if any, is taking more time than it should. For example, the query below can be run to identify any user threads that have had to wait more than one second:

SELECT spid, waittime, lastwaittype 
FROM master..sysprocesses 
WHERE waittime > 1000

When you run the above query, all of the processes that have been waiting for longer than 1 second (1000 milliseconds) — both system and user — will be displayed. You will want to ignore system processes, focusing your efforts on spids that represent specific user threads. The spid identifies the session (you will have to match the spid to the appropriate user), waittime is the number of milliseconds that this user thread has been waiting, and lastwaittype gives you a clue as to what the thread is waiting on.

In the above query, I have used 1000 milliseconds as a cutoff point, but you can use any amount of time you want. Generally speaking, any user thread that has been waiting for five seconds or more should definitely be evaluated for potential performance issues. Of course, you can choose any threshold you want.  
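For instance, here is a variation of the same query that uses a five-second threshold and skips the low-numbered spids that are normally system processes (the spid > 50 cutoff is a common rule of thumb, not something from the original query):

SELECT spid, waittime, lastwaittype 
FROM master..sysprocesses 
WHERE waittime > 5000     -- five seconds, expressed in milliseconds 
AND spid > 50             -- rough filter that leaves only user sessions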

*****

By default, you cannot use a UNC (Universal Naming Convention) name to specify a location of where to store a SQL Server database or log file. Instead, you must specify a drive letter that refers to a local physical drive or array. But what if you want to store your database or log file on another Windows Server or a Network Appliance, Inc. storage system? You can, but you will have to set Trace Flag 1807 on your SQL Server to allow the use of UNC names.
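A sketch of turning the flag on for the whole server (it can also be added as a -T1807 startup parameter so that it survives restarts):

DBCC TRACEON (1807, -1)    -- the -1 applies the trace flag globally, not just to this connection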

*****

For a quick and dirty way to check to see if your SQL Server has maxed out its memory (and is causing your server to page), try this. Bring up the Task Manager and go to the “Performance” tab.

Here, check out two numbers: the “Total” under “Commit Charge (k)” and the “Total” under “Physical Memory (k)”. If the “Total” under “Commit Charge (k)” is greater than the “Total” under “Physical Memory (k)”, then your server does not have enough physical memory to run efficiently as it is currently configured and is most likely causing your server to page unnecessarily. Excess paging will slow down your server’s performance.

Another number to make note of is the “Available Physical Memory (K)”. This number should be 4MB or higher. If it is not, then your SQL Server is most likely suffering from a lack of physical RAM, hurting performance, and more RAM needs to be added.

If you notice this problem, you will probably want to use System Monitor to further investigate the cause of this problem. You will also want to check to see how much physical memory has been allocated to SQL Server. Most likely, this setting has been set incorrectly, and SQL Server has been set to use too much physical memory. Ideally, SQL Server should be set to allocate physical RAM dynamically.  

*****

When performance tuning a SQL Server, it is often handy to know if the disk I/O of your servers (and the databases on it) are mostly reads or mostly writes. This information can be used to calculate the ratio of writes to reads of your server, and this ratio can affect how you might want to tune your SQL Server. For example, if you find that your server is heavy on the writes, then you will want to avoid RAID 5 if you can, and use RAID 10 instead. This is because RAID 5 is much less efficient than RAID 10 at writes. But if your server has a much greater number of reads than writes, then perhaps a RAID 5 system is more than adequate.

One of the quickest ways to find out the ratio of reads to writes on your SQL Servers is to run Task Manager and look at the sqlservr.exe process (this is the mssqlserver or sqlserv service) and view the total number of I/O Read Bytes and I/O Write Bytes. If you don’t see this in Task Manager, go to View|Select Column, and add these two columns to Task Manager.

The results you see tell you how many bytes of data have been written and read from the SQL Server service since it was last restarted. Because of this, you don’t want to read this figure immediately after starting the SQL Server service, but after several days of typical use.

In one particular case I looked at, the SQL Server had 415,006,801,908 I/O bytes read and 204,669,746,458 bytes written. This server had about one write for every two reads. In this case, RAID 5 is probably a good compromise in performance, assuming that RAID 10 is not available from a budget perspective. But if the reverse were true, and there were two writes for every one read, then RAID 10 would be needed for best overall performance of SQL Server.  

*****

Internet Information Server (IIS) has the ability to send its log files directly to SQL Server for storage. Busy IIS servers can actually get bogged down trying to write log information directly to SQL Server. Because of this, it is generally not recommended to write web logging information to SQL Server. Instead, logs should be written to text files, and later imported into SQL Server using BCP, DTS, or SSIS.  

*****

SQL Server has a database compatibility mode that allows applications written for previous versions of SQL Server to run under the current version of SQL Server. If you want maximum performance from your database, you don’t want to run your database in compatibility mode (not all new performance-related features are supported).

Instead, your databases should be running in the current native SQL Server mode. Of course, this may require you to modify your application to make it compliant to the version of SQL Server you are running, but in most cases, the additional work required to update your application will be more than paid for with improved performance.  
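As a sketch (the database name is a placeholder; on SQL Server 2000/2005 the change is made with sp_dbcmptlevel, where 90 corresponds to the native SQL Server 2005 level):

EXEC sp_dbcmptlevel 'YourDB'        -- report the current compatibility level 
EXEC sp_dbcmptlevel 'YourDB', 90    -- set the database to the native SQL Server 2005 level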

*****

When experimenting with the tuning of your SQL Server, you may want to run the DBCC DROPCLEANBUFFERS command to remove all the test data from SQL Server’s data cache (buffer) between tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, before running the DBCC DROPCLEANBUFFERS command, you may want to run the CHECKPOINT command first. Running CHECKPOINT will write all dirty buffers to disk. And then when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones.

If you want to clear out all of the stored procedure cache, use this command, DBCC FREEPROCCACHE. If you only want to clear out the stored procedure cache for a single database (not the entire server) use DBCC FLUSHPROCINDB. All of these commands are for testing purposes and should not be run on a production SQL Server.  
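Putting those commands together, a test-environment-only sketch might look like this (DBCC FLUSHPROCINDB is undocumented and takes a database ID):

CHECKPOINT                 -- write all dirty buffers to disk first 
DBCC DROPCLEANBUFFERS      -- then empty the data cache 
DBCC FREEPROCCACHE         -- clear the plan cache for the whole server 
-- Or clear the plan cache for a single database; look up its ID with SELECT DB_ID('YourDB') 
DBCC FLUSHPROCINDB (8)     -- 8 is a placeholder database ID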

*****

Orphan SQL Server sessions can negatively affect SQL Server’s performance. An orphan SQL Server session can occur when a client improperly disconnects from SQL Server, such as when the client loses power. When this happens, the client cannot tell SQL Server to properly close the connection, so the SQL Server connection remains open, even though it is not being used.

This can affect SQL Server’s performance two ways. First, orphaned connections use up SQL Server connections, which takes up server resources. Secondly, it is possible that the orphan connections may be holding locks that block other users; or temp tables or cursors may be held open that also take up unnecessary server resources.

The OS periodically checks for inactive SQL Server sessions, and if it finds any, it will notify SQL Server so that the connection can be removed. Unfortunately, this only occurs every 1-2 hours, depending on the protocol used. If orphaned SQL Server sessions become a problem, Windows Server’s registry can be modified so that it checks more often for orphaned connections.

Identifying an orphaned connection from SQL Server is very difficult, but if you can identify it, it can be removed by KILLing the connection using SSMS or by running the KILL statement.  
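If you do manage to identify the orphaned spid, removing it is a one-liner (54 below is just a placeholder spid):

KILL 54    -- replace 54 with the spid of the orphaned connection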

*****

For best performance, don’t mix production databases and development (test or staging) databases on the same physical server. This not only serves to better separate the two functions (production and development), but prevents developers from using up server resources that could be better used by production users.  

*****

When we think of performance, we usually think about speeding up our application’s performance. But another way to look at performance is to look at our performance as DBAs or Transact-SQL developers.

For example, one of the easiest ways to speed up our Transact-SQL coding, in addition to making our code easier to maintain and troubleshoot once it is written, is to format it in an easy-to-read way.

While there are many different code formatting guidelines available, here are some basic ones you should consider following, if you aren’t doing so already:

Begin each line of your Transact-SQL code with a SQL verb, and capitalize all Transact-SQL statements and clauses, such as:

SELECT customer_number, customer_name 
FROM customer 
WHERE customer_number > 1000 
ORDER BY customer_number

If a line of Transact-SQL code is too long to fit onto one line, indent the following line(s), such as:

SELECT customer_number, customer_name, customer_address, 
     customer_state, customer_zip, customer_phonenumber

Separate logical groupings of Transact-SQL code by using appropriate comments and documentation explaining what each grouping does.

These are just a few of the many possible guidelines you can follow when writing your Transact-SQL code to make it more readable by you and others. You just need to decide on some standard, and then always follow it in your coding. If you do this, you will definitely boost your coding performance.

*****

Be wary of allowing users to directly access your databases (especially OLTP databases) with third-party database access tools, such as Microsoft Excel or Access. Many of these tools can wreak havoc with your database’s performance. Here are some reasons why:

• Often these users aren’t experienced using these tools, and create overly complex queries that eat up server resources. At the other extreme, their queries may not be complex enough (such as lacking effective WHERE clauses) and return thousands, if not millions, of unnecessary rows of data. 
• This reporting activity can often lock rows, pages or tables, creating user contention for data and reducing the database’s performance. 
• These tools are often file-based. This means that even if an effective query is written, the entire table (or multiple tables in the case of joins) has to be returned to the client software where the query is actually performed, not at the server. Not only can this lead to excess server activity, it can also play havoc with your network.

If you have no choice but to allow users access to your data, try to avoid them hitting your production OLTP databases. Instead, point them to a “reporting” server that has been replicated, or is in the form of a datamart or data warehouse.  

*****

SQL Server offers support for SSL encryption between clients and the server. While selecting this option prevents the data from being viewed, it also adds additional overhead and reduces performance. Only use SSL encryption if absolutely required. If you need to use SSL encryption, consider purchasing an SSL encryption processor for the server to speed performance.  

*****

SQL Server supports named instances. You can run up to 16 concurrent instances of SQL Server on the same server.

As you might imagine, each running instance of SQL Server takes up server resources. Although some resources are shared by multiple running instances, such as MSDTC and the Microsoft Search services, most are not. Because of this, each additional instance of SQL Server running on the same server has to fight for available resources, hurting performance.

For best performance, run only a single instance (usually the default) on a single physical server. The main reasons for using named instances are upgrading older versions of SQL Server to newer versions, transition periods where you need to test your applications on multiple versions of SQL Server, and use on development servers.  

*****

If you run the ALTER TABLE DROP COLUMN statement to drop a variable length or text column, did you know that SQL Server will not automatically reclaim this space after performing this action? To reclaim this space, which will help to reduce unnecessary I/O due to the wasted space, you can run the following command:

DBCC CLEANTABLE (database_name, table_name)

Before running this command, you will want to read about it in Books Online to learn about some of its options that may be important to you.

*****

Trace flags, which are used to enable and disable some special database functions temporarily, can sometimes chew up CPU utilization and other resources on your SQL Server unnecessarily. If you just use them for a short time to help diagnose a problem, for example, and then turn them off as soon as you are done using them, then the performance hit you experience is small and temporary.

What happens sometimes is that you, or another DBA, turns on a trace flag but forgets to turn it off. This, of course, can negatively affect your SQL Server’s performance. If you want to check whether there are any trace flags turned on for a SQL Server, run this command in Query Analyzer:

DBCC TRACESTATUS(-1)

If there are any trace flags on, you will see them listed on the screen after running this command. DBCC TRACESTATUS only finds trace flags turned on at the client (connection) level. If a trace flag has been turned on for the entire server, it will not show up.

If you find any, you can turn them off using this command:

DBCC TRACEOFF(number of trace)

*****

SQL Server offers a feature called the black box. When enabled, the black box creates a trace file of the last 128K worth of queries and exception errors. This can be a great tool for troubleshooting some SQL Server problems, such as crashes.

Unfortunately, this feature uses up SQL Server resources to maintain the trace file, which can negatively affect performance. Generally, you will only want to turn the black box on when troubleshooting, and turn it off during normal production. This way, your SQL Server will be minimally affected.  

*****

If you have ever performed a SELECT COUNT(*) on a very large table, you know how long it can take. For example, when I ran the following command on a large table I manage:

SELECT COUNT(*) from <table_name>

It took 1:09 to count 10,725,948 rows in the table. At the same time, SQL Server had to perform a lot of logical and physical I/O in order to perform the count, chewing up important SQL Server resources.

A much faster, and more efficient, way of counting rows in a table is to run the following query:

SELECT rows 
FROM sysindexes 
WHERE id = OBJECT_ID('<table_name>') AND indid < 2

When I ran the query against the same table, it took less than a second to run, and it gave me the same results. Not a bad improvement, and it took virtually no I/O. This is because the row count of your tables is stored in the sysindexes system table of your database. So instead of counting rows when you need to, just look up the row count in the sysindexes table.

There is one potential downside to using the sysindexes table: this system table is not updated in real time, so it might underestimate the number of rows you actually have. Assuming you have the database options “Auto Create Statistics” and “Auto Update Statistics” turned on, the value you get should be very close to correct, if not exactly correct. If you can live with a very close estimate, then this is the best way to count rows in your tables.

*****

Do you use Enterprise Manager to access remote servers, possibly over a slow WAN link? If you do, have you ever had any problems getting Enterprise Manager to connect to the remote server?

If so, the problem may lie in the fact that if Enterprise Manager cannot make a connection within 4 seconds, the connection attempt fails. To overcome slow network connections, you can change the default timeout value from 4 seconds to any amount of time you like.

To change the default timeout value, select Tools > Options from Enterprise Manager, and then select the “Advanced” tab. Here, change the “Login time-out (seconds)” option to a higher number.  

*****

SQLDIAG.exe is a command-line tool that collects information about SQL Server and writes it to a text file. It can be useful for documenting or troubleshooting your SQL Server. When you run this command while SQL Server is running, the following information is collected and stored in a text file called sqldiag.txt, which is written to the mssql\log folder.

The text of all error logs 
SQL Server registry information 
SQL Server dll version information 
The output from these system stored procedures:

sp_configure 
sp_who 
sp_lock 
sp_helpdb 
xp_msver 
sp_helpextendedproc 
sysprocesses 
Input buffer SPIDs/deadlock information 
Microsoft diagnostics report for the server 
The last 100 queries and exceptions (if the query history trace was running) 
 

*****

Memory leaks can steal valuable memory from your SQL Server, reducing performance, and perhaps even forcing you to reboot your server. A memory leak occurs when a poorly-written or buggy program requests memory from the operating system, but does not release the memory when it is done with it. Because of this, the application can use up more and more memory in a server, greatly slowing it down, and even perhaps crashing the server.

Some memory leaks come from the operating system itself, device drivers, MDAC components, and even SQL Server. And of course, virtually any application can cause a memory leak, which is another good reason to dedicate a single server to SQL Server instead of sharing it among multiple applications.

Memory leaks are often hard to identify, especially if they leak memory slowly. Generally, memory leaks become apparent when you notice that your server is running out of available memory and paging becomes a big problem. A symptom of this is a SQL Server that runs quickly after being rebooted, but begins to run more and more slowly as time passes, and when the system is rebooted again, it speeds up again.

One way to help get rid of many memory leaks is to ensure that you always have the latest service packs or updates for your server’s software. But a memory leak you find may not have an immediate fix. If this is the case, you may be forced to reboot your server periodically in order to free up memory.

Identifying what is causing a memory leak is often difficult. One method involves using System Monitor to monitor all of the counters in the Memory object over time, to see what is happening internally in your computer. Another method is to use Task Manager to view how much memory is used by each process. A process that seems to be using an unusual amount of memory may be the culprit.

*****

While SSMS can make some DBA and developer tasks a little bit easier, in other cases it can cause performance problems. For example, if you decide to change the schema of a table from within SSMS, you may find that doing so takes much longer than performing the same schema change using the Transact-SQL ALTER command. This is because SSMS may not use ALTER, but instead create an entirely new table based on your new schema and then move the data from the old table to the new table, which can take a long time for large tables. Generally, using ALTER is very efficient and schema changes occur quickly, even on large tables.

Because of this, consider using ALTER to modify the schemas of very large tables instead of SSMS.  

*****

If you want to see a detailed accounting of how memory is being used in your SQL Server, run this undocumented DBCC command:

DBCC MEMORYSTATUS

The results look similar to this:

Buffer Distribution          Value 
—————————— ———– 
Stolen                         2914 
Free                         3049 
Procedures                    43521 
Inram                         0 
Dirty                         8366 
Kept                         1209 
I/O                         0 
Latched                    0 
Other                         163981


Buffer Counts               Value 
—————————— ———– 
Commited                    223040 
Target                         223040 
Hashed                         173556 
InternalReservation          202 
ExternalReservation          0 
Min Free                    201


Procedure Cache               Value 
—————————— ———– 
TotalProcs                    28301 
TotalPages                    43521 
InUsePages                    19676


Dynamic Memory Manager     Value 
—————————— ———– 
Stolen                         46436 
OS                         507 
General                    7137 
QueryPlan                    39310 
Optimizer                    0 
Utilities                    13 
Connection                    483


Global Memory Objects          Value 
—————————— ———– 
Resource                    932 
Locks                         78 
XDES                         1 
SQLCache                    1195 
Replication                    2 
LockBytes                    2 
ServerGlobal               71


Query Memory Manager          Value 
—————————— ———– 
Grants                         0 
Waiting                    0 
Available                    150336 
Maximum                    150336


Optimization Queue          Value 
—————————— ———– 
Optimizing                    0 
Waiting                    0 
Available                    16 
Maximum                    16

The value refers to the number of 8K buffers. [7.0, 2000] Updated 5-7-2007

*****

If you would like to see what is stored in SQL Server’s stack, you can do so by running this undocumented SQL Server DBCC command:

DBCC STACKDUMP

When you run this command, a stack dump will be performed, writing a file named SQL00001.dmp in the SQL Server log folder. This is a text file you can view with Notepad.

*****

Many times it is useful to compare the performance of a query or stored procedure as you are testing it. This is especially true if the time differences are so small that it is hard for you to know for sure which query was faster or slower.

One of the easiest ways to measure the time it takes for a query or stored procedure to run is to find the starting and ending times, then find the difference between the two. This can be easily accomplished using a few extra lines of Transact-SQL code. Here’s how:

DECLARE @starttime datetime 
SELECT @starttime = GETDATE()

Your code here …

SELECT DATEDIFF(ms, @starttime, GETDATE()) AS 'Total Time (ms)'

The results will show how long, in milliseconds, it took your code to run. SQL Server’s time function is only accurate to +/- 3 milliseconds, so keep this in mind.

*****

In SSMS, the most common way to script currently existing objects is to right-click on the database in question and select “All Tasks,” then choose “Generate SQL Script,” and so on, until you script the object you want. This works fine, but takes a lot of keystrokes.

Another way, from within SSMS, to script existing objects is to follow these steps instead:

• Open the database and right-click on the object you want to script, then click “Copy.” Or if you like keyboard shortcuts, press CTRL-C. 
• Open a copy of Query Analyzer, Notepad, or any text editor. 
• Now on the “Edit” menu from the application, click “Paste” (or press CTRL-V), and the scripted code appears, including any appropriate constraints.

This is a quick and easy way to script database objects you may find handy.

*****

Have you ever wondered how people find out about “undocumented” stored procedures? Have you ever wondered about how SQL Server works behind the scenes? And have you ever wanted to see some examples of how top-notch Transact-SQL developers write SQL code?

All of the above is at your fingertips, although you won’t see many references to it in Microsoft’s documentation. To accomplish all of the above, all you have to do is go to your master SQL Server database in SSMS, open the Stored Procedures Folder, and take a look. All SQL Server stored procedures can be seen here, even the ones not documented by Microsoft. To view the Transact-SQL code in a stored procedure, right-click on the stored procedure you are interested in and click Properties.

If you want to use Query Analyzer instead, all you have to do is to use this command:

USE master 
sp_helptext [system_stored_procedure_name]

In many cases, the Transact-SQL code you see is documented (although generally not in great detail), and if you know Transact-SQL, you should be able to follow the code and understand how it works. In some cases, you may need to experiment a little to better understand what is going on.

For example, here’s the Transact-SQL code for the sp_updatestats system stored procedure:

CREATE PROCEDURE sp_updatestats 
AS

DECLARE @dbsid varbinary(85)

SELECT @dbsid = sid 
FROM master.dbo.sysdatabases 
WHERE name = db_name()

/* Check the user is sysadmin */ 
IF NOT is_srvrolemember('sysadmin') = 1 AND suser_sid() <> @dbsid 
BEGIN 
RAISERROR(15288,-1,-1,'SA or DBO','sp_updatestats') 
RETURN (1) 
END

DECLARE @exec_stmt nvarchar(540) 
DECLARE @tablename sysname 
DECLARE @uid smallint 
DECLARE @user_name sysname 
DECLARE @tablename_header varchar(267) 
DECLARE tnames_cursor CURSOR FOR SELECT name, uid FROM sysobjects WHERE type = 'U' 
OPEN tnames_cursor 
FETCH NEXT FROM tnames_cursor INTO @tablename, @uid 
WHILE (@@fetch_status <> -1) 
BEGIN 
IF (@@fetch_status <> -2) 
BEGIN 
SELECT @user_name = user_name(@uid) 
SELECT @tablename_header = 'Updating ' + @user_name + '.' + RTRIM(LOWER(@tablename)) 
PRINT @tablename_header 
SELECT @exec_stmt = 'UPDATE STATISTICS ' + quotename(@user_name, '[') + '.' + quotename(@tablename, '[') 
EXEC (@exec_stmt) 
END 
FETCH NEXT FROM tnames_cursor INTO @tablename, @uid 
END 
PRINT ' ' 
PRINT ' ' 
raiserror(15005,-1,-1) 
DEALLOCATE tnames_cursor 
RETURN(0) -- sp_updatestats

GO

Besides learning how system stored procedures work and getting an opportunity to see professionally written code, you can also (if you really like to get your hands dirty) modify the code to do something slightly different. By this, I don’t mean modifying the code you see (that wouldn’t be prudent), but you can copy the code to a new stored procedure, modify it, and then save it with a different name.  

*****

On very rare occasions, you may get an error similar to this from SQL Server:

[Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 845: [Microsoft][ODBC SQL Server Driver][SQL Server]Time-out occurred while waiting for buffer latch type 3 for page (2:0), database ID 15.

This error message may also be accompanied by additional error messages, such as a file log or database backup that failed.

This error indicates that your disk I/O system is being overloaded and that SQL Server timed out when trying to write data to disk (via the OS). If this error is a rare event, then there is no need to worry.

But if this event becomes frequent, it could mean that your disk I/O subsystem has hit a bottleneck that it can’t overcome, or that your disk I/O subsystem itself is having problems. You need to find out which one of these is the most likely issue, and then deal with it accordingly.

One of the things that you might want to look for is what I/O activity was happening at the time of the error. For example, it is possible that a disk backup, a DBCC command, or other database activity was all happening at the same time, overwhelming the disk I/O. Perhaps the solution to the problem is as simple as rescheduling some scheduled tasks.  

*****

According to Microsoft, the way to shrink a log file is to use DBCC SHRINKFILE. This process is painfully slow and inefficient. Even after running the procedure numerous times, you may not get much free space.

Here’s another way to shrink your logs that is guaranteed to work every time. First, back up the database and then detach it with sp_detach_db (you will have to bring the database down to do this). Next, delete the log file and then re-attach the database with sp_attach_db, not providing the old log file location in the sp_attach_db command. This will create a new log file with the old log file name, in the old location, at the default size of 512 KB.

To make sure there is no problem during the operation, the old log file can be renamed and kept until the database is reattached successfully. This provides a backup plan if for some reason SQL Server fails to attach the database without the old log file.

This trick won’t work if the database has more than one log file, but if you need to, you can alter the database so that it only has a single log file, which will allow you to perform the above steps. After carrying out the detach and attach activity, the database can be altered again to add more log files.  Contributed by Gaurav Bindlish.  
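A rough sketch of the sequence (YourDB and the file path are placeholders; the old log file is renamed or deleted at the operating-system level between the two steps):

EXEC sp_detach_db 'YourDB' 
-- Rename or delete the old .ldf file in the file system, then re-attach without it: 
EXEC sp_attach_db 'YourDB', 'D:\Data\YourDB.mdf'    -- no log file listed, so a new one is created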

*****

SQL Server’s Query Analyzer has a somewhat hidden but useful feature that you may not be aware of: the Transact-SQL Debugger. The reason you may not have seen it before is that the option does not appear in any standard menu, making it somewhat hard to find.

To use the Transact-SQL Debugger, you must first ensure that the Object Browser is displayed from within Query Analyzer. Next, using the Object Browser, open up a database that has one or more stored procedures, then right-click on the stored procedure’s name, and then select Debug. This is how you turn on debugging for a stored procedure.

Once you click on Debug, if the stored procedure you selected has one or more input parameters, you will be asked to enter them, and then you can begin the debugging process. The Transact-SQL Debugger has the typical features you find in most any debugging tool, including:

• Go: Puts the stored procedure into the debugging mode. 
• Toggle Breakpoint: Sets or removes a breakpoint at the current line in your code. You cannot set a breakpoint on lines containing non-executable code. 
• Remove All Breakpoints: Clears all the breakpoints in your code. 
• Step Into: Step Into executes the statement at the current execution point, and then executes one code statement at a time. If the statement is a call to another stored procedure, then the next statement displayed is the first statement in the stored procedure. 
• Step Over: Executes one code statement at a time. If the current statement contains a call to another stored procedure, Step Over executes the stored procedure as a unit, and then steps to the next statement in the current stored procedure. 
• Step Out: Executes the remaining lines of code in which the current execution point lies. The next statement displayed is the statement following the procedure call. All of the code is executed between the current and the final execution points. 
• Run to Cursor: Specifies a statement further down in your code where you want execution to end. 
• Restart: Restarts execution at the beginning of the stored procedure. 
• Stop Debugging: Stops debugging. 
• Watch: Displays the current watch expressions. 
• Callstack: Lists the procedure calls that have started but have not yet completed. 
• Auto Rollback: Automatically rolls back all work performed during execution of the stored procedure.

*****

Have you ever wanted to test the latest beta software, but didn’t want to put it on your “production” desktop because you were afraid it might crash it, and didn’t have a separate test box available to run it on? Or have you wanted more than one test environment, but didn’t have the necessary hardware?

In both of these cases, there is a new option that allows you to do both, while not requiring any more hardware. How? By using Microsoft Virtual PC 2007. This software, which is designed to run on most desktops and Microsoft operating systems, allows you to install virtual servers on your current desktop (assuming your hardware is big enough).

For example, if you are running Windows XP Workstation on your desktop, you can create one or more virtual servers that run almost any other Microsoft operating system, along with your choice of application software, such as SQL Server running under Windows Server, or any combination you like.

Virtual servers don’t have to be active all the time; you can bring them up only when you want to, helping to preserve hardware resources. You can create as many virtual servers as you like, and you can have as many running as you like, assuming your hardware will handle them.

If you decide to do this, there are two things you may want to keep in mind. First, when you create a virtual server, you must specify how the networking will work. There are several options. You may have to experiment with these options to select the one that works best for you. Also, you will want to install the optional Virtual Machine Additions into your virtual server, after it is built. This will greatly enhance the performance of the virtual software. Information on how to do this is available in the instructions.

Microsoft also offers a “server” version.  

*****

To quickly identify which, if any, of your procedures are executed when SQL Server starts, you can use the following query:

USE master 
GO 
SELECT 
name 
FROM 
sysobjects 
WHERE 
OBJECTPROPERTY(id, 'ExecIsStartup') = 1

However, you can also get the same result via the INFORMATION_SCHEMA views, which are generally preferable over directly querying the system tables. Here’s the query:

USE master 
GO 
SELECT 
ROUTINE_NAME 
FROM 
INFORMATION_SCHEMA.ROUTINES 
WHERE 
OBJECTPROPERTY(OBJECT_ID(ROUTINE_NAME), 'ExecIsStartUp') = 1

Source: http://www.sql-server-performance.com/2007/gen-tips/7/

posted by LifeisSimple
2013. 1. 18. 17:40 Brain Trainning/DataBase

Source: http://www.simple-talk.com/sql/learn-sql-server/how-to-track-down-deadlocks-using-sql-server-2005-profiler/


Brad McGehee

How to Track Down Deadlocks Using SQL Server 2005 Profiler
20 June 2008

It is irritating, sometimes alarming, for the user to be confronted by the 'deadlock message' when a deadlock happens. It can be a tiresome business to prevent them from occurring in applications. Fortunately, the profiler can be used to help DBAs identify how deadlocking problems happen, and indicate the best way of minimising the likelihood of their reappearance.

A Quick Introduction to Deadlocks

Deadlocking occurs when two or more SQL Server processes have locks on separate database objects and each process is trying to acquire a lock on an object that the other process has already locked.  For example, process one has an exclusive lock on object one, process two has an exclusive lock on object two, process one also wants an exclusive lock on object two, and process two also wants an exclusive lock on object one. Because two processes can’t have an exclusive lock on the same object at the same time, the two processes become entangled in a deadlock, with neither process willing to yield of its own accord.
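To make this concrete, here is a sketch of two sessions that will deadlock if each half is run in its own query window; TableA and TableB are hypothetical tables, and the WAITFOR delays just make the timing easy to reproduce:

-- Session 1 
BEGIN TRAN 
UPDATE dbo.TableA SET Col1 = 1 WHERE Id = 1    -- takes an exclusive lock on TableA 
WAITFOR DELAY '00:00:10' 
UPDATE dbo.TableB SET Col1 = 1 WHERE Id = 1    -- now waits on Session 2's lock on TableB 
COMMIT 

-- Session 2 (started within the ten-second window) 
BEGIN TRAN 
UPDATE dbo.TableB SET Col1 = 2 WHERE Id = 1    -- takes an exclusive lock on TableB 
WAITFOR DELAY '00:00:10' 
UPDATE dbo.TableA SET Col1 = 2 WHERE Id = 1    -- now waits on Session 1's lock on TableA: deadlock 
COMMIT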

 

Since a deadlock is not a good thing for an application, SQL Server is smart enough to identify the problem and ends the deadlock by choosing one process over another. It does this by killing one of the processes (usually the process that has used the least amount of server resources up to this point) and letting the other one continue to run. The aborted transaction is rolled back and an error message is sent to the application. If the application is deadlock aware, it will resubmit the killed transaction automatically and the user may never know the deadlock happened. If the application is not deadlock aware, then most likely an error message appears on the application’s screen and you get a call from a disgruntled user. Besides irritating users, deadlocks can use up SQL Server's resources unnecessarily as transactions are killed, rolled back, and resubmitted again.

 

Deadlocks have been the bane of many a DBA. While rare for a well-designed and well-written application, deadlocks can be a major problem for—how can I say this delicately?—“less efficient” application code. What is even more frustrating is that there is not much the DBA can do to prevent deadlocks, as the burden of preventing them in the first place is on the developers of the application. Once an application is designed and written, it is hard for the DBA to do anything other than identify the offending code and report it back to the developers so it can be fixed.

 

In SQL Server 2000 and earlier, the most common way to track down deadlock issues was to use a trace flag. In SQL Server 2005, trace flags can still be used (1204 or 1222), but they aren’t always easy to use. When SQL Server 2005 was introduced, new events were added to the SQL Server 2005 Profiler (they are also in SQL Server 2008) that make identifying deadlocks very easy. In this article, we learn how to use SQL Server 2005 Profiler to capture and analyze deadlocks.
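For comparison, the older trace-flag approach is a single command (1222 produces more detail than 1204; both write deadlock information to the SQL Server error log):

DBCC TRACEON (1222, -1)    -- the -1 makes the flag global; deadlock details go to the error log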

 

A Brief Profiler Primer

If you are not already well versed in using SQL Server Profiler, I want to do a quick overview of how Profiler works. If you are an experienced Profiler user, you can skip this section and go right on to the next section.

 

SQL Server Profiler is a GUI front end for a feature in SQL Server called SQL Trace. Essentially, SQL Trace has the ability to capture internal SQL Server activity, allowing you to see what is happening inside your SQL Server, including deadlocks. In this article, we will be using the SQL Server Profiler GUI, although you can use Transact-SQL code to accomplish the same thing with SQL Trace.

 

To capture a SQL Server trace using SQL Server Profiler, you need to create a trace, which includes several basic steps:

 

1)      You first need to select the events you want to collect. Events are an occurrence of some activity inside SQL Server that Profiler can track, such as a deadlock or the execution of a Transact-SQL statement.

2)      Once you have selected the events you want to capture, the next step is to select which data columns you want to return. Each event has multiple data columns that can return data about the event. To minimize the impact of running Profiler against a production server, it is always a good idea to minimize the number of data columns returned.

3)      Because most SQL Servers have many different users running many different applications hitting many different databases on the same SQL Server instance, filters can be added to a trace to reduce the amount of trace data returned. For example, if you are only interested in finding deadlocks in one particular database, you can set a filter so that only deadlock events from that database are returned.

4)      If you like, you can choose to order the data columns you are returning, and you can even group or aggregate events to make it easier to analyze your trace results. While I do this for many of my traces, I usually don’t bother with this step when tracking down deadlock events.

5)      Once you have created the trace using the above steps, you are ready to run it. If you are using the SQL Server Profiler GUI, trace results are displayed in the GUI as they are captured. In addition, you can save the events you collect for later analysis.

 

Now that you know the basics, let’s begin creating a trace that will enable us to collect and analyze deadlocks.

 

Selecting Events

While there is only one event required to diagnose most deadlock problems, I like to include additional context events in my trace so that I have a better understanding of what is happening with the code. Context events are events that help put other events into perspective. The events I suggest you collect include:

·         Deadlock graph

·         Lock: Deadlock

·         Lock: Deadlock Chain

·         RPC:Completed

·         SP:StmtCompleted

·         SQL:BatchCompleted

·         SQL:BatchStarting

 

 

Figure 1: I like to include extra context events to help me better understand what is happening with the code.

 

Here’s a brief explanation of each of these events.

 

Deadlock Graph

Of the seven events I have listed above, the only event you must have is the Deadlock Graph event. It captures, in both XML format and graphically, a drawing that shows you exactly what caused the deadlock. We will examine how to interpret this drawing later in this article.

 

Lock:Deadlock

This event is fired whenever a deadlock occurs, and because of this, it is also fired every time the Deadlock Graph event is fired, producing redundant data. I have included it here because it makes it a little easier to see what is happening, but if you like, you can drop this event from your trace.

 

Lock:Deadlock Chain

This event is fired once for every process involved in a deadlock. In most cases, a deadlock only affects two processes at a time, and because of this, you will see this event fired twice just before the Deadlock Graph and the Lock:Deadlock events fire. In rare cases, more than two processes are involved in a deadlock, and if this is the case, an event will be fired for every process involved in the deadlock.

 

RPC: Completed
The RPC: Completed event fires after a stored procedure is executed as a remote procedure call. It includes useful information about the execution of the stored procedure, including the CPU time used to execute the stored procedure, the total length of time the stored procedure ran, logical reads and writes that occurred during its execution, along with the name of the stored procedure itself.

SP: StmtCompleted

Stored procedures are made up of one or more statements. In SQL Server 2005, each statement within a stored procedure is traced. The SP: StmtCompleted event indicates when a statement within a stored procedure has ended. The StmtCompleted event’s data columns provide lots of useful information about the statement, including the actual code in the statement, the duration the statement ran, the amount of CPU time used, the number of logical reads and writes, the number of rows returned by the statement, among others.

 

SQL: BatchStarting

The SQL: BatchStarting event is fired whenever a new batch begins. Once a batch begins, then one or more individual Transact-SQL statements occur. The SQL: BatchStarting event is a good event to easily see where a batch begins, but other than this, it is not particularly useful.

 

SQL: BatchCompleted

The SQL: BatchCompleted event occurs when a batch completes. This means that one or more Transact-SQL statements have completed for the batch. The SQL: BatchCompleted event is more useful than the SQL: BatchStarting event because it includes useful information like the duration of the entire batch, the logical number of reads and writes caused by all the statements inside the batch, the total number of rows returned by the batch, and other useful information.

 

Selecting Data Columns

You don’t need to select many data columns to capture the data you need to analyze deadlocks, but you can pick any that you find useful. At the very minimum, I select these data columns, and order them as they are ordered below.

 

·         Events

·         TextData

·         ApplicationName

·         DatabaseName

·         ServerName

·         SPID

·         LoginName

·         BinaryData

 

Selecting Column Organization

I don’t perform any grouping or aggregation when tracing Profiler events, but I generally order the data columns in a way that works best for me.

 

Running the Trace

One of the problems with troubleshooting deadlocks is that they are often hard to predict. Because of this, you may have to run your deadlock trace for a substantial amount of time (like 24 hours or more) in order to capture deadlocks when they occur. Ideally, you will only perform the trace during time periods where you know deadlocks are likely to occur, in order to minimize the impact of the trace on your server.

 

If you run a trace for 24 hours, many events may be captured, especially on a very busy production server. If this is the case, you may only want to capture the Deadlock Graph event, and no others, in order to reduce the load on the production server. As I mentioned earlier, the other events I list are context events and are not required to troubleshoot most deadlock problems.
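
If you want to go a step further and avoid the Profiler GUI altogether, a lightweight server-side trace that captures only the Deadlock Graph event is one option. The sketch below is illustrative only: the file path is hypothetical, and you should verify the event and column IDs against sys.trace_events and sys.trace_columns before relying on them.

-- Minimal server-side trace capturing only the Deadlock Graph event (sketch).
-- Event 148 = Deadlock graph; columns 1 = TextData, 12 = SPID, 14 = StartTime.
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
DECLARE @on bit;
SET @maxfilesize = 50;   -- MB per trace file
SET @on = 1;

-- The target folder (here C:\Traces) must already exist; .trc is appended automatically.
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\DeadlockGraph', @maxfilesize, NULL;

EXEC sp_trace_setevent @TraceID, 148, 1,  @on;   -- TextData (the XML deadlock graph)
EXEC sp_trace_setevent @TraceID, 148, 12, @on;   -- SPID
EXEC sp_trace_setevent @TraceID, 148, 14, @on;   -- StartTime

EXEC sp_trace_setstatus @TraceID, 1;             -- 1 = start the trace
SELECT @TraceID AS TraceID;                      -- keep this so you can stop (0) and close (2) the trace later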

 

Analyzing the Trace

Now that we know how to set up a trace to analyze deadlocking behavior, let’s look at an example to see what information is collected, and how we can best use it to identify the cause of a deadlock.

 

Figure 2: These are the results of capturing a deadlock using the events I have recommended.

 

To create a deadlock for demonstration purposes, I ran two separate transactions in two different processes that I know would create a deadlock. These are represented by the eight SQL:BatchStarting and SQL:BatchCompleted events at the beginning of the above trace.
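
The article does not reproduce the exact batches, but they were similar in spirit to the following sketch, which assumes the AdventureWorks sample database and updates the same two tables in opposite order from two separate sessions:

-- Session 1: lock a SalesTaxRate row first, then a SalesTerritory row.
BEGIN TRAN;
    UPDATE Sales.SalesTaxRate   SET TaxRate  = TaxRate  WHERE SalesTaxRateID = 1;
    WAITFOR DELAY '00:00:10';   -- give the other session time to take its first lock
    UPDATE Sales.SalesTerritory SET SalesYTD = SalesYTD WHERE TerritoryID = 1;
COMMIT;

-- Session 2: lock the same rows in the opposite order.
BEGIN TRAN;
    UPDATE Sales.SalesTerritory SET SalesYTD = SalesYTD WHERE TerritoryID = 1;
    WAITFOR DELAY '00:00:10';
    UPDATE Sales.SalesTaxRate   SET TaxRate  = TaxRate  WHERE SalesTaxRateID = 1;
COMMIT;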

 

When SQL Server determines that a deadlock has occurred, the first event that denotes this is the Lock:Deadlock Chain event. There are two of these in the above trace, for SPID 55 and SPID 54. Next, the Deadlock Graph event is fired, and last, the Lock:Deadlock event is fired.

 

Once SQL Server detects a deadlock, it picks a loser and a winner. The SQL:BatchCompleted event that immediately follows the Lock:Deadlock event is the transaction that is killed and rolled back, and the following SQL:BatchCompleted event is the event that was picked as the winner and successfully ran.

 

If you have trouble following the above example, don’t worry, as it will all make more sense when we take a close look at the Deadlock Graph event.

 

When I click on the Deadlock Graph event in Profiler, a deadlock graph appears at the bottom of the Profiler screen, as shown below.

 

Figure 3: The Deadlock Graph summarizes all the activity that caused the deadlock to occur.

 

Yes, I know you can’t read the graph just yet, but I wanted you to see the relationship between the top and bottom portions of the Profiler screen. There is more to this graph than is obvious at first glance.

 

The left oval on the graph, with the blue cross, represents the transaction that was chosen as the deadlock victim by SQL Server. If you move the mouse pointer over the oval, a tooltip appears. This oval is also known as a Process Node as it represents a process that performs a specific task, such as an INSERT, UPDATE, or DELETE.

 

The right oval on the graph represents the transaction that was successful. If you move the mouse pointer over this oval as well, a tooltip appears. This oval is also known as a Process Node.

The two rectangular boxes in the middle are called Resource Nodes, and they represent a database object, such as a table, row, or an index. These represent the two resources that the two processes were fighting over. In this case, both of these Resource Nodes represent indexes that each process was trying to get an exclusive lock on.

 

The arrows you see pointing from and to the ovals and rectangles are called Edges. An Edge represents a relationship between processes and resources. In this case, they represent types of locks each process has on each Resource Node.

 

Now that you have a basic understanding of the “big” picture, let’s drill down into the details. Let’s start by looking at each of the Process Nodes, starting with the successful one, on the right side of our Deadlock Graph.

 

 

 

Figure 4: This transaction was selected as the winner

 

Before we discuss what this Process Node is telling us, we first need to learn a few new terms. As you can see, there are a number of new terms listed inside the Process Node.

 

·         Server Process ID: This is the SPID of the process.

·         Server Batch ID: This is the internal reference number for the batch this code is running in.

·         Execution Context ID: This is the internal reference number of the thread for the above SPID. A value of 0 represents the main, or parent thread.

·         Deadlock Priority: By default, no transaction has a greater or smaller chance of becoming a deadlock victim than another. However, if you use the SET DEADLOCK_PRIORITY command for a particular session, that session can be assigned a priority of Low, Normal, or High, setting the priority of its transactions relative to other sessions’ transactions. This allows the DBA or developer to control which session is more important than another when it comes to deadlocks. A value of 0 indicates no priority has been assigned to this process. (A short example follows this list.)

·         Log Used: This is the amount of log space used by the transaction up to the point the deadlock occurs. SQL Server uses this information to help it determine which transaction has used up the most resources so far, so that the transaction that has used the least resources is killed and rolled back, helping to minimize the amount of resources used to deal with the deadlock.

·         Owner ID: This is the internal reference number for the transaction that is occurring.

·         Transaction Descriptor: This is an internal reference number that indicates the state of the transaction.
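
As a quick illustration of the Deadlock Priority item above, a session can volunteer itself as the preferred victim (or protect itself) before it starts work. This is only a minimal sketch:

-- Ask SQL Server to prefer this session as the deadlock victim if one occurs.
SET DEADLOCK_PRIORITY LOW;   -- LOW, NORMAL, or HIGH; newer versions also accept -10 through 10

BEGIN TRAN;
    -- ... the session's normal work ...
COMMIT;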

 

As you can see, there is a lot of data provided, but it is not all that useful unless you have an intimate knowledge of the internal workings of SQL Server. What is more useful is the tooltip. It lists the exact Transact-SQL code that was executed to cause the deadlock to occur.

 

Now that we have the Process Node definitions down, let’s take a more detailed look at what figure 4 is telling us. First, we know it was the winning transaction because it does not have a blue cross through it. Second, it provides the exact Transact-SQL code that was running when the deadlock occurred. This is extremely useful information because it allows us to trace the event to specific problematic code. Third, it tells us that this Process Node has an exclusive lock on the top Resource Node (the X represents an exclusive lock). And fourth, it tells us that it has requested another exclusive lock on the bottom Resource Node. When you look at this Process Node in isolation, this is not a big deal. The problem occurs when this transaction bumps heads with another transaction, as we find out next.

 

Figure 5: This transaction is the deadlock victim.

 

On the left side of the Deadlock graph (figure 5 above) is the other Process Node. Like the winning Process Node, this node tells us the following: First, this was the losing transaction. Second, it provides the Transact-SQL code that contributed to the deadlock. Third, it tells us that it had an exclusive lock on the bottom Resource Node. Fourth, it tells us that it requested an update lock on the top Resource Node. We’ll talk more about the locking conflicts shortly, but for now, let’s look at the two Resource Nodes.

 

Figure 6: The resource nodes tell us what resources the transactions were fighting over.

 

Both of these Resource Nodes represent indexes, which the two transactions needed access to in order to perform their requested work. Like Process Nodes, Resource Nodes have some definitions we need to learn.

 

·         HoBt ID: This number refers to a subset of data/index pages within a single partition. These may be in the form of a heap or a B-Tree. In SQL Server 2005, the HoBt ID is identical to the Partition ID found in the sys.partitions table.

·         Associated Objid: This is the object ID of the table associated with this index.

·         Index Name:  The name of the index.

 

The most useful piece of information here is the name of the index, which can help when deciding how best to reduce or eliminate the deadlocks in question.

 

The top Resource Node represents the PK_SalesTaxRate_SalesTaxRateID index and the bottom Resource Node represents the PK_SalesTerritory_TerritoryID index.

 

Now that we have discussed all the details of this Deadlock graph, let’s bring all the pieces together.

 

1.       SPID 54 started a transaction, then requested and received an Exclusive lock on the PK_SalesTaxRate_SalesTaxRateID index.

2.       SPID 55 started a transaction, and then requested an Exclusive lock on the PK_SalesTerritory_TerritoryID index.

3.       SPID 55, as part of the same transaction, then requested an Update lock on the PK_SalesTaxRate_SalesTaxRateID index. However, this lock was not granted because SPID 54 already had an Exclusive lock on the index. In most cases, this means that SPID 55 has to wait its turn before it can get an Update lock on PK_SalesTaxRate_SalesTaxRateID. At this point, SPID 54 is causing a blocking lock on SPID 55.

4.       As the above blocking lock continues, SPID 54 wants to complete its transaction. In step 1 above, it had only started the transaction, not completed it. In order to finish, it must get an Exclusive lock on PK_SalesTerritory_TerritoryID. The problem is that it can’t get that lock because SPID 55 already has an Exclusive lock on it. Now we have a deadlock. Neither SPID can continue because each transaction is locking out the other from finishing. Because this is not a good thing, SQL Server looks at the two transactions and decides to kill the one that has used up the least amount of resources so far. In this case, SPID 55 has used 220 units of the log and SPID 54 has used 1612 units of the log, so SPID 55 is chosen as the victim because it has used fewer resources so far.

5.       SQL Server kills SPID 55 and its transaction is rolled back, which releases the Exclusive lock on PK_SalesTerritory_TerritoryID, allowing SPID 54 to get an Exclusive lock on it and complete its transaction.

 

You may have to read this section several times in order to grasp all the activity that I have just described. It is not particularly easy to follow. However, once you grasp what the Deadlock Graph is telling you, you are in a much better position to identify the code and/or objects that are contributing to the deadlocking problem, giving you an opportunity to fix it. In most cases, this will require developers to get involved. Fortunately, you now have the information you need to share with the developers so they can remedy the problem.

 

Reducing Deadlocking Problems

Many different things can contribute to the cause of a deadlock in SQL Server. Below are some suggestions on how to eliminate, or at least mitigate, deadlocking problems in your applications. This list is only a starting point and should not be considered a complete list of the options you have to prevent or reduce deadlocking. You may want to share this list with your developers, along with the information you identified during your trace.

 

•              Ensure the database design is properly normalized.

•              Have the application access database objects in the same order every time.

•              Keep transactions as short as possible.

•              During transactions, don't allow any user input.

•              Avoid cursors.

•              Consider reducing lock escalation by using the ROWLOCK or PAGLOCK hint (see the sketch after this list).

•              Consider using the NOLOCK hint to prevent locking.

•              Use as low a level of isolation as possible for user connections.
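
To make the last few suggestions a little more concrete, here is a hedged sketch; dbo.Orders and MyDatabase are hypothetical names, and hints such as NOLOCK and options such as READ_COMMITTED_SNAPSHOT should be weighed carefully before they are used in production:

-- Hint a single-row update to take row locks rather than coarser locks.
UPDATE dbo.Orders WITH (ROWLOCK)
SET    OrderStatus = 'Shipped'
WHERE  OrderID = 42;

-- Run read-only reporting work at a lower isolation level (equivalent to NOLOCK on every table).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;

-- Or let readers use row versioning instead of shared locks, database-wide
-- (needs the database to be free of other active connections when you run it).
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;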

 

Summary

As you can see, Profiler can be a very powerful tool to help DBAs identify deadlocking problems. A deadlock Profiler trace is simple to create and run. So if you are currently experiencing any deadlocking issues in your databases, take the time now to apply what you have learned in this article. The sooner you give this a try, the closer you will be to getting your deadlocking problems resolved.

posted by LifeisSimple
2013. 1. 14. 14:23 BookStory

SQL Server Tacklebox

By Rodney Landrum,

DBA Handbooks

SQL Server Tacklebox - Essential Tools and Scripts for the day-to-day DBA, by Rodney Landrum


Download eBook (PDF): Free
Printed Book: $29.99
Download the ePub version: Free

For the day-to-day DBA, there are dangers around every corner; monsters waiting to strike down the unsuspecting database with a resource-hungry query, an inefficient data load, or even outright data corruption. When the worst happens, and SQL Server is down, or performance is suffering, customers are quick to complain. During such difficult periods, you, the DBA, are the centre of attention. Suddenly, three or four people crowd your desk, laser-beam focused on your every action, fully expecting you to solve the problem quickly.

The success of your career as a DBA depends largely on how well-prepared you are to deal with such situations. Without verified backups, sound server documentation, and a ready supply of tried-and-tested troubleshooting tools, there is no safe place to hide when the monster rears up. All you can do is buy some time, patch things up as best you can and then duck away into your cubicle, which, if you lost any data in the process, may soon be empty.

However, with the tools and techniques provided in the SQL Server Tacklebox, you will be fully-equipped to fight back. Inside, you'll find scripts that will allow you to automate and standardize SQL Server installation, document and report on your servers, migrate data and manage data growth, troubleshoot performance issues, receive notifications of impending danger, secure access to your servers and fight off data corruption.

In short, you'll be able to respond quickly and professionally to the many issues that blight the waking, and often sleeping, hours of the DBA. Then, one day, who knows? That cubicle may turn into an executive office with wall-to-wall tinted windows revealing a flowing brook outside, where no monsters live.


About the Author

Rodney Landrum has been working with SQL Server technologies for longer than he can remember (he turned 40 in May of 2009 so his memory is going). He writes regularly about many SQL Server technologies, including Integration Services, Analysis Services, and Reporting Services. He has authored three books on Reporting Services. He is a regular contributor to SQL Server Magazine and Simple-Talk, the latter of which he sporadically blogs on about SQL and his plethora of geek tattoos. His day job finds him overseeing the health and well-being of a large SQL Server infrastructure in Pensacola, Florida. He swears he owns the expression "Working with Databases on a Day to Day Basis" and anyone who disagrees is itching to arm wrestle. Rodney is also a SQL Server MVP.

posted by LifeisSimple
2013. 1. 14. 13:21 Photograph by ../일상

The home button, a chronic problem with the iPhone...


I've tried flicking it and every other trick I could think of, but in the end nothing made much of a difference...


In cases like this, the method shown below is said to be reasonably effective.



http://tvcast.naver.com/v/23086



See the video above for the details~

posted by LifeisSimple
2013. 1. 14. 11:26 Brain Trainning/DataBase

Here is a comparison of MSSQL and MySQL functions.

If you are using MSSQL and considering adopting MySQL, or the other way around, this should be a fairly useful reference.


January 7, 2013

Comparing SQL Server and MySQL Functions

By Rob Gravelle

This article is the second part in a series examining some of the challenges in porting your databases between MS SQL Server and MySQL.  In Part 1, we looked at some of the differences between data types utilized by each DBMS.  Today, in moving on to functions, you’ll see how functions may require additional effort on your part to produce equivalent results.

Function Categories

Functions can be divided into three distinct categories:

  1. Equivalent Functions: Those that can be safely migrated from one database type to another without any modifications whatsoever.
  2. Emulated Functions: Functions that are available in one database, but not the other.  Another problematic issue is that some MySQL functions have a variable parameter count.  In either case, some conversion work is required. 
  3. Non-supported Functions: Those which cannot be easily ported because of logical/physical organization and security model differences.

Equivalent Functions

You’ll be happy to know that the following functions are usable in both MySQL and SQL Server queries without any modifications:

ASCII, LEFT, LOWER, LTRIM, REPLACE, REVERSE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UPPER, ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, DEGREES, EXP, FLOOR, LOG, LOG10, PI, POWER, RADIANS, RAND, ROUND, SIGN, SIN, SQRT, TAN, DAY, MONTH, COALESCE, NULLIF, CAST, CONVERT.

Emulated Functions

Functions that have no equivalent on the other platform are where the bulk of your efforts will go, as converting these can be like trying to fit a round peg into a square hole.

The Transact-SQL CASE Function

CASE WHEN @a > @b 
     THEN @a 
     ELSE @b - @a 
END 

This can be converted to the MySQL  IF(expr1, expr2, expr3)  function.  Here’s how it works:

If expr1 is TRUE (expr1 <> 0 and expr1 <> NULL), IF() returns expr2; otherwise it returns expr3.

MySQL example:

if(@a>@b, @a, @b-@a) 

Converting Binary Data into a Varchar

In SQL Server 2008, the CONVERT function was extended to support binary-to-hex-string conversion. Hence, you might see something like this:

CONVERT(NVARCHAR(34), 0xc23eed6b65c93e44a41a2818e274194f, 1) AS BINARY_TO_STRING

The MySQL BIN(N)  function, which returns a string representation of the binary value of N, can be utilized in its stead.
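
A quick sketch of both options in MySQL; note that BIN() produces a base-2 string, so if a hex string like the SQL Server output above is what you actually need, HEX() may be the closer fit:

SELECT BIN(12);       -- '1100'   (base-2 representation of the number 12)
SELECT HEX('abc');    -- '616263' (hex representation of the string's bytes)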

The Transact-SQL  DATALENGTH Function

This one is easy to convert: the SQL Server DATALENGTH function returns the length of an expression in bytes, and MySQL’s LENGTH function does the same for strings. If you want the length in bits instead, MySQL also offers BIT_LENGTH.
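
For example:

-- SQL Server
SELECT DATALENGTH('abc');    -- 3 (bytes)
SELECT DATALENGTH(N'abc');   -- 6 (bytes; nvarchar stores 2 bytes per character)

-- MySQL
SELECT LENGTH('abc');        -- 3 (bytes)
SELECT BIT_LENGTH('abc');    -- 24 (bits)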

String Concatenation

SQL Server does not support the ANSI SQL CONCAT() function.  Instead, it uses the plus operator (+) for string concatenation:

'A'+'B'+'C', 'A'+'#'+'B'+'#'+'C' 

In MySQL, use the CONCAT(str1, str2, ….) or CONCAT_WS(separator, str1, str2, ...)  functions, which return the concatenated arguments:

CONCAT('A','B','C'), CONCAT_WS('#','A','B','C') 

Converting Numbers between Different Number Bases

It is sometimes useful to convert a number to a non-base 10 string. In SQL Server, that requires using the CAST function or employing a user-defined function. In MySQL, you can forget about all that, as the CONV(N, from_base, to_base) function will allow you to convert from one base to another with ease.
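
For example:

-- MySQL: convert 255 from base 10 to base 16, and back again
SELECT CONV(255, 10, 16);    -- 'FF'
SELECT CONV('FF', 16, 10);   -- '255'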

Finding the Position of the First Occurrence of a Substring within a String

The Transact-SQL CHARINDEX function maps closely to the MySQL LOCATE() function.
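
Both take the substring first and the string to search second, and both return a 1-based position:

-- SQL Server
SELECT CHARINDEX('def', 'abcdefghi');   -- 4
-- MySQL
SELECT LOCATE('def', 'abcdefghi');      -- 4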

Inserting a String within Another

In SQL Server, the REPLACE function can be used to replace part of a string with another. For instance, the following example replaces the string def in abcdefghi with xyz.

SELECT REPLACE('abcdefghicde','def','xyz');

MySQL’s INSERT(str, pos, len, newstr) function is a reasonable facsimile, as it returns the string str, with the substring that begins at position pos and is len characters long replaced by the string newstr.
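
For example (T-SQL’s STUFF function is arguably the more direct counterpart here):

-- MySQL: replace 3 characters starting at position 4 with 'xyz'
SELECT INSERT('abcdefghi', 4, 3, 'xyz');   -- 'abcxyzghi'
-- SQL Server
SELECT STUFF('abcdefghi', 4, 3, 'xyz');    -- 'abcxyzghi'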

Loading Data and Statements from a File

T-SQL bulk load statements and extended stored procedures that load data and executable statements from a text file can be replaced with LOAD_FILE(file_name) in MySQL.

Getting the Current Date

Transact-SQL’s GETDATE function maps to NOW in MySQL.
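
For example:

-- SQL Server
SELECT GETDATE();   -- e.g. 2013-01-14 11:26:00.000
-- MySQL
SELECT NOW();       -- e.g. 2013-01-14 11:26:00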

Generating a Repeating String

Transact-SQL’s REPLICATE function maps exactly to REPEAT in MySQL.
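
For example:

-- SQL Server
SELECT REPLICATE('ab', 3);   -- 'ababab'
-- MySQL
SELECT REPEAT('ab', 3);      -- 'ababab'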

Testing for NULL

Transact-SQL relies on the CASE and IS NULL clauses to check for NULL values. In MySQL, you can simply use the ISNULL(expr) function instead.  If expr is NULL, ISNULL() returns 1; otherwise it returns 0.
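
For example:

-- MySQL: ISNULL(expr) returns 1 when expr is NULL, otherwise 0
SELECT ISNULL(NULL);   -- 1
SELECT ISNULL(0);      -- 0

-- The equivalent test in SQL Server
SELECT CASE WHEN NULL IS NULL THEN 1 ELSE 0 END;   -- 1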

Comparing Two Strings

Transact-SQL relies on comparison operators to compare strings, whereas MySQL provides the STRCMP(expr1, expr2) function.
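
For example:

-- MySQL: 0 if the strings are equal, -1 if the first sorts lower, 1 if it sorts higher
SELECT STRCMP('text', 'text2');   -- -1
SELECT STRCMP('text2', 'text');   --  1
SELECT STRCMP('text', 'text');    --  0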

Formatting Dates

While Transact-SQL uses a combination of date, string, and convert functions to format dates as strings, MySQL has the built-in DATE_FORMAT(date, format) function specifically for formatting dates.
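
For example, formatting the current date as mm/dd/yyyy:

-- SQL Server (style 101 = mm/dd/yyyy)
SELECT CONVERT(varchar(10), GETDATE(), 101);   -- e.g. '01/14/2013'
-- MySQL
SELECT DATE_FORMAT(NOW(), '%m/%d/%Y');         -- e.g. '01/14/2013'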

Adding an Interval to a Given Date

The Transact-SQL DATEADD function does have equivalents in Oracle, DB2, and PostgreSQL. MySQL includes the same function, except that it’s called DATE_ADD:

SELECT DATE_ADD('2010-12-31 23:59:59', INTERVAL 1 DAY);

returns '2011-01-01 23:59:59'
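
The corresponding Transact-SQL call, for comparison:

SELECT DATEADD(DAY, 1, '2010-12-31 23:59:59');   -- 2011-01-01 23:59:59.000
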
Converting between Seconds and a Time

In Transact-SQL, converting between seconds and a time such as 12:34:00 can be accomplished using a combination of the CONVERT and DATEADD functions.  For instance, here is a statement that converts seconds to a time:

CONVERT(char(8), DATEADD(second, Diff, '0:00:00'), 108)

MySQL can convert between seconds and a time more easily using the SEC_TO_TIME(seconds)  and TIME_TO_SEC(time) functions.
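
For example:

-- MySQL
SELECT SEC_TO_TIME(2378);         -- '00:39:38'
SELECT TIME_TO_SEC('00:39:38');   -- 2378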

Retrieving the Last Inserted ID

The Transact-SQL @@IDENTITY and SCOPE_IDENTITY functions are used to retrieve the last inserted ID.  MySQL possesses a similar function called LAST_INSERT_ID for this purpose.
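
A short sketch, using a hypothetical Widgets table with an identity/auto-increment key:

-- SQL Server
INSERT INTO dbo.Widgets (Name) VALUES ('sprocket');
SELECT SCOPE_IDENTITY();

-- MySQL
INSERT INTO Widgets (Name) VALUES ('sprocket');
SELECT LAST_INSERT_ID();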

Concatenating Column Values

To concatenate the contents of a column into a string requires a few steps in T-SQL:

declare @v varchar(max) 
set @v='' 

-- Append each value to the running string, prefixing it with a comma
select @v=@v+','+isnull(field_a,'') from table_1 

-- Strip the leading comma before returning the result
select substring(@v,2,len(@v)) 

It’s much easier in MySQL, thanks to the GROUP_CONCAT function.  It comes in two flavors to support different formats:

  • GROUP_CONCAT( Language SEPARATOR ‘-’ ) will use the dash instead of  the default comma separator.
  • SELECT GROUP_CONCAT( Language ORDER BY Language DESC ) can be used to change the sorting order.

Note that GROUP_CONCAT ignores NULL values.
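
Using the same hypothetical table_1 and field_a names as the T-SQL example above, the MySQL version collapses to a single statement:

-- MySQL: concatenate a column's values into one comma-separated string (NULLs are skipped)
SELECT GROUP_CONCAT(field_a SEPARATOR ',') FROM table_1;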

Non-supported Functions

Any SQL Server-centric functions have to be removed or rewritten using a combination of ANSI SQL statements.  Once completed, the new code can be saved as a user-defined function for easy reuse.

Conversion Tools

There are purportedly some automated tools that can convert stored procedures between SQL Server and MySQL, such as SQLWays by Ispirer.   According to their site and anecdotal reports, it converts stored procedures, functions, packages and triggers.  All this automation doesn’t come cheap; at about a grand USD, it may be more cost effective to manually convert your procs.


Source : http://www.databasejournal.com/features/mysql/comparing-sql-server-and-mysql-functions.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+DatabaseJournalNews+%28Database+Journal+News%29&utm_content=Google+Feedfetcher

posted by LifeisSimple
2013. 1. 9. 09:46 Brain Trainning/DataBase

SQL Server and Anti-Virus


Every once in a while, one of my SQL Server Consulting clients asks me for my opinions about running Anti-Virus on production SQL Servers. And, amazingly enough, I actually (personally feel and) argue that anti-virus really shouldn’t be run in production data centers. Along those lines, I personally haven’t run anti-virus for over a decade on any of my own machines. Though, on the other hand, I do recommend that non-technical friends and family all run anti-virus (even though I tell them it typically won’t keep ‘crapware’ off of their computers in far too many cases).

Why I Dislike Anti-Virus Solutions On Production Servers

Simply stated, anti-virus solutions cost money to license, can be expensive to maintain, and can commonly cause expensive problems in production – such as when anti-virus solutions block custom code or configuration options or when they block access to critical IO requests.

More importantly, though, I feel that anti-virus solutions on production servers can sadly provide a sense of false security to many NON-TECHNICAL managers – who sometimes falsely assume that anti-virus is some kind of a magical ‘security blanket’. Because the reality is that anti-virus solutions are far from magical; they can’t simply stop all inbound attacks by virtue of the fact that you pay money for protection from viruses. Instead, anti-virus solutions simply chronicle vast ‘definitions’ of recognized and cataloged attack vectors and then ‘monitor’ system activity for similar attack vectors and signatures during run-time.

In other words: anti-virus solutions can’t protect against zero-day exploits, and can only focus on ‘known’ exploits instead. Or, at least, that’s the theory – because recent testing and validation shows that most anti-virus solutions can’t even keep up with the rapid pace of new exploits very well.

Consequently, if you curtail or restrict access on production servers to only competent IT folks (who won’t be browsing the web or opening email attachments from their RDP sessions on production servers), and keep your servers well fire-walled and patched, then you’re typically never really going to need to worry about viruses. Or, stated differently, since the vast majority of viruses either take advantage of well-known exploits or require users to ‘invite’ these viruses in, then avoiding ‘risky behavior’ is going to do a much better job of protecting against viruses than software can. In most cases.

SQL Server and Anti-Virus

Another huge concern when it comes to your data, SQL Server, and anti-virus solutions is the fact that anti-virus solutions simply can’t protect against bigger security threats and problems that exist in your applications or environment because of your code and practices. In other words: anti-virus software monitors system activity for attacks against well-known vulnerabilities – something that’s not going to be of any use if you’re running custom software to manage your data.

Which, in turn, is where things like SQL Injection come into play – because anti-virus solutions simply can’t protect against coding problems or application vulnerabilities within YOUR business applications as anti-virus is only, really, designed to protect against known vulnerabilities.

SQL Server, Anti-Virus, and Regulatory Compliance (oh my!)

Regardless of my own thoughts about whether or not anti-virus is actually worthwhile on production servers, the reality is that regulatory compliance is riddled with the need for anti-virus – meaning that if you’re looking to become PCI compliant, or trying to stay ahead of SOX or HIPAA, you’re going to need anti-virus.

In my mind, anti-virus is required for these types of compliance primarily because the legislators who wrote these laws were either non-technical enough to assume that anti-virus is ‘magical’ in some way or another, to the point where it had to be included, or (most likely) susceptible to anti-virus lobbyists when these regulations were being written. And that’s not to say that there isn’t any benefit to having anti-virus on production servers. Admins are, after all, human and prone to mistakes. My point, however, is that regulatory compliance isn’t a magical ‘seal of security’ or panacea against being hacked – as, for example, PCI compliant companies (sadly) get hacked and experience data leaks and other problems – even when running anti-virus.

Of course, regardless of how I feel about anti-virus and regulatory compliance, there’s no way you’re going to be able to get around this requirement – and I’m certainly not advocating that you try and argue with auditors that anti-virus isn’t ‘worth it’ or arguing that merely having ‘smart IT’ folks on those servers is protection enough. Because they won’t buy it – and that’ll cause all sorts of problems (i.e., it’ll give them the wrong idea, raise all sorts of red flags, and cost you more headaches and your organization tons of additional cost).

So, long story short: don’t bank on anti-virus as being able to provide you with any ‘real’ protection IF you’re already practicing ‘safe computing’ out on your servers. But, by the same token, in many situations, you’re going to need to run anti-virus on your SQL Servers.

Anti-Virus and SQL Server – Playing Nicely Together

Simply put, the best way to get SQL Server and anti-virus programs to play nicely together is to think in terms of compartmentalization. Or, in other words: let anti-virus programs deal with what they do best, let SQL Server handle what it does best, and avoid, at all costs, any interaction between the two. 

So, for example, if you’ve properly configured and secured SQL Server (and if you’re using it correctly) it will only need access to a handful of resources on your server including:

  • Binaries. Or the paths to the actual executables for any of your running SQL Server Services (MSSQL, SQL Server Agent, SSAS, etc). Typically these are found, by default, in the C:\Program Files\Microsoft SQL Server folder – though this could easily be a different path on many production machines. (And, note, you’ll likely want to make sure that C:\Program Files (x86)\Microsoft SQL Server is included in any exclusions as well on x64 machines).
  • SQL Server Error Logs. Not your database log files, but the text files that SQL Server uses to keep its own ‘event logs’ up to date. (Which is also different from Windows’ system event logs.) By default the path to these files is covered in the paths outlined above – or it’s part of the ‘program files’ data associated with your binaries – though you CAN move the location of these logs if desired (as an advanced operation via the startup parameters).
  • Data And Log Files. Yup – your actual .mdf, .ndf, and .ldf files – or the locations of your data files and log files. (Which you’ll want to make sure get excluded from anything that anti-virus monitors – otherwise creation of new databases, file-growth operations, and other normal ‘stuff’ can/will get blocked by anti-virus operations – which would be fatal in many cases.)
  • Backups. Yup, the path to any of your backups – or backup locations is also something you’ll want to make sure that anti-virus doesn’t monitor.

Accordingly, to get anti-virus to play nicely with SQL Server, you’ll want to make sure that it’s been instructed to exclude all of the paths listed above from any type of scans or real-time monitoring. Likewise, if your anti-virus tries to monitor processes, you’ll want to make sure that it stays away from all of your SQL Server services such as MSSQLSERVER (sqlservr.exe), the Full-Text Daemon, the SQL Server Agent, the sqlwriter.exe process, and any other services you might be running (such as MSDTC, SSAS, SSRS, Integration Services, and so on).
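
As a hedged sketch, one way to gather the data and log folders that belong on that exclusion list is to query sys.master_files; paths for backups, binaries, and error logs still need to be collected separately:

-- List the distinct folders holding this instance's data and log files.
SELECT DISTINCT
       LEFT(physical_name,
            LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1) AS folder_to_exclude
FROM sys.master_files;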

From here, you can let anti-virus do whatever it needs to do and monitor overall system interactions, operations, and processes as needed – but without doing any monitoring of SQL Server. And, in my experience, once you’ve correctly configured anti-virus and SQL Server to avoid any type of interaction or overlap, you’ll hardly notice or remember that your SQL Server host is even running anti-virus – which is exactly the situation that you want to be in.

Source : http://www.sqlmag.com/blog/practical-sql-server-45/sql-server-2012/sql-server-antivirus-144988?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+SQLBytes+%28SQL%3ENews%3ENews%29&utm_content=Google+Feedfetcher

posted by LifeisSimple