
2010. 6. 3. 12:02 Brain Trainning/DataBase
1. Make sure each of your production databases is available and accessible to its users.
2. Check all processes - such as clustering, replication, service
broker, log shipping, database mirroring - to verify that they are running correctly.
3. Check OS, SQL Server, Security, and other logs for unusual events.
4. Verify that all scheduled jobs have run successfully, including checking for long-running or "hung" jobs (see the sketch query after this list).
5. Confirm that backups have been made and successfully saved to a secure
location.
6. Monitor disk space to ensure your SQL Servers won't run out of disk space. For best performance, all disks should have 15% or more of free space.
7. Throughout the day, periodically monitor SQL Server performance using Performance Monitor, Profiler/SQL Trace, or other performance monitoring
tools.
8. Regularly monitor and identify blocking issues.
9. Keep a log of any changes you make to servers, including documentation
of any performance issues you identify and correct.
10. Regularly restore backups to a test server in order to verify that you can restore them. You don't need to restore all backups every day, but do so often to ensure that you are confident you have good backups.
11. Take some time to learn something new as a DBA to further your professional development.
12. Automate as many of these day-to-day tasks as possible.
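As an example of automating item 4, here is a minimal sketch of a check for failed or otherwise unsuccessful jobs. It only uses the documented msdb job tables, but the 24-hour window and the decision to ignore in-progress rows are arbitrary choices you may want to adjust:

-- Job outcomes other than "Succeeded" from roughly the last day
-- run_status: 0 = Failed, 1 = Succeeded, 2 = Retry, 3 = Canceled
SELECT  j.name AS job_name,
        h.run_status,
        h.run_date,            -- int, yyyymmdd
        h.run_time,            -- int, hhmmss
        h.run_duration,        -- int, hhmmss
        h.message
FROM    msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE   h.step_id = 0          -- step_id 0 holds the overall job outcome
        AND h.run_status <> 1
        AND h.run_date >= CONVERT(CHAR(8), DATEADD(DAY, -1, GETDATE()), 112)
ORDER BY h.run_date DESC, h.run_time DESC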

This list of best practices is not intended to be complete, but to get you started on your way to becoming an exceptional DBA. Always keep in mind that these suggestions are generic, and may or may not apply to your specific situation.
Brad McGehee's Day-to-Day DBA Best Practices

Source: RedGate

posted by LifeisSimple
2010. 5. 17. 19:07 Brain Trainning/DataBase

Replication Across Non-Trusted Domains or Using the Internet

Replication over the internet or across non-trusted domains is usually performed using a virtual private network (VPN), and consequently the configuration is much the same as that used on a LAN. This article outlines what to do if such a VPN is not available. While it's true that in almost all cases you wouldn't consider making SQL Server available directly to the internet, loosely-coupled networks in a hosted environment are commonplace. These systems ensure integrity by the use of firewall rules, resulting in DMZ/ECZ layers and causing serious issues for the would-be replicator. This is very much a 'how-to' guide, as the intricacies of network connectivity setup would otherwise make the article too long.
In replicating over non-trusted networks, the main additional considerations over and above a normal setup are:

(a) when configuring the subscriber, the publisher will not initially be visible in Enterprise Manager, and cannot be added in the usual way.

(b) after the snapshot has been created, it is normally accessed by the merge/distribution agent using the repldata share, which will be unavailable over the internet.

(c) the replication agents normally run under the context of a Windows user account and are impersonated at the distributor/publisher; however, an AD lookup at the publisher when accessing the snapshot files will not be able to find the Windows user.

What follows is a list of the configuration changes necessary to solve these issues.

(i) Network Configuration - Ports etc

SQL communication needs to be enabled on the port defined at the publisher. For a default instance this is typically port 1433 while FTP uses port 21 by default. So, rules allowing the subscriber's IP address access on these ports through the firewall need to be set up, implying that the subscriber must not get a leased DHCP IP address, but have a fixed address of its own. In SQL Server 2005 merge replication is possible over port 443 (SSL/HTTP) - more on that in another article - but for our case, the security is provided by the use of logins and passwords, firewall rules, and encryption if necessary.

(ii) Publication Configuration

The repldata share on the publisher that is normally used to hold the snapshot initialization files will be unavailable - discounting the use of a guest Windows login at the publisher, which is simply too insecure a method to consider - so an alternative initialization method is needed. The main choices are automatic FTP and alternative snapshot locations at the subscriber. For the latter, when a pull subscription is created, it can be pointed at a local directory for the snapshot files. I often use this method as my snapshot files are so large (>40GB) that I save a lot of time doing the whole thing manually. Basically, this involves the creation of the snapshot, compressing the directory using WinZip 9.0, manually FTPing to the subscriber, then unzipping locally. After that I use the alternative snapshot location (@alt_snapshot_folder) on the subscriber to initialize. This method is used because although compression using CAB files is available within the publication wizard, it is limited to a maximum size of 2GB.
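For reference, the same manual-initialization approach can be scripted rather than driven through the wizard. The following is a minimal sketch only, not taken from the article: it assumes a merge pull subscription created at the subscriber, and every name, login and path shown (TRPublisherDB, repl_login, D:\ReplSnapshots) is a placeholder:

-- Run at the subscriber, in the subscription database
EXEC sp_addmergepullsubscription
     @publisher = N'DOHXXX-SQL',
     @publisher_db = N'TRPublisherDB',
     @publication = N'TRPublication' ;

EXEC sp_addmergepullsubscription_agent
     @publisher = N'DOHXXX-SQL',
     @publisher_db = N'TRPublisherDB',
     @publication = N'TRPublication',
     @distributor = N'DOHXXX-SQL',
     @distributor_security_mode = 0,              -- SQL login, as Windows auth is unavailable
     @distributor_login = N'repl_login',
     @distributor_password = N'**********',
     @alt_snapshot_folder = N'D:\ReplSnapshots' ; -- local folder holding the unzipped snapshot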


This article, however, will concentrate on the built-in, automatic FTP method, which is configured at the publisher on the snapshot location tab. If all the subscribers need to be set up in the same way (non-trusted domains, etc.), then only the second checkbox at the top of this tab is selected. The snapshot folder lies under the FTPROOT directory, and 'Subscribers can access this folder using FTP' obviously needs to be selected. If you do choose to compress, the files will be uncompressed using the Temp folder of the subscriber, so you must check that you have enough space. The client path is the path from the FTP root directory, so in this case is \TRSnapshot\ftp. The login can be left as the default for anonymous access (not recommended), or a valid Windows login at the publisher that has read access to the FTPROOT directory is selected – PaulI in this case.
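The same snapshot-location settings can also be applied with T-SQL. The calls below are a rough sketch, with placeholder values, of setting the FTP-related publication properties via sp_changemergepublication at the publisher; depending on the property, a new snapshot may need to be generated afterwards (hence @force_invalidate_snapshot = 1):

EXEC sp_changemergepublication @publication = N'TRPublication',
     @property = N'enabled_for_internet', @value = N'true',
     @force_invalidate_snapshot = 1 ;

EXEC sp_changemergepublication @publication = N'TRPublication',
     @property = N'ftp_subdirectory', @value = N'\TRSnapshot\ftp',
     @force_invalidate_snapshot = 1 ;

EXEC sp_changemergepublication @publication = N'TRPublication',
     @property = N'ftp_login', @value = N'PaulI',
     @force_invalidate_snapshot = 1 ;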

(iii) Alias Configuration at the Subscriber

In order for the subscriber to 'see' the publisher, an alias must be created in the Client Network Utility before registering the publisher in Enterprise Manager. This is mandatory, as the use of an IP address in Enterprise Manager will later result in various errors. Be sure to:
(a) use TCP/IP and not named pipes, otherwise you'll get a "SQL server does not exist or access is denied." error.
(b) create an alias that has exactly the same name as the publisher/distributor.


(iv) Hosts File Configuration at the Subscriber

The FTP request at the subscriber needs to be able to associate the publisher/distributor's NETBIOS name with its IP address. As we have a non-trusted environment, this entry won't exist on the DNS of the subscriber's LAN, so we manually add it to the HOSTS file on the subscriber, located in C:\WINNT\system32\drivers\etc. After the hashed-out comment section, the file should look like the text below, where DOHXXX-SQL is the Publisher/Distributor's NETBIOS name.

127.0.0.1 localhost
62.6.139.105 DOHXXX-SQL


If this is omitted, the error message will be the FTP initialization error: “Message: The process could not connect to FTP site 'DOHCOL-SQL' using port 21”.

(v) Create a PULL Subscription

When using the Pull subscription wizard, the ‘Choose Publication’ form allows browsing of the publisher’s publications. The publisher can be added to EM at this stage or already be configured. If everything has been correctly set up using the previous steps, the publications list should be visible. The synchronization agent login should be set to use a SQL login (pass-through is possible but let’s leave that for now) already added to the PAL. If the subscriber is registered at the publisher, this can be a named subscription, otherwise it’ll be anonymous. The only other thing to set up is to select the option to ‘Download using FTP’ on the Snapshot Delivery form.

posted by LifeisSimple
2010. 4. 28. 15:12 Brain Trainning/DataBase

Consolidating SQL Server Error Logs from Multiple Instances Using SSIS
14 April 2010

SQL Server hides a lot of very useful information in its error log files. Unfortunately, the process of hunting through all these logs, file-by-file, server-by-server, can cause a problem. Rodney Landrum offers a solution which will allow you to pull error log records from multiple servers into a central database, for analysis and reporting with T-SQL.

SQL Server hides a wealth of useful information inside its error log files. Aside from containing the details of specific errors that have occurred, as its name implies it must, there are details of successful and unsuccessful login attempts, the output of DBCC commands, and more. You can even raise your own errors or events to the log, for example to indicate when an 'index rebuild' process is complete.

Unfortunately, I use the term "hides" advisedly. Although it puts a lot of useful information in there, getting it out again in a useful, easy-to-digest format is often a long and painful trial, unless of course you're the sort of DBA who cherishes nothing more in life than a few free hours each morning to scroll through hundreds of thousands of SQL Server error log entries, server-by-server, using the Log Viewer application, raptly focused on each and every interesting message.

What would be useful to all other DBAs, I hope, is the means to consolidate the error log entries for all your servers into a central location, and to report on them from there, quickly firing off SQL queries that could tell you instantly, for example:

  • "When was the last time X logged in or accessed the server?"
  • "Have there been any prolific login failure attempts by unknown accounts?"
  • "Have any of these specific errors occurred on any of my servers in the last x days"

The solution I offer here uses an SSIS package to pull error log data from the current error log on any number of remote servers, and load the data into a staging table in a central database. From there, any new log records are pushed into a permanent table using the new SQL 2008 MERGE command, and the log records can then be diced and sliced with T-SQL, to your heart's content.

Shortcomings of the Standard Error Log Reader

Before we dive into our custom "error log consolidation" solution, let's take a brief look at the SQL Server Log Viewer application for SQL Server 2005 and beyond. Figure 1 shows the Log Viewer window, with an applied filter, which searches for messages containing the term "failed", across multiple error logs, on a server instance named SRVSAT (one that is not much-used).

Figure 1 – Standard SQL Server 2005 Log Viewer

Notice that, while I'm specifically searching the SQL Server error logs here, I can also choose to view the SQL Server Agent, Database Mail, and Windows NT logs. There is also a search function that lets us find specific log entries based on a keyword.

In my filtered search of the SQL Server logs, you can see that there are two entries on this particular server that contain the word "failed"; one is a login failure for sa and the other is a failure in opening a database file. The filter capability is limited, however, in the sense that a filtered search for "Failed" or "Login Failed" will return login failure results, whereas a filtered search on "Failed Logins" will return no results.

Overall, the Log Viewer is a reasonable ad-hoc tool for tracking down errors on a given server. However, the fact that the Log Viewer can only be directed at one server at a time makes it a rather tedious means of tracking down specific errors across multiple servers. Also, its filter and searching capabilities certainly do not match what can be achieved with the power of T-SQL.

Custom Error Log Consolidation

As noted in the Introduction, this solution is built on SSIS. I first began using SSIS back in 2007, when I developed the "DBA Repository" solution, described in full detail in my SQL Server Tacklebox book. It uses an SSIS package to execute a range of useful scripts across all servers under my care, and pull the results back into a central SQL Server database, for reporting and analysis. A fringe benefit is that it's been relatively easy to update and adapt this original solution to accommodate new requirements, such as the error log consolidation I describe here.

My central database, DBA_Rep, is housed on a SQL Server 2008 instance and contains all manner of tables in which to store various bits of useful SQL Server information. In order to support my error log consolidation, I simply needed to add two new tables to this database:

  • StageErrorLog – a temporary holding table for the error log data read from each server.
  • SQL_ErrorLog – the permanent storage table for the error log data

The two tables' schemas are identical except that the permanent table, SQL_ErrorLog, contains an identity column called LogID. The code to create these tables is included as part of the code download bundle for this article, along with the package code, sample queries and so on.

As I describe the error log solution, I'll focus on those aspects most closely related to gathering and analyzing the log data, rather than on the specific SSIS techniques used, for example, to assign object and string variables in the ForEachLoop containers within the packages. The latter is covered in detail in one of my previous Simple-Talk articles.

The SSIS Package

The SSIS package that, in a single run, polls, consolidates and merges the error logs from any number of servers is really quite simple. A single screen shot, shown in Figure 2, captures the entire package, comprising just five steps.

Figure 2 – Error Log Reader SSIS Package

Let's examine each step in turn, with most emphasis on the Error Log Reader for-each loop, and the MERGE step.

Get the Server Names into an Object Variable

The first task, "Populate ADO Variable For SQL Server Names", populates the SQL_RS object variable with the list of the servers from which we will be extracting error log information. In a later step, our package will iterate through this list and populate a string variable, SRV_Conn. This, in turn, is used to dynamically update the ServerName property of the MultiServer Connection Manager object, which controls the connection to each SQL Server instance we need to interrogate. There is also a second Connection Manager object, DBA_Rep, which is used to connect to the central repository. This whole step is described in fuller detail in my previous article, Using SSIS to monitor SQL Server Databases.

Figure 3 – Connection Manager Objects

Truncate the Staging table

The second step is straightforward and simply issues a single T-SQL command, TRUNCATE TABLE StageErrorLog, to empty the contents of the StageErrorLog staging table, which temporarily stores error log data for each server as the package executes.

The Error Log Reader ForEachLoop Container

The "Error Log Reader" ForEachLoop container forms the heart of the package, and performs two important functions:

  1. It sets the connection string for each server using the two variables SQL_RS and SRV_Conn
  2. It executes the objects contained within, namely "Create Temp Table for Error Logs" and "Get Error Logs From All Servers".

Figure 4 shows the SSIS Task Editor for the Create Temp Table for Error Logs task. It uses the MultiServer Connection Manager to create a temporary table on the remote server, in which to store the error log data during package execution.

Figure 4 – Create Temp Table For Error Logs Task Editor

The T-SQL statement that is executed within this task is shown in Listing 1. First, it drops (if it exists) and creates a temp table, wErrorLog, in tempdb on the remote server, and then populates the table with the output of the sp_readerrorlog system stored procedure.

The sp_readerrorlog procedure takes a parameter value, in this case "0", which instructs it to read only the current log file. While it is possible to read historic logs, I decided that only reading the current log would improve performance. I run the task daily to minimize the risk of losing log data, and run an additional step once a month that interrogates all of the available logs to catch anything that may have been missed. The three actions that I know will initiate a new error log on SQL Server are running sp_cycle_errorlog, running DBCC ERRORLOG and restarting SQL Server.

Enumerating through multiple log files
If you choose to interrogate all the log files, rather than just the current one, a script that may prove useful is included in my DBA Script Thumb.
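If you do decide to sweep every available log rather than just the current one, the idea can be sketched as follows (this is not the author's DBA Script Thumb script; it relies on the undocumented xp_enumerrorlogs procedure and assumes the wErrorLog table from Listing 1 already exists):

DECLARE @logs TABLE (ArchiveNo INT, FinalDate VARCHAR(30), FileSizeBytes BIGINT) ;

-- one row per error log file: 0 = current, 1 = previous, and so on
INSERT  INTO @logs (ArchiveNo, FinalDate, FileSizeBytes)
        EXEC master.dbo.xp_enumerrorlogs ;

DECLARE @log INT ;
SELECT  @log = MAX(ArchiveNo) FROM @logs ;

WHILE @log >= 0
    BEGIN
        INSERT  INTO [tempdb].[dbo].[wErrorLog] (LogDate, ProcessInfo, [Text])
                EXEC sp_readerrorlog @log ;
        SET @log = @log - 1 ;
    END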

The final step in the Read Error Logs script alters the temp table wErrorLog to add a Server column and then updates the records with the ServerProperty(ServerName) function. This step is critical as it allows us to distinguish from which server the error logs came, once the records are all merged into the final destination table (a step I will explain in short order).

--Read Error Logs

DECLARE @TSQL NVARCHAR(2000)

IF EXISTS ( SELECT  *
            FROM    tempdb.dbo.sysobjects
            WHERE   id = OBJECT_ID(N'[tempdb].[dbo].[wErrorLog]') )
    DROP TABLE [tempdb].[dbo].[wErrorLog] ;

CREATE TABLE [tempdb].[dbo].[wErrorLog]
    (
      LogDate DATETIME ,
      ProcessInfo NVARCHAR(50) ,
      [Text] NVARCHAR(MAX)
    )

INSERT  INTO [tempdb].[dbo].[wErrorLog]
        EXEC sp_readerrorlog 0

ALTER TABLE [tempdb].[dbo].[wErrorLog]
ADD Server VARCHAR(100) NULL ;
GO

UPDATE  [tempdb].[dbo].[wErrorLog]
SET     Server = CONVERT(VARCHAR(100), SERVERPROPERTY('Servername'))

SELECT  *
FROM    [tempdb].[dbo].[wErrorLog]

Listing 1 – The Read Error Logs script, executed by the Create Temp Table for Error Logs task

The output of the sp_readerrorlog stored procedure, executed on a SQL Server 2005 instance, can be seen in Figure 5. Notice that three columns are returned: LogDate, ProcessInfo and Text.

Figure 5 – Output of sp_readerrorlog

When the package executes it will create and populate the wErrorLog temp table on each of the remote servers. The next step is to pull the data from each of the wErrorLog temp tables into the staging table, StageErrorLog, in the DBA_Rep central repository.

So why do we need temp tables on each remote server, as well as a staging table in the central database? The main reason is to provide the next task in this container, the Get Error Logs from All Servers data flow task, with a valid source table on the remote server from which to select. This task pulls all of the error log data from the remote servers into a single staging table in my central SQL Server 2008 database, DBA_REP. From there, in the final step in the package (discussed shortly) I can use the MERGE command to move any new error log records into the SQL_ErrorLog table. Some of the remote servers were running SQL 2005 instances, so in fact it was not possible to run the MERGE command directly against these SQL Server 2005 sources, as it would fail syntactically.

Figure 6 shows the Get Error Logs From All Servers data flow task with the Server Source (wErrorLog) and the Repository Destination (StageErrorLog).

Figure 6 – Data Flow Task for Error Log

Merging the Error Log Data into the Central Database

After all of the remote servers bear their error log fruit into the StageErrorLog basket all that remains is to merge that data into its final destination, SQL_ErrorLog, which can then be used for analysis via standard T-SQL queries.

I chose to use the MERGE statement for a number of reasons. I felt that the typical "truncate and refresh" solution is inefficient for large data loads. I could have built an equivalent "UPSERT" solution, without MERGE, by using IF EXISTS constructs to control INSERTs and UPDATEs, as well as DELETEs, but MERGE meant I could avoid all of that work. Finally, I just wanted a good excuse to experiment with a new command, and this was a perfect project for that!
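For comparison, the pre-MERGE "insert only the new rows" equivalent would look something like the sketch below (not the author's code; MERGE in Listing 2 expresses the same logic declaratively):

INSERT  INTO SQL_ErrorLog (LogDate, ProcessInfo, [Text], Server)
        SELECT  s.LogDate, s.ProcessInfo, s.[Text], s.Server
        FROM    StageErrorLog AS s
        WHERE   NOT EXISTS ( SELECT 1
                             FROM   SQL_ErrorLog AS t
                             WHERE  t.LogDate = s.LogDate
                                    AND t.ProcessInfo = s.ProcessInfo
                                    AND t.[Text] = s.[Text]
                                    AND t.Server = s.Server ) ;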

I have not found an equivalent of MERGE in the SSIS Toolbox. There are some third party tools that will provide this functionality, at a cost, but instead I decided simply to execute the MERGE command within an Execute SQL task.

The MERGE command is shown in Listing 2. It compares records in the target table, SQL_ErrorLog, against the source table, StageErrorLog, matching on all four field values (LogDate, ProcessInfo, Text and Server), and INSERTs any unmatched (NOT MATCHED) records into SQL_ErrorLog. I know I will never DELETE or UPDATE the records, so new INSERTs are all I am concerned about.

MERGE SQL_ErrorLog AS Target
    USING StageErrorLog AS Source
    ON ( Source.LogDate = Target.LogDate
         AND Source.ProcessInfo = Target.ProcessInfo
         AND Source.Text = Target.Text
         AND Source.Server = Target.Server
       )
    WHEN NOT MATCHED BY TARGET
        THEN
INSERT  (
          LogDate ,
          ProcessInfo ,
          Text ,
          Server
        ) VALUES
        ( Source.LogDate ,
          Source.Processinfo ,
          Source.Text ,
          Source.Server
        ) ;

Listing 2 – merging new log records into the SQL_ErrorLog table

Executing and Testing the Package

So that is it, five fairly simple steps. Now all we have to do is execute the package to start consolidating logs for all of the SQL Servers. I would imagine that the list of servers would be limited to critical production environments, where logs must, by mandate, be stored for audit purposes and for problem/resolution analysis. Figure 7 shows the package executing, with each task turning green as it completes successfully.

Figure 7 – Executing the Error Log Reader Package

It is quite easy to determine if the package is working as expected. If you have login auditing enabled, you can simply, on a database in your test environment, deliberately generate failed login attempts. You can then rerun the package and make sure that only those new records were inserted in the SQL_ErrorLog table. In my test environment, I have two servers. On the last execution of the package, I had 951 records in the SQL_ErrorLog table, 778 for Server1 and 173 for Server2, as the following query showed.

SELECT  COUNT(*) ,
        Server
FROM    SQL_ErrorLog
GROUP BY Server

After a couple of failed login attempts, using SQL authenticated users "sa" and "bad user", I reran the package and the row counts increased, as expected, by only a handful of records per server, making the new totals 779 and 178 for Server1 and Server2 respectively. I verified that the failed logins were in the SQL_ErrorLog table by executing the following query:

SELECT  Text ,
        COUNT(Text) Number_Of_Attempts
FROM    SQL_ErrorLog
WHERE   Text LIKE '%failed%'
        AND ProcessInfo = 'LOGON'
GROUP BY Text

As Figure 8 shows, "bad user" tried to log in and I captured it via the package execution. The important part is that only the new records were inserted, so the MERGE statement worked as anticipated.

Figure 8 – Failed Login Attempts From SQL_ErrorLog table.

Querying the Error Log Data

With the records loaded and the package tested, it is time to get creative with the T-SQL analysis of the data. If you know the specific errors (or log entries I should say) for which you are looking, such as DBCC or memory or deadlocks, then you can simply use a LIKE or EXISTS statement as in the following two queries that scan for "CPU" and "MEMORY". Of course you can do this with one query, but I broke them out for emphasis.

SELECT  LogDate ,
        Text
FROM    SQL_ErrorLog
WHERE   Text LIKE '%memory%'

SELECT  LogDate ,
        Text
FROM    SQL_ErrorLog
WHERE   Text LIKE '%CPU%'

The results are already proving valuable, as you can see from the highlighted rows regarding memory in Figure 9. Apparently I am not using all the memory that I could, and SQL Server 2008 is helping me by telling me what to do to increase the amount of memory SQL Server can utilize. Thanks, Error Log.

Figure 9 – Memory- and CPU-related data in SQL_ErrorLog

Of course, I will leave it to you, as DBAs, to be creative in the development of useful queries for your environment. Heck, you can even add a Full Text index if you like. In fact, if you look at the code download, the SQL_ErrorLog DDL adds an identity column called LogID to facilitate the creation of such a Full Text index. That process is an article in itself and one which I will tackle next.
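As a teaser for that future article, the full-text piece can be sketched in a couple of statements. This is only an outline, and it assumes a unique index on the LogID column named PK_SQL_ErrorLog, which is not necessarily the name used in the code download:

CREATE FULLTEXT CATALOG ErrorLogCatalog AS DEFAULT ;

CREATE FULLTEXT INDEX ON dbo.SQL_ErrorLog ([Text])
    KEY INDEX PK_SQL_ErrorLog          -- unique index on the LogID identity column
    WITH CHANGE_TRACKING AUTO ;

-- Example search once the index has populated
SELECT  LogDate, [Text]
FROM    dbo.SQL_ErrorLog
WHERE   CONTAINS([Text], 'deadlock') ;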

Conclusion (with a few small caveats)

I do not like putting the caveat section so close to the end of the article, but you may have guessed by now that there are a few, so I will just note them really quickly and move on to the conclusion before you have a chance to utter "Ah Shucks". First, this solution only pulls error logs from SQL Server 2005 and higher, due to the different way the sp_readerrorlog data is returned in earlier versions. Second, the solution does not pull SQL Agent logs or Mail Logs, like the Log Viewer application does. Finally, the package, and the MERGE statement therein, requires SQL Server 2008 to house the DBA_Rep database. Not too horrible after all, I hope. If I were to have said it only works on SQL Server 2000 that might have been a problem.

I hope this error log consolidation solution, based on SSIS, and making use of the new SQL 2008 MERGE command, proves useful to many DBAs. If it also saves you lots of time and money then great; I did my small part for the economy.




posted by LifeisSimple
2010. 4. 28. 14:53 Brain Trainning/DataBase

Managing Max Degree of Parallelism

By Herve Roggero, 2003/06/23


Introduction

In situations where your tuned T-SQL statements are pushing the limits of your CPUs, more processing power may be needed. Deploying database servers on two-, four- or even eight-processor SMP systems is rather straightforward. SQL Server usually scales almost linearly on up to eight processors.

However, some SQL Server installations may require up to 32 processors. In this kind of environment, configuration parameters that are usually ignored in smaller configurations come into play and can offer significant performance improvements. We will take a look at the Maximum Degree of Parallelism (DOP) and see how and why it may make sense to change its default setting. 

Parallel Queries Performance Limitations

When adding processors to SQL Server, the database engine will evaluate how to best leverage the available processors through internal algorithms. In essence, when receiving a SQL statement to process, SQL Server determines which processors are available, what the overall cost of the query is and executes the query on as many processors as necessary if the cost of the query reaches a configurable threshold. When 4 processors are available on a server, the likelihood of SQL Server using all processors for a complex SELECT statement is pretty high. 

The same holds true in larger environments. For instance on 16 processors, SQL Server will frequently use 12 or more processors to execute complex SELECT statements. This may turn out to be an issue for a couple of reasons. First, using more processors means managing more threads and requires more cache synchronization. System -> Context Switches/Sec is a measure of this effort. The more processors are used for a process, the higher this counter will be. In addition, SQL Server has more coordination to perform since it needs to slice and regroup the work spread over the processors. Since by default SQL Server will use as many processors as it can, upgrading your SQL Server from 8 to 12 processors may actually degrade the overall performance of your database. Although there are no golden rules, it appears that in most cases using more than 8 processors for a SELECT statement can degrade performance (although this may vary greatly by system).

Enforcing a Maximum DOP

The DOP can be set in two ways. The first way is to include the OPTION (MAXDOP n) keyword in your T-SQL statement. For example, the following query will execute with a maximum of 4 processors, regardless of how many processors have been allocated to SQL Server:

SELECT * FROM master..sysprocesses OPTION (MAXDOP 4)
The other approach is to set the maximum DOP at the database instance level, hence limiting the maximum number of CPUs to be used for any given query. To set this option at the system level, run the following command from Query Analyzer:
EXEC sp_configure 'show advanced options', '1'
RECONFIGURE
GO
-- 0 (the default) lets SQL Server use all allocated processors; set 4 or 8 to cap parallelism
sp_configure 'max degree of parallelism', 0
RECONFIGURE
GO

Note that this can be set differently for each instance of SQL Server. So if you have multiple SQL Server instances in the same server, it is possible to specify a different Maximum DOP value for each one.

On large SMP systems, setting the maximum DOP to 4 or 8 is not unusual. The default value for this parameter is 0, which allows SQL Server to use all allocated processors. The following test shows the Context Switches/Sec and average response time of a T-SQL statement running off a few million records. The server utilized for this test was loaded with the /PAE boot.ini option, 16 processors and 8GB of RAM. The statement is as follows (the statement itself is of little importance, but notice the OPTION keyword):

Select UPC, SUM((UnitPrice - UnitCost) * TotalUnitsSold)
        FROM Salesdb..salesdata (NOLOCK)
        WHERE
        SalesYear = 2000
        GROUP BY UPC
        ORDER BY 1
        OPTION (MAXDOP 2)

This statement was loaded 500 times in a table in a format that Profiler could understand. Then four Profilers were loaded on that same server, each running the content of the same table. So SQL Server was receiving four select statements at once. Note the (NOLOCK) hint that forces SQL Server to read the data without generating any locks.

The results are as follows:

DOP    Context Switches/Sec    Avg Execution Time (sec)
 2      4,280                  12
 4      5,700                   7.5
 8     10,100                   6
12     11,200                   8.5
16     13,000                   9

As more processors are added to the query (by using the MAXDOP option), the Context Switches/Sec increases up to 13,000, which is expected behavior. This is really a low number, considering that we are only executing 4 statements at any single point in time.

The results show that, starting at 12 processors, the execution time degrades. Although it takes 12 seconds to execute this statement on 2 processors, it takes about 6 seconds on eight CPUs. However, we see that setting the DOP to 12 or 16 degrades the overall performance of our query when compared to a DOP of 8.

Leaving the default Maximum Degree of Parallelism value of 0 would yield the same result as the DOP of 16 in our test. Hence, changing the DOP to 8 in our scenario would provide a 30% performance improvement over a DOP of 0 (or 16).

Enforcing a system-wide Maximum DOP is a good practice since this allows you to control the maximum number of processors SQL Server will use at any given time, regardless of the statement, as long as the MAXDOP is not used in the query (which would override the global Maximum DOP setting).

Conclusion

SQL Server has many parameters that give you more control over the performance of your databases. Understanding how SQL Server behaves on servers with 8 processors or fewer gives a strong understanding of the capabilities of SQL Server. However, SQL Server offers specific configuration parameters that may give you extra performance on larger systems.

The Maximum Degree of Parallelism is a key parameter for environments with 8 or more processors, and allows you to gain control over the maximum number of processors used for a query. When deciding which DOP you should use, careful evaluation of your environment is needed. Certain queries may perform better with a DOP of 4, or even 1. Testing your environment with multiple DOPs should give you the answer. In cases where your database environment functions in both OLTP and OLAP mode (for live reporting), you may consider setting a default DOP for SQL Server that works best for your OLTP system and using the OPTION keyword in your OLAP T-SQL to apply the DOP that works best for those queries.

Finally, SELECT statements are not the only types of statements that can take advantage of the DOP, especially if your action queries use correlated queries (in which a SELECT statement is found inside an UPDATE statement, for example). The Maximum DOP is an advanced setting, and as such it is wise to test it thoroughly before making a decision in your production environment.
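As a purely hypothetical illustration (the salesdetail table and its Units column below are invented, not from the article), an UPDATE driven by a correlated subquery can carry the same hint:

UPDATE  Salesdb..salesdata
SET     TotalUnitsSold = ( SELECT  SUM(d.Units)
                           FROM    Salesdb..salesdetail AS d
                           WHERE   d.UPC = salesdata.UPC )
WHERE   SalesYear = 2000
OPTION  (MAXDOP 4) ;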



posted by LifeisSimple
2010. 4. 14. 21:40 Brain Trainning/DataBase
You can migrate SQL Agent jobs to another server with a few steps.

First, restore all of the databases the jobs depend on, and create the logins/accounts that the jobs need...


Restore the msdb backup taken on the original server onto the new server.

Then update the database each job step runs against:

update msdb.dbo.sysjobsteps
 set database_name = 'senseSrvDB'
where database_name = 'SenseSrvDB'

This updates the database name the job steps execute against. If the name is the same on both servers, no update is needed.

I still haven't figured out how to change the job owner (the account each job runs under)... hmm.
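One hedged guess at handling it: msdb's sp_update_job procedure accepts an @owner_login_name parameter, so each migrated job could be re-pointed at a login that exists on the new server. The job and login names below are placeholders:

EXEC msdb.dbo.sp_update_job
     @job_name = N'NightlyBackup',      -- placeholder job name
     @owner_login_name = N'sa' ;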

posted by LifeisSimple
2009. 11. 30. 23:44 Brain Trainning/DataBase


create database PTable
on
(
 name = PTable_Data,
 filename = 'E:\90.Sql_SubData\PTable_Data.mdf',
 size = 10mb,
 filegrowth = 1mb 
)
log on
(
 name = PTable_Log,
 filename = 'E:\90.Sql_SubData\PTable_Log.ldf',
 size = 5mb,
 filegrowth = 1mb
)

alter database PTable
 add filegroup F2

alter database PTable
 add filegroup F3


alter database PTable
add file
(
 name = PTable_Data00,
 filename = 'E:\90.Sql_SubData\PTable_Data00.ndf',
 size = 10mb,
 filegrowth = 1mb
) to filegroup f2

alter database PTable
add file
(
 name = PTable_Data01,
 filename = 'E:\90.Sql_SubData\PTable_Data01.ndf',
 size = 10mb,
 filegrowth = 1mb
) to filegroup f3

alter database PTable
add filegroup f4

alter database PTable
add file
(
 name = PTable_Data02,
 filename = 'E:\90.Sql_SubData\PTable_Data02.ndf',
 size = 10mb,
 filegrowth = 1mb
) to filegroup f4

use credit
select * from credit..charge

use PTable

create partition function category_no (int) as range left for values (3, 5, 10)

create partition scheme category_nosc as partition category_no to ([primary], f2, f3, f4)


CREATE TABLE [dbo].[charge](
 [charge_no] int IDENTITY(1,1) NOT NULL,
 [member_no] int NOT NULL,
 [provider_no]  int NOT NULL,
 [category_no] int NOT NULL,
 [charge_dt] datetime NOT NULL,
 [charge_amt] money NOT NULL,
 [statement_no] int NOT NULL CONSTRAINT [charge_statement_no_default]  DEFAULT (0),
 [charge_code] char(2) NOT NULL CONSTRAINT [charge_status_default]  DEFAULT ('  '),
 CONSTRAINT [charge_ident] PRIMARY KEY CLUSTERED
(
 [charge_no] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON category_nosc (category_no)


insert into charge
 select * from credit.dbo.charge

/* Check the data in the 3rd partition */
select * from charge where $partition.category_no(category_no) = 3
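To double-check how the rows were distributed, the catalog views can map each partition of the charge table to its filegroup and row count. A quick sketch:

select p.partition_number,
       fg.name as filegroup_name,
       p.rows
from sys.partitions as p
 join sys.indexes as i
   on i.object_id = p.object_id and i.index_id = p.index_id
 join sys.partition_schemes as ps
   on ps.data_space_id = i.data_space_id
 join sys.destination_data_spaces as dds
   on dds.partition_scheme_id = ps.data_space_id
  and dds.destination_id = p.partition_number
 join sys.filegroups as fg
   on fg.data_space_id = dds.data_space_id
where p.object_id = object_id('dbo.charge')
  and i.index_id in (0, 1)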

posted by LifeisSimple
2009. 11. 24. 00:30 Brain Trainning/DataBase

Hmm... why use drop_existing on a clustered index...

select *
 into mem
from member

sp_helpindex member
sp_helpindex mem

select * from mem

create unique clustered index IDX_Mem on mem(member_no)

select member_no, firstname, expr_dt
 from mem
where lastname < 'c'
order by member_no desc

select member_no, lastname --,firstname 
 from mem
where lastname < 'c'
order by member_no desc

create index IX_Mem on mem(lastname)
create index IX_Mem on mem( lastname, firstname) with drop_existing
drop index mem.IX_Mem

create unique clustered index IDX_Mem on mem(member_no desc) with (sort_in_tempdb=on, online=on, drop_existing =on)

select * from sys.sysindexes where id = object_id('mem')


dbcc traceon(3604) -- send DBCC output to the client session
select convert(int, 0x)
dbcc page(Credit, 1, 5312, 3) -- dump page 5312 in file 1 of the Credit database

posted by LifeisSimple
2009. 11. 22. 19:19 Brain Trainning/DataBase
Using FOR XML PATH to concatenate column values from the same level into a single row.

with custgroups as
(
 select customerid,
  (select cast(employeeid as varchar(10)) + ';' as [text()] -- the [text()] alias: nothing special in itself, don't be misled by it
   from ( select distinct employeeid from dbo.orders as O
    where O.customerid = c.customerid) as D
   order by employeeid
   for xml path('')) as custemps
  from dbo.customers as C
)
select customerid,
 case when custemps is null then null else min(customerid) over (partition by custemps) end as grp
from custgroups
order by grp, customerid;

posted by LifeisSimple
2009. 11. 12. 21:17 Brain Trainning/DataBase

There was this kind of material too... hmm.

Additional material... the Performance Dashboard reports.

Set it up on the system and have been poring over it...

And a DMV reference link:
http://msdn.microsoft.com/en-us/library/ms188754(SQL.90).aspx

posted by LifeisSimple
2009. 11. 12. 20:29 Brain Trainning/DataBase
Link: http://msdn.microsoft.com/ko-kr/library/ms143799.aspx

How to: Rename a Computer that Hosts a Stand-Alone Instance of SQL Server

When you change the name of a computer that is running SQL Server, the new name is recognized during SQL Server startup. You do not have to run Setup again to reset the computer name. Instead, you can follow the steps below to update the system metadata that is stored in sys.servers and reported by the system function @@SERVERNAME. Update the system metadata to reflect the changed computer name for remote connections and for any applications that use @@SERVERNAME or that query the server name from sys.servers.

The following steps cannot be used to rename an instance of SQL Server. They can be used only to change the part of the instance name that corresponds to the computer name. For example, a computer named MB1 that hosts an instance of SQL Server named Instance1 can be renamed to another name, such as MB2. However, the instance part of the name, Instance1, remains unchanged. In this example, \\ComputerName\InstanceName would change from \\MB1\Instance1 to \\MB2\Instance1.

Before You Begin

Review the following information before you begin the renaming process:

  • When an instance of SQL Server is part of a SQL Server failover cluster, the process of renaming the computer differs from the process used for a computer that hosts a stand-alone instance.
  • SQL Server does not support renaming computers that are involved in replication, except when log shipping is used together with replication. The secondary computer in a log shipping configuration can be renamed if the primary computer is permanently lost. For more information, see Replication and Log Shipping.
  • If you rename a computer that is configured to use Reporting Services, Reporting Services may not be available after the computer name is changed. For more information, see Renaming a Report Server Computer.
  • If you rename a computer that is configured for database mirroring, you must turn off database mirroring before the renaming operation, and then re-establish database mirroring with the new computer name. The database mirroring metadata is not updated automatically to reflect the new computer name. Use the following steps to update the system metadata.
  • Users who connect to SQL Server through a Windows group whose definition contains a hard-coded reference to the computer name may be unable to connect to SQL Server after the rename, if the Windows group still specifies the old computer name. To ensure that such Windows groups can connect to SQL Server after the renaming operation, update them to specify the new computer name.

After you restart SQL Server, you can connect to it by using the new computer name. To ensure that @@SERVERNAME returns the updated name of the local server instance, manually run the procedure that applies to your scenario. Which procedure to use depends on whether the computer being updated hosts a default instance or a named instance of SQL Server.

  • For a renamed computer that hosts the default instance of SQL Server, run the following procedures:

    sp_dropserver <old_name>
    GO
    sp_addserver <new_name>, local
    GO

    Restart the instance of SQL Server.

  • For a renamed computer that hosts a named instance of SQL Server, run the following procedures:

    sp_dropserver <old_name\instancename>
    GO
    sp_addserver <new_name\instancename>, local
    GO

    Restart the instance of SQL Server.

After a computer has been renamed, any connections that used the old computer name must connect by using the new name.

  • Select information from @@SERVERNAME or sys.servers. The @@SERVERNAME function will return the new name, and the sys.servers table will show the new name. The following example demonstrates the use of @@SERVERNAME.

    SELECT @@SERVERNAME AS 'Server Name'

Remote logins - If the computer has any remote logins, running sp_dropserver may raise an error similar to the following:

Server: Msg 15190, Level 16, State 1, Procedure sp_dropserver, Line 44
There are still remote logins for the server 'SERVER1'.

To resolve this error, you must drop the remote logins for this server.

  • For the default instance, run the following procedure:

    sp_dropremotelogin old_name
    GO
  • For a named instance, run the following procedure:

    sp_dropremotelogin old_name\instancename
    GO

Linked server configurations - Linked server configurations are affected by the computer renaming operation. Use sp_addlinkedserver or sp_setnetname to update computer name references. For more information, see the sp_addlinkedserver or sp_setnetname topics in SQL Server Books Online on MSDN.
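For example (a sketch with placeholder names), an existing linked server definition can be re-pointed at the renamed computer like this:

    EXEC sp_setnetname
         @server = N'SRVR1_LINK',   -- placeholder linked server name
         @netname = N'MB2' ;        -- the new computer name from the example above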

Client aliases - Client aliases that use named pipes are affected by the computer renaming operation. For example, if an alias "PROD_SRVR" was created to point to SRVR1 using the named pipes protocol, the pipe name would be \\SRVR1\pipe\sql\query. After the computer is renamed, that named pipe path is no longer valid. For more information about named pipes, see the Creating a Valid Connection String Using Named Pipes topic in SQL Server Books Online on MSDN.

 

Source: MSDN


posted by LifeisSimple