2011. 2. 24. 13:56 Brain Trainning/DataBase

SELECT
    o.name
    , indexname = i.name
    , i.index_id
    , reads = user_seeks + user_scans + user_lookups
    , writes = user_updates
    , rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
    , CASE
          WHEN s.user_updates < 1 THEN 100
          ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
      END AS reads_per_write
    , 'DROP INDEX ' + QUOTENAME(i.name)
      + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
    INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
    INNER JOIN sys.objects o ON s.object_id = o.object_id
    INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND i.type_desc = 'nonclustered'
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads
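One caveat worth adding: sys.dm_db_index_usage_stats is cleared every time the instance restarts, so the reads and writes above only cover the period since the last restart. A minimal sanity check before trusting the generated DROP statements (sqlserver_start_time is available from SQL Server 2008 onwards):

SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
-- If the instance started recently, the usage figures above may not
-- reflect the typical workload, so hold off on dropping indexes.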

posted by LifeisSimple
2011. 2. 21. 18:37 Brain Trainning/DataBase

http://www.sqlservercentral.com/articles/Transaction+Log/72488/


Why is my transaction log full?

By Gail Shaw, 2011/02/10

It’s an error that’s near-certain to ruin any DBA’s day:

Error: 9002, Severity: 17, State: 2.

The transaction log for database 'VeryImportant' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

“But, but, but...” mutters the DBA, “The log backups are running, so why is the log full?”

Well, failed or missing log backups are just one of the reasons for a transaction log to be full. There are several other possible causes.

Where to start?

Start by querying sys.databases to see what the value of the log_reuse_wait_desc column is for the database mentioned in the error message.

 DECLARE @DatabaseName VARCHAR(50);
 SET @DatabaseName = 'VeryImportant'

 SELECT name, recovery_model_desc, log_reuse_wait_desc
   FROM sys.databases
   WHERE name = @DatabaseName

The value of the log_reuse_wait_desc column will show the current reason why log space cannot be reused. It is possible more than one thing is preventing log reuse. Sys.databases will only show one. Hence it is possible to resolve one problem, query sys.databases again and see a different log reuse wait reason.
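Alongside the wait reason, it can help to see how full each log actually is. DBCC SQLPERF(LOGSPACE) reports the log size and the percentage in use for every database on the instance:

DBCC SQLPERF(LOGSPACE);
-- Returns one row per database: Log Size (MB), Log Space Used (%), Status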

The possible values for log_reuse_wait_desc are listed in Books Online. But first, before we get to an explanation of the various reasons, a short piece of theory.

Transaction log architecture

The transaction log is a circular file that's divided internally into a number of virtual log files (VLFs). SQL writes log entries sequentially into the VLFs. When one VLF fills up, SQL moves to the next one. When it reaches the end of the file, it will (ideally) go back to the beginning and start using the first VLF again (hence circular).

A VLF can be in one of two states:

  1. Active
  2. Inactive

Active

An active VLF is one that contains one or more log records that are needed by the database engine. The log records may be part of active transactions, they may be log records that are needed for replication or mirroring, they may be needed for a backup, they may be associated with changes that haven’t yet been written to the data file, etc.

Inactive

An inactive VLF is one that does not contain any log records needed by the database engine for any reason. There are no active transactions, the log records are not needed for replication or mirroring, and all the changes associated with these transaction log records have been written to the data file. If the database is in full or bulk-logged recovery, the log records in the VLF have also been backed up.

Log reuse

For a VLF to be reused by SQL, it needs to be inactive. If all the VLFs in the log are active and SQL has filled the one it is writing to, the transaction log will grow or, if it cannot grow, data modifications will fail with error 9002. (The transaction log for database '<Database name>' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases)

The act of marking one or more VLFs as inactive is generally called log truncation. While this is a commonly used term, it's not a very accurate term. Truncate, according to the Oxford English dictionary, means "shorten (something) by cutting off the top or the end"; however, log truncation doesn't shorten anything, it just makes the space available for reuse.

In addition, people often seem to think that truncate means to discard log records and break the log backup chain (probably from BACKUP LOG ... WITH TRUNCATE ONLY). It does not mean that.


In the above diagram, the transaction log has 4 VLFs (labelled A, B, C and D). This is for simplicity; most log files will have more. The first (A) and second (B) VLFs are inactive, and the third (C) and fourth (D) are active with SQL currently writing log entries into the "D" VLF. In that diagram, the (1) marks the log record of the last log backup, (2) marks the log record associated with the last checkpoint and (3) is the last log record written into the database.


After a number of data modifications, the log has wrapped around and SQL has now reused the first VLF and is about to start reusing the second. The third and fourth VLFs are still active. In this situation, unless something happens to mark the third and fourth VLFs as inactive the log will grow as soon as SQL has mostly used the second VLF. In this case, what is preventing the reuse is that the log backup has not run again.

If a log backup now runs, the third and fourth VLFs (C and D) would be marked inactive and would be available for SQL to reuse them.

Causes of delayed log truncation

These are the log reuse wait reasons from sys.databases

Active transactions

The active transaction log reuse wait reason means that there is an open transaction that is keeping the VLFs active.

When encountering this log reuse wait, two commands will help in identifying the source of the open transaction, DBCC OPENTRAN and sys.dm_exec_sessions

DBCC OPENTRAN lists information about the oldest open transaction in the database. The main piece of information needed from this, when investigating log growth caused by active transactions, is the SPID (Server Process ID) that is running the transaction.
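A minimal example of running it, using the 'WebForums' database from the sample output below (if no name is given it reports on the current database):

DBCC OPENTRAN ('WebForums');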

Transaction information for database 'WebForums'.

Oldest active transaction:

SPID (server process ID): 51

UID (user ID) : -1

Name : user_transaction

LSN : (10861:3200:1)

Start time : Jan 14 2011 1:04:26:017AM

SID : 0x0105000000000005150000002e86f8cbc457a001b905c7e95e040000

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

In this example, the connection with SPID 51 has had a transaction open since 1AM on the 14th Jan (a late worker, clearly). While there’s nothing in the output of DBCC OPENTRAN that can identify who is running that transaction, the SPID can be used along with some of the DMVs to get that information.

SELECT host_name, program_name, original_login_name, st.text
FROM sys.dm_exec_sessions es
    INNER JOIN sys.dm_exec_connections ec
        ON es.session_id = ec.session_id
    CROSS APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
WHERE ec.session_id = 51


I’m using original login name, rather than login name in case there’s any impersonation occurring.

That should be enough information to make an educated decision whether to kill the connection, or to speak to the person running the query and ask them to stop, or to wait for it to finish if it’s too important to be rolled back.
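If the decision is to terminate the connection, the command is simply KILL followed by the SPID; a sketch using SPID 51 from the example above (the open transaction will be rolled back, which can itself take a while):

KILL 51;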

Checkpoint

The checkpoint log reuse wait should be a transient condition in most circumstances. It will be most commonly seen in simple recovery model.

SQL Server automatically runs checkpoints on a regular basis to limit the number of log records that it would need to process during a database recovery. It runs checkpoints for a number of other reasons as well. For a full list, see http://msdn.microsoft.com/en-us/library/ms188748.aspx

If the checkpoint log reuse wait appears frequently or for prolonged periods, it may indicate that the IO subsystem performance is inadequate for the volume of data that needs to be written during the checkpoint, or that there are very large numbers of changed pages that the checkpoint needs to process.
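As a quick diagnostic sketch, a manual checkpoint can be issued in the affected database and the wait reason re-checked afterwards (this is only an aid, not a fix for an overloaded IO subsystem; 'VeryImportant' is the example database from earlier):

USE VeryImportant;
CHECKPOINT;
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'VeryImportant';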

Database backup

The active portions of the log are necessary for database backups, as the backup has to include at least that much of the log to ensure a consistent restore. As such, long-running database backups may result in log growth as the portions of the log necessary for the backup must be retained for the duration of the backup.

There is little that can be done if this wait type is regularly encountered and is causing problems, other than optimising the backups themselves (optimising the IO subsystem or striping the backups over multiple devices), converting to file/filegroup backups (with appropriate consideration of backup/restore requirements) or using compressed backups either natively (SQL 2008 Enterprise, SQL 2008 R2 Enterprise and Standard editions) or via a 3rd party tool, or other methods that reduce the time taken to back the database up.

Replication

Transactional replication uses the transaction log to identify changes that need to be replicated to subscribers.

Whenever a change is made to a replicated table, the log records associated with that change are marked as ‘pending replication’. The Log Reader Agent job, which is one of the components of transactional replication, reads along the log looking for such transactions and, when it finds one, it places the details of the change into the distribution database and marks the log entry as ‘replicated’.

If transactions are being marked ‘pending replication’ and the Log Reader is not running, then those log records will never be marked as ‘replicated’, and the VLFs that contain those log records will never become inactive, and the log will grow.

In addition to this, on SQL 2008 Enterprise edition, the Change Data Capture feature (CDC) uses the transaction log and the Log Reader Agent job in much the same way as transactional replication. Hence, if the CDC jobs are not running, the log space cannot be reused, much as if there was transactional replication with the Log Reader Agent job not running. NB, the log reuse wait description still shows Replication, not CDC.

The only kind of replication that uses the transaction log is transactional replication, and thus it is the only type of replication that can prevent log reuse. However, there was a bug in some versions of SQL 2005 that could result in a log reuse wait of Replication when the only replication present was snapshot replication. See http://blogs.msdn.com/b/sqlserverfaq/archive/2009/06/01/size-of-the-transaction-log-increasing-and-cannot-be-truncated-or-shrinked-due-to-snapshot-replication.aspx.

Database Mirroring

In database mirroring, the principal database sends log records to the mirror database. In synchronous mirroring, this occurs before the transaction is committed on the principal. In asynchronous mirroring, the log records are placed into a log send queue when the transaction is committed on the principal. The log records will then be sent to the mirror at some point afterwards.

If mirroring is running asynchronously and the rate of transactions exceeds what the link between the principal and mirror can handle, then the log on the principal will grow because the log records cannot be marked inactive until they have been sent to the mirror.

The same thing will happen in synchronous mirroring if the link between the principal and mirror drops and the mirroring becomes disconnected or if the mirroring session is suspended. The log records on the principal remain active until the mirror reconnects or the session is resumed and the log records are copied over to it, or until the mirroring is dropped.
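A quick way to check the state of mirroring from the principal is the sys.database_mirroring catalog view; a sketch (databases that are not mirrored have a NULL mirroring_guid):

SELECT  DB_NAME(database_id) AS database_name,
        mirroring_state_desc,
        mirroring_role_desc,
        mirroring_partner_name
FROM    sys.database_mirroring
WHERE   mirroring_guid IS NOT NULL;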

Snapshot creation

This should also be a short-lived transient type. During the creation of a database snapshot, SQL has to read the source database’s log so that it can run crash-recovery on the snapshot in order to ensure that it is transactionally consistent. While it is doing this, the portions of the log necessary cannot be marked as inactive.

DBCC CHECKDB uses hidden database snapshots for its checks and so this wait may appear when running CheckDB even if there are no user-created snapshots.

Prolonged snapshot creation waits can be a result of very long-running transactions that need to all be rolled back for the snapshot creation, or an inefficient reading of the log due to a very large number of VLFs.

Log backup

Missing log backups are the cause that most people know about. In full and bulk-logged recovery models, VLFs cannot be marked as inactive until all the log records in that VLF have been backed up. The log backup wait reason will never appear in the simple recovery model.

If this log reuse wait reason appears, check that the log backups are correctly scheduled and are succeeding. Also check that the NO_TRUNCATE option has not been specified, as that option is only for backing up the log of a damaged database and will not ‘truncate’ any VLFs (as the option name implies).
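A sketch of how the recent log backup history can be checked from msdb (type 'L' marks log backups; the database name is the one from the earlier example):

SELECT TOP (10)
        database_name,
        backup_start_date,
        backup_finish_date
FROM    msdb.dbo.backupset
WHERE   database_name = 'VeryImportant'
        AND type = 'L'
ORDER BY backup_finish_date DESC;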

Conclusion

This should cover the main reasons why a database’s transaction log may grow. Fixes for more complex problems are out of scope of this article, but this should at least help in knowing where to start looking.

Acknowledgements

Firstly a massive thank you to Paul Randal (blog|twitter) who kindly agreed to tech-edit this article and corrected a variety of errors small and large. Without his assistance this article would not have been worth reading.

Also thank you to Robert Davis (blog|twitter) who provided some comments and corrections around log use in database mirroring.

Finally thanks to Jason Brimhall (CirquedeSQLeil) (blog | twitter), Gus Gwynne (GSquared), Brandie Tarvin (blog|twitter), Wayne Sheffield and Stefan Krzywicki for correcting grammar errors, illogical sentences and an appalling dearth of punctuation.

As they say in published books, any errors that remain are solely mine.


Copyright © 2002-2011 Simple Talk Publishing. All Rights Reserved.
posted by LifeisSimple
2011. 2. 7. 17:04 Brain Trainning/DataBase
Collecting Performance Data into a SQL Server Table

It has been about 4 years since Brent Ozar posted his famous blog post on collecting performance counters. This information turned out to be extremely useful for both accidental and professional DBAs.  About 4 years later I ran into  Jonathan Allen’s article (Getting baseline and performance stats - the easy way.), which is an upgrade to Brent’s blog. Jonathan offers a slightly more sophisticated way of running the Perfmon process from a command line. By appending the proper parameters, this method does speed things up.

In this article, I would like to build on what Brent and Jonathan have written to  propose an even more flexible method for SQL Server performance data collection.

 typeperf.exe: Command-line performance-data collection

As Jonathan Allen mentions in his blog, typeperf.exe is a powerful command. Here is a screenshot of all parameters the command accepts and their short description (as the output of ‘typeperf.exe /?’ would show them):

As we can see, there are several options which allow us to save the output of the typeperf in different formats: CSV, TSV, BIN, SQL. (CSV = Comma Separated file, TSV = Tab Separated file, BIN = Binary file, SQL = SQL Server table)

And here is the moment when I start thinking about my preferred choice of format.

As a DBA, I do not like CSV much, unless I really need to export some trivial  data and email it to someone. With CSV there is also a security risk, since it is nothing but a text file saved on the file system; same goes for the TSV and the BIN formats.

Furthermore, the processing times are significant, since it is a two-step process: first we would have to wait for the counters to collect into the file, and then we would have to open them and manipulate the data so that we extract what interests us.

Now, wouldn’t it be great if we could have the performance data collected directly into our already secured SQL Server? (I talk about security because I can personally think of at least a few scenarios where even performance data in the wrong hands can cause a lot of trouble.)

Furthermore, if we could import our performance counters into a SQL Server database, we could query the data any time and write reusable queries that let us analyze the data over and over again. It would also help us detect events and patterns, and send notifications if we wanted.

So, to get back on track: my choice for the performance data collection output is SQL.

How to collect performance data directly to SQL Server:

First, of course we need to set up a database which will contain our performance data.

For this exercise we will create a new database called ‘PerfmonCollector’ by using the following script:

CREATE DATABASE [PerfmonCollector] ON PRIMARY
( NAME = N'PerfmonCollector', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PerfmonCollector.mdf', SIZE = 51200KB, FILEGROWTH = 10240KB )
LOG ON
( NAME = N'PerfmonCollector_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\PerfmonCollector_log.ldf', SIZE = 1024KB, FILEGROWTH = 10% )
GO

Second we would need to connect Typeperf to SQL Server. Let’s run the ODBC Data Source Administrator (we can access it by clicking Run… and then ‘odbcad32.exe’).

The following screen will be presented:

In the tabs of the administrator we see ‘User DSN’ and ‘System DSN’. The difference is that the User DSN is visible only to the current user, and the System DSN is visible to all users of the machine, including the NT services.

So, let’s choose a ‘User DSN’, since we do not want anyone but us to access our database. Let’s create it:

1. Click the Add… button and select the SQL Server driver type.

2. Click 'Finish'. A Data Source wizard screen will show up.

3. Fill in the name of the Data Source and the SQL Server instance name. In my case, I would go for 'SQLServerDS' and (local).

4. In the next screen we have to provide the login details. I will use Windows authentication. Click Next.

5. In this screen it is important to select our 'PerfmonCollector' database. Click Next.

6. In this screen we would leave the settings as default. Click Finish.

7. In the final screen you will be presented with an overview of the settings and a chance to test the connection. Click 'Test Data Source…' and make sure that the test is successful.

Now that we have a database and a connection, the next step is to gather some counters and to save the results into our database.

Collecting the counters:

Let’s say that we want to collect the values from the counters mentioned in Jonathan Allen’s  blog post:

  • Memory - Available MBytes
  • Paging File - % Usage
  • Physical Disk - % Disk Time
  • Physical Disk - Avg. Disk Queue Length
  • Physical Disk - Avg. Disk sec/Read
  • Physical Disk - Avg. Disk sec/Write
  • Physical Disk - Disk Reads/sec
  • Physical Disk - Disk Writes/sec
  • Processor - % Processor Time
  • SQLServer:Buffer Manager - Buffer cache hit ratio
  • SQLServer:Buffer Manager - Page life expectancy
  • SQLServer:General Statistics - User Connections
  • SQLServer:Memory Manager - Memory Grants Pending
  • System - Processor Queue Length

What we need to do is create a text file on our file system, which contains the counters we need to collect. Keep in mind that there are 2 kinds of counters – machine counters and SQL Server specific counters. So if we have only one default instance of SQL Server on a machine and we would like to collect the performance counters, our text file will look like this:

\Memory\Available MBytes 
\Paging File(_Total)\% Usage 
\PhysicalDisk(* *)\% Disk Time 
\PhysicalDisk(* *)\Avg. Disk Queue Length 
\PhysicalDisk(* *)\Avg. Disk sec/Read 
\PhysicalDisk(* *)\Avg. Disk sec/Write 
\PhysicalDisk(* *)\Disk Reads/sec 
\PhysicalDisk(* *)\Disk Writes/sec 
\Processor(*)\% Processor Time 
\SQLServer:Buffer Manager\Buffer cache hit ratio 
\SQLServer:Buffer Manager\Page life expectancy 
\SQLServer:General Statistics\User Connections 
\SQLServer:Memory Manager\Memory Grants Pending 
\System\Processor Queue Length

It is a bit more complicated with the named instances of SQL Server. The text file containing the counters for a named instance would look like this:

\Memory\Available MBytes 
\Paging File(_Total)\% Usage 
\PhysicalDisk(* *)\% Disk Time 
\PhysicalDisk(* *)\Avg. Disk Queue Length 
\PhysicalDisk(* *)\Avg. Disk sec/Read 
\PhysicalDisk(* *)\Avg. Disk sec/Write 
\PhysicalDisk(* *)\Disk Reads/sec 
\PhysicalDisk(* *)\Disk Writes/sec 
\Processor(*)\% Processor Time 
\MSSQL$InstanceName:Buffer Manager\Buffer cache hit ratio 
\MSSQL$InstanceName:Buffer Manager\Page life expectancy 
\MSSQL$InstanceName:General Statistics\User Connections 
\MSSQL$InstanceName:Memory Manager\Memory Grants Pending
\System\Processor Queue Length

As you can see, in the case of a named instance, we would have to manually edit the text file and input the name of the instance for which we need to collect counters.

Depending on how many servers we have and how many instances of SQL Server reside on one physical machine, we would group our text files accordingly.

Let’s say that we have one physical server and 4 SQL Server instances; in this case I would create one text file containing the counters for the physical server (including the counters for the default instance) and then create 3 more files containing only the named instances’ counters.

For this article, however, I would collect performance data only from my named instance (the name of my instance is ‘SQL2005’) and my server.

So, I will create a folder ‘CounterCollect’ in my C: drive, and in the folder I will place my ‘counters.txt’ file containing my list of counters as follows:

\Memory\Available MBytes 
\Paging File(_Total)\% Usage 
\PhysicalDisk(* *)\% Disk Time 
\PhysicalDisk(* *)\Avg. Disk Queue Length 
\PhysicalDisk(* *)\Avg. Disk sec/Read 
\PhysicalDisk(* *)\Avg. Disk sec/Write 
\PhysicalDisk(* *)\Disk Reads/sec 
\PhysicalDisk(* *)\Disk Writes/sec 
\Processor(*)\% Processor Time 
\MSSQL$SQL2005:Buffer Manager\Buffer cache hit ratio 
\MSSQL$SQL2005:Buffer Manager\Page life expectancy 
\MSSQL$SQL2005:General Statistics\User Connections 
\MSSQL$SQL2005:Memory Manager\Memory Grants Pending
\System\Processor Queue Length

And now comes the most interesting part: running the cmd command which will start our data collection:

TYPEPERF -f SQL -s ALF -cf "C:\CounterCollect\Counters.txt" -si 15 -o SQL:SQLServerDS!log1 -sc 4

Here is a short explanation of the parameters:

  • '-f' is the output file format
  • '-s' is the server from which we would like to collect counters
  • '-cf' is the path to the text file which contains the counters
  • '-si' is the sampling interval, in this case every 15 seconds
  • '-o' is the path to the output file, or in this case the DSN we created earlier
  • '-sc' is how many samples to collect, in this case 4, which means that typeperf will run for 1 minute and collect 4 samples.

As you notice, there is a ‘!log1’ after the DSN name. This is a way to give a name to our performance data collection set. For example, instead of ‘log1’ we could put ‘beforeCodeRelease’.

Note: do not be surprised if your first sample is sometimes 0. This happens because typeperf reports the delta (the value difference) between sampled intervals, so the first sample has nothing to compare against.

The results:

Let’s look at our ‘PerfmonCollector’ database.

We can see that there are 3 new tables in our database, which were created by typeperf:

dbo.CounterData

dbo.CounterDetails

dbo.DisplayToID

Here is how part of the CounterData table looks:

 

What do they contain? Here is some information from MSDN:

The CounterData table contains a row for each counter that is collected at a particular time. There will be a large number of these rows.

The CounterData table defines the following fields:

  • GUID: GUID for this data set. Use this key to join with the DisplayToID table.
  • CounterID: Identifies the counter. Use this key to join with the CounterDetails table.
  • RecordIndex: The sample index for a specific counter identifier and collection GUID. The value increases for each successive sample in this log file.
  • CounterDateTime: The time the collection was started, in UTC time.
  • CounterValue: The formatted value of the counter. This value may be zero for the first record if the counter requires two samples to compute a displayable value.
  • FirstValueA: Combine this 32-bit value with the value of FirstValueB to create the FirstValue member of PDH_RAW_COUNTER. FirstValueA contains the low order bits.
  • FirstValueB: Combine this 32-bit value with the value of FirstValueA to create the FirstValue member of PDH_RAW_COUNTER. FirstValueB contains the high order bits.
  • SecondValueA: Combine this 32-bit value with the value of SecondValueB to create the SecondValue member of PDH_RAW_COUNTER. SecondValueA contains the low order bits.
  • SecondValueB: Combine this 32-bit value with the value of SecondValueA to create the SecondValue member of PDH_RAW_COUNTER. SecondValueB contains the high order bits.

Information about the rest of the tables (DisplayToID and CounterDetails) can be obtained from MSDN as well.

So, we have the data, let’s use it!

As I mentioned earlier, this method of collecting performance data is not only more secure than CSV + Excel, but also more flexible. Remember, when we defined our Perfmon collector command earlier, we gave a name to our collector set. In this case we named it simply log1. For real hands-on performance tuning sessions, though, we would want to give every set its own meaningful name. (For example, let's say that we would like to measure the server's performance between 10am and 11am every day, when we are running a specific batch job.)

The name of the collector set is found in the DisplayToID table, in the DisplayString column. There we also see the LogStartTime and LogStopTime. The DisplayToID table is joined to the CounterData table by the GUID.

 For my test case in this article I am using two data collector sets called log1 and log2. Both sets are using the same counters as mentioned above.

 The first thing we would like to do is to verify how many different servers we have collected the data from. By running this query we can check:

SELECT DISTINCT
        [MachineName]
FROM    dbo.CounterDetails

In my case I would get only one server: \\ALF.

 Now let’s check what data collection sets we have and what their start and end times are:

SELECT  [DisplayString] ,
        [LogStartTime] ,
        [LogStopTime]
FROM    dbo.DisplayToID

Here is the result:

 

 Now let’s check the values we have collected for a specific counter for a specific server:

SELECT  MachineName ,
        CounterName ,
        InstanceName ,
        CounterValue ,
        CounterDateTime ,
        DisplayString
FROM    dbo.CounterDetails cdt
        INNER JOIN dbo.CounterData cd ON cdt.CounterID = cd.CounterID
        INNER JOIN dbo.DisplayToID d ON d.GUID = cd.GUID
WHERE   MachineName = '\\ALF'
        AND ObjectName = 'Processor'
        AND cdt.CounterName = '% Processor Time'
        AND cdt.InstanceName = '_Total'
ORDER BY CounterDateTime

This query will return the Processor Total % utilization time as well as the counter collection time and the collector set name. Feel free to use this query as a template for exploring other counters as well.

 

And here is one more query which will give some aggregations:

SELECT  MachineName ,
        CounterName ,
        InstanceName ,
        MIN(CounterValue) AS minValue ,
        MAX(CounterValue) AS maxValue ,
        AVG(CounterValue) AS avgValue ,
        DisplayString
FROM    dbo.CounterDetails cdt
        INNER JOIN dbo.CounterData cd ON cdt.CounterID = cd.CounterID
        INNER JOIN dbo.DisplayToID d ON d.GUID = cd.GUID
WHERE   MachineName = '\\ALF'
        AND ObjectName = 'Processor'
        AND cdt.CounterName = '% Processor Time'
        AND cdt.InstanceName = '_Total'
GROUP BY MachineName ,
        CounterName ,
        InstanceName ,
        DisplayString

Here is the result and as you can see it is quite easy to compare the two data collector sets.

 

From this point on, I am sure that any DBA would be able to easily write queries and find out performance events, patterns and tendencies.
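One more housekeeping idea: because the collector set name lives in DisplayToID and its measurements are linked to it by GUID, a set can be removed once it has been analysed. A sketch, assuming the set was named 'log1':

DECLARE @SetGuid uniqueidentifier;
SELECT  @SetGuid = GUID FROM dbo.DisplayToID WHERE DisplayString = 'log1';
DELETE  FROM dbo.CounterData WHERE GUID = @SetGuid;
DELETE  FROM dbo.DisplayToID WHERE GUID = @SetGuid;
-- dbo.CounterDetails describes the counters themselves and can be shared
-- between sets, so it is left untouched here.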

Summary:

In this article I describe a flexible and secure method for collecting performance counter data from servers and SQL Server instances. This method avoids the limitations of Excel spreadsheets, and gives the DBA the ability to query the data directly, so as to home in on the cause of performance problems (or the lack of them, hopefully!) in the monitored systems.




Author profile: Feodor Georgiev

Feodor has been working with SQL Server since 2002, starting on the 2000 version and mixing it up as newer versions - 2005 & 2008 - were released. He specializes in database performance tuning, documentation and scalability management. When he is not busy with his DBA work, keeping up with the latest SQL Server tricks or sharing tips on forums, he writes articles on SQL Service and his own blog. Feodor is also a member of the SQL Server user group in Sweden.



Subject: A collecting performance data batch file for more than 100 computers
Posted by: Abel (not signed in)
Posted on: Monday, January 24, 2011 at 1:59 AM
Message: Great job! Thanks!

Following is my feedback! ^_^

1. Edit Servers.txt

SERVER01 user01 password01
SERVER02 user02 password02
...
more than 100 computers ....

or 

dsquery computer -limit 10000 > computers.txt 

to fetch the domain computer names, then modify the output to fit your needs ...

2. command tools
grep.exe
sed.exe

3. Edit DSN (ODBC)

Create a database name: PViewer
Create a DSN named PViewer that maps to your PViewer database

4. FetchPerfData.bat

@echo off
REM ----------------------------------------
REM Set environment variable
REM ----------------------------------------
setlocal enableextensions
rem setlocal enabledelayedexpansion
set title=%~n0
REM ----------------------------------------

REM ----------------------------------------
REM Set console title
REM ----------------------------------------
title Generate Performance Counters and System Information
REM ----------------------------------------

REM ----------------------------------------
REM Set path variable
REM ----------------------------------------
path=%PATH%;C:\Program Files\Microsoft Office\Office12\
REM ----------------------------------------

REM ----------------------------------------
REM Set system date variable
REM ----------------------------------------
for /f "tokens=1-5 delims=/ " %%s in ("%date%") do (set CurrDate=%%s%%t%%u)
for /f "tokens=1-5 delims=:." %%d in ("%time%") do (set CurrTime=%%d%%e%%f)
set sysdate=%CurrDate%_%CurrTime%
REM ----------------------------------------

REM ----------------------------------------
REM Clear DNS record
REM ----------------------------------------
ipconfig /flushdns
REM ----------------------------------------

REM ----------------------------------------
REM Create work directory
REM ----------------------------------------
if not exist ".\Work" (mkdir .\Work) else (del .\Work\* /F/Q)
if not exist ".\Counter" (mkdir .\Counter) else (del .\Counter\* /F/Q)
if not exist ".\Output" (mkdir .\Output) else (del .\Output\* /F/Q)
REM ----------------------------------------

REM ----------------------------------------
REM Generating system information header
REM ----------------------------------------
systeminfo /fo csv > .\Work\HTemp.csv
sed -e "1d" -e "3d" .\Work\HTemp.csv > .\Work\STemp.csv
REM ----------------------------------------

REM ----------------------------------------
REM Generate Performance Counters
REM ----------------------------------------
for /f "tokens=1-3 delims= " %%S in (Servers.txt) do (

net use T: \\%%S\c$ /user:%%T %%U

start /wait typeperf -q Memory -s %%S -o .\Counter\%%S_Memory.cf
start /wait typeperf -cf .\Counter\%%S_Memory.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S

start /wait typeperf -q PhysicalDisk -s %%S -o .\Counter\%%S_PhysicalDisk.cf
start /wait typeperf -cf .\Counter\%%S_PhysicalDisk.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S

start /wait typeperf -q Processor -s %%S -o .\Counter\%%S_Processor.cf
start /wait typeperf -cf .\Counter\%%S_Processor.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S

start /wait typeperf -q "Network Interface" -s %%S -o .\Counter\%%S_Network.cf
start /wait typeperf -cf .\Counter\%%S_Network.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S

start /wait typeperf -q Server -s %%S -o .\Counter\%%S_Server.cf
start /wait typeperf -cf .\Counter\%%S_Server.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S

if "%%S"=="yourSQLServer" (
start /wait typeperf -qx "sqlserver:databases" -s %%S -o .\Counter\%%S_Databases.cf
start /wait typeperf -cf .\Counter\%%S_Databases.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S

start /wait typeperf -qx "sqlserver:buffer manager" -s %%S -o .\Counter\%%S_Buffer.cf
start /wait typeperf -cf .\Counter\%%S_Buffer.cf -si 1 -sc 5 -y -f SQL -o SQL:PViewer!%%S
)

systeminfo -s \\%%S -U %%T -P %%U /fo csv /nh >> .\Work\STemp.csv
net use T: /delete
)
REM ----------------------------------------

REM ----------------------------------------
REM Generate System information
REM ----------------------------------------
grep -v "^$" .\Work\STemp.csv > .\Output\systeminfo.csv
REM ----------------------------------------

REM ----------------------------------------
REM Delete Work files
REM ----------------------------------------
del .\Work\HTemp.csv /Q/F
del .\Work\STemp.csv /Q/F
REM ----------------------------------------


posted by LifeisSimple
2011. 1. 27. 14:10 Brain Trainning/DataBase
It has been quite a while since I last posted something SQL related...


Changing Server Collation on SQL Server 2008

By Blesson John, 2010/09/20



This article is dedicated to the Senior DBA Nick Mcnair, who showed me the command to alter server level collation. Recently I hit a problem where the user database had a different collation from that of the server level collation. In this article we will look at how to change the server level collation for a clustered SQL Server. When one changes the server collation, he/she is rebuilding the system databases. This means you would lose references to user databases, jobs, SSIS packages, etc. In order to avoid the loss of information and reduce downtime, it is mandatory that a step-by-step procedure is followed.

On a side note, the database collation can be changed using the command given below:

ALTER DATABASE <database_name> COLLATE <collation>;
ALTER DATABASE adventureworks COLLATE SQL_Latin1_General_Cp850_BIN;

It is worth remembering that character fields such as varchar, char, text, etc. will keep the old collation even after the above command is executed.
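If individual character columns also need the new collation, each column has to be altered explicitly; a minimal sketch, assuming a table dbo.SomeTable with a varchar(50) column named col1 (both names are hypothetical):

ALTER TABLE dbo.SomeTable
ALTER COLUMN col1 varchar(50) COLLATE SQL_Latin1_General_Cp850_BIN;
-- This fails if the column is referenced by an index, constraint or computed column.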

I have listed below some steps that I had to follow in order to change the server level collation. The steps given below aren't an exhaustive list, but they worked for me.

1) Run sp_configure to find the current server settings; for this you have to set 'show advanced options' to 1.

exec sp_configure 'show advanced options',1
reconfigure with override

Now run the sp_configure command to extract the server settings.

exec sp_configure

Copy the output into an Excel sheet and save it in a safe location. You will need this after the collation has been changed.
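Instead of (or in addition to) the Excel sheet, the settings can be snapshotted into a table in one of the user databases, which will come back when that database is re-attached later; a sketch (dbo.ConfigBackup is a hypothetical table name):

SELECT  name, value, value_in_use
INTO    dbo.ConfigBackup
FROM    sys.configurations;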

2) Go through each of the user databases and check whether any user defined assemblies exist. Make a note of all the databases that have user defined assemblies.
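A sketch of how that check could be scripted instead of clicked through (run it in each user database in turn):

USE <database_name>;  -- repeat for each user database
SELECT  name, permission_set_desc, create_date
FROM    sys.assemblies
WHERE   is_user_defined = 1;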


 

3) Script out all the jobs on the server.

4) Check the database maintenance plan and make sure that you have noted down the schedules for each step in the maintenance plan.

5) Export all the SSIS packages to the file system.

6) Check whether any mail profiles are present and copy the details of each profile to a table in one of the user databases.

(Use msdb's sysmail_account, sysmail_profile, sysmail_profileaccount and sysmail_server to retrieve profile related information.)
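A sketch of a query that pulls the profile and account details into a single result set, which can then be saved into a table in a user database:

SELECT  p.name AS profile_name,
        a.name AS account_name,
        pa.sequence_number,
        a.email_address,
        a.display_name,
        s.servername AS mailserver_name
FROM    msdb.dbo.sysmail_profile p
        INNER JOIN msdb.dbo.sysmail_profileaccount pa ON p.profile_id = pa.profile_id
        INNER JOIN msdb.dbo.sysmail_account a ON pa.account_id = a.account_id
        INNER JOIN msdb.dbo.sysmail_server s ON a.account_id = s.account_id;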

In order to create the profile and account you can adapt the code excerpt given below:

BEGIN TRAN
SET @DESCRIPTION = 'give description about the profile ';
SET @DESCRIPTION = @DESCRIPTION + @V_MAIL_PROFILE;

-- Create a Database Mail profile
EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = @V_MAIL_PROFILE,
    @description  = @DESCRIPTION;

SET @DESCRIPTION = 'give description about the account ';
SET @DESCRIPTION = @DESCRIPTION + @V_MAIL_ACCOUNT;

-- Create a Database Mail account
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name    = @V_MAIL_ACCOUNT,
    @description     = @DESCRIPTION,
    @email_address   = @EMAIL_ADDRESS,
    @replyto_address = @EMAIL_ADDRESS,
    @display_name    = @DISPLAY_NAME,
    @mailserver_name = @MAILSERVER_NAME;

-- Add the account to the profile
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = @V_MAIL_PROFILE,
    @account_name    = @V_MAIL_ACCOUNT,
    @sequence_number = @SEQUENCE_NUMBER;

-- Grant access to the profile to the DBMailUsers role
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = @V_MAIL_PROFILE,
    @principal_id = @PRINCIPAL_ID,
    @is_default   = @IS_DEFAULT;
COMMIT TRAN

 

7) Script out the logins on the server and prepare a list that contains the permissions granted to each login. You are lucky if all users belong to some Windows group rather than having individual Windows accounts or SQL accounts for each user.
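A sketch of queries that help with that inventory (they list the logins and the server-level permissions granted to them; SQL login passwords still have to be scripted separately, for example with Microsoft's sp_help_revlogin procedure):

SELECT  name, type_desc, default_database_name
FROM    sys.server_principals
WHERE   type IN ('S', 'U', 'G');   -- SQL logins, Windows logins, Windows groups

SELECT  pr.name, pe.permission_name, pe.state_desc
FROM    sys.server_permissions pe
        INNER JOIN sys.server_principals pr ON pe.grantee_principal_id = pr.principal_id;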

8) Stop the replication agent if you have replication set up. Use the table sysarticles to get the list of objects that are being replicated, but only if you chose to use the GUI to create replication. Scripting replication makes life easier; in case you wish to use scripts, refer to this link: http://technet.microsoft.com/en-us/library/ms151213.aspx

9) Detach all user defined databases. The sp_detach_db command is listed below.

sp_detach_db @dbname = 'AdTestDb'
    , @skipchecks = 'false'           -- update statistics before detaching
    , @keepfulltextindexfile = 'true' -- 'true' if you don't want the
                                      -- full text search index to be dropped

For more details refer the web link: http://technet.microsoft.com/en-us/library/ms188031.aspx

10) Take the SQL service offline from cluster manager (applicable for a clustered environment). If it is a standalone system, then shut down the instance.

Now we can run this command from a command prompt

Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=InstanceName
/SQLSYSADMINACCOUNTS=accounts [ /SAPWD=StrongPassword ]
/SQLCOLLATION=CollationName

For more details about this command see this link: http://msdn.microsoft.com/en-us/library/ms179254.aspx

Once the command has been executed successfully, restart the SQL services from cluster manager. Check whether the collation has been reset to the required collation by checking the server properties.

 


11) Attach all the user databases. I was for some reason unable to attach the user databases using a Windows account (SQL Server 2008). Use a SQL login to attach the user databases if you come across errors (probably because this stored proc is deprecated).

EXEC sp_attach_db @dbname = N'AdTestDb'
    , @filename1 = N'D:\Program Files\Microsoft SQL Server\MSSQL\Data\AdTestDb.mdf'
    , @filename2 = N'L:\Program Files\Microsoft SQL Server\MSSQL\Data\AdTestDb.ldf'

For more details refer the web link: http://msdn.microsoft.com/en-us/library/aa259611(SQL.80).aspx
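Since sp_attach_db is deprecated, the supported alternative on SQL Server 2008 is CREATE DATABASE ... FOR ATTACH; a sketch using the same file paths as above:

CREATE DATABASE AdTestDb
ON (FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL\Data\AdTestDb.mdf'),
   (FILENAME = N'L:\Program Files\Microsoft SQL Server\MSSQL\Data\AdTestDb.ldf')
FOR ATTACH;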

12) All the databases that had user defined assemblies should have 'trustworthy' set to true.

ALTER DATABASE <DBNAME> SET TRUSTWORTHY ON;

13) Change the server settings using the results obtained in step 1.

sp_configure 'name', 'runvalue'
reconfigure with override

14) Recreate the linked servers and logins. Check for orphaned users using the command below.

EXEC sp_change_user_login 'Report' -- list orphaned users

For more details refer to this link: http://msdn.microsoft.com/en-us/library/ms175475.aspx
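If orphaned users are found and a matching SQL login already exists, they can be re-linked; a sketch to run in the affected user database ('SomeUser' is a hypothetical user/login name):

EXEC sp_change_user_login 'Update_One', 'SomeUser', 'SomeUser';
-- or let SQL Server match user and login by name:
-- EXEC sp_change_user_login 'Auto_Fix', 'SomeUser';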

15) Create all the mail profiles (tailor the script given in step 6 and use it).

16) Import the SSIS packages to the SQL Server from the file system.

17) Recreate all the jobs.

18) Create the maintenance plan using the schedules noted in step 4.

19) Set up replication based on the output obtained in step 8.

Conclusion

Before you change the collation of a production server, perform a dry run in the development environment and test the jobs, SSIS packages, applications, etc. If all looks fine, then roll out the change to production. The above list isn't exhaustive by any means, but it will act as a good starting point.



So that is how the change is done. Of course, it comes with the pain of having to rebuild the system databases (master).
posted by LifeisSimple
2011. 1. 13. 11:01 Brain Trainning/DataBase
Before applying SP2, let's first check what has been improved and which bugs have been fixed.

The patch is applied as follows.

Apply a service pack or patch SQL Server 2008 in a clustered environment
    Step 1: Validation wizard
    Step 2: Pre-check
    Step 3: Installation of the first node
    Step 4: Add a new node
    Step 5: Apply a service pack or hotfix on a SQL cluster

Installing a service pack on a SQL Server 2008 failover cluster is very different from doing so on a SQL Server 2005 failover cluster.

With SQL Server 2005, when you install a cluster service pack (or hotfix), it must be launched on the active node (the node that hosts the instance). During installation, Setup simultaneously launches a remote silent installation on all passive nodes. All nodes in the cluster containing the SQL Server instance are updated at the same time.

With SQL Server 2008, to reduce the downtime, the method of deployment has been revised. Now if you want to apply a service pack (or hotfix), you must install it first on the passive nodes. The passive nodes are updated before the active node.

Therefore, for your SQL Server 2008 instance in a failover cluster, you must follow the scenario below when applying a Service Pack, Cumulative Update or Hotfix:

1.  Apply the hotfix on passive node N2
2.  Reboot the passive node N2
3.  Failover on SQL resource : the passive node become the active node
4.  Apply the hotfix on the passive node N1
5.  Reboot the passive node N1

You can check the version of SQL server with the following query:

SELECT
      SERVERPROPERTY('IsClustered') AS _1_Means_Clustered,
      SERVERPROPERTY('Edition') AS Edition,
      SERVERPROPERTY('ProductVersion') AS Version,
      SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS ActiveNode

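After each node has been patched, the same SERVERPROPERTY approach can confirm the service pack level; a small sketch:

SELECT
      SERVERPROPERTY('ProductVersion') AS Version,
      SERVERPROPERTY('ProductLevel') AS ServicePackLevel,  -- e.g. 'SP2'
      SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS ActiveNode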

Here are the improved features and the download link.

Overview


Microsoft SQL Server 2008 Service Pack 2 (SP2) is now available for download. These packages may be used to upgrade any edition of SQL Server 2008. We remain committed to providing our customers with essential changes via Service packs. Service Pack 2 contains updates for SQL Server Utility, Data-Tier Application (DAC), as well as integration capability for Microsoft Reporting Services with the Microsoft SharePoint 2010 Technologies. Service Pack 2 introduces support for a maximum of 15,000 partitions in a database, and includes SQL Server 2008 SP1 Cumulative Update 1 to 8.  While keeping product changes contained, we have made significant investments to ease deployment and management of Service Packs. Microsoft SQL Server 2008 Service Pack 1 is not a prerequisite for installing SQL Server 2008 Service Pack 2. 

For more information about SQL Server 2008 Service Pack 2, please visit our Release Notes

What's New:
  • Reporting Services in SharePoint Integrated Mode. SQL Server 2008 SP2 provides updates for Reporting Services integration with SharePoint products. SQL Server 2008 SP2 report servers can integrate with SharePoint 2010 products. SQL Server 2008 SP2 also provides a new add-in to support the integration of SQL Server 2008 R2 report servers with SharePoint 2007 products. For more information see the “What’s New in SharePoint Integration and SQL Server 2008 Service Pack 2 (SP2)” section in What's New (Reporting Services).
  • SQL Server 2008 R2 Application and Multi-Server Management Compatibility with SQL Server 2008.
    • SQL Server 2008 Instance Management. With SP2 applied, an instance of the SQL Server 2008 Database Engine can be enrolled with a SQL Server 2008 R2 Utility Control Point as a managed instance of SQL Server. For more information, see Overview of SQL Server Utility in SQL Server 2008 R2 Books Online.
    • Data-tier Application (DAC) Support. Instances of the SQL Server 2008 Database Engine support all DAC operations delivered in SQL Server 2008 R2 after SP2 has been applied. You can deploy, upgrade, register, extract, and delete DACs. SP2 does not upgrade the SQL Server 2008 client tools to support DACs. You must use the SQL Server 2008 R2 client tools, such as SQL Server Management Studio, to perform DAC operations. A data-tier application is an entity that contains all of the database objects and instance objects used by an application. A DAC provides a single unit for authoring, deploying, and managing the data-tier objects. For more information, see Designing and Implementing Data-tier Applications.

Please Note: SQL Server 2008 SP2 applies only to the SQL Server 2008 release. This service pack cannot be applied on top of SQL Server 2008 R2.


Here are the fixed bugs: http://support.microsoft.com/kb/2285068

List of the bugs that are fixed in SQL Server 2008 Service Pack 2


INTRODUCTION

This article lists the bugs that are fixed in Microsoft SQL Server 2008 Service Pack 2 (SP2).

Notes
  • Other fixes that are not documented may be included in the service pack.
  • This list will be updated when more articles are released.
For more information about how to obtain SQL Server 2008 service packs, click the following article number to view the article in the Microsoft Knowledge Base:
968382  How to obtain the latest service pack for SQL Server 2008

MORE INFORMATION

In addition to the fixes that are listed in this article, SQL Server 2008 SP2 contains the hotfixes that were included in Cumulative Update 1 through Cumulative Update 8 for SQL Server 2008 Service Pack 1 (SP1). For more information about the cumulative update packages that are available for SQL Server 2008 SP1, click the following article number to view the article in the Microsoft Knowledge Base:
970365 The SQL Server 2008 builds that were released after SQL Server 2008 SP1 was released 

The bugs that are fixed in this service pack

For more information about the bugs that are fixed in SQL Server 2008 SP2, click the following article numbers to view the articles in the Microsoft Knowledge Base.
VSTS bug number | KB article number | Description
249682 2216454  FIX: Error message when you run the ProcessAdd command after you make changes to the partition on a server that is running SQL Server 2008: "Internal Error: An unexpected exception occurred"
256736 2322209  FIX: Incorrect results if you use parentheses around SELECT in a statement that has a common table expression in SQL Server 2008
258076 2216456  FIX: SQL Server 2008 Service Pack 2 enhancements to the “Operation has been cancelled” error message text in Analysis Services
270561 2216460  Error message when you try to open a project in SQL Server 2008: "The located assembly's manifest definition does not match the assembly reference"
274280 2216461  SQL Server 2008 setup fails to install on a Windows Server 2008-based cluster mount point
284360 955850  Error message when you try to use the binaries from the SQL Server System CLR Types package (SQLSysClrTypes.msi): "Unable to load DLL 'SqlServerSpatial.dll': This application has failed to start because the application configuration is incorrect"
301603 2216465  Slipstream installation of SQL Server 2008 fails if the /PCUSOURCE or /CUSOURCE has a space in the path
317800 2216472  A SQL Server 2005 Analysis Services drillthrough statement result is empty
340066 2216474  FIX: You may experience some loss of data when you use an UPDATE statement with a SET clause that uses the time, datetime2 and datetimeoffset types in SQL Server 2008
347731 2216478  Error message when you run Get-Command in a PowerShell version 2 environment on a server that is running SQL Server 2008: "Object reference not set to an instance of an object"
351286 2216480  FIX: The margins are not retained in a report when the report is exported to Excel on a server that is running SQL Server 2008, and then you preview the report in Excel
351981 2216484  FIX: Error message when you export a report to Excel in Report Manager on a server that is running SQL Server 2008: "Unhandled managed exception: Type = System.ArgumentOutOfRangeException"
357416 2216485  FIX: Resource Monitor enters a non-yielding condition on a SQL Server 2008 server
390126 2216491  The SQL Dumper utility cannot generate error dump files on a SQL Server 2008 failover cluster on an Advanced/Enterprise Failover Cluster install
396527 980671  Correcting the lack of cryptographic salt variation on SQL Server sa login hash
402453 2019402  "No mapping between account names and security IDs was done" error when adding a node to a SQL Server 2008 Failover Cluster
426753 2216499  You cannot enroll a SQL Server 2008 database instance into a SQL Server Utility
426753 2253128  You cannot deploy a Data-Tier application to a SQL Server 2008 database instance
427703 2216501  FIX: Multiserver jobs are not downloaded from a SQL Server 2000 in the MSX role on to a SQL Server 2008 in the TSX role
524642 2283072  FIX: A hotfix is available that enables SQL Server 2008 R2 Reporting Services to use a 2048-bit RSA key to encrypt a Reporting Services key for FIPS compliancy
370540 2448971  FIX:The locks of a table are not escalated when you insert many rows into a table in SQL Server 2008

Additional issues that are fixed in this service pack

Resolutions to the following issues are also included in SQL Server 2008 SP2.
VSTS bug number | Description
270934 DBCC corruption errors occur during SQL Server 2008 collations when strings that have Jamo sequences that follow the Japanese prolonged sign are used.
271554 FIX: The syscolpars system base table displayed duplicate rows. This fix distinguishes them by adding the procedure number in DBCC checkcatalog.
271708 When a query has a filter that references a column in the sys.dm_server_audit_status view, the filter is ignored, and all the rows are returned from sys.dm_server_audit_status.
280004 The asynchronous bucketing target shows incorrect value/string pairs for mapped fields when compared to the information that is available in dm_xe_map_values.
284760 FIX: The Mismatched view can contain multiple views that have different numbers of indexes. This fix forces a deterministic order of views to compare, based on matched functions and on the number of matched indexes.
285377 Users receive an "Access Denied" error message when they navigate to Report Manager locally if UAC is enabled. Adding an updated "Access Denied" message to mention UAC could allow users to address the setting accordingly.
291707 In a given database, using a server cursor that is declared for a select statement that contains any system view may give incorrect results after that database is attached and reattached.
293089 Radial Gauge users who try to use “RateOfChange” or “Integral” as the formula type for the MaximumValue or MinimumValue on the gauge do not see a value in the property grid dropdown list.
296314 FIX: The FailoverAnalysis.sql file is installed during a SQL Server Failover Cluster setup. This script will be called automatically when failure occurs. The SqlDMVScriptTimeout private property controls for how many milliseconds the script can take to finish.
302563 Users receive a validation error when they are upgrading Shared Data Sources that have a space in the name. The error indicates that the name cannot contain spaces.
302707 Installation of Service Pack 1 for SQL Server Business Intelligence Development Studio 2008 fails because of a missing configuration file.
304390 The ring buffer displays incomplete event data (events output from dm_xe_session_targets) when it reaches the size limit for the XML output.
307313 Users cannot return multiple cookies in their HTTP module. This affects users when they try to implement custom authentication.
309869 The sqlsos.task_address, sqlsos.worker_address, and sqlsos.scheduler_address Extended Events actions display an incorrect address.
310483 Reporting Services 2008 Designer converts centimeters to inches in the Tablix. This causes a loss of layout precision.
313780 Uninstalling the first of two named AS instances disables the SQL Browser. This causes the second named instance to be unreachable.
319216 Users cannot use stored credentials in their data sources if a Smart Card is required.
322046 A query that has a subselect clause that uses a set expression that does not return any calculated members returns an error saying that subselects that have calculated members are not allowed.
327301 In Analysis Services, setting a dimension default member by using an MDX expression that has a NonEmpty function causes the server to return an internal error when it tries to open the cube.
328752 The sys.dm_os_wait_stats view displays duplicate rows for the MISCELLANEOUS wait type. This can cause problems for monitoring tools.
331630 Integer value gets incorrectly rounded in the SQL Server Reporting Services Reusable Query Designer.
333544 FIX: A new error code (6257) is returned when a user tries to register a CLR assembly that is built against a newer runtime, such as CLR v4. The previous error message (code 6218) was confusing and did not state the actual issue. See the bug for details.
336809 An edge case inserts an identity value outside the current range in "set identity_insert TABLE_NAME on" mode. This issue also occurs in backup/restore operations.
338247 If parameters that are bound by the SQLBindParameter function specify a length of SQL_DATA_AT_EXEC, the SQL Server 2008 Native Client ODC driver causes an access violation if the SQLPutData function specifies a length of SQL_NULL_DATA when the parameter value is supplied for execution.
338999 SQL Server Reporting Services 2008 experiences a connection leak when a user uses the SOAP interface to run and export a report.
347779 SQL Server sometimes generates a "There is insufficient system memory in resource pool 'internal' to run this query" error message. This situation is often preventable.
350670 Users receive a "Page Expired" error message when they try to refresh a rendered report after the report remains idle for one hour.
352788 Error 955 occurs when users tried to configure Management Data Warehouse if they have databases on their server that are not online.
352862 ReportBuilder 2.0 ClickOnce displays a blank report at startup if a user edits the report by using ReportBuilder 2.0 from a SharePoint 12 website.
355222 If MARS is used, the sys.dm_os_tasks view displays the same value in the request_id column for all tasks instead of reporting the correct value associated with each task.
355228 FIX: A new error message indicates the cause of an access violation that occurs if the bucketizer target parameters are set incorrectly.
356127 Incorrect column collation information is returned when BCP.exe retrieves column metadata information.
360584 Subscriptions fail during execution because of an in-memory data structure problem during the attempted email delivery of an MHTML-rendered report.
368388 BIDS 2008 Report Designer running together with SQL Server Reporting Services 2008 crashes when users resize a table row.
376257 The TO, CC, and BCC fields of the "Report Delivery Options" dialog box are cleared after users cancel or close the "Schedule Details" dialog box to return to "Report Delivery Options."
383747 Collection sets work incorrectly when they collect performance counters on a cluster if the SQL instance name is the same as the virtual network name.
384800 FIX: Improved and additional information is logged for InvalidReportParameterException in Verbose output if certain conditions occur during parameter processing.
393410 When users clear the selection of one or more months in a monthly schedule, SSRS creates separate schedules for each month that is still selected. In some cases, this can cause an "@active_end_date cannot be before @active_start_date" error.
393879 Data-Driven subscription fails if a NULL value is returned when it is mapped to the report parameter, which accepts a NULL value.
394977 Subscription and Delivery issues are difficult to diagnose in the Reporting Services trace file. More verbose logging is required.
415084 Users receive an "Index was out of range. Must be non-negative and less than the size of the collection" error message when they export to PDF.
419740 Users cannot use custom authentication in the SQL Server 2008 Web SKU.
424595 If a user creates a user-defined type and a table valued function in the same transaction, and if the function has a return table containing a column of the user-defined type, a self-deadlock occurs. Additionally, the connection that tries to perform these operations is terminated.
421922 FIX: Support is provided for a maximum of 15,000 partitions in tables and in indexes in the Enterprise, Developer, and Evaluation editions of Microsoft SQL Server 2008 SP2. For more information about how this support can be enabled and disabled, about its limitations, and about some known issues and workarounds, see the "Support for 15,000 Partitions" white paper (http://go.microsoft.com/fwlink/?LinkId=199773).
424804 In Analysis Services, some customers may experience access violation when they use a parent-child hierarchy in the condition of an IIF function.
426631 Using a PeriodsToDate function that has a Null member parameter may cause an "Unexpected exception" error.
429253 An incorrect version number is displayed in SQL Server Management Studio and in SQL Server Reporting Services. This causes confusion because the displayed number does not vary from the RTM version number after an upgrade is performed.
430703 A secondary exception may occur when Analysis Services generates a Watson dump. This is true whether the Watson dump is generated because of an actual exception or because of an explicit request to dump on certain errors. If the secondary exception occurs, the server can experience infinite recursion. This generates many dumps, and then shuts down the server.
431298 FIX: In SSRS 2008 Service Pack 2, the RS add-in for SharePoint 2007 for communicating with SSRS 2008 R2 report server is updated. This enables access to certain functionality in ReportBuilder 3.0, ReportParts, and ATOM Renderer, including the new Report preview in ReportBuilder 3.0.
431301 Processing objects in a database that contains thousands of metadata objects takes a long time to begin importing the data. This occurs because the server has to build a job schedule before the data import begins. Additionally, the algorithm that is used to determine dependencies causes slow performance when many metadata objects exist in the database.
432094 A spatial index cannot be safely created in a table that uses CELL_ID, CELL_ATTRIBUTES or SRID for column names.
448007 In certain conditions, the partitioned column name in error message 11405 may be wrong. This causes an incorrect column name to be printed.
455538 Using cell security together with calculations yields incorrect results when a NON EMPTY clause is used.
456534 Read access violation occurs during a commit process if two transactions run simultaneously: one to modify or create a partition, and one to process a dimension.
471363 When Microsoft Office 12 is integrated with SQL Server 2008 Reporting Services, the SharePoint integration does not default to the Report Builder clickonce URL, preventing Report Builder from starting.
471421 FIX: Localization issue in the Turkish SKU: bread crumbs are now localized on new reportpart library pages.
499162 FIX: An accessibility issue that occurs because the MSAA name of "Symbol" is null. MSAA names are updated and are now discoverable.
503096 FIX: Improved the performance of index creation by DDL statements for tables that have many partitions and that contain no data. Previously, the time required for this process increased in a geometric progression as the number of partitions increased. The time required now increases in a linear progression.
515240 Using table valued parameters for LOB columns may cause an access violation on the server.
520700 MSAA names for the Radial Scale Properties dialog box are incorrect and are not accessibility compliant.
525601 During a check for a granular audit, the algorithm that is used to list any audits that include the event may experience memory leaks of the list of audits. This continues until out-of-memory errors occur and the service has to be restarted to reclaim the memory.

Note If you are upgrading from SQL Server 2008 SP1 Cumulative Update 9 or a later version, you must apply a post-SQL Server 2008 SP2 cumulative update after you upgrade to SQL Server 2008 SP2 to obtain all the fixes. 

For more information about the post-SQL Server 2008 SP2 Cumulative Update, click the following article number to view the article in the Microsoft Knowledge Base:
2289254 Cumulative update package 1 for SQL Server 2008 Service Pack 2

REFERENCES

For more information about how to determine the current SQL Server version and edition, click the following article number to view the article in the Microsoft Knowledge Base:
321185  How to identify your SQL Server version and edition
The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products.

APPLIES TO
Keywords: 
kbqfe kbfix kbsurveynew kbexpertiseadvanced KB2285068
posted by LifeisSimple
2011. 1. 10. 00:14 Brain Trainning/Error Log

Roughly, the symptom: the disk itself seems fine, but the OS reports the error message above.

Along with a message saying the data MDF file cannot be found.


The fix, for now, was to stop the service and reboot.

This is the second time a reboot has resolved it; if this box had been critical to the service, it would have been a serious problem...

Two similar incidents caused by a storage-level bug, plus we could not upgrade to 2008 and ended up staying on 2005...


Below are similar errors found on MSDN. Because errors like this come up frequently, Microsoft seems to recommend a driver update...

------------------------------------------ Example 1 ------------------------------------------------------------------------------------------

Source: http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/b690c6be-795e-43e6-9716-1ab40a780c4a

Q :

We are using SQL Server 2005 Enterprise Edition with Service Pack 1.

 

I got the following error messages in the SQL log

 

  1. The operating system returned error 21(The device is not ready.) to SQL Server during a read at offset 0x00000000090000 in file '....mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
  2. fcb::close-flush: Operating system error 21(The device is not ready.) encountered.

I got these errors for about 2 hours, and after that I saw these messages in the SQL log:

 

  • Starting up database ' '
  • 1 transactions rolled forward in database ''  (). This is an informational message only. No user action is required.
  • 0 transactions rolled back in database ' ' (). This is an informational message only. No user action is required.
  • Recovery is writing a checkpoint in database  ' ' ( ). This is an informational message only. No user action is required.
  • CHECKDB for database  '' finished without errors on   (local time). This is an informational message only; no user action is required.

Can anyone please help me troubleshoot this issue? Why might this have happened?


A : 

This sounds like an IO subsystem issue, i.e. SQL Server is having difficulty talking to one of your drives. Are you using direct attached storage or a SAN?


Q : 

We have NetApp SCSI disk storage. Could you please tell me how I can troubleshoot this so that I won't get such errors in the future?


They are using NetApp storage, so the error is most likely in the storage layer.

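Not part of the original thread, but a reasonable follow-up sketch after this kind of OS error 21 incident (substitute your own database name for the placeholder):

-- Hedged sketch: run a full consistency check on the affected database.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- SQL Server 2005 and later record pages that hit 823/824-style I/O errors here.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;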

------------------------------------------ Example 2 ------------------------------------------------------------------------------------------

Source: http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/27ef2342-d8d0-42f0-807a-41a15dd1856f

Q :

Dear DBAs,

A database backup job has failed for the last couple of days with the error below. I see that there is no P drive for PP1DATA. When I tried to bring drives online to find the P drive and bring it up, I couldn't bring them online. Is the absence of the P drive related to this error?

Thank you,

 

." failed with the following error: "The operating system returned error 21(The device is not ready.) to SQL Server during a write at offset 0x0000005ec32000 in file 'P:\PP1DATA1\PP1DATA1.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.  BACKUP DATABASE is terminating abnormally."


A :

Sort of.  I misunderstood what the P drive was, sorry about that.

It seems that the backup failed because it could not find the MDF for the database it was trying to back up.  I am actually surprised that your instance was running at all.  Are your system databases on a different drive?

I think that we're having a problem with some definitions here.  A 'shared drive' is not really the appropriate terminology.  In order for a SQL cluster to use a drive, that drive must be known to the cluster as a cluster resource.  Indeed, if the P drive failed over to the other node, then it would HAVE to be a cluster resource.  The easiest way to determine which drives are cluster resources is to look into the Cluster Administrator.  It will tell you which resources are active on which nodes. You can get to it via the administrative tools menu.  If you do not know the cluster name, you can simply use a period '.' if you are on the cluster when you run the Cluster Administrator.

It is important to understand that a SQL failover cluster is designed to recover from a failure at the server level.  This specifically excludes storage.  If your storage medium fails, no amount of clustering will save you.

As I mentioned earlier, you need to make sure that the SQL virtual server and all related drive resources are on the same node in order for your server to work as expected. Correctly set dependencies will help with that.
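Not from the original answer, but as a quick sanity check from the SQL Server side (SQL Server 2005 and later), a sketch that lists where each database file lives and whether SQL Server currently considers it online, which helps spot files sitting on a drive that has gone missing:

SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       physical_name,
       state_desc
FROM sys.master_files
ORDER BY database_name, file_id;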


------------------------------------------------ Reference material ------------------------------------------------------------

Source: http://support.microsoft.com/kb/304261

Roughly, the gist is: choose your drivers carefully and use well-supported ones. Hmm.

Description of support for network database files in SQL Server

This article was previously published under ID KR304261.

Summary

To optimize the performance and reliability of SQL Server, we recommend configuring storage for Microsoft SQL Server database files on a storage area network (SAN) or on locally attached disks. By default, network database files stored on a network-based server or on network attached storage (NAS) cannot be used by SQL Server.

However, you can configure SQL Server to store databases on a network-based server or a NAS storage server. Servers used for this purpose must meet SQL Server's requirements for data write ordering and write-through guarantees, which are described in detail in the "More information" section of this article.

Windows Hardware Quality Lab (WHQL) certified devices

Microsoft Windows servers and network-based or NAS storage servers that are certified by the Windows Hardware Quality Lab (WHQL) automatically meet the data write ordering and write-through guarantees required to support SQL Server storage devices. Microsoft supports both application and storage-related issues in these configurations.

Other devices

If SQL Server is used with a non-WHQL-certified storage device that supports the I/O guarantees for transactional database use described in this article, Microsoft fully supports SQL Server and SQL Server-based applications. However, issues with, or caused by, the device or its storage subsystem must be directed to the device manufacturer. If you use a non-WHQL-certified storage device that does not support the I/O guarantees for transactional database use described in this article, Microsoft cannot support SQL Server or SQL Server-based applications. To determine whether a non-WHQL-certified storage device supports the I/O guarantees for transactional database use described in this article, and whether it is designed for database use, contact the device vendor. Also ask the device vendor whether the device has been deployed and configured correctly for transactional database use.

More information

By default, you cannot create SQL Server databases on a network file share. If you try to create a database file on a mapped network drive or a UNC network location, you may receive one of the following error messages:

Message 1
5105 "Device activation error."
Message 2
5110 "File 'file_name' is on a network device not supported for database files."
This is expected behavior. Trace flag 1807 bypasses the check and lets you configure SQL Server with network-based database files. SQL Server, like most other enterprise database systems, uses a transaction log and associated recovery logic to keep the transactional database consistent after a system failure or an unexpected shutdown. These recovery protocols rely on writing directly to the disk media, so that when an operating system I/O write request returns to the database manager, the recovery system can be sure the write actually completed, or can guarantee that it will complete. A failure by any software or hardware component to honor this protocol can result in the loss or corruption of some or all of the data if a system failure occurs. For more information about these logging and recovery protocols in SQL Server, see the following article in the Microsoft Knowledge Base:
230785  Logging and data storage algorithms in SQL Server 7.0, SQL Server 2000, and SQL Server 2005 extend data reliability
Microsoft does not support SQL Server network-based database files on NAS or network-based storage servers that do not meet these write-through and write-ordering requirements.

Because network errors can compromise database integrity or degrade performance when a network file share is used to store databases, we recommend storing database files on a local disk subsystem or on a storage area network (SAN).

A network attached storage (NAS) system is a file-based storage system that clients connect to through a network redirector over a network protocol such as TCP/IP. By default, if a share must be mapped to access the disk resource, or if the disk resource appears as a remote server through a UNC path (for example, \\Servername\Sharename), the disk storage system cannot be used as a location for SQL Server databases.

Performance considerations

SQL Server, like other enterprise database systems, can place a heavy load on the I/O subsystem. In most large database applications, the physical I/O configuration and tuning play a significant role in overall system performance. Consider these three major I/O performance factors:
  • I/O bandwidth: the aggregate bandwidth that can be sustained to the database devices, typically measured in megabytes per second.
  • I/O latency: the delay between an I/O request from the database system and the completion of that request, typically measured in milliseconds.
  • CPU cost: the host CPU cost for the database system to complete a single I/O, typically measured in CPU microseconds.
Any of these I/O factors can become a bottleneck, so all of them must be considered when designing an I/O system for a database application; the query sketched below is one way to observe the latency factor from inside SQL Server.
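On SQL Server 2005 and later only (this DMV does not exist in SQL Server 7.0 or 2000), a rough sketch of reading per-file latency from the server's own counters:

-- Hedged sketch: average read/write latency per database file (SQL Server 2005 and later).
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id;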

In its simplest form, a NAS solution uses a standard network redirector software stack, a standard network interface card (NIC), and standard Ethernet components. The drawback of this configuration is that all file I/O is processed through the network stack, and the bandwidth of the network itself may be limited. This can cause performance and data reliability problems, especially for programs such as SQL Server that require very high levels of file I/O. In some NAS configurations tested by Microsoft, I/O throughput was roughly one third that of a direct-attached storage solution on the same server, and the CPU cost of completing an I/O through the NAS device was roughly double the CPU cost of a local I/O. As NAS devices and network infrastructure evolve, these ratios may improve relative to direct-attached storage or a SAN. Also, if most of the application data is cached in the database buffer pool and no I/O bottleneck occurs, the performance of a NAS-based system may be adequate for the application in use.

Backup and restore considerations

SQL Server provides a virtual device interface (VDI) for backups. The virtual device interface gives backup software vendors a reliable, high-performance, and scalable way to perform hot backups and restores of SQL Server databases.

Backup software can operate through VDI on database files stored on a NAS device without any NAS-specific support. However, this can generate a large amount of additional network traffic during backups and restores. During a backup through VDI, SQL Server reads the files remotely and passes the data to the third-party backup software running on the SQL Server computer. Restores work in a similar way.

To avoid the additional network overhead, the backup vendor must provide NAS-specific support for each backup vendor and NAS vendor combination. Using the SQL Server VDI, backup software can quickly copy the database files local to the NAS by using hardware split-mirror or software copy-on-write technologies supported by the NAS device. These technologies avoid the overhead of copying backup files over the network and can also dramatically reduce restore times.

Because backups stored on the NAS are vulnerable to the same failures as the database files stored on the NAS, we recommend protecting those backups by copying them to alternate media.

Caution: Using NAS backup technologies without SQL Server VDI support can produce corrupted backups. For example, pages may be torn, or the log files and data files may be inconsistent with each other if they are stored on separate devices. SQL Server may not detect these torn pages or inconsistencies until the database is restored and the corrupted data is accessed. Microsoft does not support the use of NAS backup technologies that are not integrated with SQL Server.

Backup vendor and NAS vendor support for the SQL Server VDI varies. For more information about VDI support, contact your NAS and backup software vendors.

Before deploying a NAS solution for SQL Server databases, ask the NAS vendor whether the end-to-end solution design is appropriate for database use. Most NAS vendors have best-practice guides and certified configurations for this purpose. We also recommend benchmarking I/O performance to make sure that none of the I/O factors mentioned earlier become a bottleneck for your application.

The following describes the behavior of network-based database files in Microsoft SQL Server 2005, Microsoft SQL Server 2000, and Microsoft SQL Server 7.0, with and without trace flag 1807. "Mapped" syntax refers to a drive letter that has been associated with a network path by the NET USE command. "UNC" syntax refers to a direct reference to a network path such as \\Servername\Sharename.
  • In SQL Server 7.0 without trace flag 1807, using the backward-compatible DISK INIT syntax followed by a CREATE DATABASE statement with either mapped or UNC syntax raises error 5105.
  • In SQL Server 7.0 with trace flag 1807, using the backward-compatible DISK INIT syntax followed by a CREATE DATABASE statement with mapped syntax creates the file successfully. Using DISK INIT with UNC syntax raises error 5105.
  • In SQL Server 2005, SQL Server 2000, or SQL Server 7.0 without trace flag 1807, running a CREATE DATABASE statement with mapped or UNC syntax raises error 5105 on SQL Server 7.0 and error 5110 on SQL Server 2000.
  • In SQL Server 2005, SQL Server 2000, or SQL Server 7.0 with trace flag 1807, a CREATE DATABASE statement with mapped or UNC syntax succeeds.
SQL Server supports network-based files with trace flag 1807 only in non-failover-cluster installations of SQL Server. In SQL Server 2005 and SQL Server 2000, a failover cluster installation of SQL Server cannot use network-based files, because the Microsoft Cluster Service (MSCS) cluster administrator must recognize and register the storage devices.
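For illustration only, a minimal sketch of what enabling this looks like (the server, path, and database names are placeholders; as the article stresses, this configuration is generally discouraged):

-- Hedged sketch: trace flag 1807 allows CREATE DATABASE to accept network paths.
-- \\FileServer\SqlData and NetworkDb are placeholders.
DBCC TRACEON (1807, -1);

CREATE DATABASE NetworkDb
ON ( NAME = NetworkDb_data, FILENAME = '\\FileServer\SqlData\NetworkDb.mdf' )
LOG ON ( NAME = NetworkDb_log, FILENAME = '\\FileServer\SqlData\NetworkDb_log.ldf' );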

Additional information

Incorrect use of database software with a NAS product, or use of a database on an incorrectly configured NAS product, can result in data loss, up to and including loss of the entire database. If the NAS device or the network software does not fully honor data guarantees such as write ordering and write-through, a hardware, software, or power failure can seriously compromise data integrity.

References

For more information about write ordering and write-through in SQL Server, see the following article in the Microsoft Knowledge Base:
234656  INF: Using disk drive caching with SQL Server
SQL Server Books Online topic: "Trace Flags"

posted by LifeisSimple
2010. 12. 29. 10:55 Brain Trainning/DataBase

Source: http://www.sqlservercentral.com/articles/Partitioning/71657/

Automate Sliding Window Partition Maintenance: Part III

By Hugh Scott, 2010/12/28

Total article views: 1240 | Views in the last 30 days: 1240

Introduction

This series of articles is designed to demonstrate in a practical manner how to implement partitioning in SQL Server. In particular the focus is on designing and maintaining a "sliding window" for the purpose of purging old data from the database with minimal performance overhead.

Fundamentally, there are two administrative tasks needed to maintain a sliding window for data partitioning:

  1. You have to create new partitions to store new data; and
  2. You have to drop data from the trailing partition.

Each process poses its own unique challenges. In this article, I will review the functionality of MergePartition.ps1, a script that encapsulates and automates the following tasks:

  1. Finds the tables associated with the partition scheme (set as a parameter),
  2. Creates copies of the tables (staging tables),
  3. Switches the data out of the targeted partition boundary into the staging tables,
  4. (optionally) drops the staging tables,
  5. (optionally) merges the partition boundary; and
  6. (optionally) drops the file group with the associated data files

As noted previously, a huge chunk of this was adapted from Stuart Ozer's PartitionManager C# class library.
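For reference, a minimal T-SQL sketch of what the script ends up issuing for each table when it drops the trailing partition (the table, staging table, and partition function names here are placeholders; the article only names the partition scheme ps_FACT_DATA_DATE):

-- 1. Switch the trailing partition out to an empty staging table with an identical structure.
ALTER TABLE dbo.FACT_DATA SWITCH PARTITION 2 TO dbo.FACT_DATA_STAGING;

-- 2. Merge the now-empty boundary so the partition disappears from the function.
ALTER PARTITION FUNCTION pf_FACT_DATA_DATE()
MERGE RANGE ('2009-08-01');

-- 3. Drop the staging table (and, optionally, the filegroup) to reclaim the space.
DROP TABLE dbo.FACT_DATA_STAGING;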

Running the Script

With the database set up (using the script from the first article) and the MergePartition.ps1 script downloaded, you are ready to start PowerShell and run a couple of statements. Note that unless you have the ability to sign code, you will need to set PowerShell's execution policy to Unrestricted in order to run this script. To do so, enter Set-ExecutionPolicy Unrestricted.

Before running the script, open a query window and run:

SELECT * FROM partition_info ORDER BY TableName, partitionNumber

Review the results to see what partitions exist and how much data is in each partition.

First, let's run the script with the minimum possible number of parameters:

PS C:\> ./MergePartition.ps1 <server_name> PartitionMaintTest ps_FACT_DATA_DATE

This will result in removing partitionNumber 2. (Note that the other partitions will be renumbered sequentially; you will only know that the correct partition has been removed by verifying that the lower boundary date has changed.)

Note that the data in the FACT_EMPTY partition is not affected. You can run the above statement as many times as you like (well, up to the limit of the number of partitions that were created): the result will be that partition number 2 will be removed. The data in FACT_EMPTY will remain untouched.

Next, you can specify a specific boundary date to remove.

PS C:\> ./MergePartition.ps1 <server_name> PartitionMaintTest ps_FACT_DATA_DATE "08/01/2009"

Note that running the script in this manner may result in a "gap" appearing in your data. You will remove a month of data from the middle of your table (not that I speak from experience!).

Other Considerations

The major challenge in maintaining the sliding window partition scheme has been working with the developers to ensure that new fact tables (there haven't been two) and changes to existing fact tables (there have been a few) have been consistent with the established partition scheme. In particular new indexes have to be carefully designed and must respect the storage alignment requirement.

Beyond that, up until these most recent changes, I had to execute the scripts manually once per month. This was partly because as a new process, I did not fully trust automating it until I had seen it run successfully in production a few times. It was also because I needed to update the scripts to automatically calculate boundary values and add some additional error traps and checks.

Business Results

So, at the end of the day, what's the value proposition to the business for all this?

In a nutshell, for me, it boils down to consistent, reliable performance and ease of administration. These are trivial examples with small amounts of data. However, when running against our data warehouse, the MergePartition.ps1 script removes millions of records, representing a month's worth of activity around the country, in just about the same amount of time (a few seconds).

By contrast, our old data mart contains data going back more than 7 years. Performance has steadily declined over the years as more and more data accumulates. It is a vendor database and we are not permitted to make changes to the schema (and we are just about to retire it anyway). The application does not contain a purge function (but we can get one custom written for us for a *small* fee!!!). In short, we're stuck.

Now, we have an agreed-upon strategy for purging data from the data mart, and the process takes only seconds to complete. The amount of data in the database remains relatively constant, and our users benefit from better performance for their reports.

The Series

You can read all the articles in this series:

Resources:

PartitionDBSetup.sql | MergePartition.ps1


posted by LifeisSimple
2010. 12. 21. 19:13 Brain Trainning/PRG Language

How to use Microsoft SQL Server from PHP.


To use ODBC, Microsoft SQL Server, Sybase, and so on from PHP, the FreeTDS library is needed in order to compile the mssql extension.

FreeTDS supports the Microsoft SQL (TDS) protocol versions 4.2, 7.0, and 8.0.

The installation steps are as follows.

Site: http://www.freetds.org
Download: FreeTDS 0.82

Download the archive into a suitable directory (e.g. /usr/local/src) using wget or a similar tool.
(Working as root for convenience.)

Installing the FreeTDS library
# tar zxvf freetds-stable.tgz
# cd freetds-0.82
# ./configure        (installs the library under /usr/local by default)
# make
# make install
# ldconfig           (add /usr/local/lib to /etc/ld.so.conf first)

Compiling the PHP mssql extension
Change to the directory where the PHP source was extracted during installation.
# cd /usr/local/src/php-5.x.x
# cd ext/mssql
# phpize             (generates the configure script)
# ./configure --with-mssql=/usr/local
# make
# cp .libs/mssql.so /(PHP extension directory*)
* The PHP extension directory is the directory pointed to by extension_dir in php.ini. It is usually set to "./", but it may point elsewhere if it was set explicitly.

# vi /usr/local/lib/php.ini
Add extension=mssql.so.
If there is no php.ini file, copy /usr/local/src/(PHP source directory)/php.ini-dist to /usr/local/lib. If you specified a different ini directory when compiling PHP, copy it to that directory instead.

# php -m | grep mssql          (verify the mssql extension is loaded before starting Apache)
# /etc/init.d/httpd restart    or    /usr/local/apache/bin/apachectl restart

Check phpinfo().
[Screenshot: phpinfo() output showing the mssql extension]

If the MSSQL extension appears there as above, the integration is working.
However, FreeTDS itself still needs to be configured before connections will actually work; that part will be covered separately.



MS-SQL integration in php.ini
extension_dir = c:/php5/extensions/ 
extension=php_mssql.dll 

$db=mssql_connect("localhost","sa","xxxg"); 
mssql_select_db("suhoi", $db); 

# tar xvfpz freetds-stable.tgz 
# cd  freetds-0.62.4/ 
# ./configure --with-tdsver=8.0 --enable-msdblib --enable-dbmfix --with-gnu-ld --enable-shared \
  --enable-static --prefix=/usr/local/freetds
# make 
# make install 


Add the following to /usr/local/freetds/etc/freetds.conf:

host = 192.168.1.34  
port = 1433  
tds version = 4.2  

Since version 0.52, the old interfaces file seems to be useless. I configured the interfaces file the old way and ended up reinstalling three times; it was a real struggle. Be sure to use the freetds.conf file instead.

Also, note that tds version = 4.2 is set above.
Be sure to check this.
Alternatively, you can write 7.0 here and then, in your PHP source, call
putenv("TDSVER=42");
instead. Use whichever you prefer.

PHP configure 
# ./configure --with-mssql=/usr/local/freetds ... (plus your existing PHP options)
Then install PHP and Apache.


======================================================= 
db_con.php 
======================================================== 

$host="123.123.123.123";    // IP address of the server to connect to 
$user="sa";                     // login ID 
$password="!%jell001";    // login password 
$dataname="jeil";            // database name 

putenv("MSSQL=/usr/local/freetds");  
putenv("TDSVER=42");  
$db_ms=mssql_connect($host,$user,$password); 
mssql_select_db($dataname, $db_ms) or die("Failed to connect to the database.");

posted by LifeisSimple