2010. 6. 27. 11:54 Photograph by ../일상

HP Mini 5101 (ENERGY STAR)

Select the operating system of the product in use:

» Linux
» Microsoft Windows 7 Enterprise (32-bit)
» Microsoft Windows 7 Home Basic (32-bit)
» Microsoft Windows 7 Home Premium (32-bit)
» Microsoft Windows 7 Professional (32-bit)
» Microsoft Windows 7 Starter (32-bit)
» Microsoft Windows 7 Ultimate (32-bit)
» Microsoft Windows Vista Business (32-bit)
» Microsoft Windows Vista Enterprise (32-bit)
» Microsoft Windows Vista Home Premium (32-bit)
» Microsoft Windows Vista Ultimate (32-bit)
» Microsoft Windows XP Home Edition
» Microsoft Windows XP Professional

posted by LifeisSimple
2010. 6. 27. 11:52 Photograph by ../일상
The link is here:

http://h20566.www2.hp.com/portal/site/hpsc/public/psi/swdHome/?lc=ko&lang=ko&lang=ko&sp4ts.oid=3973866&cc=kr&cc=kr&ac.admitted=1277606469253.876444892.199480143

HP Mini 5101

Select the operating system of the product in use:

» Linux
» Microsoft Windows 7 Enterprise (32-bit)
» Microsoft Windows 7 Home Basic (32-bit)
» Microsoft Windows 7 Home Premium (32-bit)
» Microsoft Windows 7 Professional (32-bit)
» Microsoft Windows 7 Starter (32-bit)
» Microsoft Windows 7 Ultimate (32-bit)
» Microsoft Windows Vista Business (32-bit)
» Microsoft Windows Vista Enterprise (32-bit)
» Microsoft Windows Vista Home Premium (32-bit)
» Microsoft Windows Vista Ultimate (32-bit)
» Microsoft Windows XP Home Edition
» Microsoft Windows XP Professional

posted by LifeisSimple
2010. 6. 23. 00:12 Photograph by ../일상


Posted from an iPod.

posted by LifeisSimple
2010. 6. 18. 11:14 Brain Trainning/DataBase
Please refer to this link:

http://www.sqlsecurity.com/FAQs/SQLServerVersionDatabase/tabid/63/Default.aspx


Dang... do they think patching is child's play?
It's not like typing up a Word document.
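
To look your server up in a version table like that, you first need the current build number. A minimal check using the standard SERVERPROPERTY function (nothing here is specific to the linked page):

-- Current build, service pack level, and edition
SELECT
    SERVERPROPERTY('ProductVersion') AS build,     -- e.g. 9.00.4035.00
    SERVERPROPERTY('ProductLevel')   AS sp_level,  -- e.g. SP3
    SERVERPROPERTY('Edition')        AS edition;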
posted by LifeisSimple
2010. 6. 18. 00:05 Brain Trainning/DataBase
When exactly does the service actually go down while applying a SQL Server service pack?
It is something I had never paid much attention to until now, but these days I find myself caring about it and taking a real interest...

Roughly speaking, the outage is not one continuous stretch; the service is interrupted at the following points:

1. While the Database Services component is updated: about 10 minutes
2. While the SQL Server Native Client is updated: about 5 minutes

That was about all the interruption I saw...

And then, finally, there is the reboot at the end. :)

So none of it takes very long... still, worth keeping in mind.
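
If you want to see exactly when that final restart happened, one common trick is to read tempdb's creation date, since tempdb is rebuilt every time the service starts. A minimal sketch using the standard sys.databases catalog view:

-- tempdb is recreated at every SQL Server start, so its create_date
-- is effectively the time of the last service restart
SELECT create_date AS last_restart
FROM sys.databases
WHERE name = N'tempdb';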



posted by LifeisSimple
2010. 6. 7. 15:21 Brain Trainning
I find it more convenient to toggle Korean/English with Shift-Space, so this is a file I tracked down after some searching.

Just double-click it to apply.

It was copied from somewhere, but I can't remember where...

posted by LifeisSimple
2010. 6. 3. 12:02 Brain Trainning/DataBase
1. Make sure each of your production databases is available and accessible to its users.
2. Check all processes - such as clustering, replication, service broker, log shipping, and database mirroring - to verify that they are running correctly.
3. Check OS, SQL Server, security, and other logs for unusual events.
4. Verify that all scheduled jobs have run successfully, including checking for long-running or "hung" jobs.
5. Confirm that backups have been made and successfully saved to a secure location (see the T-SQL sketch after this list).
6. Monitor disk space to ensure your SQL Servers won't run out of disk space. For best performance, all disks should have 15% or more free space.
7. Throughout the day, periodically monitor SQL Server performance using Performance Monitor, Profiler/SQL Trace, or other performance monitoring tools.
8. Regularly monitor and identify blocking issues.
9. Keep a log of any changes you make to servers, including documentation of any performance issues you identify and correct.
10. Regularly restore backups to a test server to verify that you can restore them. You don't need to restore all backups every day, but do it often enough to be confident that you have good backups.
11. Take some time to learn something new as a DBA to further your professional development.
12. Automate as many of these day-to-day tasks as possible.
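
For item 5, a quick check against the standard msdb.dbo.backupset table shows the most recent full backup per database. This is a minimal sketch, not a complete backup audit:

-- Most recent full backup per database; NULL means no full backup recorded
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'              -- 'D' = full database backup
WHERE d.name <> N'tempdb'
GROUP BY d.name
ORDER BY last_full_backup;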

This list of best practices is not intended to be complete, but to get you started on your way to becoming an exceptional DBA. Always keep in mind that these suggestions are generic, and may or may not apply to your specific situation.
Brad McGehee's Day-to-Day DBA Best Practices

Source: RedGate

posted by LifeisSimple
2010. 5. 24. 17:41 Brain Trainning/DataBase

Optimising Server-Side Paging - Part II

Introduction

In part I of this series, we looked at an optimisation that is useful when paging through a wide data set.

This part examines four methods commonly employed to return the total number of rows available, at the same time as returning the single page of rows needed for display.

Each tested method works correctly; the focus of this article is to identify the performance characteristics of each method, and explore the reasons for those differences.

Sample Data

This part uses a single table containing one million rows of meteorological data collected at an imaginary weather station.

The code to create a test database, load the sample data, and run the full test suite is included in the Resources section at the end of this article.

Tested Methods

The Count Over Method

The first tested method uses the OVER clause extension to the COUNT aggregate. The syntax needed to count all the rows in the query (not just those on the requested page) is very simple:
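
The article's original snippet did not survive this copy, but the query takes roughly the following shape (dbo.Readings and its columns are placeholder names, not the article's):

-- Return one page of rows plus the count of the whole set in a single pass
WITH Paged AS
(
    SELECT  reading_date,
            temperature,
            ROW_NUMBER() OVER (ORDER BY reading_date) AS rn,
            COUNT(*)     OVER ()                      AS total_rows
    FROM    dbo.Readings
)
SELECT  reading_date, temperature, total_rows
FROM    Paged
WHERE   rn BETWEEN 1 AND 25   -- first page of 25 rows
ORDER BY rn;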

The Double Row Number Method

The second method uses two ROW_NUMBER functions to determine the total number of rows, using a technique described in this SQL Server Central article by Robert Cary.

The basic idea is to number the rows in the whole set twice: once in ascending order, and once in descending order. It turns out that the sum of these two numbers (in every row) equals the count of rows in the whole set, plus one. This neat trick can be accomplished with the following code:
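
Again, the original snippet is missing from this copy; a sketch of the technique with the same placeholder names follows (note that the ORDER BY key must be unique, or tie-broken, for the two numberings to pair up consistently):

-- Number the set both ways; for every row, rn_asc + rn_desc - 1 = total rows
WITH Numbered AS
(
    SELECT  reading_date,
            temperature,
            ROW_NUMBER() OVER (ORDER BY reading_date ASC)  AS rn_asc,
            ROW_NUMBER() OVER (ORDER BY reading_date DESC) AS rn_desc
    FROM    dbo.Readings
)
SELECT  reading_date,
        temperature,
        rn_asc + rn_desc - 1 AS total_rows
FROM    Numbered
WHERE   rn_asc BETWEEN 1 AND 25
ORDER BY rn_asc;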

The Sub-query Method

The third idea is to use a simple COUNT sub-query, which duplicates the conditions in the main paging query.
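
As a sketch, again with placeholder names in place of the article's lost snippet:

-- A scalar COUNT sub-query rides along with the page of rows
SELECT  k.reading_date,
        k.temperature,
        (SELECT COUNT(*) FROM dbo.Readings) AS total_rows
FROM
(
    SELECT  reading_date,
            temperature,
            ROW_NUMBER() OVER (ORDER BY reading_date) AS rn
    FROM    dbo.Readings
) AS k
WHERE   k.rn BETWEEN 1 AND 25
ORDER BY k.rn;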

The Indexed View Method

The last of the four methods uses an indexed view that contains an aggregated record count per day. The total record count is calculated by summing daily record count subtotals.

Using the view to compute the record count:
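
The view definition and counting query were also lost in the copy; the idea, sketched with placeholder names (the CAST to the date type assumes SQL Server 2008):

-- One pre-aggregated subtotal row per day; COUNT_BIG(*) and SCHEMABINDING
-- are required for a view to be indexable
CREATE VIEW dbo.DailyCounts
WITH SCHEMABINDING
AS
SELECT  CAST(reading_date AS date) AS reading_day,
        COUNT_BIG(*)               AS day_count
FROM    dbo.Readings
GROUP BY CAST(reading_date AS date);
GO

CREATE UNIQUE CLUSTERED INDEX cuq_DailyCounts
ON dbo.DailyCounts (reading_day);
GO

-- Sum the daily subtotals (695 rows in the article's data) instead of
-- counting a million rows; NOEXPAND makes editions other than Enterprise
-- use the view's index
SELECT  SUM(day_count) AS total_rows
FROM    dbo.DailyCounts WITH (NOEXPAND);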

Test Results

Each test uses the same basic paging mechanism described in part I of this series, with a small section of code added to count the overall total number of rows. The test query includes all one million test rows in the paged data set.

The tests were run on a single-core machine running SQL Server 2008 Developer Edition, version 10.0.2766. The code has also been tested on SQL Server 2005 Developer Edition, version 9.0.4285.

All system caches were cleared before each test run, and the SQL Server read-ahead mechanism was disabled.

Each test uses the same million-row data set, with 25 rows per page. Three tests were run for each method, returning data from the first, last, and middle pages of the set.

The data concerning physical reads, logical reads, CPU time, and elapsed time were obtained from the sys.dm_exec_query_stats dynamic management view, and validated against Profiler output. Buffer pool usage was determined from sys.dm_os_buffer_descriptors. Memory grants were obtained from actual execution plans generated on separate runs.

For each performance category in the summary tables below, the best results are shown in green, and the worst in orange.

(Summary tables for the first, middle, and last pages: see the original article linked below.)

Analysis

Count Over

This method performs a very large number of logical reads, and requires a memory grant of almost 46MB for sorting. A look at the relevant part of the execution plan for this method reveals the causes:

COUNT(*) OVER() is implemented using a special kind of sub-expression spool, known as a Segment Spool. The idea is to break the input up into groups (the Segment iterator), write each group to a worktable (the Spool), count the rows using a Stream Aggregate, and then join the count back onto the original rows as a new column.

The high number of logical reads incurred by this method is caused by the joins and by replaying the spooled rows twice: once to compute the row count, and then again to join the count back onto each row. The logical writes are caused by writing the rows to the spool.

The large memory grant is requested by the highlighted Sort operator. In current versions of SQL Server, the optimiser introduces this sort to guarantee the order of rows presented to a TOP operator later in the plan (not shown for space reasons). The required sort order is the same as that provided by the initial Index Seek - perhaps future optimisers will be able to take advantage of that and avoid this expensive sort altogether.

The million-row sort also contributes to the high CPU utilisation of this method.

Double Row Number

This method is the slowest overall, with high CPU usage, a large memory grant, and the largest number of physical reads.

Although the initial Index Seek provides rows in the correct order for the first row numbering operation, an explicit sort is required for the second.

Another explicit sort (the Top N Sort) is required to select the keys for the single page requested. Ironically, this sort puts the rows back in the original order provided by the Index Seek.

The two sorts both have to process one million rows, though the memory granted for the first sort can be reused by the second.

Sub-Query

The sub-query method produces a nice simple plan, and performs very well:

The top row of the plan performs the count sub-query. Since the query is guaranteed to produce a single row, it can be joined directly to the Index Seek that provides the keys for the page of data to return.

The lower Index Seek provides page keys in sorted order, so for page one, it only needs to return the first 25 keys. The biggest cost in this plan is counting the million rows in the Stream Aggregate.

Indexed View

This is the best-performing solution overall:

This plan is very similar to that produced by the sub-query method, but instead of counting one million rows, the top row of the plan is able to sum the partial counts stored in the indexed view - so only 695 rows flow through the Stream Aggregate (rather than one million).

This dramatic reduction in row count pays dividends across all the performance categories. In particular, it reduces the number of data and index pages which must be read in to the data cache from disk.

Conclusion

The count over and double row number methods are not really suited to large data sets, due to the cost of the sorts and spools.

The sub-query method is much more efficient, and is limited only by the costs associated with counting the qualifying rows.

The indexed view method improves further on the sub-query method, by maintaining useful partial aggregates. This is similar to the idea of keeping counts in a separate table using a system of triggers.

Resources:

Optimising Server-Side Paging Part II.sql

By Paul White, 2010/05/24


Source: http://www.sqlservercentral.com/articles/paging/70120/

posted by LifeisSimple
2010. 5. 17. 19:07 Brain Trainning/DataBase

Replication Across Non-Trusted Domains or Using the Internet

Replication over the internet or across non-trusted domains is usually performed using a virtual private network (VPN), and consequently the configuration is much the same as that used on a LAN. This article outlines what to do if such a VPN is not available. While it's true that in almost all cases you wouldn't consider making SQL Server available directly to the internet, loosely-coupled networks in a hosted environment are commonplace. These systems ensure integrity by the use of firewall rules, resulting in DMZ/ECZ layers and causing serious issues for the would-be replicator. This is very much a 'how-to' guide, as the intricacies of network connectivity setup would otherwise make the article too long.
In replicating over non-trusted networks, the main additional considerations over and above a normal setup are:

(a) when configuring the subscriber, the publisher will not initially be visible in Enterprise Manager, and cannot be added in the usual way.

(b) after the snapshot has been created, it is normally accessed by the merge/distribution agent using the repldata share, which will be unavailable over the internet.

(c) the replication agents normally run under the context of a Windows user account and are impersonated at the distributor/publisher; however, an AD lookup at the publisher when accessing the snapshot files will not be able to find the Windows user.

What follows is a list of the configuration changes necessary to solve these issues.

(i) Network Configuration - Ports etc

SQL communication needs to be enabled on the port defined at the publisher. For a default instance this is typically port 1433, while FTP uses port 21 by default. So, rules allowing the subscriber's IP address access on these ports through the firewall need to be set up, implying that the subscriber must not get a leased DHCP IP address, but have a fixed address of its own. In SQL Server 2005 merge replication is possible over port 443 (SSL/HTTP) - more on that in another article - but for our case, the security is provided by the use of logins and passwords, firewall rules, and encryption if necessary.

(ii) Publication Configuration

The repldata share on the publisher that is normally used to hold the snapshot initialization files will be unavailable - discounting the use of a guest Windows login at the publisher, which is simply too insecure a method to consider - so an alternative initialization method is needed. The main choices are automatic FTP and an alternative snapshot location at the subscriber. For the latter, when a pull subscription is created, it can be pointed at a local directory for the snapshot files. I often use this method, as my snapshot files are so large (>40GB) that I save a lot of time doing the whole thing manually. Basically, this involves creating the snapshot, compressing the directory using WinZip 9.0, manually FTPing it to the subscriber, and then unzipping it locally. After that I use the alternative snapshot location (@alt_snapshot_folder) on the subscriber to initialize. This method is used because although compression using CAB files is available within the publication wizard, it is limited to a maximum size of 2GB.


This article, however, will concentrate on the inbuilt, automatic FTP method, which is configured at the publisher on the snapshot location tab. If all the subscribers need to be set up in the same way (non-trusted domains, etc.) then only the second checkbox at the top of this tab is selected. The snapshot folder lies under the FTPROOT directory, and 'Subscribers can access this folder using FTP' obviously needs to be selected. If you do choose to compress, the files will be decompressed into the Temp folder of the subscriber, so you must check that you have enough space there. The client path is the path from the FTP root directory, so in this case it is \TRSnapshot\ftp. The login can be left as the default for anonymous access (not recommended), or a valid Windows login at the publisher that has read access to the FTPROOT directory is selected – PaulI in this case.

(iii) alias configuration at subscriber

In order for the subscriber to 'see' the publisher, an alias must be created in the Client Network Utility before registering the publisher in Enterprise Manager. This is mandatory, as the use of an IP address in Enterprise Manager will later result in various errors. Be sure to:
(a) use TCP/IP and not named pipes, otherwise you'll get a "SQL server does not exist or access is denied." error.
(b) create an alias that has exactly the same name as the publisher/distributor.


(iv) hosts file configuration at the subscriber

The FTP request at the subscriber needs to be able to associate the publisher/distributor's NETBIOS name with its IP address. As we have a non-trusted environment, this entry won't exist on the DNS of the subscriber's LAN, so we manually add it to the HOSTS file on the subscriber, located in C:\WINNT\system32\drivers\etc. After the hashed-out section, the file should look like the text below, where DOHXXX-SQL is the publisher/distributor's NETBIOS name.

127.0.0.1 localhost
62.6.139.105 DOHXXX-SQL


If this is omitted, the error message will be the FTP initialization error: “Message: The process could not connect to FTP site 'DOHCOL-SQL' using port 21”.

(v) Create a PULL Subscription

When using the Pull Subscription Wizard, the ‘Choose Publication’ form allows browsing of the publisher’s publications. The publisher can be added to EM at this stage or already be configured. If everything has been correctly set up in the previous steps, the publications list should be visible. The synchronization agent login should be set to a SQL login that has already been added to the PAL (pass-through is possible, but let’s leave that for now). If the subscriber is registered at the publisher, this can be a named subscription; otherwise it’ll be anonymous. The only other thing to set up is to select the ‘Download using FTP’ option on the Snapshot Delivery form.
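
For reference, the same pull merge subscription can also be scripted rather than created through the wizard. This is only a sketch: the publication, database, and login names are placeholders, and the exact parameters of sp_addmergepullsubscription / sp_addmergepullsubscription_agent should be checked against the documentation for your SQL Server version:

-- At the subscriber: create the pull merge subscription
EXEC sp_addmergepullsubscription
    @publication     = N'TRPublication',
    @publisher       = N'DOHXXX-SQL',      -- must match the alias from step (iii)
    @publisher_db    = N'PubDB',
    @subscriber_type = N'anonymous';

-- Agent job for the subscription: SQL login from the PAL, snapshot over FTP
EXEC sp_addmergepullsubscription_agent
    @publication             = N'TRPublication',
    @publisher               = N'DOHXXX-SQL',
    @publisher_db            = N'PubDB',
    @publisher_security_mode = 0,           -- SQL Server authentication
    @publisher_login         = N'repl_login',
    @publisher_password      = N'********',
    @use_ftp                 = N'true';     -- download the snapshot via FTP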

posted by LifeisSimple