Tag Archives: sql server

Old Blog Post: SQL Server 2005 Performance Dashboard Reports

Originally Posted on December 17, 2007

Let me start off by saying this is by no means new. Microsoft released the SQL Server 2005 Performance Dashboard Reports back in early March of 2007. It took me until now to stumble across the tool, mainly because I’ve been out of the thick of things from a benchmark perspective. I spent the better part of the day playing with the reports. They’re quite impressive and easy to configure.

The Performance Dashboard Reports are targeted toward SQL Server administrators and other users; the objective of the report set is to act as both a health monitoring and diagnostic tool. Although it relies upon Reporting Services definition files (.rdl), Reporting Services does not need to be installed to use the Performance Dashboard Reports. This custom report set relies upon SQL Server’s dynamic management views (DMVs) as a data source, providing the wealth of data the dynamic management views contain while insulating viewers of the information from the views and the structures underlying them. No additional data sources, capture, or tracing are required to access and use this storehouse of performance information. Other obvious benefits of these prefabricated views are the constant availability of the information they contain and their inexpensive nature (from the tandem perspective of collection and querying) as a source of server monitoring.
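A quick way to see the kind of raw data these reports summarize is to query one of the DMVs directly. The sketch below is my own illustration, not part of the dashboard itself; it pulls the top waits from the standard wait-stats view:

```sql
-- Top waits by cumulative wait time; the dashboard reports present
-- similar DMV data without requiring you to write queries like this.
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```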

The report set comes with a primary dashboard report file, as we shall see in the hands-on installation procedure that follows. This report file is loaded directly as a custom report in SQL Server Management Studio. The other Performance Dashboard Reports are accessed via the Reporting Services drill-through mechanism, each path of which is initially entered when the user clicks a navigation link on the main page. The linkages are pre-constructed, and, once the primary dashboard report is loaded as a Custom Report in Management Studio, the rest of the reports work “out of the box” automatically, without any additional setup.

You have to start by installing the add-on, which takes about 20 seconds. Once you have run the installer, go to the directory in which it placed its files. There you will find a SQL script called setup.sql. Run this against the SQL Server instance you want to report on. The instructions are a little misleading: they make it seem like you have to run the script for every schema in your 2005 instance. That’s not the case; it’s run once per named instance you have installed. From the same directory, open the performance_dashboard_main.rdl file. It will display as an XML file; close it, and you are now ready to play with the Dashboard. To open the Dashboard, open SQL Server Management Studio and right-click on the named instance. From there, select Reports followed by Custom Reports. If you navigate to your install directory, you will see the performance_dashboard_main.rdl file again. Open it and voilà, you have your report.
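If you prefer scripting the per-instance step rather than using Management Studio, it can be run with sqlcmd. The server name and install path below are placeholders for your environment:

```shell
# Run the dashboard setup script once per named instance (not once per schema).
# "MYSERVER\SQL2005" and the install path are placeholders; adjust for your environment.
sqlcmd -S "MYSERVER\SQL2005" -E -i "C:\Program Files\Microsoft SQL Server\90\Tools\PerformanceDashboard\setup.sql"
```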

Check out this article for screenshots.

Start with this article from William Pearson. He breaks down every aspect of the report set. Another article, from Brad McGehee on SQL-Server-Performance.com, is not as descriptive as the first, but is pretty good. While I was on the SQL-Server-Performance.com site, I came across other links worth taking a look at.
Other Interesting Links

* SQL Server 2005 Waits and Queues
* DBCC SHOWCONTIG Improvements in SQL Server 2005 and comparisons to SQL Server 2000
* Troubleshooting Performance Problems in SQL Server 2005
* Script Repository: SQL Server 2005
* Top 10 Hidden Gems in SQL Server 2005
* Top SQL Server 2005 Performance Issues for OLTP Applications
* Storage Top 10 Best Practices

Microsoft Day One Review

Executive Summary
Bob Taylor came on-site for day one of a five-day consulting engagement between Microsoft and Blackboard. The focus of this engagement is SQL Server 2005 performance and optimization. Bob arrived in the early afternoon. I started the session by reviewing the Blackboard Reference Architecture in order to provide Bob with a high-level summary of the Blackboard Academic Suite. I then discussed our most recent EDB benchmark, emphasizing the changes we made in order to demonstrate our tuning and optimization process. We spent the rest of the afternoon reviewing the system currently housing our Windows EDB benchmark. A breakdown of our discussions can be found below.

General Discussion
As part of the review of EDB, Bob and I jumped around the system (perf-2950-db2) reviewing the changes we had implemented as part of the EDB benchmark thus far. The first thing that came up was the location of the transaction logs for each of the Blackboard schemas. Bob echoed a point that many others have made before: we need to move the transaction logs to another LUN. In our current deployment, data and transaction log files reside on the same physical LUN.

The second item we discussed was our recovery model. There are three different recovery models (simple, bulk-logged, and full). All of the schemas are configured for Full, which means everything is being logged. Below are some points from a somewhat relevant, but outdated, article. I am going to discuss this in more detail with Bob tomorrow.

The simple recovery model allows you to recover data only to the most recent full database or differential backup. Transaction log backups are not available because the contents of the transaction log are truncated each time a checkpoint is issued for the database.

The full recovery model uses database backups and transaction log backups to provide complete protection against failure. Along with being able to restore a full or differential backup, you can recover the database to the point of failure or to a specific point in time. All operations, including bulk operations such as SELECT INTO, CREATE INDEX and bulk-loading data, are fully logged and recoverable.

The bulk-logged recovery model provides protection against failure combined with the best performance. In order to get better performance, the following operations are minimally logged and not fully recoverable: SELECT INTO, bulk-load operations, CREATE INDEX as well as text and image operations. Under the bulk-logged recovery model, a damaged data file can result in having to redo work manually based on the operations that are not fully logged. In addition, the bulk-logged recovery model only allows the database to be recovered to the end of a transaction log backup when the log backup contains bulk changes.
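For reference, the recovery model is set per database. The sketch below shows how to inspect and switch models; the database name is a placeholder:

```sql
-- Inspect the current recovery model for every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database between models (database name is a placeholder).
ALTER DATABASE BB_bb SET RECOVERY FULL;         -- full logging, point-in-time restore
ALTER DATABASE BB_bb SET RECOVERY BULK_LOGGED;  -- minimal logging for bulk operations
ALTER DATABASE BB_bb SET RECOVERY SIMPLE;       -- log truncated at each checkpoint
```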

Review of TempDB
TempDB is one of the most critical areas for optimizing the performance of SQL Server. Most complex query operations take place in this region of SQL Server, and when it is configured incorrectly, performance can suffer dramatically. By default the space is not meant to shrink, nor is auto-shrinking it recommended, and there is a penalty each time the space grows. According to Bob, we need to do a more effective job of sizing this space from the start based on the needs of our application.

For example, in our environment TempDB is configured to start at 1GB and grow by a small amount based on an auto-growth policy. After a 5-hour, 4-server cluster test, TempDB grew to 2.1GB and the TempLog grew to 76MB. A quick test shows that a restart of services resizes TempDB back to 1GB. Bob mentioned that clients sometimes complain that the system is slow or sluggish after maintenance in which the database has been restarted. Part of the problem could be that TempDB needs to grow again; a second factor could be the flushing of the procedure cache during the restart operation.

A great resource that Bob pointed me to is this blog from Craig Freedman. He’s apparently really big on describing how query processing works in SQL Server.

Bob also discussed GAM and SGAM (Global Allocation Map and Shared Global Allocation Map), which are important components of TempDB. One nugget Bob gave us: it is recommended to create one TempDB data file per core (data files only; there is no need to split the log). So on our 8-core system, which needs about 2+GB of space, we would create 8 files of roughly 256MB each. Below are a couple of other links worth reading:

  • http://support.microsoft.com/kb/328551
  • http://www.microsoft.com/technet/prodtechnol/sql/2005/workingwithtempdb.mspx
  • http://msdn2.microsoft.com/en-us/library/ms175195.aspx
  • http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/08/24/958.aspx
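Bob’s per-core sizing advice can be applied with ALTER DATABASE. The sketch below pre-sizes TempDB as eight 256MB data files for an 8-core box; the logical file names and the drive path are assumptions for illustration:

```sql
-- Resize the existing primary tempdb data file and add seven more,
-- one data file per core on an 8-core server (path is a placeholder).
-- TempDB is recreated at these sizes on the next service restart.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 256MB);
-- ...repeat through tempdev8.
```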

Installation and Physical Layout Best Practices

The next area we covered was a great overview. Bob first discussed where, or better yet where not, to install SQL Server. He mentioned not installing SQL Server on the same file system as the operating system. He also mentioned that it would be in our best interest to configure TempDB on a third LUN, since its current placement could be the cause of unnecessary paging issues.

While we were talking about paging problems, Bob discussed best practices for virtual memory. Rather than setting virtual memory min/max to 1.5 and 3 times the size of RAM, he mentioned it was in our best interest to set both the min and max to 3 times the amount of RAM on the box. He also mentioned defragmenting the disk before persisting the virtual memory change.

We jumped into the physical file layout of the schemas. Bob wanted to understand why we had 10x4GB files under the PRIMARY filegroup rather than 1x40GB file. I was surprised by this question, but essentially splitting the data across multiple files has no impact on performance unless the files are in different filegroups and on different LUNs. The conversation then jumped to clustered indexes. We looked at a few dynamic management objects worth spending a little more time reviewing:

  • sys.dm_db_index_physical_stats
  • sys.dm_db_index_operational_stats
  • sys.dm_db_index_usage_stats
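As a starting point for that review, here is a minimal sketch of querying the first of those objects for index fragmentation across the current database:

```sql
-- Fragmentation for all indexes in the current database (LIMITED scan mode).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
ORDER BY ips.avg_fragmentation_in_percent DESC;
```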

Other Links to Review

  • http://blogs.msdn.com/psssql/archive/2007/12/18/rml-utilities-for-microsoft-sql-server-released.aspx
  • http://support.microsoft.com/kb/944837
  • http://support.microsoft.com/kb/887057

Quest Software Webinar on SQL Server Under the Hood

Every now and then a webinar invitation comes my way. I typically glance at the invite and either forward it on to the team or throw it away. A few weeks back I received an enticing invitation from Quest to, figuratively, go under the hood of SQL Server 2008. I thought it was going to be a discussion of SQL Server 2008. It turns out it was just another marketing presentation showing off Spotlight for SQL Server.

The session was pretty well attended; there were probably 200 other callers. A DBA from MySpace was the primary presenter. I totally forgot that myspace.com is one of the world’s largest Windows environments. The SQL 2008 learning series I took a few months back actually mentioned that, so it shouldn’t have been news to me. There were a couple of key things I got from the session.

* We have had a Spotlight license in place for a year and are not taking advantage of the tool.
* Spotlight has the ability to recall information from historical events within the database.

The fact that we haven’t been using the Spotlight tools really troubles me. It will be on the team’s 2008 objectives ASAP. In fact, I think I’m going to spend some time coaching our team on how to use the tool to help augment their DBA skills. Collecting historical information can also be pretty enticing. With the EDB model of test at night, analyze during the day, we need a way to look into the system after a night of testing. The question we will need to figure out is whether the data comes from the Spotlight repository or directly from the dynamic management views within SQL Server. If it lives in Spotlight, then no worries, as we can benchmark and restore as much as we want. If it’s in the SQL Server views, then we will need the team not to restore at the end of the night, which would leave us with only one test to review.

There were a few links worth looking at:

  • SQLServerPedia
  • SQLCrunch
  • Quest SQL Server Community Site