SQL Server Database Settings Performance Checklist

Each Database Needs to Be Audited

As part of your performance audit, you need to look at each database on your server and examine some basic database settings. Compared with some of our other performance audit tasks, you will find this audit one of the easiest. For convenience, you may want to make a copy of the chart above, one copy for each database that you will be auditing.

As part of our database settings audit, we will look at two different types of settings: database options and database configuration settings. As in previous sections of our performance audit, we will focus only on the database settings that are directly related to performance and ignore the rest.

Both database options and database configuration settings can be viewed or modified using Enterprise Manager (my preference, as it is easier) or with the ALTER DATABASE command. In addition, for the database options only, you can also use the sp_dboption system stored procedure to view and modify them, but Microsoft is phasing this command out and discourages its use (as of SQL Server 2000).
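
For example, here is a quick sketch of the T-SQL equivalents, assuming a database named MyDatabase (a placeholder name):

    -- Check a single database option (returns 1 = on, 0 = off)
    SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoClose')
    GO

    -- Change the same option with ALTER DATABASE
    ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF
    GO

    -- The older, deprecated equivalent using sp_dboption
    EXEC sp_dboption 'MyDatabase', 'autoclose', 'FALSE'
    GO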

The first section of the database settings performance checklist focuses on database options, and the second section focuses on database configuration settings.

Viewing Database Options

In this section, we will look at only six of the many database options that can affect performance in one way or another. The best way to view the current settings is to use Enterprise Manager, following these steps (the steps assume you are using SQL Server 2000):

  • In Enterprise Manager, display all of the databases for your server.
  • Right-click on the database you want to examine and select “Properties.”
  • From the Properties dialog box, select the “Options” tab.
  • From this screen, you can see all of the relevant database options. Note that not every database option can be seen here, but all of the ones we are interested in are listed. Let’s take a look at the performance-related ones and see how they affect SQL Server’s performance. (If you prefer to work outside the GUI, the same settings can also be checked with a query, as shown below.)
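
    For those who prefer a query to Enterprise Manager, the DATABASEPROPERTYEX function can report each of these options. A minimal sketch, again assuming a placeholder database named MyDatabase:

        -- Each property returns 1 (on) or 0 (off); Updateability returns READ_ONLY or READ_WRITE
        SELECT
            DATABASEPROPERTYEX('MyDatabase', 'IsAutoClose')                 AS AutoClose,
            DATABASEPROPERTYEX('MyDatabase', 'IsAutoCreateStatistics')      AS AutoCreateStatistics,
            DATABASEPROPERTYEX('MyDatabase', 'IsAutoUpdateStatistics')      AS AutoUpdateStatistics,
            DATABASEPROPERTYEX('MyDatabase', 'IsAutoShrink')                AS AutoShrink,
            DATABASEPROPERTYEX('MyDatabase', 'Updateability')               AS Updateability,
            DATABASEPROPERTYEX('MyDatabase', 'IsTornPageDetectionEnabled')  AS TornPageDetection
        GO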

    Auto_Close

    This database option is designed for use with the Desktop version of SQL Server 7.0 and 2000, not for the server versions. Because of this, it should not be turned on (and it is not, by default). What this option does is close the database when the last user disconnects from it. When a connection then requests access to the closed database, it has to be reopened, which takes time and adds overhead.

    The problem with this is that if the database is accessed frequently, which is the most likely case, the database may be closed and reopened often, which puts a large performance drag on SQL Server and on the applications or users making the connections.

    As part of your audit, if you find this option turned on, and you are not using the desktop version of SQL Server, then you will need to research why it was turned on. If you can’t find the reason, or if the reason is poor, turn this option off.
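
    If you do decide to turn it off, the change is a one-line ALTER DATABASE statement; a quick sketch, using MyDatabase as a placeholder name:

        -- Keep the database open between connections
        ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF
        GO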

    Auto_Create_Statistics

    When auto_create_statistics is turned on (which it is by default), statistics are automatically created on all columns used in the WHERE clause of a query. This occurs when a query is optimized by the Query Optimizer for the first time, assuming the column doesn’t already have statistics created for it. The addition of column statistics can greatly aid the Query Optimizer so that it can create an optimum execution plan for the query.

    If this option is turned off, then missing column statistics are not automatically created, which can mean that the Query Optimizer may not be able to produce the optimum execution plan for the query, and the query’s performance may suffer. You can still manually create column statistics if you like, even when this option is turned off.

    There is really no downside to using this option. The very first time that column statistics are created, there will be a short delay while they are built before the query runs, potentially causing that first run to take a little longer. But once the column statistics have been created, each time the same query runs, it should run more efficiently than if the statistics did not exist in the first place.

    As part of your audit, if you find this option turned off, you will need to research why it was turned off. If you can’t find the reason, or if the reason is poor, turn this option on.
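
    The change, and the manual alternative mentioned above, look roughly like this (MyDatabase, dbo.Orders, and CustomerID are placeholder names):

        -- Turn automatic creation of column statistics back on
        ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON
        GO

        -- Or, with the option off, create column statistics manually
        CREATE STATISTICS stat_Orders_CustomerID
            ON dbo.Orders (CustomerID)
        GO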

    Auto_Update_Statistics

    In order for the Query Optimizer to make smart query optimization decisions, the column and index statistics need to be up-to-date. The best way to ensure this is to leave the auto_update_statistics database option on (the default setting). This helps to ensure that the optimizer statistics are valid, helping to ensure that queries are properly optimized when they are run.

    But this option is not a panacea. When a SQL Server database is under very heavy load, sometimes the auto_update_statistics feature can update the statistics on large tables at inappropriate times, such as the busiest time of the day.

    If you find that the auto_update_statistics feature is running at inappropriate times, you may want to turn it off, and then manually update the statistics (using UPDATE STATISTICS) when the database is under a less heavy load.

    But again, consider what will happen if you do turn off the auto_update_statistics feature. While turning this feature off may reduce some stress on your server by not running at inappropriate times of the day, it could also cause some of your queries not to be properly optimized, which could also put extra stress on your server during busy times.

    Like many optimization issues, you will probably need to experiment to see if turning this option on or off is more effective for your environment. But as a rule of thumb, if your server is not maxed out, then leaving this option on is probably the best decision.
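
    If you do decide to take statistics maintenance into your own hands, the change and a manual refresh look roughly like this (MyDatabase and dbo.Orders are placeholder names):

        -- Stop SQL Server from updating statistics automatically
        ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF
        GO

        -- Then refresh statistics yourself during a quiet period,
        -- for a single table or for every table in the database
        UPDATE STATISTICS dbo.Orders
        GO
        EXEC sp_updatestats
        GO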

    Auto_Shrink

    Some databases need to be shrunk periodically in order to free up disk space as older data is deleted from the database. But don’t be tempted to use the auto_shrink database option, as it can waste SQL Server resources unnecessarily.

    By default, the auto_shrink option is turned off, which means that the only way to free up empty space in a database is to do so manually. If you turn this option on, SQL Server will then check every 30 minutes to see if it needs to shrink the database. Not only does this use up resources that could better be used elsewhere, it also can cause unexpected bottlenecks in your database when the auto_shrink process kicks in and does its work at the worst possible time.

    If you need to shrink databases periodically, perform this step manually using the DBCC SHRINKDATABASE or DBCC SHRINKFILE commands, or use SQL Server Agent or a Database Maintenance Plan to schedule regular file shrinking during less busy times.
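
    A quick sketch of what a manual shrink looks like, using MyDatabase and a logical file name of MyDatabase_Data as placeholders:

        -- Shrink the whole database, leaving roughly 10 percent free space
        DBCC SHRINKDATABASE (MyDatabase, 10)
        GO

        -- Or shrink a single data file to a target size of 500MB
        DBCC SHRINKFILE (MyDatabase_Data, 500)
        GO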

    As part of your audit, if you find this option turned on, you will need to research why it was turned on. If you can’t find the reason, or if the reason is poor, turn this option off.

    Read_Only

    If a database will be used only for read-only purposes, such as reporting, consider turning the read_only option on (the default setting is off). This eliminates the overhead of locking and, in turn, can boost the performance of queries run against the database. If you need to modify the database on rare occasions, you can turn the setting off, make your change, and then turn it back on.
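
    For example, toggling the option with ALTER DATABASE might look like this (MyDatabase is a placeholder name):

        -- Put the reporting database into read-only mode
        ALTER DATABASE MyDatabase SET READ_ONLY
        GO

        -- Temporarily make it writable again for a rare change, then set it back
        ALTER DATABASE MyDatabase SET READ_WRITE
        GO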

    Torn_Page_Detection

    Because SQL Server data pages (8K) and the 512-byte sectors that NT Server or Windows Server writes to disk are different sizes, it is possible during power failures, or if you have disk driver or physical disk problems, for your database to become corrupted.

    Here’s why. Every time the operating system writes an 8K SQL Server data page to disk, it must break the data up into multiple 512-byte sectors. After the first 512 bytes of data are written, SQL Server assumes that the entire 8K page has been written to disk successfully. So if the power goes out before all of the 512-byte sectors that make up the 8K SQL Server page have been written, SQL Server does not know what has happened. This is known as a torn page.

    As you can imagine, this corrupts the data page and, in effect, makes your entire database corrupt. There is no way to fix a database corrupted by a torn page except by restoring a known good backup. One of the best ways to prevent this problem is to ensure your server has a battery backup. But even that does not prevent all problems, because a defective disk driver can also cause similar corruption (I have seen this happen).

    If you are worried about getting torn pages in your SQL Server databases, you can have SQL Server tell you when they occur (although it cannot prevent them or fix them after the fact). There is a database option called “torn page detection” that can be turned on and off at the database level. If this option has been turned on and a torn page is discovered, the database is marked suspect, and you have little choice but to restore it from your latest backup.

    In SQL Server 7.0, this option is turned off by default, and you must turn it on for every database you want it on for. In SQL Server 2000, this option is turned on by default for all databases.

    So what’s the big deal, why not just turn it on and be safe? The problem is that turning this feature on hurts SQL Server’s performance. Not much, mind you, but if you already have a SQL Server that is maxed out, it might make a noticeable difference, and you may want to keep this option turned off. As a DBA, you must weigh the pros and cons of using this option and make the best decision for your particular situation.
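
    If you do decide to use it, the option is toggled like any other database option; a quick sketch, with MyDatabase as a placeholder name:

        -- Enable torn page detection (already the default in SQL Server 2000)
        ALTER DATABASE MyDatabase SET TORN_PAGE_DETECTION ON
        GO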

    Viewing Database Configuration Settings

    In this section, we will look at three database configuration settings and examine how they can affect performance. The best way to view these is to use Enterprise Manager, following these steps (the steps assume you are using SQL Server 2000):

  • In Enterprise Manager, display all of the databases for your server.
  • Right-click on the database you want to examine and select “Properties.”
  • From the Properties dialog box, select the “Options” tab to see the compatibility level, select the “Data Files” tab to see the database auto grow setting, and select the “Transaction Log” tab to see the transaction log auto grow setting.
    Let’s take a look at each of the three relevant database configuration settings.

    Compatibility Level

    SQL Server 7.0 and 2000 have a database compatibility mode that allows applications written for previous versions of SQL Server to run under SQL Server 7.0 or 2000. If you want maximum performance from your database, you don’t want to run it in compatibility mode, because not all of the new performance-related features are available in that mode.

    Instead, your databases should be running in native SQL Server 7.0 or 2000 mode (depending on which version you are currently running). Of course, this may require you to modify your application to make it SQL Server 7.0 or 2000 compliant, but in most cases, the additional work required to update your application will be more than paid for with improved performance.

    SQL Server 7.0 compatibility level is referred to as “70” and SQL Server 2000 compatibility level is referred to as “80”.
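
    The current compatibility level can also be checked or changed with the sp_dbcmptlevel system stored procedure; a quick sketch, using MyDatabase as a placeholder name:

        -- Report the current compatibility level
        EXEC sp_dbcmptlevel 'MyDatabase'
        GO

        -- Switch the database to native SQL Server 2000 behavior
        EXEC sp_dbcmptlevel 'MyDatabase', 80
        GO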

    Database and Transaction Log Auto Grow

    We will be discussing both database auto grow and transaction log auto grow together because they are so closely related.

    If you set your SQL Server 7.0 or SQL Server 2000 databases and transaction logs to grow automatically (which is the default setting), keep in mind that every time this feature kicks in, it takes up a little extra CPU and I/O time. Ideally, we want to minimize how often automatic growth occurs in order to reduce unnecessary overhead.

    One way to help do this is to size the database and transaction logs as accurately as possible to their “final” size. Sure, it is virtually impossible to get this right on target. But the more accurate your estimates (and it sometimes takes a while to come up with a good one), the less often SQL Server will have to grow its database and transaction logs automatically, helping to boost the performance of your application.

    This recommendation is particularly important for transaction logs. This is because the more times SQL Server has to increase the size of a transaction log, the more virtual log files SQL Server has to create and maintain, which increases recovery time should your transaction log need to be restored. A virtual log file is used by SQL Server to internally divide and manage the physical transaction log file.

    The default growth amount is 10% for both databases and transaction logs. This automatic growth figure may or may not be ideal for your database or transaction log. If you find that your database or log is growing automatically often (such as daily or several times a week), change the growth percentage to a larger number, such as 20% or 30%. Each time the database or log has to grow, SQL Server suffers a small performance hit, so the larger the growth increment, the less often growth has to occur.

    If your database is very large, 10GB or larger, you may want to use a fixed growth amount instead of a percentage. This is because a percentage growth amount can be very large on a big database. For example, a 10% growth rate on a 10GB database means that when the database grows, it increases by 1GB. This may or may not be what you want. If it is more than you want, then a fixed growth amount, such as 100MB at a time, might be more appropriate.
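
    Both the initial file size and the growth increment can be adjusted with ALTER DATABASE ... MODIFY FILE. A rough sketch, assuming a database named MyDatabase with logical file names MyDatabase_Data and MyDatabase_Log (all placeholder names; one property is changed per statement to stay on the safe side with older versions):

        -- Pre-size the data file close to its expected "final" size
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Data, SIZE = 10240MB)
        GO

        -- Switch from percentage growth to a fixed 100MB increment
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 100MB)
        GO

        -- The transaction log file can be adjusted the same way
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 100MB)
        GO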

    As part of your audit, you will need to carefully evaluate your databases to see how the above advice applies to them, then take the appropriate action.

    Now What?

    Your goal should be to perform this part of the performance audit, described on this page, for each of the databases in each of your SQL Servers, and then use this information to make changes as appropriate, assuming you can.

    Once you have completed this part of the performance audit, you are now ready to audit the use of indexes in your databases.

    *Originally published at SQL-Server-Performance.com

    Brad M. McGehee is a full-time DBA with a large manufacturing company, and the publisher of http://www.SQL-Server-Performance.Com, a website specializing in SQL Server performance tuning and clustering.

    He is an MVP, MCSE+I, MCSD, and MCT (former).

    Brad also runs another website called http://www.WorldClassGear.com. It provides independent gear reviews for backpackers, trekkers, and adventure travelers.
