Standard configuration of tempdb (video)

The tempdb system database is used for many purposes, from materializing temporary tables to storing row version information for snapshot isolation. The default configuration of tempdb may not perform well for many production workloads. In this video, I discuss three important considerations for configuring tempdb for optimal performance, including a more in-depth visualization of why creating more data files can be a big advantage.
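As a sketch of the kind of change the video covers, here is what adding a tempdb data file looks like. The file name, path, and sizes below are placeholders only; match the number of files and their sizing to your own workload and core count.

```sql
-- Hypothetical example: add a second tempdb data file, sized and grown
-- the same as the existing file(s) so allocations stay balanced.
ALTER DATABASE tempdb
    ADD FILE
    (
        NAME = tempdev2,                        -- placeholder logical name
        FILENAME = N'T:\TempDB\tempdev2.ndf',   -- placeholder path
        SIZE = 1024MB,
        FILEGROWTH = 256MB
    );
```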



Video Resources:

Read more about sys.dm_os_waiting_tasks
Read more about sys.dm_os_wait_stats
Read more about DBCC SQLPERF
sys.dm_os_waiting_tasks query to find PAGELATCH_* waits
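The sys.dm_os_waiting_tasks query mentioned above looks for tempdb allocation contention. A minimal version might look like the following; the resource_description column for page latch waits is formatted as database_id:file_id:page_id, and tempdb is always database_id 2.

```sql
-- Find sessions currently waiting on page latches for pages in tempdb.
SELECT
    session_id,
    wait_type,
    wait_duration_ms,
    resource_description  -- format: database_id:file_id:page_id
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE N'PAGELATCH[_]%'
        AND resource_description LIKE N'2:%';  -- database_id 2 = tempdb
```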

An advanced option I didn't mention in the video is to enable Trace Flag 1118 (note: this trace flag is undocumented), which changes the allocation behaviour in tempdb to not use mixed extents. You can read more about this setting here.
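If you do decide to test Trace Flag 1118 after reading up on it, it can be enabled for the running instance with DBCC TRACEON, or made permanent by adding -T1118 to the service startup parameters:

```sql
-- Enable Trace Flag 1118 globally for the running instance.
-- To persist it across restarts, add -T1118 to the startup parameters instead.
DBCC TRACEON (1118, -1);

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1);
```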


Managing the size of the transaction log

When we manage databases in either the FULL or BULK_LOGGED recovery models, we need to pay particular attention to the size of the transaction log files. If our processes aren't optimal, we can see log files grow either out of control, or beyond what we think is a reasonable size.


Virtual Log Files

As I mentioned in a previous post, the transaction log contains an ordered sequence of the physical operations that occur on a database.

What I didn't mention is that the physical transaction log file also contains logical sections, called virtual log files (VLFs). It's the VLFs which actually contain the physical operations I mentioned. The purpose of VLFs is to allow SQL Server to more efficiently manage the log records; specifically, to know which portions of the transaction log are used, and which aren't.

Knowing the portions of the log that are used is important when we go to take a transaction log backup, which creates a copy of all the transaction log records since the last transaction log backup. After a transaction log backup, as long as all the log records within a VLF are not part of an active transaction (or are required for some other feature, such as replication), the VLF can be marked as unused/inactive. This allows SQL Server to reuse that portion of the log file. This process is called many names, including clearing the log and log truncation. It does not affect the physical size of the log file.

Problems only start happening if all the VLFs end up being used, and none are available¹. This means the log file (the physical file) has to grow to accommodate more transactions. A physical log file growth automatically creates new unused VLFs, and so the transactions can continue.
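To see the VLFs in a database's log file and which of them are in use, you can run the undocumented DBCC LOGINFO command in the context of that database. It returns one row per VLF:

```sql
-- One row per VLF in the current database's transaction log.
-- Status = 2 means the VLF is active (in use); Status = 0 means it's reusable.
DBCC LOGINFO;
```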


What is causing the log to grow after I shrank it?

Any write operation in the database can potentially cause the log file to grow. The real question is: why are all the VLFs used up when that's not what is expected?

Here are some of the problems you might encounter:

  • Not taking transaction log backups frequently enough, or not taking them at all. The only way to mark VLFs as inactive is to take a transaction log backup (again, only for FULL and BULK_LOGGED databases). If transaction log backups are only taken once/day, besides exposing yourself to a full day of data loss, you also need a transaction log file large enough to hold all the transactions that occur for a full day. That could be quite a bit of space! The solution is obvious, and it's a win for recoverability, too: take transaction log backups more frequently.
  • Overly aggressive and/or too frequent index maintenance. I'm certainly a proponent of index maintenance, but it's very easy to get carried away. Maintenance plans that rebuild shiny new indexes for you every night are both ridiculously easy to set up, and also ridiculously bad for the size of the transaction log. Typically, rebuilding all the indexes of a database takes more log space than the data files actually take up. If you're dealing with databases of even small-to-moderate sizes (say, 1 GB and larger), this can add up to a tremendous amount of wasted storage space, because all that transaction log has to be backed up at some point, and the log file will likely end up larger than the data files. What I strongly recommend doing is putting in place a much more selective index maintenance process, such as Ola Hallengren's index maintenance scripts, which I use in production.
  • The database is in FULL or BULK_LOGGED when it actually should be in SIMPLE. Before you go flipping the settings around (which can potentially cause data loss), go read about choosing the right recovery model for the database. Typically, I see this happen when a production backup is restored to a development environment, and a backup gets taken for some reason. And then the transaction log ends up growing, and growing, and growing as developers test and roll back their work, right up until things screech to a halt when the drive that contains the log file has filled up.
  • A SQL Server feature, such as replication or mirroring, is enabled. Until the transactions have been copied to the secondary server, the records cannot be cleared from the transaction log. If the secondary is slow applying changes, or if you don't even have a secondary any more and things weren't cleaned up correctly, this could lead to infinite transaction log growth. To solve this really depends on where the specific problem is -- solve that first, and then see if there is still an issue with the transaction log.
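For the first bullet in particular, the fix is just a more frequent log backup schedule. A single log backup statement looks like this (the database name and backup path are placeholders):

```sql
-- Hypothetical example: back up the transaction log of a FULL-recovery database.
-- Run this on a schedule (say, every 15 minutes) rather than once a day.
BACKUP LOG [YourDatabase]                          -- placeholder database name
    TO DISK = N'X:\Backups\YourDatabase_log.trn';  -- placeholder path
```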


If none of those cover your scenario, you can run the following query, which will tell you why the transaction log is still in use.

    SELECT [name], log_reuse_wait_desc
        FROM sys.databases;


¹ Extra empty log space is pre-allocated such that there is always enough space for active transactions to be able to roll back. Regardless, that space is "used" in the sense that it's been spoken for already.


Enable Instant Data File Initialization (video)

Any time a new portion of a database file (data file or log file) is created, by default, SQL Server writes out zeros -- to start with a clean slate, so to speak.

As the name implies, instant data file initialization is a feature that allows SQL Server to skip the zeroing process for data files. Log files are always zeroed out (because of the way they work internally).

The reason why you'd want to turn on this feature is simple: skipping the zeroing process does less I/O, thereby speeding up the process. This gives many advantages:

  • Recover faster: The tempdb database must be recreated on instance restart. This could be particularly important if you have an availability SLA to maintain.
  • Restore faster: Data files are initialized before the actual backed-up data pages are copied into the data files.
  • Better response time: If a data file has to auto-grow because of user activity, the user won't have to wait as long for the operation to complete.

Now, of course there is a tradeoff here, and in this case it's security-related: because no zeroing of the files happens with this setting turned on, it may be possible (through erroneous SQL Server behaviour) to access the previously-written data on the disk, which could be absolutely anything. While this is a very, very small risk, it may be too much for your environment. If that's the case, this setting should be turned off (which is the default setting).

In this short video demo, I walk through the steps to enable the feature, and validate that it's been successfully enabled.



Summary of steps:

  1. Add the database engine service account local group/domain group/service SID to the Perform Volume Maintenance Tasks security policy. (Note: this can also be accomplished using Windows Active Directory Group Policy, if that's a better solution for your environment.)
  2. Restart the database engine service.
  3. Validate the feature is enabled using the script below. There shouldn't be any messages in the Error Log that indicate data files were zeroed.
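A validation script along these lines is a common approach: trace flag 3004 logs file-zeroing activity, and 3605 redirects that output to the Error Log. The database name below is a throwaway placeholder.

```sql
-- Trace flag 3004 logs file initialization (zeroing) activity;
-- 3605 sends that output to the Error Log.
DBCC TRACEON (3004, 3605, -1);

-- Create a throwaway database to trigger file initialization.
CREATE DATABASE IFI_Test;  -- placeholder name

-- Look for "Zeroing" messages in the current Error Log.
-- With instant data file initialization enabled, only the log file
-- should appear in the output; the data file should not.
EXEC xp_readerrorlog 0, 1, N'Zeroing';

-- Clean up.
DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);
```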


Moving the System Databases (video)

Occasionally after installation, the system databases need to be moved from one location to another. In this video demo, I show you how to accomplish this task.



Key points:

  • Make sure the database engine service account has read/write access to the new file location(s).
  • The paths to the master database files are in the service startup parameters.
  • The paths to the other databases' files are changed using ALTER DATABASE ... MODIFY FILE.
  • The database files themselves can't be copied until the database engine service is stopped.
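For the non-master databases, the MODIFY FILE step might look like this (the target paths are placeholders; the new paths take effect the next time the database is started, and the physical files are moved while the service is stopped):

```sql
-- Hypothetical example: point msdb's files at a new location.
ALTER DATABASE msdb
    MODIFY FILE (NAME = MSDBData, FILENAME = N'D:\Data\MSDBData.mdf');  -- placeholder path
ALTER DATABASE msdb
    MODIFY FILE (NAME = MSDBLog,  FILENAME = N'E:\Logs\MSDBLog.ldf');   -- placeholder path
```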


Two things about security which I didn't mention in the video:

  • When granting permissions on the new data container, the principal for the database engine service account will vary depending on your environment. In my case, it was a local group, but you may need to use a domain group, or a service SID. It should be obvious from the source data container.
  • You may want to grant the Everyone group permission to list the folder contents starting from the root, and remove the inherited permission on each data container. This will allow you to browse to the drive when attaching a database (for example) in Management Studio, yet limit access to only the account that should see the files in each instance folder.

Finding unattached database files using xp_cmdshell

Sometimes database files get left behind in the file system, maybe due to choosing to restore to a different location, or someone detached a database and forgot about it. Maybe there are rogue files in your SQL Server data folder.

To find out if there are any, here is a solution that uses the extended stored procedure xp_cmdshell to look in an instance's data and log folders for files that don't also appear in sys.master_files. This solution does not require that you have direct permission to the database files, but you will need sysadmin privileges in the instance. For safety reasons, the script only lists unattached files, it does not actually delete them. That part is up to you.


EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

DECLARE @HkeyLocal nvarchar(18) = N'HKEY_LOCAL_MACHINE';
DECLARE @MSSqlServerRegPath nvarchar(31) = N'SOFTWARE\Microsoft\MSSQLServer';
DECLARE @InstanceRegPath sysname = @MSSqlServerRegPath + N'\MSSQLServer';
DECLARE @dataFolder nvarchar(512);
DECLARE @logFolder nvarchar(512);

EXEC master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultData', @dataFolder OUTPUT;
EXEC master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultLog', @logFolder OUTPUT;

DECLARE @cmd nvarchar(1024);

DECLARE @files table (physical_name nvarchar(MAX));

SET @cmd = N'dir "' + @dataFolder + N'" /s /b';
INSERT INTO @files(physical_name) EXEC xp_cmdshell @cmd;

SET @cmd = N'dir "' + @logFolder + N'" /s /b';
INSERT INTO @files(physical_name) EXEC xp_cmdshell @cmd;

DELETE FROM @files WHERE (physical_name IS NULL) OR (RIGHT(physical_name, 4) = '.cer');

SELECT f.physical_name
	FROM
	(
		SELECT physical_name
			FROM @files
	) f
	WHERE NOT EXISTS
	(
		SELECT *
			FROM sys.master_files mf
			WHERE mf.physical_name = f.physical_name
	);


EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;