Managing the Size of the Transaction Log

When we manage databases in either the FULL or BULK_LOGGED recovery models, we need to pay particular attention to the size of the transaction log files. If our processes aren’t optimal, we can see log files grow either out of control, or beyond what we think is a reasonable size.

Virtual Log Files

As I mentioned in a previous post, the transaction log contains an ordered sequence of the physical operations that occur on a database.

What I didn’t mention is that the physical transaction log file also contains logical sections, called virtual log files (VLFs). It’s the VLFs which actually contain the physical operations I mentioned. The purpose of VLFs is to allow SQL Server to more efficiently manage the log records; specifically, to know which portions of the transaction log are used, and which aren’t.

Knowing the portions of the log that are used is important when we go to take a transaction log backup, which creates a copy of all the transaction log records since the last transaction log backup. After a transaction log backup, as long as all the log records within a VLF are not part of an active transaction (or are required for some other feature, such as replication), the VLF can be marked as unused/inactive. This allows SQL Server to reuse that portion of the log file. This process goes by many names, including clearing the log and log truncation. It does not affect the physical size of the log file.

Problems only start happening if all the VLFs end up being used, and none are available1. This means the log file (the physical file) has to grow to accommodate more transactions. A physical log file growth automatically creates new unused VLFs, and so the transactions can continue.
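If you're curious what the VLFs inside your own log look like, you can peek at them with a query like the following (a sketch, assuming a version of SQL Server recent enough to have sys.dm_db_log_info; on older versions, DBCC LOGINFO returns similar information):

/* One row per VLF; vlf_active shows which ones are still in use */
SELECT *
    FROM sys.dm_db_log_info(DB_ID());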

What is causing the log to grow after I shrank it?

Any write operation in the database can potentially cause the log file to grow. The real question is: why are all the VLFs used up when that’s not what is expected?

Here are some of the problems you might encounter:

  • Not taking transaction log backups frequently enough, or not taking them at all. The only way to mark VLFs as inactive is to take a transaction log backup (again, only for FULL and BULK_LOGGED databases). If transaction log backups are only taken once a day, besides exposing yourself to a full day of data loss, you also need a transaction log file large enough to hold all the transactions that occur for a full day. That could be quite a bit of space! The solution is obvious, and it’s a win for recoverability, too: take transaction log backups more frequently (see the sketch after this list).
  • Overly aggressive and/or too frequent index maintenance. I’m certainly a proponent of index maintenance, but it’s very easy to get carried away. Maintenance plans that rebuild shiny new indexes for you every night are both ridiculously easy to set up, and also ridiculously bad for the size of the transaction log. Typically, rebuilding all the indexes of a database takes more log space than the data files actually take up. If you’re dealing with databases of even small-to-moderate sizes (say, 1 GB and larger), this can add up to a tremendous amount of wasted storage space, because all that transaction log has to be backed up at some point, and the log file will likely end up larger than the data files. What I strongly recommend doing is putting in place a much more selective index maintenance process, such as Ola Hallengren’s index maintenance scripts, which I use in production.
  • The database is in FULL or BULK_LOGGED when it actually should be in SIMPLE. Before you go flipping the settings around (which can potentially cause data loss), go read about choosing the right recovery model for the database. Typically, I see this happen when a production backup is restored to a development environment, and a backup gets taken for some reason. And then the transaction log ends up growing, and growing, and growing as developers test and roll back their work, right up until things screech to a halt when the drive that contains the log file has filled up.
  • A SQL Server feature, such as replication or mirroring, is enabled. Until the transactions have been copied to the secondary server, the records cannot be cleared from the transaction log. If the secondary is slow applying changes, or if you don’t even have a secondary any more and things weren’t cleaned up correctly, this could lead to infinite transaction log growth. The solution really depends on where the specific problem is: solve that first, and then see if there is still an issue with the transaction log.
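To expand on the first point, taking log backups more frequently usually just means scheduling something like the following to run every few minutes or hours (a minimal sketch; the database name and backup path are placeholders you'd adjust for your environment):

/* Back up the tail of the log so that inactive VLFs can be reused */
BACKUP LOG MyDatabase
    TO DISK = N'X:\Backups\MyDatabase_log.trn';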

If none of those cover your scenario, you can run the following query, which will tell you why the transaction log is still in use.

SELECT
    name,
    log_reuse_wait_desc,
    recovery_model_desc
    FROM sys.databases

1 Extra empty log space is pre-allocated such that there is always enough space for active transactions to be able to roll back. Regardless, that space is “used” in the sense that it’s been spoken for already.

Where does SQL Server store my data?

In this post, I’m going to give a high-level overview of where SQL Server stores user data. One of the most important topics, backup and restore, is closely related to how this (and logging) works, so it’s important to have a solid understanding of these concepts to be able to plan and execute a successful backup and restore strategy. I’ll cover logging and the transaction log in a future post.

The structures in which SQL Server stores data are arranged hierarchically. We’re going to start with the smallest relevant units and work our way up.

Pages and Extents

In the SQL Server database engine, a page is an 8 KB region of storage (applies to both memory and disk). Each page has a header region that stores metadata about the page (the page’s internal id, the type of the page, etc.), and a data region that contains the actual data.

While there are many different types of pages, there are only two basic classes:

  • Data Pages – Contain information related to the data rows themselves.
  • Metadata Pages – Contain state information about other pages or extents.

Pages are grouped into sets of 8 (for a total of 64 KB), called extents, which makes the allocation system more efficient. Most extents only contain a single type of page1.

Partitions

A partition logically groups extents into an object. A normal table is stored in a single partition, while a partitioned table is stored in multiple partitions. This is how ALTER TABLE … SWITCH is able to efficiently move partitions to/from an independent table, or between partitioned tables: under the covers, a partition is a table.
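As a quick illustration, switching a partition out into an empty archive table with the same structure is a metadata-only operation (a sketch using hypothetical tables; dbo.Orders is partitioned, and dbo.OrdersArchive is an empty table with an identical definition):

/* Move everything in partition 1 of dbo.Orders into dbo.OrdersArchive */
ALTER TABLE dbo.Orders
    SWITCH PARTITION 1 TO dbo.OrdersArchive;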

Data Files

A data file is the basic unit of a database in the file system2, and it contains some or all of the extents that make up a partition (see next section on filegroups).

All databases have a primary data file, which stores structural (metadata) information about the database, and by convention has the file extension .mdf. The primary data file cannot be removed from the database.

Secondary data files can be added, which have the extension .ndf by convention.

Filegroups

A filegroup is a named logical construct that contains data files. (Note: log files do not belong to filegroups.) All databases have a PRIMARY filegroup that contains (at least) the primary data file. The PRIMARY filegroup cannot be removed. Filegroups (except PRIMARY) can be set as read-only.

A filegroup allows for the distribution of extent and page allocations between multiple data files. SQL Server uses two algorithms simultaneously when making allocations:

  1. Round-robin. Allocations are made sequentially from the data files within the filegroup (i.e., for a filegroup with 3 files, the allocations would come from File1, File2, File3, File1, File2, File3, etc.).
  2. Proportional fill. The number of allocations made from a file is proportional to the amount of free space in that file. The purpose of this is to eventually reach a steady state where allocations are balanced between all the files. However, you can get into trouble by adding a new, empty data file, which can result in almost all the allocations coming from that one file, probably defeating the purpose of distributing allocations among the files in the first place.

Filegroups also serve as the target for object storage. For example, the table [dbo].[Sales] can be stored in a filegroup called SALES.
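Putting those pieces together, here's a sketch of what that might look like (the database name, file path, and table definition are hypothetical):

ALTER DATABASE WidgetDb
    ADD FILEGROUP SALES;

ALTER DATABASE WidgetDb
    ADD FILE (NAME = N'WidgetDb_Sales1', FILENAME = N'X:\Data\WidgetDb_Sales1.ndf')
    TO FILEGROUP SALES;

/* The table's extents will be allocated from the files in the SALES filegroup */
CREATE TABLE dbo.Sales
(
    SaleId int NOT NULL,
    SaleDate datetime NOT NULL
) ON SALES;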

Database

In this context, a database is a logical container that contains one or more filegroups (PRIMARY, plus zero or more others). One of the database’s filegroups can be set as the default filegroup, where objects get created if the T-SQL statement does not explicitly specify a filegroup.
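Changing the default filegroup is a one-liner (again using the hypothetical WidgetDb and SALES filegroup from above):

ALTER DATABASE WidgetDb
    MODIFY FILEGROUP SALES DEFAULT;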


Now that you understand the basic internal storage structures SQL Server uses, you will be able to easily understand how many other areas of the system work, including backups, which is an important topic I will cover in a future post.


1 Mixed extents can contain multiple types of pages. These are a small minority of allocations.

2 Not getting into FILESTREAM or snapshot files here.

How are deadlocks created? (Video)

Deadlocks occur when two processes each hold locks on resources, and each also tries to take a lock on a resource that the other process holds. This results in neither process being able to continue.

SQL Server resolves these situations automatically by terminating one of the processes, thus allowing the other(s) to proceed. This is done by an internal process called the Deadlock Monitor.
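If you'd like to see the shape of the problem in script form, here's a minimal sketch of the classic pattern, assuming two hypothetical tables (run each session in a separate query window):

/* Session 1 */
BEGIN TRANSACTION;
UPDATE dbo.TableA SET Col1 = 1 WHERE Id = 1;  /* takes a lock on a row in TableA */
/* ...pause here until Session 2 has run its first UPDATE... */
UPDATE dbo.TableB SET Col1 = 1 WHERE Id = 1;  /* now blocked by Session 2 */

/* Session 2 */
BEGIN TRANSACTION;
UPDATE dbo.TableB SET Col1 = 2 WHERE Id = 1;  /* takes a lock on a row in TableB */
UPDATE dbo.TableA SET Col1 = 2 WHERE Id = 1;  /* blocked by Session 1: deadlock */

One of the two sessions will be chosen as the deadlock victim and rolled back; the other continues normally.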

In this video demo, I demonstrate a basic sequence of events needed to create a deadlock. I also look at the locks that are held and requested by each process, and introduce you to reading a deadlock graph (which is what’s captured from the Extended Events deadlock capturing system I presented last week).

(Sorry my cursor isn’t visible in the video — I blame the capture program I used. It won’t happen again.)

If you want to reproduce the situation yourself, you can download the scripts here.

The Primary Key vs. The Clustered Index

When it comes to clustered vs. non-clustered indexes in SQL Server, there’s still a lot of confusion about how those index types relate to constraints, particularly the primary key. Frankly, that’s entirely understandable, because it is a bit confusing.

Let me try to simplify.

First, there are two ways indexes can be created:

  1. Directly (i.e., CREATE … INDEX).
  2. Certain types of constraints (UNIQUE and PRIMARY KEY) create indexes to implement the constraint.

The behind-the-scenes constraint indexes appear in both Management Studio and sys.indexes, which makes the waters a bit muddy, so to speak. This is also why there is spillover in terminology and statement syntax between indexes and constraints.

The key to understanding how this works is to separate the ideas of constraints and indexes.

In theory, the constraints I mentioned above could be implemented behind the scenes using another type of structure completely different than an index. It just so happens that an index is an available, reusable, efficient structure that is able to get the job done.

Second, there is a default type of index created by each type of constraint, and the default can be overridden:

  • A PRIMARY KEY constraint creates a clustered index by default.
  • A UNIQUE constraint creates a non-clustered index by default.

Here are all the combinations, and the type of index that gets created:

/* Clustered index */
ALTER TABLE MyTable
	ADD CONSTRAINT PK_MyTable
		PRIMARY KEY(WidgetId);

/* Clustered index */	
ALTER TABLE MyTable
	ADD CONSTRAINT PK_MyTable
		PRIMARY KEY CLUSTERED(WidgetId);

/* Non-clustered index */	
ALTER TABLE MyTable
	ADD CONSTRAINT PK_MyTable
		PRIMARY KEY NONCLUSTERED(WidgetId);


/* Non-clustered index */	
ALTER TABLE MyTable
	ADD CONSTRAINT UC_MyTable_WidgetNumber
		UNIQUE(WidgetNumber);

/* Non-clustered index */	
ALTER TABLE MyTable
	ADD CONSTRAINT UC_MyTable_WidgetNumber
		UNIQUE NONCLUSTERED(WidgetNumber);

/* Clustered index */	
ALTER TABLE MyTable
	ADD CONSTRAINT UC_MyTable_WidgetNumber
		UNIQUE CLUSTERED(WidgetNumber);

Finally, there is debate about whether a simple index or a constraint should be created, because in a lot of cases, they can function identically with respect to user queries. There are several factors that come into play when making this decision:

  • Constraints have extra restrictions on them (e.g., columns in a primary key constraint cannot allow NULLs).
  • Creating a constraint instead of an index may serve to better document data/business rules in the database.
  • Constraints take a little more effort to drop, which reduces the likelihood of a mistake.
  • It may not be possible to represent a data/business rule using a constraint, while it is possible using a normal index. Constraint indexes can’t use special features such as filters, for example (see the sketch after this list).
  • Would creating an index with the same columns as a data/business rule better serve user queries? Constraint indexes don’t allow included columns. If the table is large, creating and maintaining two separate indexes, one of which may be essentially redundant, can be expensive.
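For example, a rule like "WidgetNumber must be unique when it's present, but multiple NULLs are allowed" can't be expressed as a UNIQUE constraint, but it can be expressed as a filtered unique index (a sketch, reusing the hypothetical table from above):

CREATE UNIQUE NONCLUSTERED INDEX UIX_MyTable_WidgetNumber
    ON MyTable(WidgetNumber)
    WHERE WidgetNumber IS NOT NULL;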

My personal philosophy is to first use constraints to maintain data/business rules, and then supplement with regular indexes as necessary. Constraints alone are frequently enough to satisfy user queries; however, there are definitely situations where it’s clear that methodology should be set aside. It’s very situational, and really there’s no one better qualified to evaluate the situation than you.

Summary:

  • Indexes and constraints are separate objects.
  • Be aware of the default types of indexes created by the different types of constraints.
  • Create constraints and/or indexes based on your individual needs.

What’s in the System Databases?

We know they’re important — they sit in that special folder and all — but what do they really do? Why are they special? Let’s take a closer look.

master

This database is the gatekeeper of SQL Server.

It contains all the system configuration settings and logins, in addition to other instance-wide objects such as linked servers.

Most importantly, though, it contains all the information needed to find all the other databases in the instance. The SQL Server service is started by default with several command-line parameters, and two of those are the file locations of the master database data file and log file. When SQL Server starts up, it reads the master database to find out where all the databases in the instance are, including all the system databases, and then goes through the recovery process on them.
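You can see where the running instance actually found master's files (these locations mirror the -d and -l startup parameters) with a quick query:

SELECT
    name,
    physical_name
    FROM sys.master_files
    WHERE database_id = DB_ID(N'master')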

On a production system, taking regular backups of master is essential.

model

As the name suggests, this database is the template from which all new databases are created.

What’s in this database? By default it’s empty, and you can put pretty much anything you want in there; new databases will then have those objects in them by default. (Note: this isn’t necessarily a best practice; I’m just illustrating how it works.) Typically, this database is left empty, and database settings are adjusted to suit the environment. The recovery model is the most common setting changed.
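For example, if most databases on an instance should use the SIMPLE recovery model, setting it on model means every new database inherits it (a sketch; pick the recovery model that actually fits your environment):

ALTER DATABASE model SET RECOVERY SIMPLE;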

It’s recommended to take backups of this database, even though it’s relatively easy to replace.

msdb

This database stores everything having to do with SQL Agent, including all the jobs and their history, schedules, and alerts. It’s also used as a staging area for other features such as Database Mail.

The biggest thing to be aware of with msdb is the size of the job history. Depending on what you have running on your system, this can grow extremely quickly, and can in some cases be very difficult to clean up. Watch the size of this database! A default installation of SQL Server lands this database’s files on the C drive, which is not a drive you want to fill up unexpectedly.
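A quick way to keep an eye on it, and to trim old history, looks something like this (a sketch; the 30-day retention is an arbitrary placeholder):

/* How many job history rows are we keeping around? */
SELECT COUNT(*) AS history_rows
    FROM msdb.dbo.sysjobhistory;

/* Purge history older than 30 days */
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;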

In terms of criticality, this is the second-most important system database behind master, and regular backups are essential on a production system. As mentioned, it’s also good to monitor the size of this database, and keep tabs on what kind of job history is being tracked.

tempdb

This system database is special in that, unlike any of the others, it’s recreated from scratch every time the instance starts. Because of that, instant data file initialization can improve SQL Server restart time immensely, and less restart time means higher availability.

Not only is tempdb temporary itself, it’s also responsible for holding temporary objects that get created by user scripts and internal operations. This includes things like temporary tables and table variables, row version information for snapshot isolation, and extra space as needed to satisfy user queries that do big sorts or joins.

While you can’t take backups of tempdb, it’s important for it to be well-tuned to your workload. Tuning tempdb is a topic unto itself, so I won’t discuss that further here.

distribution

This database is used by SQL Server Replication to store information about transactions that have occurred which need to be sent out to subscribers. If Replication is not configured, this database won’t exist in the list. Note also that the name of this database can vary, as it’s user-entered. You can run the following query to find distribution databases in your instance:

SELECT
	name
	FROM sys.databases
	WHERE is_distributor = 1

Resource (mssqlsystemresource)

This is SQL Server’s internal system database, and is hidden from direct viewing. It contains internal objects such as system catalog views and stored procedures that appear in the sys schema in every database.

This database can’t be backed up through SQL Server, and should not be modified.
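Even though it's hidden, you can see which version of the Resource database the instance is using:

SELECT
    SERVERPROPERTY('ResourceVersion') AS resource_version,
    SERVERPROPERTY('ResourceLastUpdateDateTime') AS resource_last_updated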

Although this post certainly isn’t a comprehensive list, hopefully I’ve given you a basic understanding of what’s in the system databases. If you want to learn more, you can read the TechNet articles that go into more detail.