Three Layers of Fragmentation

What is Fragmentation?

When the logical order of storage units doesn't match their physical order on the storage device, those units are said to be fragmented. For example, if the logical order of a set of storage units is A B C D E but the physical order on the device is D E C B A, reading them in order forces the device to do random access instead of sequential access. This can be a big deal if the device is much slower at random access than sequential access, as is the case with hard drives of the spinning/magnetic variety.


Layer 1: Fragmented Indexes

This is the most granular level. Your database's indexes are fragmented when the pages and/or extents are out of order within the data file. The amount of fragmentation can be measured by using sys.dm_db_index_physical_stats.
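As a quick sketch, here is one way to list the most fragmented indexes in the current database using that DMV (the 1000-page filter is an arbitrary threshold, not a hard rule):

```sql
-- List index fragmentation in the current database, worst first.
-- LIMITED mode scans only the upper index levels, which keeps the check cheap.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count >= 1000  -- tiny indexes aren't worth worrying about
ORDER BY ips.avg_fragmentation_in_percent DESC;
```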

Potential causes: Random index inserts; updates that require a row's size to increase.

Solution: Install and periodically run an intelligent index maintenance solution, such as Ola Hallengren's.
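For example, once Ola's scripts are installed, the IndexOptimize procedure can be scheduled with something like the following (parameter names per its documentation; the thresholds shown are illustrative, not recommendations):

```sql
-- Reorganize indexes between 5% and 30% fragmentation, rebuild above 30%.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30;
```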


Layer 2: Operating System Files

A database is physically persisted in files in a file system. The files are fragmented when the allocation units that make up the files are out of order in the file system. This is the exact same type of fragmentation you can encounter when saving normal files on your computer.

Potential causes: Multiple file growths on the same drive (smaller growths make things worse); creating databases on an existing drive that is already fragmented; using a file system allocation unit size smaller than 64 KB (the size of one extent).

Solution: Periodically run Windows Disk Defragmenter on the drives that store the database files (this requires shutting down the database engine instance for the duration of the maintenance).


Layer 3: Virtual Machine Disks

If your Windows instances are virtualized, there is an additional layer of potential fragmentation, because the drives in Layer 2 are abstracted -- the contents of the drives themselves are files in a file system. And like Layer 2, those files can potentially become fragmented.

Potential causes: Dynamically-allocated virtual disks that grow; creating a new virtual disk on a drive that is already fragmented.

Solution: Only use fixed-size virtual disks, or dedicate a separate area to dynamically-sized virtual disks. For the fixed-size area, if things aren't fragmented at all, nothing needs to be done. For the dynamically-sized area, periodically run Windows Disk Defragmenter on the drives that store the virtual disk files (this may or may not require taking the virtual machine(s) offline during the process).


As you can see, there are multiple layers where fragmentation can be introduced, and they all act at the same time. A comprehensive maintenance solution should cover all the layers in your environment. Generally speaking, a layer closer to the physical device is easier to handle, and it's best dealt with when first setting up the instance of Windows and the database server itself (aka set it and forget it).

From a database administrator perspective, the biggest takeaway is to deploy an automated intelligent index maintenance solution, and proactively grow the database files as needed in larger chunks, instead of relying on small auto-growth increments.
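For instance, growing a data file in one large chunk ahead of demand looks like this (the database and logical file names are placeholders):

```sql
-- Pre-grow the data file to 50 GB in a single operation,
-- and set a fixed auto-growth increment as a safety net only.
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, SIZE = 51200 MB);
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 1024 MB);
```

One large growth produces one contiguous allocation request instead of hundreds of small, scattered ones.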


Locked out of a SQL instance? Two methods to get back in without restarting the service

Hopefully you don't lock yourself out of your servers frequently. It does happen from time to time, though, and it's handy to know how to get out of the jam quickly.

The following methods will allow you to fix things while the SQL instance stays online and operational. Note that they do rely on there being an active login for NT AUTHORITY\SYSTEM (the local system account) in the SQL instance. This login is created by default, but it can be removed or disabled. (Edit: on 2008 R2, the system account login is sysadmin level, while on 2012 it has been restricted.)
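You can check ahead of time whether that login exists and is usable with a quick query:

```sql
-- Does the local system account have a login, is it enabled,
-- and is it a member of the sysadmin role?
SELECT sp.name,
       sp.is_disabled,
       IS_SRVROLEMEMBER('sysadmin', sp.name) AS is_sysadmin
FROM sys.server_principals AS sp
WHERE sp.name = N'NT AUTHORITY\SYSTEM';
```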

If you don't have a local system account login in the SQL instance (or another suitable login you can use or impersonate), you will have to resort to the method that involves restarting the instance in single-user mode, and use either SQLCMD or a query window in Management Studio to fix things up.

Warning: be extremely careful when using either of these methods, as you will have the ability to run commands as the local system account.


The Built-In, Slightly Complicated Way

Note: interactive services mode is deprecated; I've tested that this method works on Windows Server 2008 R2 SP1. It also works on Windows 7 if you have instances installed on your workstation. (Edit: This method does not work on Windows Server 2012 RC. Use the other method.)

  1. Open a command prompt on the SQL server. We're going to create a Windows service that launches a command prompt instance running as the local system account. Do this by typing the following command (no outer quotes, the spaces are important) and pressing enter: "sc create RecoverSqlPassword binPath= "cmd.exe /k start" type= own type= interactive"
  2. Assuming you got a success message in the previous step, start the service by typing the following command and pressing enter: "sc start RecoverSqlPassword"
  3. While the command in the previous step will say it failed (this is okay), in the task bar, you will see a new window called Interactive Services Detection trying to get your attention. Bring it into focus and click the View the Message button. This will switch you to a different desktop with a command prompt window and another Services dialog that allows you to return to your own desktop. (While you can switch back and forth between the desktops as much as you want, unfortunately you can't copy/paste between them.)
  4. In the command prompt window, either use SQLCMD or browse to and launch Management Studio to fix the login issue. When you're done, close/exit the command prompt and click the "Return now" button.
  5. Clean up by deleting the service that was created in Step 1 by typing the following command and pressing enter: "sc delete RecoverSqlPassword"


The Not-Quite-Built-In, Easier Way

  1. Download the SysInternals tool PsExec and unzip the .exe file to a suitable location on the SQL server.
  2. Open a command prompt and change directory to the aforementioned location.
  3. Run the following command to start a command prompt process as the local system account: "psexec -s cmd.exe" (note: unlike the other method, this starts the process within the same command window, and without a desktop switch).
  4. Either use SQLCMD or browse to and launch Management Studio to fix the login issue. When you're done, either use the "exit" command to return to the original command prompt context, or just close the command prompt window entirely.
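With either method, "fixing the login issue" typically means re-adding your own login to sysadmin. A sketch of what you might run once connected as the local system account (the login name is a placeholder; sp_addsrvrolemember works on 2008 R2, and ALTER SERVER ROLE is the 2012+ equivalent):

```sql
-- Re-create the locked-out Windows login (if it was dropped)...
CREATE LOGIN [CONTOSO\dba] FROM WINDOWS;
-- ...and grant it sysadmin membership.
EXEC sp_addsrvrolemember @loginame = N'CONTOSO\dba', @rolename = N'sysadmin';
```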

Index maintenance can help meet your performance SLAs

Last week, Brent Ozar told us to stop worrying about index fragmentation and get on with our lives.

He makes good points, particularly about taking full advantage of the ridiculously low price of memory right now. This makes external fragmentation a thing of the past while the server is running. SSDs make fragmentation a thing of the past, full stop, so if you're running those, you can abandon this article now. (See you in the next paragraph!)

I want to bring up an important point that wasn't mentioned in the article: if your SQL Server instance is turned off, deliberately or otherwise, all caches, including the buffer pool, are now cold and have to be reconstituted from somewhere -- and that somewhere is disk.

This is crunch time: the data needs to be read from disk into the buffer pool as fast as possible, so it can be served directly from memory the next time it's requested. This could be the most critical period of time if you have performance SLAs to meet, because as Brent mentioned, random read performance can be orders of magnitude slower than sequential reads [1]. Putting an intelligent index maintenance process in place helps the system perform more sequential reads, and thus perform better. The longer the reads take, the more your SLAs are under pressure, or perhaps broken entirely.

That brings up another important point: if you're ever negotiating performance SLAs, make sure to take cold starts into account during the discussion. The business may be okay with relaxed SLAs in the 10 minutes following a restart, for example. But maybe they aren't okay with that: index maintenance could play a part in meeting the business requirements without investing in more expensive technologies. In any event, if you're going to agree to something, this is definitely one aspect you want to put in writing. Moreover, the process should be tested -- now, before you agree to anything, and periodically in the future -- to make sure that you can keep your word, and give you a proactive look at how long you have until the SLA can no longer be met.

The last thing you want during a failover is a flurry of heated questions from a CIO standing over your shoulder, but if you have a written document you can point to that says "this is what was agreed," and that's how the system is performing, that's the end of the discussion right there.


[1] The transfer rate cited -- under 2 MB/s for random reads -- is a somewhat misleading example, because no one would set up shared storage with a 4 KB stripe size. At least I hope they wouldn't... A more typical stripe size is 64 KB, which would yield a higher transfer rate than the cited number, but would still be much slower than sequential reads.


Finding unattached database files using xp_cmdshell

Sometimes database files get left behind in the file system, perhaps because a restore was done to a different location, or because someone detached a database and forgot about it. Maybe there are rogue files in your SQL Server data folder.

To find out if there are any, here is a solution that uses the extended stored procedure xp_cmdshell to look in an instance's data and log folders for files that don't also appear in sys.master_files. This solution does not require that you have direct permission to the database files, but you will need sysadmin privileges in the instance. For safety reasons, the script only lists unattached files, it does not actually delete them. That part is up to you.


EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

DECLARE @HkeyLocal nvarchar(18) = N'HKEY_LOCAL_MACHINE';
DECLARE @MSSqlServerRegPath nvarchar(31) = N'SOFTWARE\Microsoft\MSSQLServer';
DECLARE @InstanceRegPath sysname = @MSSqlServerRegPath + N'\MSSQLServer';
DECLARE @dataFolder nvarchar(512);
DECLARE @logFolder nvarchar(512);

EXEC master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultData', @dataFolder OUTPUT;
EXEC master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultLog', @logFolder OUTPUT;

DECLARE @cmd nvarchar(1024);

DECLARE @files table (physical_name nvarchar(MAX));

SET @cmd = N'dir "' + @dataFolder + N'" /s /b';
INSERT INTO @files(physical_name) EXEC xp_cmdshell @cmd;

SET @cmd = N'dir "' + @logFolder + N'" /s /b';
INSERT INTO @files(physical_name) EXEC xp_cmdshell @cmd;

DELETE FROM @files WHERE (physical_name IS NULL) OR (RIGHT(physical_name, 4) = '.cer');

SELECT f.physical_name
	FROM
	(
		SELECT DISTINCT physical_name
			FROM @files
	) f
	WHERE NOT EXISTS
	(
		SELECT *
			FROM sys.master_files mf
			WHERE mf.physical_name = f.physical_name
	);

EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;

What's in the system databases?

We know they're important -- they sit in that special folder and all -- but what do they really do? Why are they special? Let's take a closer look.



master

This database is the gatekeeper of SQL Server.

It contains all the system configuration settings and logins, in addition to other instance-wide objects such as linked servers.

Most importantly, though, it contains all the information needed to find all the other databases in the instance. The SQL Server service is started with several command-line parameters by default, two of which are the file locations of the master database data file and log file. When SQL Server starts up, it reads the master database to find out where all the databases in the instance are, including all the system databases, and then goes through the recovery process on them.

On a production system, taking regular backups of master is essential.
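A minimal example (the backup path is a placeholder):

```sql
-- Back up master; CHECKSUM verifies page integrity as the backup is written.
BACKUP DATABASE master
    TO DISK = N'D:\Backups\master.bak'
    WITH INIT, CHECKSUM;
```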



model

As the name suggests, this database is the template from which all new databases are created.

What's in this database? By default it's empty, and you can put pretty much anything you want in there -- new databases will have those objects in them by default. (Note: this isn't necessarily a best practice; I'm just illustrating how it works.) Typically, this database is left empty, and its database settings are adjusted to suit the environment. The recovery model is the most commonly changed setting.
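For instance, changing the recovery model on model means every database created afterward inherits it:

```sql
-- New databases will default to SIMPLE recovery from now on.
ALTER DATABASE model SET RECOVERY SIMPLE;
```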

It's recommended to take backups of this database, even though it's relatively easy to replace.



msdb

This database stores everything having to do with SQL Server Agent, including all the jobs and their history, schedules, and alerts. It's also used as a staging area for other features such as Database Mail.

The biggest thing to be aware of with msdb is the size of the job history. Depending on what you have running on your system, this can grow extremely quickly, and can in some cases be very difficult to clean up. Watch the size of this database! A default installation of SQL Server lands this database's files on the C drive, which is not a drive you want to fill up unexpectedly.
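To keep job history in check, you can trim it manually (or on a schedule) with msdb's built-in purge procedure:

```sql
USE msdb;
-- How much history has piled up?
SELECT COUNT(*) AS history_rows FROM dbo.sysjobhistory;

-- Purge history older than 30 days for all jobs.
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
EXEC dbo.sp_purge_jobhistory @oldest_date = @cutoff;
```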

In terms of criticality, this is the second-most important system database behind master, and regular backups are essential on a production system. As mentioned, it's also good to monitor the size of this database, and keep tabs on what kind of job history is being tracked.



tempdb

This system database is special in that, unlike any of the others, it's recreated from scratch every time the instance starts. Because of that, instant data file initialization can improve SQL Server restart time immensely; less restart time means higher availability.

Not only is tempdb temporary itself, it's also responsible for holding temporary objects that get created by user scripts and internal operations. This includes things like temporary tables and table variables, row version information for snapshot isolation, and extra space as needed to satisfy user queries that do big sorts or joins.

While you can't take backups of tempdb, it's important for it to be well-tuned to your workload. Tuning tempdb is a topic unto itself, so I won't discuss that further here.



distribution

This database is used by SQL Server Replication to store information about transactions that have occurred and need to be sent out to subscribers. If Replication is not configured, this database won't exist in the list. Note also that the name of this database can vary, as it's chosen by the user. You can run the following query to find the distribution databases in your instance:

	SELECT name
		FROM sys.databases
		WHERE is_distributor = 1;


Resource (mssqlsystemresource)

This is SQL Server's internal system database, and is hidden from direct viewing. It contains internal objects such as system catalog views and stored procedures that appear in the sys schema in every database.

This database can't be backed up through SQL Server, and should not be modified.


Although this post certainly isn't a comprehensive list, hopefully I've given you a basic understanding of what's in the system databases. If you want to learn more, you can read the TechNet articles that go into more detail.