Sep 4, 2012

Three Layers of Fragmentation

What is Fragmentation?

When the logical order of a storage unit doesn't match the physical order on a storage device, the storage unit is said to be fragmented. In other words, if a storage unit's logical contents are A B C D E but its physical layout is D E C B A, the storage device may have to do random access instead of sequential access. This can be a big deal if the device is much slower at random accesses than sequential ones, as is the case with hard drives of the spinning/magnetic variety.


Layer 1: Fragmented Indexes

This is the most granular level. Your database's indexes are fragmented when the pages and/or extents are out of order within the data file. The amount of fragmentation can be measured by using sys.dm_db_index_physical_stats.
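
For a quick look, something like the following will list fragmentation for the indexes in the current database (the 1,000-page filter is an arbitrary threshold to skip indexes too small to matter):

SELECT
	OBJECT_NAME(ips.object_id) AS table_name,
	i.name AS index_name,
	ips.avg_fragmentation_in_percent,
	ips.page_count
	FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
	INNER JOIN sys.indexes i ON
		(i.object_id = ips.object_id) AND (i.index_id = ips.index_id)
	WHERE ips.page_count >= 1000
	ORDER BY ips.avg_fragmentation_in_percent DESC;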

Potential causes: Random index inserts; updates that require a row's size to increase.

Solution: Install and periodically run an intelligent index maintenance solution, such as Ola Hallengren's.


Layer 2: Operating System Files

A database is physically persisted in files in a file system. The files are fragmented when the allocation units that make up the files are out of order in the file system. This is the exact same type of fragmentation you can encounter when saving normal files on your computer.

Potential causes: Multiple file growths on the same drive (smaller growths make things worse); creating databases on an existing drive that is already fragmented; using a file system allocation unit size smaller than 64 KB (the size of one extent).
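
If you're unsure of a volume's allocation unit size, you can check it by running "fsutil fsinfo ntfsinfo D:" from an elevated command prompt and looking at the Bytes Per Cluster value (the drive letter is just an example).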

Solution: Periodically run Windows Disk Defragmenter on the drives that store the database files (this requires shutting down the database engine instance for the duration of the maintenance).
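
To gauge how bad things are before committing to a defrag pass, "defrag D: /A" will analyze the volume and report the degree of fragmentation without changing anything (again, substitute your own drive letter).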


Layer 3: Virtual Machine Disks

If your Windows instances are virtualized, there is an additional layer of potential fragmentation, because the drives in Layer 2 are abstracted -- the contents of the drives themselves are files in a file system. And like Layer 2, those files can potentially become fragmented.

Potential causes: Dynamically-allocated virtual disks that grow; creating a new virtual disk on a drive that is already fragmented.

Solution: Only use fixed-size virtual disks, or use a separate area dedicated to dynamically-sized virtual disks. For the fixed-size area, if things aren't fragmented at all, nothing needs to be done. For the dynamically-sized area, periodically run Windows Disk Defragmenter on the drives that store the virtual disk files (this may or may not require taking the virtual machine(s) offline during the process).


As you can see, there are multiple layers where fragmentation can be introduced, and they all act at the same time. A comprehensive maintenance solution should cover all the layers in your environment. Generally speaking, a layer closer to the physical device is easier to handle, and it's best dealt with when first setting up the instance of Windows and the database server itself (aka set it and forget it).

From a database administrator perspective, the biggest takeaway is to deploy an automated intelligent index maintenance solution, and proactively grow the database files as needed in larger chunks, instead of relying on small auto-growth increments.
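
For example, a hypothetical proactive growth (the database and logical file names here are made up; choose a size based on your own capacity planning):

ALTER DATABASE SalesDB
	MODIFY FILE (NAME = SalesDB_Data, SIZE = 50GB);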

Aug 29, 2012

Locked out of a SQL instance? Two methods to get back in without restarting the service

Hopefully you don't lock yourself out of your servers frequently. It does happen from time to time, though, and it's handy to know how to get out of the jam quickly.

The following methods will allow you to fix things while the SQL instance stays online and operational. Note that they do rely on there being an active login for NT AUTHORITY\SYSTEM (the local system account) in the SQL instance. This login is created by default, but it can be removed or disabled. (Edit: on 2008 R2, the system account login is sysadmin level, while on 2012 it has been restricted.)

If you don't have a local system account login in the SQL instance (or another suitable login you can use or impersonate), you will have to resort to the method that involves restarting the instance in single-user mode, and using either SQLCMD or a query window in Management Studio to fix things up.
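
(As a rough sketch of that fallback, for a default instance: stop the service, run "net start MSSQLSERVER /m" to start it in single-user mode, connect with SQLCMD as a local administrator, fix the login, then restart the service normally.)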

Warning: be extremely careful when using either of these methods, as you will have the ability to run commands as the local system account.


The Built-In, Slightly Complicated Way

Note: interactive services are deprecated; I tested that this works on Windows Server 2008 R2 SP1. It also works on Windows 7 if you have instances installed on your workstation. (Edit: This method does not work on Windows Server 2012 RC. Use the other method.)

  1. Open a command prompt on the SQL server. We're going to create a Windows service that launches a command prompt instance running as the local system account. Do this by typing the following command (no outer quotes, the spaces are important) and pressing enter: "sc create RecoverSqlPassword binPath= "cmd.exe /k start" type= own type= interactive"
  2. Assuming you got a success message in the previous step, start the service by typing the following command and pressing enter: "sc start RecoverSqlPassword"
  3. While the command in the previous step will say it failed (this is okay), in the task bar, you will see a new window called Interactive Services Detection trying to get your attention. Bring it into focus and click the View the Message button. This will switch you to a different desktop with a command prompt window and another Services dialog that allows you to return to your own desktop. (While you can switch back and forth between the desktops as much as you want, unfortunately you can't copy/paste between them.)
  4. In the command prompt window, either use SQLCMD or browse to and launch Management Studio to fix the login issue. When you're done, close/exit the command prompt and click the "Return now" button.
  5. Clean up by deleting the service that was created in Step 1 by typing the following command and pressing enter: "sc delete RecoverSqlPassword"


The Not-Quite-Built-In, Easier Way

  1. Download the SysInternals tool PsExec and unzip the .exe file to a suitable location on the SQL server.
  2. Open a command prompt and change directory to the aforementioned location.
  3. Run the following command to start a command prompt process as the local system account: "psexec -s cmd.exe" (note: unlike the other method, this starts the process within the same command window, and without a desktop switch).
  4. Either use SQLCMD or browse to and launch Management Studio to fix the login issue. When you're done, either use the "exit" command to return to the original command prompt context, or just close the command prompt window entirely.
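
Whichever method you used to get the SYSTEM command prompt, fixing the login issue via SQLCMD might look something like this (the instance and login names are hypothetical; on 2012, ALTER SERVER ROLE is the replacement for the deprecated sp_addsrvrolemember):

sqlcmd -S .\SQL2008R2 -E
EXEC sp_addsrvrolemember 'DOMAIN\YourLogin', 'sysadmin';
GO
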
Aug 25, 2012

Index maintenance can help meet your performance SLAs

Last week, Brent Ozar told us to stop worrying about index fragmentation and get on with our lives.

He makes good points, particularly about taking full advantage of the ridiculously low price of memory right now: plenty of memory makes external fragmentation a non-issue while the server is running. SSDs make fragmentation a thing of the past, full stop, so if you're running those, you can abandon this article now. (See you in the next paragraph!)

I want to bring up an important point that wasn't mentioned in the article: if your SQL Server instance is turned off, deliberately or otherwise, all caches, including the buffer pool, are now cold, and have to be reconstituted from somewhere -- and that somewhere is disk.

This is crunch time: the data needs to be read from disk as fast as possible into the buffer pool so it can be served directly from memory the next time it's requested. This could be the most critical period of time if you have performance SLAs to meet, because as Brent mentioned, random read performance can be orders of magnitude slower than sequential reads¹. Putting an intelligent index maintenance process in place helps the system perform more sequential reads, and thus improves performance. The longer the reads take, the more your SLAs are under pressure, or perhaps broken entirely.

That brings up another important point: if you're ever negotiating performance SLAs, make sure to take cold starts into account during the discussion. The business may be okay with relaxed SLAs in the 10 minutes following a restart, for example. But maybe they aren't okay with that: index maintenance could play a part in meeting the business requirements without investing in more expensive technologies. In any event, if you're going to agree to something, this is definitely one aspect you want to put in writing. Moreover, the process should be tested -- now, before you agree to anything, and periodically in the future -- to make sure that you can keep your word, and to give you a proactive look at how long you have until the SLA can no longer be met.

The last thing you want during a failover is a flurry of heated questions from a CIO standing over your shoulder, but if you have a written document you can point to that says "this is what was agreed," and that's how the system is performing, that's the end of the discussion right there.


¹ The transfer rate cited -- under 2 MB/s for random reads -- is a somewhat misleading example, because no one would set up shared storage with a 4 KB stripe size. At least I hope they wouldn't... A more typical stripe size is 64 KB, which would yield a higher transfer rate than the cited number, but one still much slower than sequential reads.
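
(Back-of-the-envelope, assuming a drive that can do roughly 200 random IOPS: 200 × 4 KB ≈ 0.8 MB/s with a 4 KB stripe, versus 200 × 64 KB ≈ 12.5 MB/s with a 64 KB stripe -- still far short of the 100+ MB/s a sequential scan can achieve.)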

Aug 21, 2012

Finding unattached database files using xp_cmdshell

Sometimes database files get left behind in the file system, perhaps because a restore was done to a different location, or because someone detached a database and forgot about it. Maybe there are rogue files in your SQL Server data folder.

To find out if there are any, here is a solution that uses the extended stored procedure xp_cmdshell to look in an instance's data and log folders for files that don't also appear in sys.master_files. This solution does not require that you have direct permission to the database files, but you will need sysadmin privileges in the instance. For safety reasons, the script only lists unattached files; it does not actually delete them. That part is up to you.


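-- Temporarily enable xp_cmdshell (it's off by default)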
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
GO

DECLARE @HkeyLocal nvarchar(18) = N'HKEY_LOCAL_MACHINE';
DECLARE @MSSqlServerRegPath nvarchar(31) = N'SOFTWARE\Microsoft\MSSQLServer';
DECLARE @InstanceRegPath sysname = @MSSqlServerRegPath + N'\MSSQLServer';
DECLARE @dataFolder nvarchar(512);
DECLARE @logFolder nvarchar(512);

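-- Read the instance's default data and log folders from the registry.
-- Note: these values can be NULL if they were never explicitly set,
-- in which case the dir commands below will fail.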
EXEC master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultData', @dataFolder OUTPUT;
EXEC master.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultLog', @logFolder OUTPUT;


DECLARE @cmd nvarchar(1024);

DECLARE @files table (physical_name nvarchar(MAX));

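-- Recursively list every file under each folder (dir /s /b outputs full paths)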
SET @cmd = N'dir "' + @dataFolder + N'" /s /b';
INSERT INTO @files(physical_name) EXEC xp_cmdshell @cmd;

SET @cmd = N'dir "' + @logFolder + N'" /s /b';
INSERT INTO @files(physical_name) EXEC xp_cmdshell @cmd;

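-- Discard the NULL rows xp_cmdshell returns, and ignore certificate (.cer)
-- files, which live in the data folder but never appear in sys.master_files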
DELETE FROM @files WHERE (physical_name IS NULL) OR (RIGHT(physical_name, 4) = '.cer');


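-- Report the files on disk that no database in the instance references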
SELECT
	f.physical_name
	FROM
	(
		SELECT DISTINCT
			physical_name
			FROM @files
	) f
	WHERE
		NOT EXISTS
		(
			SELECT *
				FROM sys.master_files mf
				WHERE mf.physical_name = f.physical_name
		);

GO

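-- Turn xp_cmdshell back off when finished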
EXEC sp_configure 'xp_cmdshell', 0;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
GO
Aug 14, 2012

Scaling Server Management - Theory

No one wants to constantly monitor and maintain their server farm 24 hours a day, 7 days a week.

In the face of a growing business with a growing number of servers and a growing number of databases to manage, it can be easy to get overwhelmed at times, especially when the business grows fast. This is why scalable management techniques are so important, even if you only have a single server to manage right now.

What would your management techniques look like if you suddenly had to manage 10 times as many servers as you do now?

Trick question. You should already be managing your servers that way.

But how does one accomplish that when it’s so intangible right now?

Ultimately, there is only one concept to get handled: consistency. The other, meaner, uglier, eviler side of that coin is exception, which is something to be avoided like the plague. Interestingly, to explain the concept of consistency, it’s much easier to cite cases of exception; to explain what consistency isn’t. You might even call it... inconsistency. Hmmm.

  • Do you have a database named Production1 that is backed by files named Testing1.mdf and Testing1.ldf? Is it actually a testing database?
  • Are your company’s server or instance names picked by randomly selecting words from the dictionary?
  • Do you have an instance that contains development, testing and production databases, or any combination of more than one of those? (“...that was the testing database, right?”)

"Wait a second," I hear you say, "that isn’t hard to manage. In fact, it’s easy to remember." Famous last words.

...well, maybe not famous.

...and maybe not last.

My point is that remembering is precisely the problem.

Can you keep track of a single inconsistency for each of 10 databases? Alright, sure, no problem. How about 100 databases? Yeah, it’ll suck, but I can do it. How about 1,000? Uhhh... How about 10,000? No way! -- and please don’t tell my manager.

Clearly, your own memory does not scale as well as technology does.

"Aha! I know, I’ll use automation!"

Now we’re on the right track. But automation is not a silver bullet. You can try to automate your way out of an inconsistent situation (without changing the situation itself). What you’ll end up doing, though, is spending most of your time maintaining the automation system. It’s trading an intense direct effort for an intense indirect effort with minimal scale, plus a whole lot of opportunity for mistakes, downtime, and even data loss.

If you have a database to manage granular settings of your databases, you’re (probably) doing it wrong.

In the context of database management, automation is really only useful in two ways:

  • Making many things similar. (Set all user databases in an instance to use the FULL recovery model; see the sketch after this list.)
  • Doing something to many similar things. (Take transaction log backups every 15 minutes for all databases using the FULL recovery model.)
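
As a minimal sketch of the first kind (it skips the four system databases; a real version would also want to handle offline and read-only databases):

DECLARE @sql nvarchar(MAX) = N'';

SELECT @sql += N'ALTER DATABASE ' + QUOTENAME(name) + N' SET RECOVERY FULL;'
	FROM sys.databases
	WHERE (database_id > 4) AND (recovery_model_desc <> 'FULL');

EXEC sp_executesql @sql;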

That may seem really simple, but it’s also really powerful. This is the key to scale.

The number of production databases is the sum of:

SELECT COUNT(*) - 4 FROM sys.databases

over all of your production servers, not "Umm... give me a few minutes." (Hint: growing more fingers and toes for counting does not scale well.)

Your ability to keep everything consistent directly reflects and affects the extent to which you can leverage automation to manage your servers, and therefore not have to scale yourself.

  1. Eliminate barriers that prevent consistency.
  2. Create consistency.
  3. Create systems to enforce the consistency you worked so hard to achieve.

In a subsequent post, I’ll introduce and explain some of the tools and techniques you can use to accomplish these steps in SQL Server.