Oct 18, 2012

Creating a Backup Maintenance Plan (video)

Maintenance plans are a great way of getting started with a backup solution for your environment.

While maintenance plans don't offer great flexibility, particularly when it comes to managing backups across many different servers, they're often a good starting point because they're easy to create and simple to understand.

In this demo video, I walk through creating a skeleton maintenance plan that contains 3 subplans to perform full, differential, and transaction log backups. The only thing left to do is schedule each of the automatically-generated SQL Agent jobs to meet your backup and recovery needs.

 

 

Sep 27, 2012

Capture deadlocks 24/7 with Extended Events

Deadlocks can be notoriously difficult to reproduce in our development environments.

Most of the time:

  • There are multiple actual users involved with the problem.
  • The production workload, in terms of volume, composition, or both, can't be realistically reproduced in development either.
  • The tools we previously had to use to capture deadlocks (server-side trace, SQL Server Profiler) are prohibitively expensive to run all the time, or even for a short time, or we had to mess around with trace flags.

Here's where Extended Events comes to the rescue, letting us be proactive about dealing with deadlocks.

Extended Events allows us to capture deadlock graphs continuously, as they happen, 24 hours/day, 7 days/week, with very little performance overhead. Not only that, but the system we're going to set up is able to survive an instance restart! This is huge in terms of ease of management, and being able to have this information available when it's needed can be way more than half the battle.

Here is the meat of the script that sets up our event session for a SQL Server 2012 instance:

CREATE EVENT SESSION [DeadlockMonitor] ON SERVER 
	ADD EVENT sqlserver.xml_deadlock_report	/* Fires whenever a deadlock occurs */
	ADD TARGET package0.event_file
	(
		SET
			FILENAME = N'\\CentralFileShare\MyTargetFile.xel',
			MAX_ROLLOVER_FILES = 0	/* 0 = retain all rollover files */
	)
	WITH
	(
		EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS,
		MAX_DISPATCH_LATENCY = 15 SECONDS,	/* Flush events to the target within 15 seconds */
		STARTUP_STATE = ON	/* Start the session automatically when the instance starts */
	);

ALTER EVENT SESSION [DeadlockMonitor] ON SERVER
	STATE = START;

Note that the database engine service account needs to be granted write access to the target location.
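
Once the session has been started, a quick sanity check against the DMV of running sessions confirms it's up (just a sketch; the session name is the one created above):

SELECT name, create_time
	FROM sys.dm_xe_sessions
	WHERE name = N'DeadlockMonitor';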

For ease of multi-server management, I recommend setting up a centralized file share to store all the output files. The full script was written with this in mind, and uses the server name as the file name automatically.

Security note: Deadlock graphs may contain sensitive information, such as data values in DML statements. The target location should be secured appropriately -- consider using subfolders in the target path to separate by server if a single folder isn't okay for your environment.

 

So, now that we've captured some deadlocks (yay?), how do we figure out what happened? Here's a starting point. It reads the output files and returns the event data (in this case, a deadlock graph) as XML. This script can be used with all types of Extended Events file targets, not just for deadlocks.

SELECT
	object_name,
	CAST(event_data AS xml) AS EventXml
FROM
	sys.fn_xe_file_target_read_file
	(
		'.xel',	/* Path to the .xel target file(s); wildcards are allowed */
		'.xem',	/* Required for 2008, optional for 2012 */
		NULL, NULL
	);

XPath can be used to further break things down, or the event data can be copied as text and saved as a .xdl file, which we can then open up in Management Studio to see the pretty deadlock graph. If you have a centralized management box, this script could be the basis of an Extended Events aggregation and reporting system. (Hopefully, you aren't dealing with that many deadlocks, but this type of system could be used for other purposes.)
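
If you want to skip straight to the graph itself, here's a rough sketch of that XQuery idea (the path below assumes the 2012 shape of the xml_deadlock_report event data, and the same file path placeholder as above):

SELECT
	xed.EventXml.query('(event/data[@name="xml_report"]/value/deadlock)[1]') AS DeadlockGraph
FROM
	sys.fn_xe_file_target_read_file('.xel', NULL, NULL, NULL) AS xef
	CROSS APPLY (SELECT CAST(xef.event_data AS xml) AS EventXml) AS xed;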

In a future post, we'll look at how to analyze deadlock graphs.

Sep 25, 2012

Why would I want a read-only copy of my database?

When an OLTP database goes into production, it's automatically expected to meet certain demands: work correctly and reliably, store data safely and securely, and meet the performance needs of the users.

Depending on what the database is used for, there often comes a time when the business asks us to go back to the data that has accumulated over time, and answer questions to drive the business forward. The business may even want -- or need -- those answers as soon as new data is entered into the system.

These new demands place an additional workload on the database that may or may not have been anticipated when the system was first designed. Sometimes this workload can consume a significant amount of resources.

No system can be designed from the get-go to be perfectly future-proof. Sometimes changing the design of the system is more expensive than creating a new, totally separate system to answer the business questions. Sometimes the production database already handles a workload near its maximum capacity, and couldn't maintain the original requirements after adding the new workload, even if the system was perfectly designed.

How could performance suffer with the additional workload?

  • Memory pressure: reading lots of historical data could potentially bump current data out of memory, causing disk thrashing to keep the required data in memory. This is even more perilous if the existing database is larger than the buffer pool. Using a different SQL instance or physical machine could alleviate this issue.
  • CPU usage: the workload may need to perform very intensive calculations on a lot of data all at once, thus slowing down other user queries. Business analysis queries may use parallelism, which will increase CPU usage, add contention for worker threads, and increase the probability of blocking and/or deadlocks. Using a different physical machine would alleviate this issue.
  • Locking, blocking, and deadlocks: if the production database runs at the default isolation level of READ COMMITTED* (or higher) and the business questions require us to use the same level (or higher), there is the potential to create blocks and/or deadlocks, particularly if the existing system is experiencing blocking or deadlocks already. This problem can be solved by using a separate copy of the data; in particular, a read-only database does not even take locks (because there are no writes allowed) and therefore there is no blocking, and no possibility of deadlocks.

The requirements for our project are as follows:

  • Answer the business questions
  • Minimize the impact on the production database
  • The data source should be transactionally consistent (most likely)
  • The data source should be kept up-to-date automatically (most likely)
  • The data source should be reconciled to a point-in-time according to the business needs (current as of today, or as of the last hour, etc.)
  • We don't need to make changes to data, so a read-only version would be okay

So how do we meet the requirements? A read-only copy of the production database may be the solution. In SQL Server, there are many ways of doing this, depending on our exact needs.

 

Transaction Log Shipping

Take a transaction log backup; copy the backup to another location; restore the backup. That's how log shipping works. Log shipping can be configured to restore the backups either in NORECOVERY mode, or STANDBY mode. The latter is what we're interested in, because it allows us to read the data.
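
For the roll-your-own flavour mentioned below, the restore side might look something like this (a minimal sketch with hypothetical file names; a real implementation also needs to track which backups have already been restored):

-- Restore the next log backup, leaving the database readable between restores
RESTORE LOG [ProductionDB_Standby]
	FROM DISK = N'\\CentralFileShare\ProductionDB_20121025_1200.trn'
	WITH STANDBY = N'D:\SQLData\ProductionDB_Standby_undo.tuf';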

Pros:

  • Very easy to set up, and simple to understand.
  • Usually we're already taking transaction log backups, so there's no additional load on production.
  • The log backups can be restored on another server.
  • Although all editions except Express have built-in tools to configure log shipping, it can still be accomplished in Express by rolling our own mechanism. After all, log shipping is just glorified transaction log backup and restore.

Cons:

  • May require us to change our backup strategy, depending on how frequently the read-only copy needs to be updated.
  • Users have to be kicked out of the standby database to restore subsequent backups.
  • Requires enough storage to hold an entire second copy of the production database.

Notes:

  • The production database has to be using either the FULL or BULK_LOGGED recovery model.
  • CPU and disk resources are required to restore the backups.
  • Network resources may be required to copy the backups to a different location.
  • This gives us an entire copy of the database (this could be a pro or con).

 

Database Snapshots

This gives us a point-in-time, transactionally consistent copy of the entire database. We would keep the copy we read from up-to-date by dropping the snapshot, and taking a new snapshot at a later time.
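
Taking a snapshot is a single statement (a minimal sketch with hypothetical names; there must be one sparse snapshot file per data file of the source database, and NAME must match the source's logical file name):

CREATE DATABASE [ProductionDB_Snapshot]
	ON (NAME = N'ProductionDB_Data', FILENAME = N'D:\Snapshots\ProductionDB_Snapshot.ss')
	AS SNAPSHOT OF [ProductionDB];

-- To roll over later: drop the old snapshot and take a new one
DROP DATABASE [ProductionDB_Snapshot];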

Pros:

  • Only requires enough storage to hold the changes made to the production database within the rollover window. This could allow us to take and keep multiple snapshots at different times and roll them over as a sliding window, without taking up N times the amount of storage of the original database.
  • Very little overhead to take a snapshot (therefore, it's also fast).
  • All database recovery models are supported.

Cons:

  • Introduces extra disk write workload depending on how many changes are made to the production database.
  • Can only be created within the same instance of SQL Server as the production database.
  • Requires Enterprise Edition or higher.

Notes:

  • Requires that the snapshot files be placed on an NTFS volume.
  • Each file of the source database has a corresponding snapshot file. Try using my sp_TakeSnapshot stored procedure as a base to automate the process of taking a snapshot.
  • This gives us an entire copy of the database.

 

Database Mirroring

As of SQL Server 2012, database mirroring is deprecated, but I'm going to mention it here anyway, because the proposed alternative still... has its issues. Database mirroring sends transaction log records over the wire to another instance of SQL Server, where they get replayed on the copy of the database. While the secondary database is not directly readable, database snapshots can be taken against it, and those are readable. So, the differences between this and snapshots are:

Pros:

  • Mirroring is usually used for high availability, so this could leverage existing infrastructure that would otherwise be underutilized.
  • The copy of the database can (in fact, must) be in a different instance (same machine, or another machine).
  • There's nothing to mess around with in the file system; instead, communication is done through endpoints.

Cons:

  • The production database must be using the FULL recovery model.
  • Requires enough storage for an entire second copy of the database (plus the snapshots).
  • While database mirroring is available in synchronous mode in Standard Edition and higher, we need snapshots for the purpose of this discussion, which requires Enterprise Edition. Enterprise also allows for asynchronous mirroring mode.

Notes:

  • The secondary has to do work to restore the log records, including operations such as index maintenance.
  • Network throughput may be a concern; it's usually better amortized over time than, e.g., log shipping.
  • Likely will need to poke holes in the servers' firewalls to allow communication through.
  • This gives us an entire copy of the database.
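
For reference, the partner wiring is a pair of ALTER DATABASE statements (a minimal sketch with hypothetical server names and port; it assumes the mirroring endpoints already exist and the mirror copy was restored WITH NORECOVERY):

-- On the mirror instance:
ALTER DATABASE [ProductionDB] SET PARTNER = N'TCP://PrincipalServer.domain.local:5022';

-- On the principal instance:
ALTER DATABASE [ProductionDB] SET PARTNER = N'TCP://MirrorServer.domain.local:5022';

-- The readable copy is then a database snapshot created on the mirror, as in the previous section.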

 

SQL Server Replication

Replication is a system built to sit on top of our database that reads committed transactions, converts them to a format any database system can understand, copies the changes to a subscriber, and applies those changes to another database. Unlike the other methods I've covered so far, replication doesn't actually give us a physically read-only copy of the database. It does, however, give us a copy of the data which we can read. There are several different types of replication (snapshot, merge, and transactional), and for the purposes of this discussion I'm going to lump them all together into a single category.
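
To give a feel for the moving parts, here's an abbreviated, hypothetical transactional publication (it assumes a distributor is already configured, and the snapshot and log reader agents still need to be created and run):

-- In the publication database
EXEC sp_replicationdboption
	@dbname = N'ProductionDB', @optname = N'publish', @value = N'true';

EXEC sp_addpublication
	@publication = N'ReadOnlyCopy', @status = N'active';

-- Publish only the objects we actually need to read later
EXEC sp_addarticle
	@publication = N'ReadOnlyCopy',
	@article = N'Orders',
	@source_owner = N'dbo',
	@source_object = N'Orders';

EXEC sp_addsubscription
	@publication = N'ReadOnlyCopy',
	@subscriber = N'ReportingServer',
	@destination_db = N'ProductionDB_Reporting',
	@subscription_type = N'Push';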

Pros:

  • Can create a copy of a subset of our data, including vertical and/or horizontal partitioning of individual tables. Replication is the only SQL Server technology that lets us do this. This can be a huge pro for security of data and storage space, because we can publish only the objects we need to read later.
  • Express Edition can subscribe to all types of publications.
  • Standard Edition can publish all types of publications. **
  • Changes are applied to the subscriber without the need to kick people out. (Warning: this means we need to be mindful of the transaction isolation level we use when running queries.)
  • All database recovery models are supported.

Cons:

  • Requires configuration, storage, and management of an additional database (the distribution database).
  • Can be difficult to understand, configure, secure, and administer. (Don't let this short list of cons fool you.)

Depending on how replication is configured and what we need to accomplish, database schema changes may not propagate to the subscriber. Also, some operations such as index maintenance don't get applied at subscribers, which may or may not be desirable.

 

Availability Groups

This is SQL Server's newest offering, and it's a hybrid of database mirroring and SQL Server failover clustering all rolled into one feature. (This is the reason why database mirroring on its own is deprecated.)
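
For illustration, a readable secondary is declared as part of the replica definition (a minimal sketch with hypothetical node names; the WSFC, the mirroring endpoints, and joining/seeding the secondary are separate steps):

CREATE AVAILABILITY GROUP [ReportingAG]
	FOR DATABASE [ProductionDB]
	REPLICA ON
		N'SQLNODE1' WITH
		(
			ENDPOINT_URL = N'TCP://SQLNODE1.domain.local:5022',
			AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
			FAILOVER_MODE = AUTOMATIC
		),
		N'SQLNODE2' WITH
		(
			ENDPOINT_URL = N'TCP://SQLNODE2.domain.local:5022',
			AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
			FAILOVER_MODE = AUTOMATIC,
			SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)	/* Readable secondary */
		);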

Pros:

  • Allows scale-out of reads to multiple secondaries without the use of database snapshots.
  • Like database mirroring, the read scale-out can also be part of a high availability strategy.

Cons:

  • The production database must be using the FULL recovery model.
  • Requires Enterprise Edition.
  • Must run on a Windows Server failover cluster (WSFC).

Out of all the technologies I've listed, this one is probably the least likely to be a good solution to solve the problem of having a read-only copy of the database (and that's it), unless existing infrastructure is in place for other reasons.

 

So there we have it: 5 different technologies built into SQL Server to give us a read-only copy of our data. In addition to those, there are various storage-level technologies, including replication and snapshots, that can be used for the same purpose. Even if those technologies aren't suitable for your project now, it's worth asking your storage administrator to find out which options are readily available in the event you do need to use them in the future.

Another option is to enable one of the snapshot isolation levels, where only writers block writers. This doesn't provide a copy of the database, and it has performance implications, but it can be a very easy way out in some circumstances.
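
Enabling it is a per-database setting (a quick sketch; switching READ_COMMITTED_SNAPSHOT on typically requires that no other connections are using the database):

-- Optimistic versions of READ COMMITTED and SNAPSHOT isolation, respectively
ALTER DATABASE [ProductionDB] SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE [ProductionDB] SET ALLOW_SNAPSHOT_ISOLATION ON;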

 

* READ COMMITTED is the default isolation level on the user-installable versions of SQL Server, while READ COMMITTED SNAPSHOT is the default for Azure databases (aka Windows Azure SQL Database).

** For completeness, Enterprise Edition is required for a peer-to-peer topology, but this is not a suitable technology to only solve the "read-only copy of our data" problem.

Sep 10, 2012

Enable Instant Data File Initialization (video)

Any time a new portion of a database file (data file or log file) is created, by default, SQL Server writes out zeros -- to start with a clean slate, so to speak.

As the name implies, instant data file initialization is a feature that allows SQL Server to skip the zeroing process for data files. Log files are always zeroed out (because of the way they work internally).

The reason why you'd want to turn on this feature is simple: skipping the zeroing process does less I/O, thereby speeding up the process. This gives many advantages:

  • Recover faster: The tempdb database must be recreated on every instance restart, so skipping the zeroing step gets the instance back online sooner. This could be particularly important if you have an availability SLA to maintain.
  • Restore faster: Data files are initialized before the actual backed-up data pages are copied into the data files.
  • Better response time: If a data file has to auto-grow because of user activity, the user won't have to wait as long for the operation to complete.

Now, of course there is a tradeoff here, and in this case it's security-related: because no zeroing of the files happens with this setting turned on, it may be possible (through erroneous SQL Server behaviour) to access the previously-written data on the disk, which could be absolutely anything. While this is a very, very small risk, it may be too much for your environment. If that's the case, this setting should be turned off (which is the default setting).

In this short video demo, I walk through the steps to enable the feature, and validate that it's been successfully enabled.

 

 

Summary of steps:

  1. Add the database engine service account local group/domain group/service SID to the Perform Volume Maintenance Tasks security policy. (Note: this can also be accomplished using Windows Active Directory Group Policy, if that's a better solution for your environment.)
  2. Restart the database engine service.
  3. Validate the feature is enabled using the script below. There shouldn't be any messages in the Error Log that indicate data files were zeroed.
-- Trace flag 3004 reports extra information about file creation and zeroing;
-- trace flag 3605 redirects that output to the Error Log
DBCC TRACEON(3004);
DBCC TRACEON(3605);

CREATE DATABASE abc;
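
The Error Log can then be checked directly from T-SQL (a quick sketch; with the feature enabled, only the new database's log file should show up in the zeroing messages):

-- Search the current Error Log for zeroing messages
EXEC sys.xp_readerrorlog 0, 1, N'zeroing';

-- Clean up
DROP DATABASE abc;
DBCC TRACEOFF(3004);
DBCC TRACEOFF(3605);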
Sep 6, 2012

Moving the System Databases (video)

Occasionally after installation, the system databases need to be moved from one location to another. In this video demo, I show you how to accomplish this task.

 

 

Key points:

  • Make sure the database engine service account has read/write access to the new file location(s).
  • The paths to the master database files are in the service startup parameters.
  • The paths to the other databases' files are changed using ALTER DATABASE ... MODIFY FILE (see the sketch after this list).
  • The database files themselves can't be copied until the database engine service is stopped.
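
As a minimal sketch of the MODIFY FILE step (assuming a hypothetical D:\SQLData destination; confirm the logical file names in sys.master_files first), moving tempdb might look like this:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'D:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'D:\SQLData\templog.ldf');

-- The new paths take effect the next time the service starts; for databases other than tempdb,
-- the physical files must also be copied to the new location while the service is stopped.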

 

Two things about security which I didn't mention in the video:

  • When granting permissions on the new data container, the principal for the database engine service account will vary depending on your environment. In my case, it was a local group, but you may need to use a domain group, or a service SID. It should be obvious from the source data container.
  • You may want to grant the Everyone group permission to list the folder contents starting from the root, and remove the inherited permission on each data container. This will allow you to browse to the drive when attaching a database (for example) in Management Studio, yet limit access to only the account that should see the files in each instance folder.