What changed in my database last week?

PANIC! A customer clicked through four layers of warning messages and accidentally deleted a bunch of data from the database, but didn’t bother to tell you about it until today.

Great. The database is in FULL or BULK_LOGGED recovery, and we have a full set of transaction log backups that contain all the transactions in the database for the time when things “happened.” Okay… now what? Log backups seem very opaque, as we can’t just open them up in Notepad and expect things to be human-readable.
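If you need to double-check the recovery model first, it’s right there in sys.databases:

SELECT name, recovery_model_desc
	FROM sys.databases;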

Enter the undocumented table-valued function: fn_dump_dblog.

This function reads a transaction log backup file and returns a human-readable (okay, geek-readable) description of the physical log records it contains.

(The sister function fn_dblog does the same sort of thing, except it operates only on the active transaction log of a database, not a transaction log backup.)
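For example, a quick peek at the active log of the current database looks like this (the two NULLs mean no start/end LSN filter):

SELECT [Current LSN], Operation, [Transaction ID], [Begin Time]
	FROM fn_dblog(NULL, NULL);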

Paul Randal wrote a detailed blog entry on the basics of how to use both of these functions, so I won’t cover that here. Where this post differs is in massaging the function’s output into a form that’s much more usable.

The first step is to read the entire backup and dump the output into a temporary table. This will make querying the same set of transactions (or more transactions if you load additional backups into the table) much faster, as the log reading part of things is rather slow.

--INSERT INTO #transactions SELECT *	-- use this form instead to append more backups to an existing table
SELECT * INTO #transactions
	FROM
		fn_dump_dblog
		(
			NULL,	-- Start
			NULL,	-- End
			'DISK',	-- Device Type
			1,		-- File number in backup
			'',		-- Backup file (fill in the full path to the log backup)
			NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
			NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
			NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
			NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
			NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
			NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
			NULL, NULL, NULL
		);

ALTER TABLE #transactions
	ADD
		StartDate datetime NULL,
		EndDate datetime NULL;

UPDATE #transactions
	SET
		StartDate = CAST([Begin Time] AS datetime),
		EndDate = CAST([End Time] AS datetime);

ALTER TABLE #transactions DROP COLUMN [Begin Time];
ALTER TABLE #transactions DROP COLUMN [End Time];

Now that we have the transactions available for querying more readily, let’s show what we need to see in an easy-to-consume format. This works best if you’ve restored a copy of the database in STANDBY mode to a point in time just before the time of interest.

If the script is run in the context of that database, it will show the names of the tables affected and the login SID of whoever made each change, and it proactively builds a DBCC PAGE command for when you want to look at the raw data values. This makes it really easy to inch through the transaction log to figure out what changed using out-of-the-box tools. (Yes, there are third-party tools that do this, too.)
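If you need a refresher, a minimal point-in-time STANDBY restore looks something like this (the database name, file paths, and STOPAT time below are all placeholders to adapt):

RESTORE DATABASE MyDatabaseCopy
	FROM DISK = N'C:\Backups\MyDatabase_Full.bak'
	WITH
		MOVE N'MyDatabase' TO N'C:\Data\MyDatabaseCopy.mdf',
		MOVE N'MyDatabase_log' TO N'C:\Data\MyDatabaseCopy_log.ldf',
		NORECOVERY;

RESTORE LOG MyDatabaseCopy
	FROM DISK = N'C:\Backups\MyDatabase_Log.trn'
	WITH
		STOPAT = '2020-01-06 09:00:00',	-- just before the time of interest
		STANDBY = N'C:\Backups\MyDatabaseCopy_undo.dat';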

DECLARE @startDate datetime = NULL;
DECLARE @endDate datetime = NULL;
DECLARE @minLSN varchar(22) = NULL; /* '00000000:00000000:0000' */
DECLARE @maxLSN varchar(22) = NULL; /* '00000000:00000000:0000' */

SELECT
	a.TransactionId,
	a.Seq,
	a.LSN,
	a.SID,
	a.StartDate AS TransactionStartDate,
	a.EndDate AS TransactionEndDate,
	a.Operation,
	a.TableName,
	a.FileNumber,
	a.PageNumber,
	a.SlotId,
	(
		CASE WHEN a.FileNumber IS NOT NULL THEN
			'DBCC PAGE (''' + DB_NAME() + N''', ' + CAST(a.FileNumber AS varchar(MAX)) + ', ' + CAST(a.PageNumber AS varchar(MAX)) + ', 3) WITH TABLERESULTS'
		END
	) AS DBCCPageCommand
	FROM
	(
		SELECT
			UPPER(t.[Transaction ID]) AS TransactionId,
			ROW_NUMBER() OVER(PARTITION BY t.[Transaction ID] ORDER BY t.[Current LSN]) AS Seq,
			UPPER(t.[Current LSN]) AS LSN,
			bt.StartDate,
			ct.EndDate,
			t.Operation,
			CAST(CONVERT(varbinary, UPPER(LEFT(t.[Page ID], 4)), 2) AS int) AS FileNumber,
			CAST(CONVERT(varbinary, UPPER(RIGHT(t.[Page ID], 8)), 2) AS int) AS PageNumber,
			t.[Slot ID] AS SlotId,
			o.name AS TableName,
			bt.[Transaction SID] AS SID
			FROM #transactions t
			LEFT OUTER JOIN #transactions bt ON ((bt.[Transaction ID] = t.[Transaction ID]) AND (bt.Operation = 'LOP_BEGIN_XACT'))
			LEFT OUTER JOIN #transactions ct ON ((ct.[Transaction ID] = t.[Transaction ID]) AND (ct.Operation = 'LOP_COMMIT_XACT'))
			LEFT OUTER JOIN
			(
				sys.partitions p
				INNER JOIN sys.objects o ON o.object_id = p.object_id
			) ON p.partition_id = t.PartitionId
			WHERE
				(t.Context IN ('LCX_CLUSTERED', 'LCX_HEAP')) AND
				(t.[Transaction ID] != N'0000:00000000') AND
				((@startDate IS NULL) OR (bt.StartDate IS NULL) OR (bt.StartDate >= @startDate)) AND
				((@endDate IS NULL) OR (ct.EndDate IS NULL) OR (ct.EndDate <= @endDate)) AND
				((@minLSN IS NULL) OR (t.[Current LSN] >= @minLSN)) AND
				((@maxLSN IS NULL) OR (t.[Current LSN] <= @maxLSN))
	) a
	ORDER BY
		a.StartDate,
		a.EndDate,
		a.TransactionId,
		a.LSN;
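The SID column isn’t very friendly on its own. If the login still exists on the instance, SUSER_SNAME will translate it to a name; a quick sketch (paste in a SID value from the query output):

DECLARE @sid varbinary(85) = 0x;	-- paste a SID value from the output here
SELECT SUSER_SNAME(@sid) AS LoginName;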

If you feel like playing around, there are many more fields that come back from the function; I’ve chosen to output the set of columns that I find most useful when I need to use this script.

Once you’ve identified when the change occurred, you can run a data comparison tool between the STANDBY database and the current database (or a STANDBY copy from immediately after the change).
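If you don’t have a comparison tool handy, a rough-and-ready alternative is EXCEPT, assuming the two databases are reachable from the same instance (the database and table names here are placeholders):

-- Rows as they existed before the change, but missing or different now
SELECT * FROM MyDatabaseCopy.dbo.ImportantTable
EXCEPT
SELECT * FROM MyDatabase.dbo.ImportantTable;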

A copy of the full script can be downloaded here.

Creating a Backup Maintenance Plan (Video)

Maintenance plans are a great way of getting started with a backup solution for your environment.

While maintenance plans don’t offer great flexibility, particularly when it comes to managing backups across many different servers, they’re often a good starting point because they’re easy to create and simple to understand.

In this demo video, I walk through creating a skeleton maintenance plan that contains three subplans to perform full, differential, and transaction log backups. The only thing left to do is schedule each of the automatically generated SQL Agent jobs to meet your backup and recovery needs.
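While the plan designer hides the details, each subplan ultimately issues one of the standard backup commands, roughly like these (paths are placeholders, and the real jobs add options such as file naming and verification):

BACKUP DATABASE MyDatabase
	TO DISK = N'C:\Backups\MyDatabase_Full.bak';

BACKUP DATABASE MyDatabase
	TO DISK = N'C:\Backups\MyDatabase_Diff.bak'
	WITH DIFFERENTIAL;

BACKUP LOG MyDatabase
	TO DISK = N'C:\Backups\MyDatabase_Log.trn';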