Sep 25, 2012

Why would I want a read-only copy of my database?

When an OLTP database goes into production, it's automatically required to meet certain demands: it must work correctly and reliably, store data safely and securely, and meet the performance needs of its users.

Depending on what the database is used for, there often comes a time when the business asks us to go back to the data that has accumulated over time, and answer questions to drive the business forward. The business may even want -- or need -- those answers as soon as new data is entered into the system.

These new demands place an additional workload on the database that may or may not have been anticipated when the system was first designed. Sometimes this workload can consume a significant amount of resources.

No system can be designed from the get-go to be perfectly future-proof. Sometimes changing the design of the system is more expensive than creating a new, totally separate system to answer the business questions. Sometimes the production database already handles a workload near its maximum capacity, and couldn't meet the original requirements after taking on the new workload, even if the system were perfectly designed.

How could performance suffer with the additional workload?

  • Memory pressure: reading lots of historical data could potentially bump current data out of memory, causing disk thrashing to keep the required data in memory. This is even more perilous if the existing database is larger than the buffer pool. Using a different SQL instance or physical machine could alleviate this issue.
  • CPU usage: the workload may need to perform very intensive calculations on a lot of data all at once, thus slowing down other user queries. Business analysis queries may use parallelism, which will increase CPU usage, add contention for worker threads, and increase the probability of blocking and/or deadlocks. Using a different physical machine would alleviate this issue.
  • Locking, blocking, and deadlocks: if the production database runs at the default isolation level of READ COMMITTED* (or higher) and the business questions require us to use the same level (or higher), there is the potential to create blocks and/or deadlocks, particularly if the existing system is experiencing blocking or deadlocks already. This problem can be solved by using a separate copy of the data; in particular, a read-only database does not even take locks (because there are no writes allowed) and therefore there is no blocking, and no possibility of deadlocks.

The requirements for our project are as follows:

  • Answer the business questions
  • Minimize the impact on the production database
  • The data source should be transactionally consistent (most likely)
  • The data source should be kept up-to-date automatically (most likely)
  • The data source should be reconciled to a point-in-time according to the business needs (current as of today, or as of the last hour, etc.)
  • We don't need to make changes to data, so a read-only version would be okay

So how do we meet the requirements? A read-only copy of the production database may be the solution. In SQL Server, there are many ways of doing this, depending on our exact needs.

 

Transaction Log Shipping

Take a transaction log backup; copy the backup to another location; restore the backup. That's how log shipping works. Log shipping can be configured to restore the backups either in NORECOVERY mode, or STANDBY mode. The latter is what we're interested in, because it allows us to read the data.
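Stripped of the tooling, one cycle of the mechanism is just a backup and a restore. Here's a minimal sketch (the database, share, and file names are hypothetical):

-- On the primary: back up the transaction log
BACKUP LOG SalesDB
	TO DISK = N'\\backupserver\logs\SalesDB_tlog.bak';

-- On the secondary; WITH STANDBY keeps the database readable between restores
RESTORE LOG SalesDB
	FROM DISK = N'\\backupserver\logs\SalesDB_tlog.bak'
	WITH STANDBY = N'D:\SQLData\SalesDB_undo.dat';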

Pros:

  • Very easy to set up, and simple to understand.
  • Usually we're already taking transaction log backups, so there's no additional load on production.
  • The log backups can be restored on another server.
  • Although all editions except Express have built-in tools to configure log shipping, it can still be accomplished in Express by rolling our own mechanism. After all, log shipping is just glorified transaction log backup and restore.

Cons:

  • May require us to change our backup strategy, depending on how frequently the read-only copy needs to be updated.
  • Users have to be kicked out of the standby database to restore subsequent backups.
  • Requires enough storage to hold an entire second copy of the production database.

Notes:

  • The production database has to be using either the FULL or BULK_LOGGED recovery model.
  • CPU and disk resources are required to restore the backups.
  • Network resources may be required to copy the backups to a different location.
  • This gives us an entire copy of the database (this could be a pro or con).

 

Database Snapshots

This gives us a point-in-time, transactionally consistent copy of the entire database. We would keep the copy we read from up-to-date by dropping the snapshot, and taking a new snapshot at a later time.
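Here's a minimal sketch of the rollover, assuming a database named SalesDB with a single data file (all names and paths are hypothetical); note that every data file of the source database needs a corresponding entry:

CREATE DATABASE SalesDB_snap
	ON (NAME = SalesDB_Data, FILENAME = N'D:\Snapshots\SalesDB_Data.ss')
	AS SNAPSHOT OF SalesDB;

-- Later, roll over by dropping the snapshot and creating a new one
DROP DATABASE SalesDB_snap;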

Pros:

  • Only requires enough storage to hold the changes made to the production database during the rollover interval. This could allow us to take and keep multiple snapshots at different times as a rolling window, without using N times the storage of the original database.
  • Very little overhead to take a snapshot (therefore, it's also fast).
  • All database recovery models are supported.

Cons:

  • Introduces extra disk write workload depending on how many changes are made to the production database.
  • Can only be created within the same instance of SQL Server as the production database.
  • Requires Enterprise Edition or higher.

Notes:

  • Requires that the snapshot files be placed on an NTFS volume.
  • Each file of the source database has a corresponding snapshot file. Try using my sp_TakeSnapshot stored procedure as a base to automate the process of taking a snapshot.
  • This gives us an entire copy of the database.

 

Database Mirroring

As of SQL Server 2012, database mirroring is deprecated, but I'm going to mention it here anyway, because the proposed alternative still... has its issues. Database mirroring sends transaction log records over the wire to another instance of SQL Server, where they get replayed on the copy of the database. While the secondary database is not directly readable, database snapshots can be taken against it, and those are readable. So, the differences between this and plain database snapshots are:

Pros:

  • Mirroring is usually used for high availability, so this could leverage existing infrastructure that would otherwise be underutilized.
  • The copy of the database can (in fact, must) be in a different instance (same machine, or another machine).
  • There's nothing to mess around with in the file system; instead, communication is done through endpoints.

Cons:

  • The production database must be using the FULL recovery model.
  • Requires enough storage for an entire second copy of the database (plus the snapshots).
  • While database mirroring is available in synchronous mode in Standard Edition and higher, we need snapshots for the purpose of this discussion, which requires Enterprise Edition. Enterprise also allows for asynchronous mirroring mode.

Notes:

  • The secondary has to do work to restore the log records, including operations such as index maintenance.
  • Network throughput may be a concern, although the load is usually amortized more smoothly over time than with, e.g., log shipping.
  • Likely will need to poke holes in the servers' firewalls to allow communication through.
  • This gives us an entire copy of the database.
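To make the moving parts concrete, here's a minimal configuration sketch (server names, port, and file paths are hypothetical; it assumes the mirror database has already been restored WITH NORECOVERY, and that authentication between the instances is sorted out):

-- On each partner instance: create the mirroring endpoint
CREATE ENDPOINT Mirroring
	STATE = STARTED
	AS TCP (LISTENER_PORT = 5022)
	FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror first, then on the principal, each pointing at the other server
ALTER DATABASE SalesDB SET PARTNER = N'TCP://otherserver.example.com:5022';

-- On the mirror instance: the snapshot is what we actually read from
CREATE DATABASE SalesDB_snap
	ON (NAME = SalesDB_Data, FILENAME = N'D:\Snapshots\SalesDB_Data.ss')
	AS SNAPSHOT OF SalesDB;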

 

SQL Server Replication

Replication is a system built to sit on top of our database that reads committed transactions, converts them to a format any database system can understand, copies the changes to a subscriber, and applies those changes to another database. Unlike the other methods I've covered so far, replication doesn't actually give us a physically read-only copy of the database. It does, however, give us a copy of the data which we can read. There are several different types of replication (snapshot, merge, and transactional), and for the purposes of this discussion I'm going to lump them all together into a single category.
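As a very rough sketch, configuring a transactional publication looks something like this (it assumes a distributor is already configured, and all the object names are hypothetical; @filter_clause provides horizontal partitioning, while sp_articlecolumn would handle the vertical):

EXEC sp_replicationdboption
	@dbname = N'SalesDB', @optname = N'publish', @value = N'true';

EXEC sp_addpublication
	@publication = N'SalesReadOnly',
	@repl_freq = N'continuous',
	@status = N'active';

-- Publish only the rows we need from a single table
EXEC sp_addarticle
	@publication = N'SalesReadOnly',
	@article = N'Orders',
	@source_object = N'Orders',
	@filter_clause = N'[OrderDate] >= ''20120101''';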

Pros:

  • Can create a copy of a subset of our data, including vertical and/or horizontal partitioning of individual tables. Replication is the only SQL Server technology that lets us do this. This can be a huge pro for security of data and storage space, because we can publish only the objects we need to read later.
  • Express Edition can subscribe to all types of publications.
  • Standard Edition can publish all types of publications. **
  • Changes are applied to the subscriber without the need to kick people out. (Warning: this means we need to be mindful of the transaction isolation level we use when running queries.)
  • All database recovery models are supported.

Cons:

  • Requires configuration, storage, and management of an additional database (the distribution database).
  • Can be difficult to understand, configure, secure, and administer. (Don't let this short list of cons fool you.)

Notes:

  • Depending on how replication is configured and what we need to accomplish, database schema changes may not propagate to the subscriber.
  • Some operations such as index maintenance don't get applied at subscribers, which may or may not be desirable.

 

Availability Groups

This is SQL Server's newest offering, and it's a hybrid of database mirroring and SQL Server failover clustering all rolled into one feature. (This is the reason why database mirroring on its own is deprecated.)
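As a minimal sketch, creating a 2-replica group with a readable secondary looks like this (it assumes the WSFC and the mirroring endpoints already exist; the server names and URLs are hypothetical, and the secondary must still join the group from its own instance):

CREATE AVAILABILITY GROUP SalesAG
	FOR DATABASE SalesDB
	REPLICA ON
		N'SQLNODE1' WITH (
			ENDPOINT_URL = N'TCP://sqlnode1.example.com:5022',
			AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
			FAILOVER_MODE = AUTOMATIC),
		N'SQLNODE2' WITH (
			ENDPOINT_URL = N'TCP://sqlnode2.example.com:5022',
			AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
			FAILOVER_MODE = AUTOMATIC,
			-- This is the part that makes the secondary directly readable
			SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));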

Pros:

  • Allows scale-out of reads to multiple secondaries without the use of database snapshots.
  • Like database mirroring, the read scale-out can also be part of a high availability strategy.

Cons:

  • The production database must be using the FULL recovery model.
  • Requires Enterprise Edition.
  • Must run on a Windows Server failover cluster (WSFC).

Out of all the technologies I've listed, this one is probably the least likely to be a good fit if a read-only copy of the database is all we need, unless existing infrastructure is in place for other reasons.

 

So there we have it: 5 different technologies built into SQL Server to give us a read-only copy of our data. In addition to those, there are various storage-level technologies, including replication and snapshots, that can be used for the same purpose. Even if those technologies aren't suitable for your project now, it's worth asking your storage administrator to find out which options are readily available in the event you do need to use them in the future.

Another option is to enable one of the snapshot isolation levels, where only writers block writers. This doesn't provide a copy of the database, and it has performance implications, but it can be a very easy way out in some circumstances.
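Enabling them is a couple of ALTER DATABASE statements (the database name is hypothetical; note that the second statement needs the database to be free of other active connections to take effect):

-- Lets sessions opt in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Changes the default READ COMMITTED behaviour to use row versioning
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;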

 

* READ COMMITTED is the default isolation level on the user-installable versions of SQL Server, while READ COMMITTED SNAPSHOT is the default for Azure databases (aka Windows Azure SQL Database).

** For completeness, Enterprise Edition is required for a peer-to-peer topology, but this is not a suitable technology to only solve the "read-only copy of our data" problem.

Sep 18, 2012

Will the Real Query Text Please Stand Up?

(While I can't take credit for finding this issue originally -- I heard about it several months ago -- for the life of me I can't remember or find where it was mentioned, so if you know of a reference, please leave a comment.)

When I first mentioned that developers should try to move away from using single-line comments (using --) and towards block comments (using /* */) in T-SQL, it was understandable that I got a bunch of "ummm... what?" looks.

All comments are created equal, right?

Well, obviously not, or this post wouldn't need to exist.

Aside from the obvious differences, there's a particular issue with single-line comments that's indirect and somewhat subtle, yet it can be significant. If you've worked with SQL Server Profiler for any length of time, there is a decent chance you've come across this issue already. Whether you realized it or not is a separate question entirely.

Onwards.

The problem is that Profiler word-wraps captured SQL statements according to its window size. While that might seem okay at first glance, the fact is that artificially introducing extra newlines when the statement syntax depends on the arrangement of newlines... can lead to interesting things. Not only that, but the syntax highlighting is applied after the word-wrapping takes place, which exacerbates the problem.

Observe:

SELECT
	fulltext_catalog_id--, name, path, is_default, is_accent_sensitivity_on, data_space_id, file_id, principal_id, is_importing
	FROM sys.fulltext_catalogs

This query selects a single column from sys.fulltext_catalogs, while the other columns are commented out.

If I shrink the Profiler window (both to demonstrate the problem, and so screenshots nicely fit onto this page) and run the query, the statement appears like this:

That... doesn't look right.

If you had only seen the query in Profiler and didn't know what the original query was, do you know which query was actually executed?

The really dangerous thing is that if you inspect the erroneous-looking query closely, you'll find that it is a different, yet syntactically valid, query.

SELECT
	fulltext_catalog_id--, name, path, is_default, is_accent_sensitivity_on,
data_space_id, file_id, principal_id, is_importing
	FROM sys.fulltext_catalogs

(If you don't see it, this query aliases the fulltext_catalog_id column as data_space_id, and returns 4 columns in the result set.)

Here's what you see in Profiler when you run it:

Looks familiar, doesn't it?

 

Tips:

  • Use block (/* */) comments wherever possible in production code, particularly if the comment occurs within part of a query. IMO, it's easier and safer to just get in the habit of using block comments all the time rather than picking and choosing where to use each type, or having to teach this kind of esoteric information. I rarely take issue with personal coding style as long as code is legible, but this is an exceptional circumstance that can directly affect things beyond readability.
  • Keep lines short, and break up long lines (this is a good practice for readability anyway). This can be accomplished in 99.999% of cases by using a combination of structured parentheses, indentation, and aliases.
  • Any time you see a single-line comment (--) within the query text in Profiler, don't assume what you're being shown is what was actually executed. Copy the TextData cell (Ctrl+Shift+C) into a Management Studio query window to see the real query text.
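To illustrate the first tip, here's the original example rewritten with a block comment; because the comment is explicitly terminated by */, no amount of re-wrapping can change which columns get selected:

SELECT
	fulltext_catalog_id/*, name, path, is_default, is_accent_sensitivity_on, data_space_id, file_id, principal_id, is_importing*/
	FROM sys.fulltext_catalogs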

(Note: the 2008 R2 version of Profiler is shown above; the 2012 version also exhibits this behaviour.)

Sep 12, 2012

Fun with Multi-Column Foreign Keys and Nullable Columns

This is more of a reminder/warning/spread-the-word post to be really, really careful when dealing with multi-column foreign key relationships when one or more of the foreign columns allows NULL values.

Things... may not work as you might expect.

Consider the following script:

CREATE TABLE PrimaryTable
(
	Col1 int NOT NULL,
	Col2 int NOT NULL,
	
	CONSTRAINT PK_PrimaryTable
		PRIMARY KEY(Col1, Col2)
);

CREATE TABLE ForeignTable
(
	Id int IDENTITY PRIMARY KEY,
	Col1 int NOT NULL,
	Col2 int NULL,
	
	CONSTRAINT FK_ForeignTable_PrimaryTable
		FOREIGN KEY(Col1, Col2) REFERENCES PrimaryTable(Col1, Col2)
);

INSERT INTO PrimaryTable(Col1, Col2)
	VALUES (0, 0), (1, 1);


-- Statement 1
INSERT INTO ForeignTable(Col1, Col2)
	VALUES (0, 0);

-- Statement 2
INSERT INTO ForeignTable(Col1, Col2)
	VALUES (0, 1);

-- Statement 3
INSERT INTO ForeignTable(Col1, Col2)
	VALUES (0, NULL);

 

Statement 1 is standard, and succeeds as we'd expect. Similarly as expected, Statement 2 fails with a constraint violation as there's no row in PrimaryTable that has that combination of values.

What about Statement 3? If there was no foreign key, (0, NULL) would be an allowed combination of values, but what happens when there is a foreign key?

The answer is that the insert still succeeds.

I had expected it would fail just like Statement 2, because there isn't a (0, NULL) combination in PrimaryTable. In fact, it's not even an allowed combination to put in that table at all!

What happens is that NULL values are completely ignored from checking, and the constraint is satisfied as long as the non-NULL values match up to at least 1 row in the primary table. Edit: this is incorrect. If there are any NULLs in the foreign table columns, the constraint is not checked at all, which is even more perilous.
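To drive the point home, we can continue the script above with a value that doesn't exist in PrimaryTable at all:

-- Statement 4: also succeeds, even though 999 appears nowhere in PrimaryTable,
-- because the NULL in Col2 causes the constraint check to be skipped entirely
INSERT INTO ForeignTable(Col1, Col2)
	VALUES (999, NULL);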

Multi-column foreign keys aren't exactly the best practice in the world as far as schema design goes, but you do encounter them in the wild occasionally. If a database is suspect from an integrity point-of-view (which usually means it has many NULLable columns), be aware of this behaviour when doing data cleanup or querying. The corollary is that if this kind of schema is found in a database, poor integrity may not be very far away either.

Add this to your own Blitz script:

-- Find foreign key constraints composed of 2 or more columns
-- (grouping by object_id keeps identically-named constraints in different schemas separate)
SELECT
	fk.name AS ConstraintName,
	COUNT(*) AS ColumnCount
	FROM sys.foreign_keys fk
	INNER JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
	GROUP BY fk.object_id, fk.name
	HAVING COUNT(*) >= 2;
Sep 10, 2012

Enable Instant Data File Initialization (video)

Any time a new portion of a database file (data file or log file) is created, by default, SQL Server writes out zeros -- to start with a clean slate, so to speak.

As the name implies, instant data file initialization is a feature that allows SQL Server to skip the zeroing process for data files. Log files are always zeroed out (because of the way they work internally).

The reason why you'd want to turn on this feature is simple: skipping the zeroing process does less I/O, so creating and growing data files completes faster. This gives several advantages:

  • Recover faster: The tempdb database must be recreated on instance restart, and skipping the zeroing means the instance is back in service sooner. This could be particularly important if you have an availability SLA to maintain.
  • Restore faster: Data files are initialized before the actual backed-up data pages are copied into the data files.
  • Better response time: If a data file has to auto-grow because of user activity, the user won't have to wait as long for the operation to complete.

Now, of course there is a tradeoff here, and in this case it's security-related: because no zeroing of the files happens with this setting turned on, it may be possible (through erroneous SQL Server behaviour) to access the previously-written data on the disk, which could be absolutely anything. While this is a very, very small risk, it may be too much for your environment. If that's the case, this setting should be turned off (which is the default setting).

In this short video demo, I walk through the steps to enable the feature, and validate that it's been successfully enabled.

 

 

Summary of steps:

  1. Add the database engine service account local group/domain group/service SID to the Perform Volume Maintenance Tasks security policy. (Note: this can also be accomplished using Windows Active Directory Group Policy, if that's a better solution for your environment.)
  2. Restart the database engine service.
  3. Validate the feature is enabled using the script below. There shouldn't be any messages in the Error Log that indicate data files were zeroed.
-- Trace flag 3004 reports file initialization (zeroing) operations;
-- trace flag 3605 sends that output to the Error Log
DBCC TRACEON(3004);
DBCC TRACEON(3605);

CREATE DATABASE abc;
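If scrolling through the Error Log isn't appealing, one way to search it is the undocumented xp_readerrorlog procedure (the parameters here mean: current log, SQL Server log, search string); with the feature enabled, only the transaction log file should show up:

EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';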
Sep 6, 2012

Moving the System Databases (video)

Occasionally after installation, the system databases need to be moved from one location to another. In this video demo, I show you how to accomplish this task.

 

 

Key points:

  • Make sure the database engine service account has read/write access to the new file location(s).
  • The paths to the master database files are in the service startup parameters.
  • The paths to the other databases' files are changed using ALTER DATABASE ... MODIFY FILE (see the sketch after this list).
  • The database files themselves can't be copied until the database engine service is stopped.
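Here's a minimal sketch of the ALTER DATABASE step using msdb (the logical file names are from a default installation; the target paths are hypothetical):

ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = N'D:\SQLData\MSDBData.mdf');
ALTER DATABASE msdb MODIFY FILE (NAME = MSDBLog, FILENAME = N'D:\SQLLogs\MSDBLog.ldf');

-- Then stop the service, copy the files to the new locations, and start it back up.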

 

Two things about security which I didn't mention in the video:

  • When granting permissions on the new data container, the principal for the database engine service account will vary depending on your environment. In my case, it was a local group, but you may need to use a domain group, or a service SID. It should be obvious from the source data container.
  • You may want to grant the Everyone group permission to list the folder contents starting from the root, and remove the inherited permission on each data container. This will allow you to browse to the drive when attaching a database (for example) in Management Studio, yet limit access to only the account that should see the files in each instance folder.