Is my SQL Server’s memory over-committed?

As our applications’ data grows, so usually does the memory required by SQL Server to efficiently process requests for that data. Sometimes those requirements are more than the host operating system instance can handle, and we don’t find out about it until it’s too late and performance takes a nosedive. In this post, we’ll explore what memory over-commit is and why it’s a bad thing, how to mitigate the problem, and how to help prevent it from occurring in the first place.

Cause

It’s pretty obvious that memory over-commit occurs when the amount of memory required by applications exceeds the amount of physical memory available in the host operating system. (This applies equally to both physical and virtual machines. In this post, when I say “host operating system,” I mean an operating system instance that hosts SQL Server, not an operating system instance that hosts virtual machines.)

When the amount of memory required exceeds the amount of physical memory available, Windows uses disk (the page file) as a backing store to satisfy the excess memory requirements. This is why the mechanism is called Virtual Memory: it looks like normal memory to an application, but behind the scenes, Windows is backing it with disk.

How does this happen? Well, first, you’ll notice that I haven’t mentioned anything directly about SQL Server. Virtual Memory is a mechanism of Windows, and so it applies to all applications that run under Windows, including SQL Server. In fact, it’s possible the system is over-committed because of memory requirements from applications other than SQL Server. Usually, though, SQL Server is the largest consumer of memory in a Windows instance, so it’s also usually responsible for causing over-commit problems.

The heart of the issue is controlling the amount of memory SQL Server is allowed to allocate. The only real influence we have over this is the Max Server Memory setting. While that might sound really concrete, the problem is that it… doesn’t actually control the total amount of memory SQL Server can allocate. On SQL Server 2005 to 2008 R2, this setting controls the maximum amount of memory used for the buffer pool only; it doesn’t include other memory pools such as the procedure cache, which can be very significant (gigabytes!) in some scenarios. SQL Server 2012 improves the state of affairs by increasing the scope of what this setting covers. While it’s still not perfect, it’s a welcome improvement to better represent what the setting actually does, and offers greater control of maximum memory utilization. In any event, the point is that this setting underestimates the amount of memory that’s going to be used (sometimes significantly, as mentioned), which can lead to unexpected over-commit.
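As a rough sanity check of how far past the cap an instance is actually going, you can compare the configured value with what the process is holding. This is a minimal sketch, assuming SQL Server 2008 or later (sys.dm_os_process_memory doesn't exist on 2005):

-- Compare the configured cap to the physical memory the SQL Server process is using.
-- On 2005-2008 R2 especially, expect the second number to exceed the first.
SELECT
	c.value_in_use AS MaxServerMemoryMB,
	pm.physical_memory_in_use_kb / 1024 AS ProcessPhysicalMemoryMB
	FROM sys.configurations c
	CROSS JOIN sys.dm_os_process_memory pm
	WHERE c.name = 'max server memory (MB)';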

Symptoms

The performance implications of backing memory with disk can be crippling: disk can be thousands of times slower than physical memory, particularly given where the Windows page file usually ends up, as we don’t normally put it on our fastest, most expensive storage device. Probably the worst-case scenario is a page file placed on a RAID 1 mirror (the typical physical-machine setup), which simply isn’t meant to handle a huge number of random reads and writes.

In order to detect when memory over-commit is happening, you’ll have to be doing Performance Monitor (PerfMon) logging, as you really won’t see anything directly in SQL Server except that things are very slow (more accurately, the wait time associated with retrieving a page from the buffer pool without physical I/O will be high). I strongly recommend setting up 24/7 PerfMon logging in your environment, and at some point I’ll write a post or record a demo video of how to set it up.

Below are the key PerfMon counters you’ll want to record to detect and troubleshoot memory over-commit (a quick SQL-side spot check follows the list). This, of course, is by no means an exhaustive list of all the counters you should be recording.

  • Paging File(_Total)\% Usage – Not surprisingly, this counter can be a dead giveaway to detect if there are issues. If it’s at any value greater than zero, you need to take a closer look at the other counters to determine if there’s a problem. Sometimes a system will be perfectly fine with a value less than 2-3% (it also depends on the size of the page file), but the higher this counter is, the more of a red flag it is. Also, watch this counter to make sure it’s stable, and not creeping up over time.
  • Memory\Available MBytes – If this number is below ~500 (in the absence of page file usage), you’re in the danger zone of over-commit. It’s recommended to keep at least this much memory available not only for unexpected SQL Server usage, but also to cover the case where administrators need to Remote Desktop into the box for some reason. User sessions take memory, so we need to keep some free for emergencies. I won’t get into the amount of memory to keep free on a SQL Server here, as that’s a discussion in itself. My point here is that if this counter is getting too low (less than ~500), you could start getting in trouble soon. I should note also that if the system is currently over-committed, this counter will reflect the amount of virtual memory provisioned, as it gets counted as available memory. So the system could be over-committed, yet appear to have plenty of available memory — look at the other counters to put the number in context.
  • Physical Disk\Disk Reads/sec and Physical Disk\Disk Writes/sec for the disk that has the page file on it – Normal operations do cause some disk activity here, but when memory over-commit happens, these counters will spike up dramatically from the baseline.
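PerfMon is the right tool for trending these counters, but for a quick point-in-time spot check from inside SQL Server (2008 and later), something like this gives a rough equivalent of Available MBytes and the overall memory state:

-- Point-in-time view of physical memory as Windows reports it to SQL Server.
SELECT
	total_physical_memory_kb / 1024 AS TotalPhysicalMemoryMB,
	available_physical_memory_kb / 1024 AS AvailablePhysicalMemoryMB,
	system_memory_state_desc AS MemoryState
	FROM sys.dm_os_sys_memory;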

Since memory over-commit can only happen when the amount of physical memory is exhausted, the system will only become slow after a certain point. In troubleshooting, sometimes a SQL instance (or Windows itself) is restarted, and it fixes the problem for a while, only to return some time later. By now it should be obvious that this happens because after a restart, the SQL Server buffer pool is empty, and there’s no possibility of over-commit until the physical memory is used up again.

Solution

Iteratively lower SQL Server’s Max Server Memory setting (or initially set it to a reasonable value), and monitor the performance counters until the system falls back to a stable configuration. Because of the nature of Virtual Memory, Windows can hold on to swapped-out pages for quite a long time, so it’s possible that the counters will stabilize with page file usage at a higher level than normal. That may be okay: once the pages are swapped back in, they won’t be swapped out again unless the settings on this iteration are still out of whack. If the initial configuration was way off (the default Max Server Memory setting), you may want to restart the box to start with a clean slate, because the counters will be so far out.
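For reference, the setting itself is changed with sp_configure and takes effect without a restart; the 8192 below is only a placeholder, not a recommendation:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 8192 MB is a placeholder value; pick something appropriate for the box and iterate.
EXEC sys.sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;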

It may seem counter-intuitive to lower the amount of memory SQL Server is able to allocate, but it works because SQL Server internally manages which sets of pages in memory are hot and cold, an insight Windows doesn’t have. By adjusting the Max Server Memory setting down, even though the amount of memory available to SQL Server is smaller, it can still perform well by keeping the most active pages in memory and only going to physical disk occasionally for pages that aren’t in the buffer pool, as opposed to potentially going to disk for any memory access.

Prevention

While over-commit can never truly be prevented — users could potentially run other applications on the SQL box that require lots of memory — what you can put in place is an early-warning system by monitoring the PerfMon counters. Third-party software solutions should be able to help with this, particularly if you manage many servers.

Speaking of other applications, if you have any installed on the SQL box (including the third-party monitoring software I just mentioned), it’s doubly important to monitor the state of affairs, as these are variables out of your control. The Max Server Memory setting and the amount of available memory should be more conservative in this case.

It’s also important, particularly if your SQL Server is version 2005 to 2008 R2, to ensure the Max Server Memory setting allows for some future growth in your environment. Because the setting doesn’t encompass the plan cache, even adding a seemingly insignificant database could cause over-commit if many different queries are run against it. The setting and counters should be evaluated as part of the change process. For SQL Server 2012, this is less of a concern for the reasons previously mentioned, but it can still be worth checking things out as part of your regular change process.

Finally, try to avoid letting users remote into the SQL box to do regular work or maintenance, as this can use up a tremendous amount of memory. Nearly all tasks can be accomplished remotely using SQL Server Management Studio and remotely/non-interactively using PowerShell. If your administrators’ workstations aren’t in the same domain as your servers, create a management box on the server domain, and remote into that instead to manage the servers.

Startup Expression Predicates

When we write T-SQL statements, what we’re really doing is describing what data to return. It’s then up to the internals of SQL Server to best decide how to most efficiently return the data we asked for.

Sometimes, there’s extra information we know about, but that SQL Server doesn’t (automatically). Letting SQL Server in on this seemingly redundant information can change how efficiently the data is accessed and returned.

In this post, we’ll walk through a simple parent/child example that exploits a partially denormalized table schema to improve join performance to the child tables. The performance improvement comes through SQL Server producing query plans that contain Startup Expression Predicates, which effectively prevent certain parts of the query plan from executing in some cases.

Test Setup

The first thing we need to do is set up the tables. We’ll need a ProductTypes table, a parent table (Products) and two child tables (ItemProducts and ServiceProducts).

CREATE TABLE [dbo].[ProductTypes]
(
	Id tinyint NOT NULL PRIMARY KEY,
	Description varchar(50) NOT NULL
);

CREATE TABLE [dbo].[Products]
(
	Id int NOT NULL PRIMARY KEY,
	ProductTypeId tinyint NOT NULL
		FOREIGN KEY REFERENCES [dbo].[ProductTypes](Id)
);

CREATE TABLE [dbo].[ItemProducts]
(
	ProductId int NOT NULL PRIMARY KEY
		FOREIGN KEY REFERENCES [dbo].[Products](Id),

	ItemColumn int NOT NULL
);

CREATE TABLE [dbo].[ServiceProducts]
(
	ProductId int NOT NULL PRIMARY KEY
		FOREIGN KEY REFERENCES [dbo].[Products](Id),

	ServiceColumn int NOT NULL
);

In this type of design, there will only ever be a single row in one of the child tables for each row in the parent table. This is typically handled by some form of business logic (stored procedures or views) and enforced by constraints, but I want to keep this example simple, so I’m only mentioning it for the sake of completeness and to describe what the data is going to “look” like.
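For completeness, one common way to actually enforce that rule with constraints (not used in this example; the constraint names below are just illustrative) is to carry ProductTypeId into each child table, pin it with a CHECK, and point a composite foreign key back at the parent. A sketch for ItemProducts follows; ServiceProducts would be the same with ProductTypeId = 2:

-- A composite unique constraint on the parent so (Id, ProductTypeId) can be referenced.
ALTER TABLE [dbo].[Products]
	ADD CONSTRAINT UQ_Products_Id_ProductTypeId UNIQUE (Id, ProductTypeId);

-- Carry the type into the child table, pin it to 1, and make it agree with the parent.
ALTER TABLE [dbo].[ItemProducts]
	ADD ProductTypeId tinyint NOT NULL
		CONSTRAINT DF_ItemProducts_ProductTypeId DEFAULT (1);

ALTER TABLE [dbo].[ItemProducts]
	ADD CONSTRAINT CK_ItemProducts_ProductTypeId CHECK (ProductTypeId = 1);

ALTER TABLE [dbo].[ItemProducts]
	ADD CONSTRAINT FK_ItemProducts_Products_Type
		FOREIGN KEY (ProductId, ProductTypeId)
		REFERENCES [dbo].[Products](Id, ProductTypeId);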

Okay, let’s add some test data so we can run some queries:

INSERT INTO [dbo].[ProductTypes](Id, Description)
	VALUES
		(1, 'Item'),
		(2, 'Service');
		
INSERT INTO [dbo].[Products](Id, ProductTypeId)
	VALUES
		(1, 1),
		(2, 2);
		
INSERT INTO [dbo].[ItemProducts](ProductId, ItemColumn)
	VALUES (1, 50);
	
INSERT INTO [dbo].[ServiceProducts](ProductId, ServiceColumn)
	VALUES (2, 40);

Now we have rows representing one ItemProduct, and one ServiceProduct.

Querying the Data

First let’s start by looking at a typical query that might be run against these tables:

SELECT
	p.Id AS ProductId,
	p.ProductTypeId,
	COALESCE(ip.ItemColumn, sp.ServiceColumn) AS OtherColumn
	FROM [dbo].[Products] p
	LEFT OUTER JOIN [dbo].[ItemProducts] ip WITH(FORCESEEK) ON
		ip.ProductId = p.Id
	LEFT OUTER JOIN [dbo].[ServiceProducts] sp WITH(FORCESEEK) ON
		sp.ProductId = p.Id;

(Note: the hints are not standard, but are needed for demonstration purposes; I got a nested loops/table scan plan by default. See the final section of this post for some extra discussion.)

Since each product row will only exist in one of the child tables, we have to use LEFT joins to get any results. The query plan looks like this (click for full size):

Startup Expression Predicates 1

We can see that for each row in the Products table, SQL Server must join to both child tables in case there are rows there. Legitimately there could be, as the only thing preventing that is our business logic. SQL Server doesn’t understand that, so it has no choice but to ensure correctness and do the extra work.

Here’s where the magic comes in. We know that for a given ProductTypeId, rows will only exist in one of the child tables. If SQL Server knew that, then it would only have to join to one child table for each row in Products.

Let’s try this query:

SELECT
	p.Id AS ProductId,
	p.ProductTypeId,
	COALESCE(ip.ItemColumn, sp.ServiceColumn) AS OtherColumn
	FROM [dbo].[Products] p
	LEFT OUTER JOIN [dbo].[ItemProducts] ip WITH(FORCESEEK) ON
		(ip.ProductId = p.Id) AND
		(p.ProductTypeId = 1) /*****/
	LEFT OUTER JOIN [dbo].[ServiceProducts] sp WITH(FORCESEEK) ON
		(sp.ProductId = p.Id) AND
		(p.ProductTypeId = 2) /*****/

Now we’re telling SQL Server something about our business logic. Let’s see if this improves the execution plan:

Startup Expression Predicates 2

That’s better. SQL Server has added two Filter operators — one for each child table — that reject rows that don’t satisfy the Startup Expression Predicate (in other words, the extra business logic we told SQL Server). This results in only a single seek against the proper child table for each row in the Products table. This could provide a big performance boost: for the number of child tables (m) and the number of parent rows (n), this approach will always execute only n seeks (thus making the number of seeks independent of the number of child tables), instead of m*n as the first approach does. This does of course come at the penalty of storage to denormalize enough information (ProductTypeId in this case) to drive the process, but usually that’s not going to be a huge hit (most likely 1 byte per row in the parent table).

As a bonus, here’s a different approach to writing the same query. This form may be more appropriate for some things, depending on what you’re trying to do:

SELECT
	p.Id AS ProductId,
	p.ProductTypeId,
	a.OtherColumn
	FROM [dbo].[Products] p
	CROSS APPLY
	(
		SELECT
			ItemColumn AS OtherColumn
			FROM [dbo].[ItemProducts] ip
			WHERE
				(ip.ProductId = p.Id) AND
				(p.ProductTypeId = 1) /*****/
				
		UNION ALL
		
		SELECT
			ServiceColumn
			FROM [dbo].[ServiceProducts] sp
			WHERE
				(sp.ProductId = p.Id) AND
				(p.ProductTypeId = 2) /*****/
	) a;

And here is the resulting query plan that contains the Startup Expression Predicate Filter operators:

Startup Expression Predicates 3

Conclusion

Sometimes giving SQL Server more information than you might think is necessary can help to improve the query plans that are generated. Certainly in cases like this parent/child example, we were able to exploit a denormalized ProductTypeId column to drive the index seeks to the child tables, and make the query scale much better. The result in this case was that the total number of seeks against the child tables became independent of the number of child tables, while still retaining the original query logic. Look for opportunities like this in your queries to give SQL Server extra hints about your table schema — you can be rewarded with more scalable queries.


More?

As I was playing around with these examples, in particular the second query, I found it interesting that for some reason, if the plan used a scan operator as the lower input of the nested loops join (such as I got by not using the FORCESEEK hints), there were no startup expression predicates to be found (nor Filter operators). Instead, the predicate ends up on the nested loops operator itself, with each child table scanned for every upper input row. This is somewhat puzzling, as I can’t think of a reason why the lower input couldn’t be protected by a startup expression in that scenario as well. (Note: I only tested on a 2008 R2 RTM instance.)

Tuning Diagnostic Queries

Most of the time when we talk about tuning, it’s related to improving the performance of a production workload. Maybe (hopefully) we’ve gone as far as tuning more “hidden” code like stored procedures, triggers, and the black boxes that are user-defined functions. But what about the other queries we run against our production servers?

Maybe we use 3rd-party management/monitoring software, or built-in tools like Activity Monitor or Management Data Warehouse (aka Data Collector), that periodically check up on our servers. In these cases, if we identify these systems as a performance problem, we’re pretty much stuck with either reducing the polling frequency, abandoning the use of the system entirely, or pestering the vendor until the problem is fixed. That’s pretty straightforward.

There’s another class of queries we run against our production servers, though: diagnostic queries. If a server is in performance trouble, or we need to go troubleshooting for blocking, or even if we’re proactively searching for issues, we automatically whip out our trusty folder of scripts and start running queries to narrow things down; queries that could potentially make the problem worse by adding extra load to the server, or by adding unnecessary time to the troubleshooting process.

It’s impossible, of course, to eliminate the need for the database engine to do some processing to return the results of our queries (the observer effect), but how much impact are our queries having on the server? Maybe in the grand scheme of things, not a lot. During a troubleshooting session, however, things are very different: people are anxious for answers, the problem needs to be solved now, and we’re working as hard as we can to not only fix the problem, but above all else, to not make things worse.

Now, I’m going to give an example, and I have to apologize in advance to Paul Randal and/or Glenn Barry because I’m about to single them out. Sorry, guys.

A good while back, Paul posted an excellent diagnostic query to analyze wait stats (note: I never saw Glenn’s original query), and I had stolen borrowed used it to troubleshoot some issues in the environment I work in. Of course, I played around with it before that, to see that it was indeed a useful script to keep around. But when I ran it against the production server in peril, I was floored because it took so long to return the results I needed to see. I was also puzzled that after I reset the wait stats and then ran the query a few seconds later, the results… just didn’t look right for what I was expecting. What was going on? It was just a relatively simple SELECT statement from a DMV, which is usually pretty quick. I turned on STATISTICS IO and STATISTICS TIME and ran the query again:

SQL Server parse and compile time: 
   CPU time = 0 ms, elapsed time = 0 ms.

(16 row(s) affected)
Table 'Worktable'. Scan count 7, logical reads 4664, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 812 ms,  elapsed time = 3469 ms.

Shock and/or awe. I think we found the problem! The diagnostic query was consuming a tremendous amount of resources just to tell us that the system was having its resources consumed tremendously.

Well we’ve got to do something about that, haven’t we? Ultimately the query got pulled apart, and using a pre-computation approach involving table and scalar variables, this is where it ended up:

SQL Server parse and compile time: 
   CPU time = 16 ms, elapsed time = 22 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.
Table '#58739F6F'. Scan count 0, logical reads 92, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 192, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 15 ms,  elapsed time = 3 ms.
Table '#58739F6F'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

(16 row(s) affected)
Table '#58739F6F'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 1 ms.

Much better. (You can find the modified script on the Scripts & Code Resources Page. Note that it doesn’t do exactly the same thing as the original.)
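For illustration, here’s a stripped-down sketch of the pre-computation pattern (this is the general shape only, not the actual modified script): snapshot the DMV once into a table variable, compute the total into a scalar variable, and run the rest of the math against those copies instead of the DMV.

DECLARE @waits TABLE
(
	WaitType nvarchar(60) NOT NULL,
	WaitTimeMs bigint NOT NULL,
	SignalWaitTimeMs bigint NOT NULL,
	WaitingTasks bigint NOT NULL
);

DECLARE @totalWaitTimeMs bigint;

-- Hit the DMV exactly once.
INSERT INTO @waits(WaitType, WaitTimeMs, SignalWaitTimeMs, WaitingTasks)
	SELECT
		wait_type,
		wait_time_ms,
		signal_wait_time_ms,
		waiting_tasks_count
		FROM sys.dm_os_wait_stats
		WHERE wait_time_ms > 0;

-- Pre-compute the total once, instead of re-aggregating for every output row.
SELECT @totalWaitTimeMs = SUM(WaitTimeMs) FROM @waits;

-- All further math runs against the copies, not the DMV.
SELECT
	WaitType,
	WaitTimeMs,
	SignalWaitTimeMs,
	CAST(100.0 * WaitTimeMs / @totalWaitTimeMs AS decimal(5, 2)) AS PercentOfTotal
	FROM @waits
	ORDER BY WaitTimeMs DESC;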

So what can we learn from this?

  • While DMVs are usually fast, doing further analysis on the results can be expensive, depending on what we do, of course.
  • Test diagnostic queries for performance just as much as for functionality.
  • Fully test diagnostic queries long before they get thrust into the heat of battle. This is sword-sharpening for DBAs.
  • For the scripts that just… perform poorly… and there’s nothing more we can do about it, add comments to the script body that say as much (see the example header after this list). Whoever is about to run the script should be made aware that the query will take some time to finish. The warning will prevent situations like worrying about whether or not it’s working, or it may even alert the person not to run the script at all and use a different/similar method instead.
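As an example of that last point, a header along these lines (the wording is just a placeholder) is enough to set expectations:

/*************************************************************************
   WARNING: this diagnostic query is expensive (large sorts/worktables).
   Expect it to run for a while on a busy system, and consider whether a
   lighter-weight alternative is more appropriate during an incident.
*************************************************************************/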

How much overhead do CHECK constraints have?

When designing new database structures, I feel it’s really important to make sure that the data goes into the new tables as cleanly as possible. Doing this early on helps prevent bugs from creeping into the system as it’s developed. And preventing bugs as early as possible in the development cycle is almost certainly the most cost-efficient strategy — cleaning up bad data later on is extremely expensive, time consuming, and frustrating.

One of the tools that can be used to reject bad data values going into tables is the CHECK constraint. This type of constraint is meant to be used to validate a single row based on a predicate that’s a function of any combination of the table columns in that row. (CHECK constraints can be used for things beyond this definition… but that doesn’t mean they should be.)

In this post, I’m not going to address any more of the advantages of constraining data values. What I do want to investigate is performance.

The extra logic to do this validation necessarily has additional processing cost associated with it. The question is: how much extra? Does it really make a difference? What kind of general guidelines can be established for performance purposes?

The Setup

I decided to run two series of tests: one with a very narrow table (3 columns), and one with a wide table (21 columns). We’ll INSERT a bunch of test data into each of the tables. A baseline is established for each test, where the tables have no constraints. Then, different constraints are added, and hopefully we’ll be able to see a difference in the load times.

Here is an abridged version of the table creation script (note: all scripts are available to download in full at the bottom of this post):

-- Narrow, no constraints
CREATE TABLE [dbo].[NarrowWithoutConstraints]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col1 int NOT NULL,
	Col2 varchar(10) NOT NULL
);

-- Wide, no constraints
CREATE TABLE [dbo].[WideWithoutConstraints]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col10 int NOT NULL,
	...
	Col19 int NOT NULL,
	Col20 varchar(10) NOT NULL,
	...
	Col29 varchar(10) NOT NULL
);

-- Narrow, simple constraints
CREATE TABLE [dbo].[NarrowWithConstraints1]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col1 int NOT NULL CHECK (Col1 >= 0),
	Col2 varchar(10) NOT NULL CHECK (LEN(Col2) > 0)
);

-- Wide, simple constraints
CREATE TABLE [dbo].[WideWithConstraints1]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col10 int NOT NULL CHECK (Col10 >= 0),
	...
	Col19 int NOT NULL CHECK (Col19 >= 0),
	Col20 varchar(10) NOT NULL CHECK (LEN(Col20) > 0),
	...
	Col29 varchar(10) NOT NULL CHECK (LEN(Col29) > 0)
);

-- Narrow, complex constraints
CREATE TABLE [dbo].[NarrowWithConstraints2]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col1 int NOT NULL
		CHECK ((Col1 >= 0) AND (Col1 > -9999) AND (Col1 < 5000) AND (Col1 < 9999)),
	Col2 varchar(10) NOT NULL
		CHECK ((LEN(Col2) > 0) AND (DATALENGTH(Col2) = DATALENGTH(LTRIM(RTRIM(Col2)))))
);

-- Wide, complex constraints
CREATE TABLE [dbo].[WideWithConstraints2]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col10 int NOT NULL
		CHECK ((Col10 >= 0) AND (Col10 > -9999) AND (Col10 < 5000) AND (Col10 < 9999)),
	...
	Col19 int NOT NULL
		CHECK ((Col19 >= 0) AND (Col19 > -9999) AND (Col19 < 5000) AND (Col19 < 9999)),
	Col20 varchar(10) NOT NULL
		CHECK ((LEN(Col20) > 0) AND (DATALENGTH(Col20) = DATALENGTH(LTRIM(RTRIM(Col20))))),
	...
	Col29 varchar(10) NOT NULL
		CHECK ((LEN(Col29) > 0) AND (DATALENGTH(Col29) = DATALENGTH(LTRIM(RTRIM(Col29)))))
);

-- Narrow, many complex constraints
CREATE TABLE [dbo].[NarrowWithConstraints3]
(
	Id int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Col1 int NOT NULL
		CHECK ((Col1 >= 0) AND (Col1 > -9999) AND (Col1 < 5000) AND (Col1 < 9999)),
	Col2 varchar(10) NOT NULL
		CHECK ((LEN(Col2) > 0) AND (DATALENGTH(Col2) = DATALENGTH(LTRIM(RTRIM(Col2)))))
);

ALTER TABLE [dbo].[NarrowWithConstraints3]
	ADD CHECK ((Col1 > -1) AND (Col1 > -9998) AND (Col1 < 5001) AND (Col1 < 9998));

...

ALTER TABLE [dbo].[NarrowWithConstraints3]
	ADD CHECK ((Col1 > -9) AND (Col1 > -9990) AND (Col1 < 5009) AND (Col1 < 9990));

ALTER TABLE [dbo].[NarrowWithConstraints3]
	ADD CHECK ((LEN(Col2) > -1) AND ((DATALENGTH(Col2) + 1) = (DATALENGTH(LTRIM(RTRIM(Col2))) + 1)));

...

ALTER TABLE [dbo].[NarrowWithConstraints3]
	ADD CHECK ((LEN(Col2) > -9) AND ((DATALENGTH(Col2) + 9) = (DATALENGTH(LTRIM(RTRIM(Col2))) + 9)));

The Test

Testing these scenarios is pretty simple: shove a bunch of data into a table, and time how long it takes. One of the test harnesses is shown below:

DECLARE @startTime datetime2;
DECLARE @endTime datetime2;
DECLARE @i int = 1250;

SET NOCOUNT ON;

TRUNCATE TABLE [dbo].[<table>];
DBCC CHECKIDENT('dbo.<table>', RESEED, 1);

SET @startTime = SYSDATETIME();

WHILE @i > 0
BEGIN

	INSERT INTO [dbo].[<table>](Col1, Col2)
		SELECT
			number,
			CAST(number AS varchar(10))
			FROM master..spt_values v
			WHERE v.type = 'P'

	SET @i -= 1;

END

SET @endTime = SYSDATETIME();

SELECT DATEDIFF(MILLISECOND, @startTime, @endTime);

So that’s pretty standard. I ran the narrow table test with @i = 1250, and the wide table test with @i = 750, which resulted in 2,560,000 and 1,536,000 rows inserted, respectively. For each table, I ran the test harness 11 times; the first result was discarded, and the rest got recorded and averaged together. Note that the table is TRUNCATEd and the identity seed reset before each run. Also, the database was in SIMPLE recovery.

The Results

My hypothesis was that the constraints (at least the simple ones) would add a little bit of overhead, but I wasn’t sure if it would be measurable, and it was unclear how much (if anything) SQL Server would do to simplify the expressions I used (turns out it didn’t simplify them at all).

                                    Average (ms)   ms/1000 Rows   % Increase
Narrow (no constraints)                   12,401          0.484
Narrow (simple constraints)               13,034          0.509         5.1%
Narrow (complex constraints)              14,505          0.567        17.0%
Narrow (many complex constraints)         25,101          0.981       102.4%

There are a couple of very interesting things here. First of all, the difference between no constraints and a small number of simple constraints is measurable. Second — this is what really surprised me — a complex constraint is significantly more expensive than a simple constraint. This makes sense, because functions are always going to be more expensive than something like a basic comparison. It seems that perhaps the difference is magnified here because the amount of work to do the actual INSERT is small relative to the total amount of work that needs to be done. That said, it appears that many complex constraints could at the very least hobble an INSERT-only workload.

                                    Average (ms)   ms/1000 Rows   % Increase
Wide (no constraints)                     12,477          0.812
Wide (simple constraints)                 14,647          0.954        17.4%
Wide (complex constraints)                20,238          1.318        62.2%

In this test, we would expect the differences to be larger because of the number of constraints involved. As a sanity check, we can see that the ratios of the increases are approximately the same: 62.2% / 17.4% = 3.6 for the wide table vs 17.0% / 5.1% = 3.3 for the narrow table.

If we compare the narrow table with many complex constraints to the wide table with complex constraints, they both have the same number and type of constraints, and the ms/1000 rows numbers show the same sort of increase (0.981 – 0.484 = 0.497) vs (1.318 – 0.812 = 0.506), respectively. This suggests that evaluating constraints has a fixed cost per row, independent of the table schema.

Conclusion

I certainly learned a lot from doing this experiment. There are a few takeaways from the results:

  • Disabling CHECK constraints will reduce the load time, perhaps significantly.
  • A CHECK constraint’s overhead is determined by its complexity of evaluation (comparisons good; function calls not so much).
  • Evaluating CHECK constraints has a fixed cost, and thus a smaller relative cost when the other parts of the process are more expensive.

While these aren’t Earth-shattering conclusions, we did get to validate the best practice of disabling constraints for loading, and see just how much overhead CHECK constraints can add.
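If you do take advantage of that for a bulk load, re-enable the constraints WITH CHECK afterwards so the loaded rows get revalidated and the constraints stay trusted. A minimal sketch, where dbo.SomeTable is just a placeholder name:

-- Disable all CHECK and foreign key constraints on the table for the duration of the load.
ALTER TABLE [dbo].[SomeTable] NOCHECK CONSTRAINT ALL;

-- ... bulk load here ...

-- Re-enable and revalidate; WITH CHECK is what keeps the constraints trusted.
ALTER TABLE [dbo].[SomeTable] WITH CHECK CHECK CONSTRAINT ALL;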

What I want to stress here is that this test was for a relatively narrow situation involving only INSERTs. If your workload is 100% INSERTs, then you definitely need to pay attention to these kinds of things, but if your system isn’t INSERTing 200k rows/sec — or even anywhere near that — it’s likely that the benefits of the constraints will outweigh the performance penalty, as the overhead will be a small portion of the overall workload.

Full scripts and test results:

01 – Create Tables.sql

02 – Narrow Table Test Harness.sql

03 – Wide Table Test Harness.sql

Test Results.xlsx

XML Schema Collections: the XML Accelerator Pedal

While some of us may disagree, I think XML is a great method of moving small-to-medium sized data sets into SQL Server.

Because of XML’s generality, it can be used by pretty much any client application and technology you can think of, and this can be a big advantage to getting the job done quickly and flexibly. I personally like it most for passing in sets of parameters (i.e., multiple ids) into stored procedures, and sending in full sets of data values where it’s much easier to consolidate them on the SQL side rather than using complicated application logic.

The downside is that the XML has to be shredded (i.e. parsed) on the SQL Server side, which uses CPU. Out of all the tiers in our application stack, SQL Server is the most expensive to scale up, so it’s important to be mindful when putting operations with large processing overhead into production. Ideally, we’d like to use XML with as little overhead as possible.

When we tell SQL Server to shred an XML document into a rowset, it can’t make assumptions about what the incoming XML is going to look like. In fact, by default, SQL Server assumes there could be multiple root elements in the document (i.e., it’s an XML fragment)!

By using XML Schema Collections, we tell SQL Server what to expect, and the optimizer is able to tailor the query plan to only include the necessary physical operations. A great side-effect of this is that the incoming XML is automatically validated against the schema, essentially making the input “strongly typed” like we would with regular variables.

Let’s walk through an example to demonstrate just how much of a difference this can make. (Note: XML Schema Collections are available in 2005+, but this example code is written for 2008+.)

First, I’ll create a table into which I’ll dump the shredded data:

CREATE TABLE [dbo].[Table1]
(
	Id int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
	Value1 int NOT NULL,
	Value2 nvarchar(50) NOT NULL
);

Next, I’ll create the XML Schema Collection for comparison testing. You can see all the XML Schema Collections in a database through Management Studio in the Programmability | Types | XML Schema Collections folder.

CREATE XML SCHEMA COLLECTION [dbo].[Table1Rows] AS
N'<?xml version="1.0"?>
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
	<xs:element name="Table1Rows">
		<xs:complexType>
			<xs:sequence>
				<xs:element maxOccurs="unbounded" name="Table1Row">
					<xs:complexType>
						<xs:sequence>
							<xs:element name="Value1" type="xs:integer" minOccurs="1" maxOccurs="1" />
							<xs:element name="Value2" type="xs:string" minOccurs="1" maxOccurs="1" />
						</xs:sequence>
					</xs:complexType>
				</xs:element>
			</xs:sequence>
		</xs:complexType>
	</xs:element>
</xs:schema>';

Finally, here is the meat of the test script, which builds up an XML document, and then shreds and INSERTs it into the table. To run the comparison test, uncomment the (Table1Rows) part of the XML variable declaration on line 15 of the script.

DECLARE @sourceString nvarchar(MAX) =
	CONVERT
	(
		nvarchar(MAX),
		(
			SELECT TOP 100
				CAST(v.number AS nvarchar(4)) AS Value1,
				REPLICATE('a', CAST((v.number / 50) AS int)) AS Value2
				FROM master..spt_values v
				WHERE v.type = 'P'
				FOR XML PATH('Table1Row'), ROOT('Table1Rows')
		)
	);

DECLARE @sourceXml xml/*(Table1Rows)*/ = @sourceString;

DECLARE @i int = 100;
DECLARE @startTime datetime2;
DECLARE @endTime datetime2;

SET NOCOUNT ON;

TRUNCATE TABLE [dbo].[Table1];

SET @startTime = SYSDATETIME();

WHILE @i >= 0
BEGIN

	INSERT INTO [dbo].[Table1](Value1, Value2)
		SELECT
			n.x.value(N'Value1[1]', 'int'),
			n.x.value(N'Value2[1]', 'nvarchar(50)')
			FROM @sourceXml.nodes(N'Table1Rows[1]/Table1Row') n(x);
			
	SET @i -= 1;
	
END

SET @endTime = SYSDATETIME();

SELECT DATEDIFF(MILLISECOND, @startTime, @endTime);

While the results on your machine will no doubt vary versus mine, my test on a 2008 R2 instance took about 3,100 milliseconds without using the XML Schema Collection, and about 280 ms with. This is a huge improvement! And we can see the difference in the query plans.

Before:

XSD Before

After:

XSD After

When I started using XML Schema Collections, I was only using them to strongly-type my XML, but it turns out that shredding XML can see a significant performance improvement as a side-effect. I have read that strong typing slows down writes to XML variables, but I have yet to encounter a situation where writing to an XML variable was necessary, or indeed, a good idea.

If you do start using XML Schema Collections in your own applications, I have a slight caution: you may want to avoid strongly typing parameters that are exposed in the database API (i.e., stored procedure parameters), even though it’s technically correct to construct the parameters that way. The reason why is that XML Schema Collections, once created, cannot be altered in-place (there is an ALTER XML SCHEMA COLLECTION, but it doesn’t do what you might expect). To change the definition, a collection must be dropped and recreated, which means that all objects with strong dependencies must be dropped and recreated as well. I think this is a royal pain — feel free to vote up the Connect feature request to add the expected functionality.

A workaround is to weakly type the API parameters, but immediately cast the parameter to a strongly-typed local variable in the procedure body, like so:

CREATE PROCEDURE [dbo].[Proc1](@p1 xml)
AS
BEGIN

	DECLARE @realP1 xml(MySchemaCollection) = @p1;
	
	...

END;

From an API point of view, that’s not the best solution as the strong type isn’t exposed to the outside, but IMO, it’s a good enough tradeoff from a maintenance point of view, particularly if the Schema Collection is reused in many places.