Allowing remote connections to SQL Express (the right way) (video)

Sometimes we need to allow remote connections to a SQL Server Express or Developer edition instance, usually on our local machine. In this video demo, I show you how to allow remote access without resorting to disabling the Windows Firewall entirely, which is an unfortunately common solution pervading the internet.



Since there are many possible configurations (which is why I say "default" a lot in the video -- sorry), and part of what I covered applies to all editions of SQL Server, a much more complete reference can be found on MSDN.


Efficiently indexing long character strings

Sometimes we face the requirement to search a table for a specific string value in a column, where the values are very long. The default answer is to create an index, but we must remember that index keys cannot exceed a 900-byte size limit, and LOB columns (which include the MAX type variants) cannot be in the index key at all. In this post, we'll explore a method to efficiently index long character strings where the search operation is moderately-to-highly selective.


The Methodology

Since we're faced with a size restriction on the index key (or the string column cannot be included in the key at all), we need to apply some kind of transformation to the string data to make its representation -- and hence the index key size required -- smaller.

Of course, we can't simply compress an infinite amount of data into a comparatively very small field, so the index we create won't be able to support all the same operations as if we had indexed the original field. But most of the time, that isn't a problem -- generally, we need to search for an exact match, or a string starting (or maybe even ending) with a given few characters.

We're going to do this by performing a computation on the original values, storing the result in a new column in the table, and creating an index on that column (there are two ways to do this, which we'll get into shortly). In terms of the computation itself, there are a few different options that can be used, depending on what needs to be accomplished. I'll introduce two basic approaches here, and then we'll run through some code that demonstrates one of them.

  1. Use the LEFT function to grab the first several characters and discard the rest. This method is appropriate for both exact matching and wildcard searching with a few given characters (i.e., LIKE 'abc%'), and because the original text stays intact, it supports case/accent-insensitive matching without doing anything special. The drawback of this method is that you really have to know the data. Why? To improve the efficiency of queries, the number of characters used should be enough that the computed values are more or less unique (we'll see why in the example). It may be the case that the original field always has a consistent set of characters at the beginning of the string (maybe they're RTF documents or something), so most of the computed values would end up being duplicates; if so, this approach won't work, and you'll want to use the other approach instead.
  2. Use a traditional hash function (such as HASHBYTES) to compute a numeric representation of the original field. This method supports exact matching (case-insensitivity with a small tweak, and accent-insensitivity with a bit of code), but not partial matching, because the original text is lost and there's no correlation between the hash value and the original text. It's also critical to watch the data types involved, because hash functions operate on bytes, not characters. This means hashing an nvarchar(MAX) value N'abc' will not give the same result as hashing a varchar value 'abc', because the former is always 2 bytes per character. Those are the downsides. The upside is that traditional hashing works even when the data values differ very little, or when the text starts with the same sequence of characters.
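The two approaches above can be sketched as follows (the table and column names here are placeholders, not part of the demo below):

```sql
-- Approach 1: keep only the first several characters (20 is an arbitrary
-- choice here; pick a length that makes the values nearly unique in your data)
SELECT LEFT(StringCol, 20)
	FROM [dbo].[SomeTable];

-- Approach 2: a traditional hash. Note the data type pitfall: varchar and
-- nvarchar inputs hash differently, because HASHBYTES sees bytes, not characters.
SELECT
	HASHBYTES('MD5', 'abc') AS varchar_hash,   -- hashes 3 bytes
	HASHBYTES('MD5', N'abc') AS nvarchar_hash; -- hashes 6 bytes; a different result
```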

Both of these methods are hash functions of a sort, as they convert a variable-length input to a fixed-length output. I suppose the first approach isn't technically a hash function, because the input could be shorter than the number of characters desired in the output, but the point is that both approaches "sample" the original data in a way that lets us represent the original value more or less uniquely without carrying around the original data itself. Maybe it's more correct to call them lossy compression algorithms. In any event, I'll refer to the results of these functions as hash values for the remainder of the post.

Because this method discards some (or probably most) of the original data, our code must take on some extra complexity to handle the possibility of duplicate hash values (hash collisions). This is more likely to happen with the first approach, as the original data values are partially exposed. It's still possible, however rare, for the second approach to produce duplicates too, even if the input text is unique, so we must account for that as well.

Speaking specifically about using a traditional hash function such as HASHBYTES, there are a couple of major things to consider. First, there are several different algorithms to choose from. Because we aren't going to use the hash value for any kind of cryptographic functionality (i.e., we don't care how difficult it is to "unhash" the values), it's largely irrelevant which algorithm we choose, except to pick one that produces a hash value of an acceptable size. A larger hash value size reduces the possibility of collisions because there are more possible combinations, but as I already mentioned, we have to handle collisions anyway because no hash function is perfect; therefore, it's okay to use a smaller hash value size to save storage space, and to pick the hash algorithm that's least expensive to compute.1

Second, we need to make sure the hash values represent a consistent portion of the string input values. HASHBYTES is great because it's built-in, but its input parameter is limited to 8000 bytes. This means that if the string values to be hashed can be longer than 8000 characters (varchar) / 4000 characters (nvarchar), a slightly different approach should be taken. I would recommend creating a CLR scalar function, as the .NET hashing classes have no input size limitation.


The Test

As I mentioned previously, there are two ways to store the hash values so they can be indexed:

  1. Computed column. This method has two advantages: first, the SQL Server engine automatically keeps the hash value up-to-date if the source column changes. Second, if we use a non-PERSISTED column, we save the storage space for the hash values in the base table. The main disadvantage is that we may have to poke and prod our SQL statements to get the plan shape we want; this is a limitation of computed columns in general, not of what we're doing here specifically. Note: if your goal is to create hash values in two tables and JOIN them together on that column, it may not be possible to do it efficiently using computed columns because of bizarre optimizer behaviour. Any time computed columns are involved, always inspect the execution plan to make sure SQL Server is doing what's expected -- sometimes common sense doesn't make it into the execution plan that gets run.
  2. Base table column, updated by triggers (if required). This method works best to get execution plan shapes that we want, at the expense of a bit of complexity to keep the values updated, and the additional storage space required in the base table.
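For completeness, here's roughly what option 1 might look like (a sketch only; the column and index names are arbitrary, and this assumes a table shaped like the one created in the demo below):

```sql
-- Option 1: a non-PERSISTED computed column. The hash isn't stored in the
-- base table, but it is materialized in the index when we create it.
ALTER TABLE [dbo].[Table1]
	ADD StringHashComputed AS CONVERT(binary(16), HASHBYTES('MD5', StringCol));

CREATE NONCLUSTERED INDEX IX_Table1_StringHashComputed
	ON [dbo].[Table1](StringHashComputed);
```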

In the example below, we'll use the base table column method, as it's the most flexible, and I won't have to go into detail about hinting in the SQL statements.

Let's start by creating a table to play with:

CREATE TABLE [dbo].[Table1]
(
	Id int IDENTITY PRIMARY KEY CLUSTERED,
	StringCol varchar(MAX) NOT NULL
);

And then we'll populate it with some test data (note: the total size of the table in this demo will be about 70 MB):


INSERT INTO [dbo].[Table1](StringCol)
	SELECT
		REPLICATE('The quick brown fox jumps over the lazy dog. ', 5) +
			CONVERT(varchar(MAX), NEWID())
		FROM master..spt_values v
		WHERE v.type = 'P';
GO 64
INSERT INTO [dbo].[Table1](StringCol)
	VALUES ('The quick brown fox jumps over the lazy dog.');
GO 3

So we've got about 130,000 random-type strings, and then 3 rows which we can predict. Our goal is to efficiently find those 3 rows in the test queries.

Since the string field was defined as varchar(MAX), we can't create an index on that column at all, and the only index we have in the base table is the clustered index on the Id column. If we try a naive search at this point, we get a table scan, which is the worst case scenario:

DECLARE @search varchar(MAX) =
	'The quick brown fox jumps over the lazy dog.';

SELECT Id, StringCol
	FROM [dbo].[Table1]
	WHERE StringCol = @search;


We get our 3 rows back, but this is far from ideal. I'm not sure why the optimizer chooses a scan and filter instead of pushing down the predicate into the scan operator itself, but that's beside the point because this is an awful plan, with an estimated cost of over 3.5.

Let's move on and create our hash value column and populate it. Note that if you want to update the hash values using triggers and keep the column NOT NULL, you'll need to add a default constraint so INSERTs don't fail outright before the trigger code gets a chance to run.

ALTER TABLE [dbo].[Table1]
	ADD StringHash binary(16) NULL;
UPDATE [dbo].[Table1]
	SET StringHash = CONVERT(binary(16), HASHBYTES('MD5', StringCol));

ALTER TABLE [dbo].[Table1]
	ALTER COLUMN StringHash binary(16) NOT NULL;
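If the hash needs to stay in sync with future data changes, a trigger along these lines would do it (a sketch only; the constraint and trigger names are arbitrary, and the default constraint is the one mentioned above that lets INSERTs succeed before the trigger fires):

```sql
-- A placeholder default so the NOT NULL column passes the initial INSERT;
-- binary defaults are zero-padded to the full 16 bytes
ALTER TABLE [dbo].[Table1]
	ADD CONSTRAINT DF_Table1_StringHash DEFAULT (0x0) FOR StringHash;
GO
CREATE TRIGGER [dbo].[TR_Table1_StringHash]
	ON [dbo].[Table1]
	AFTER INSERT, UPDATE
AS
BEGIN
	SET NOCOUNT ON;

	-- Recompute the hash for only the rows touched by this statement
	UPDATE t
		SET StringHash = CONVERT(binary(16), HASHBYTES('MD5', t.StringCol))
		FROM [dbo].[Table1] t
		INNER JOIN inserted i ON i.Id = t.Id;
END;
```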

We have our hash values in the table, but still no index to improve our query. Let's create the index now:

CREATE NONCLUSTERED INDEX IX_Table1_StringHash
	ON [dbo].[Table1](StringHash);

While this looks unremarkable, the index has been created non-unique on purpose -- first and foremost to accommodate duplicate source (and hence hash) values, but also to allow for the possibility of hash collisions. Even if your source values are guaranteed unique, this index should be non-unique. Other than that, this should be enough to let us construct a query to make the search more efficient:

DECLARE @search varchar(MAX) =
	'The quick brown fox jumps over the lazy dog.';

SELECT Id, StringCol
	FROM [dbo].[Table1]
	WHERE
		(StringHash = CONVERT(binary(16), HASHBYTES('MD5', @search))) AND
		(StringCol = @search);

You'll notice I've still included the StringCol = @search predicate in the WHERE clause -- this is to ensure correct results of the query due to hash collisions. If all we did was compare the hash values, we could end up with extra rows in the results. Here's the execution plan of the query above:


We got an index seek, which was the main thing we were looking for. The Key Lookup is expected here, because we have to compare the original values as well, and those can only come from the base table. You can now see why I said this method only works for moderately-to-highly selective queries, because the key lookup is required, and if too many rows are being selected, these random operations can kill performance (or the optimizer may revert to a table scan). In any event, we now have an optimal query, and even with the key lookups happening, the estimated cost was 0.0066, an improvement by over 500x on this smallish table.



I'm sure at this point you can think of several other uses for this methodology; it can be applied to large binary fields, too. You could even implement it for cases where the column could be indexed directly (say, varchar(500), with most of the values taking up nearly all the allowed space), but maybe you don't want to -- hashing the data might save a significant amount of storage space, of course at the expense of code complexity and a bit of query execution time.

While hashing data is nothing new, this technique exploits hashing to greatly improve the efficiency of data comparisons. I'll leave it as an exercise for the reader to try the JOINing scenario I mentioned above -- if you do, try it using computed columns first for maximum learning -- you'll find that this technique also improves the queries, as long as the join cardinality remains low (like in the 1-table scenario, to control the number of key lookups).


1 While I haven't tested this in SQL Server either using HASHBYTES or a CLR function, there are implementations of certain hash algorithms in processor hardware itself. This may be of benefit to reduce CPU usage, possibly at the expense of storage depending on the algorithms in question.


Branching Out (How to be a More Effective DBA)

Technical analysts often have occasion to reach beyond their original job description to solve problems. Perhaps none more so than a database administrator, who is tasked with managing what is often the focal point of a business -- data -- a point where every technical discipline can converge.


I suppose it would be helpful to take a step back and ask the question: what does a database administrator do?

There are as many answers to that question as there are database administrators. You've probably had to think about answering that question for a layperson before, and come to the realization that there just isn't a general answer. You probably ended up answering the question "what do I do as a database administrator?" to avoid giving some sort of cop-out answer like "manage databases" after having a puzzled look on your face for several seconds.

Maybe all you do is make sure all the data is backed up and recoverable, and respond to alerts when those processes are in trouble. Maybe all you do is analyze traces to make performance improvements. Maybe all you do is write report queries for the people wearing suits.

More typically, though, you do many different things at different times -- or all at once -- to solve a problem. Analyze a performance trace to determine why a report query is slow, and find out that while the code could be improved, there's a latency issue with the storage network. That sounds a lot more like it. Sometimes the lines drawn between technical disciplines become so blurry they disappear altogether.


Everyone has their own set of likes and dislikes, and of course that isn't limited only to your career. It's a set that's constantly changing -- sometimes gradually, sometimes rapidly -- as you experience things for yourself.

In the field of information technology, there are so many different areas to explore, even when only considering what's involved with database administration specifically. Programming, hardware, storage, networking, and more -- the expanse of knowledge is vast and growing every single day, and so it's becoming more and more difficult (if not impossible) to be an expert in all of them.

Practically speaking, I think abandoning the idea of becoming that all-knowing expert is the only reasonable approach. That isn't to say that goals should be abandoned as well; my suggestion is simply to focus, or re-focus. Concentrate on the one or two areas where your natural aptitude and positive attitude intersect, and hold an interest in many of the other major areas, even if you don't consider some of them to be particularly interesting.

While that sounds a bit like punishment, the broader your exposure to different subjects, the better you'll be able not only to solve a wider variety of problems, but also to communicate with everyone in your IT department, which is truly valuable.

Conversations about VLANs and ORMs take on real, practical meaning as they affect the database, your primary responsibility. Even a small amount of understanding of just the subject jargon can go a long way.


Those who become database administrators often start in a peripheral field of study, usually either software development or system administration. Just as the answer to "what does a database administrator do" varies, so do the paths to becoming one.

It's somewhat ironic that people struggle with the question "how do I become a database administrator" (there is no database administrator college program), yet database administration has numerous peripheral technical fields, making it possible to transition from a wider variety of traditional roles.

I think of it more as building a foundation on which to rely, so it should be a very strong foundation. Not necessarily as a fallback (although that's probably a good plan), but more as a deep-level area, as I mentioned. Above all else, though, you have to bring your problem-solving and information synthesizing skills to the table. Peripheral knowledge is far less useful without the ability to practically apply it in some way.


I suppose the point of this post is to encourage you to reach out into some of the peripheral subjects of database administration. It will only help your career.

Make it a habit, too, even if only once a week. A great way to proactively do this is to subscribe to a newsletter such as Database Weekly, which delivers a wide cross-section of database-related topics directly to your inbox.


In the comments, let me know what your experiences are with needing to deal with a wide variety of subjects. Did you transition to database administration through a non-standard technical field?


Row Filter operators vs. Startup Expression operators

In a previous post, I introduced how to use startup expression predicates in T-SQL queries to improve performance. Based on the feedback I got, there was some confusion about what this operator actually does, and why it appears in the query plan as a Filter operator, which is usually seen in other contexts. In this post, I'll explain the differences and similarities of the Row Filter operator (which is seen more typically) and the Startup Expression filter operator.


Comparison By Example

Let's set up a test scenario that can be used to demonstrate and compare the two types of operators (note: the test data is < 1 MB):


CREATE TABLE [dbo].[T1]
(
	Id int IDENTITY PRIMARY KEY CLUSTERED,
	C1 int NOT NULL
);

CREATE TABLE [dbo].[T2]
(
	Id int IDENTITY PRIMARY KEY CLUSTERED,
	C1 int NOT NULL
);

INSERT INTO [dbo].[T1](C1)
	SELECT number FROM master..spt_values WHERE type = 'P'; /* 0-2047 */

INSERT INTO [dbo].[T2](C1)
	SELECT number FROM master..spt_values WHERE type = 'P';

GO 10


Now we can try running a couple queries to see these operators in action. Here's the first one, which contains a Row Filter predicate (like the previous post, I'm using hints so you can reproduce the same plans more easily if you try this yourself):

SELECT t1.Id, t2.Id
	FROM [dbo].[T1] t1
	LEFT OUTER MERGE JOIN [dbo].[T2] t2 ON t2.Id = t1.Id
	WHERE t2.Id IS NULL; -- every t1.Id has a match, so the filter passes no rows

And here's the execution plan:

As we can see, the query joined the two tables together, and then filtered that set of rows to give the final result.

The Row Filter operator evaluated the predicate against each returned row (the big arrow to the right of the operator), and output only the rows where the predicate evaluated to true (no rows in this case; the small arrow to the left of the operator).


Here's the next query, which uses a Startup Expression predicate (this query isn't logically equivalent to the first one):

SELECT t1.Id, t2.Id
	FROM [dbo].[T1] t1
	LEFT OUTER LOOP JOIN [dbo].[T2] t2 ON
		(t1.C1 = 10) AND
		(t2.Id = t1.Id);

And here's the query plan:


This time, table T1 was scanned (20480 rows), and the Startup Expression filter operator was executed for each of those rows. However, the index seek to table T2 was only executed 10 times. How did that happen?

The Startup Expression filter evaluated the predicate against each request row coming in from the upper input (in this case, the T1 table scan), and only propagated the request where the predicate evaluated to true. This is how a Startup Expression operator "protects" or "guards" the operators to its right, so they aren't executed for every request row. While this particular example is contrived, it's this "guarding" that improves performance, by executing the subsequent operator branch only the minimum number of times necessary.



Both the Row Filter operator and Startup Expression filter operator evaluate a predicate against rows.

The Row Filter operator applies the predicate to returned rows, returning only the rows that match the predicate, while the Startup Expression filter operator applies the predicate to requested rows, only making further requests when the row matches the predicate.

While both operators perform essentially the same work (hence they both appear as a Filter operator), they do so logically reversed of each other, and therefore perform very different functions within a query plan.


Is my SQL Server's memory over-committed?

As our applications' data grows, so usually does the memory required by SQL Server to efficiently process requests for that data. Sometimes those requirements are more than the host operating system instance can handle, and we don't find out about it until it's too late and performance takes a nosedive. In this post, we'll explore what memory over-commit is and why it's a bad thing, how to mitigate the problem, and how to help prevent it from occurring in the first place.



It's pretty obvious that memory over-commit occurs when the amount of memory required by applications exceeds the amount of physical memory available in the host operating system. (This applies equally to both physical and virtual machines. In this post, when I say "host operating system," I mean an operating system instance that hosts SQL Server, not an operating system instance that hosts virtual machines.)

When the amount of memory required exceeds the amount of physical memory available, Windows uses disk (the page file) as a backing store to satisfy the excess memory requirements. This is why the mechanism is called Virtual Memory -- it looks like normal memory to an application, but Windows is really backing it with disk.

How does this happen? Well, first, you'll notice that I haven't mentioned anything directly about SQL Server. Virtual Memory is a mechanism of Windows, and so it applies to all applications that run under Windows, including SQL Server. In fact, it's possible the system is over-committed because of memory requirements from applications other than SQL Server. Usually, though, SQL Server is the largest consumer of memory in a Windows instance, so it's also usually responsible for causing over-commit problems.

The heart of the issue is controlling the amount of memory SQL Server is allowed to allocate. The only real influence we have over this is the Max Server Memory setting. While that might sound really concrete, the problem is that it... doesn't actually control the total amount of memory SQL Server can allocate. On SQL Server 2005 to 2008 R2, this setting controls the maximum amount of memory used for the buffer pool only; it doesn't include other memory pools such as the procedure cache, which can be very significant (gigabytes!) in some scenarios. SQL Server 2012 improves the state of affairs by increasing the scope of what this setting covers. While it's still not perfect, it's a welcome improvement to better represent what the setting actually does, and offers greater control of maximum memory utilization. In any event, the point is that this setting underestimates the amount of memory that's going to be used (sometimes significantly, as mentioned), which can lead to unexpected over-commit.



The performance implications of backing memory with disk can be crippling: disk can be thousands of times slower than physical memory, particularly when it comes to where the Windows page file is placed, as we don't normally put it on our fastest, most expensive storage device. Probably the worst-case scenario is when the page file sits on a RAID 1 mirror (the typical physical machine scenario), which simply isn't meant to handle a huge number of random reads and writes.

In order to detect when memory over-commit is happening, you'll have to be doing Performance Monitor (PerfMon) logging, as you really won't see anything directly in SQL Server except that things are very slow (more accurately, the wait time associated with retrieving a page from the buffer pool without physical I/O will be high). I strongly recommend setting up 24/7 PerfMon logging in your environment, and at some point I'll write a post or record a demo video of how to set it up.

Below are the key PerfMon counters you'll want to record to detect and troubleshoot memory over-commit. This, of course, is by no means an exhaustive list of all the counters you should be recording.

  • Paging File(_Total)\% Usage - Not surprisingly, this counter can be a dead giveaway to detect if there are issues. If it's at any value greater than zero, you need to take a closer look at the other counters to determine if there's a problem. Sometimes a system will be perfectly fine with a value less than 2-3% (it also depends on the size of the page file), but the higher this counter is, the more of a red flag it is. Also, watch this counter to make sure it's stable, and not creeping up over time.
  • Memory\Available MBytes - If this number is below ~500 (in the absence of page file usage), you're in the danger zone of over-commit. It's recommended to keep at least this much memory available not only for unexpected SQL Server usage, but also to cover the case where administrators need to Remote Desktop into the box for some reason. User sessions take memory, so we need to keep some free for emergencies. I won't get into the amount of memory to keep free on a SQL Server here, as that's a discussion in itself. My point here is that if this counter is getting too low (less than ~500), you could start getting in trouble soon. I should note also that if the system is currently over-committed, this counter will reflect the amount of virtual memory provisioned, as it gets counted as available memory. So the system could be over-committed, yet appear to have plenty of available memory -- look at the other counters to put the number in context.
  • Physical Disk\Disk Reads/sec and Physical Disk\Disk Writes/sec for the disk that has the page file on it - Normal operations do cause some disk activity here, but when memory over-commit happens, these counters will spike up dramatically from the baseline.
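From the SQL Server side, you can also get a quick read on the memory state with a DMV query like this (available on SQL Server 2008 and later; a complement to, not a replacement for, the PerfMon counters above):

```sql
-- Rapid page_fault_count growth and the "low memory" flags are both hints
-- that the instance's memory is under pressure or being paged out
SELECT
	physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
	memory_utilization_percentage,      -- % of committed memory in the working set
	page_fault_count,
	process_physical_memory_low,        -- 1 = Windows is signaling memory pressure
	process_virtual_memory_low
	FROM sys.dm_os_process_memory;
```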

Since memory over-commit can only happen when the amount of physical memory is exhausted, the system will only become slow after a certain point. In troubleshooting, sometimes a SQL instance (or Windows itself) is restarted, and it fixes the problem for a while, only to return some time later. By now it should be obvious that this happens because after a restart, the SQL Server buffer pool is empty, and there's no possibility of over-commit until the physical memory is used up again.



Iteratively lower SQL Server's Max Server Memory setting (or initially set it to a reasonable value), and monitor the performance counters until the system falls back to a stable configuration. Because of the nature of Virtual Memory, Windows can hold on to swapped-out pages for quite a long time, so it's possible the counters will stabilize with page file usage at a higher level than normal. That may be okay: once the pages are swapped back in, they won't be swapped out again, unless the settings on this iteration are still out of whack. If the initial configuration was way off (the default Max Server Memory setting), you may want to restart the box to start with a clean slate, because the counters will be so far out.
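Adjusting the setting itself is the easy part; the iteration and monitoring are where the work is. For example, to set Max Server Memory to 4 GB (4096 here is only a placeholder value; pick one appropriate for your server):

```sql
-- Max Server Memory is an advanced option, so expose it first
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- The setting is specified in MB
EXEC sys.sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```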

It seems counter-intuitive to lower the amount of memory SQL Server is able to allocate, but SQL Server internally manages which sets of pages in memory are hot and cold -- an insight Windows doesn't have. This means that by adjusting the Max Server Memory setting down, even though less memory is available to SQL Server, it can still perform well by keeping the most active pages in memory and only going to physical disk occasionally for pages that aren't in the buffer pool, as opposed to potentially going to disk for any memory access.



While over-commit can never truly be prevented -- users could potentially run other applications on the SQL box that require lots of memory -- what you can put in place is an early-warning system by monitoring the PerfMon counters. Third-party software solutions should be able to help with this, particularly if you manage many servers.

Speaking of other applications, if you have any installed on the SQL box (including the third-party monitoring software I just mentioned), it's doubly important to monitor the state of affairs, as these are variables outside your control. The Max Server Memory setting and the amount of available memory should be more conservative in this case.

It's also important, particularly if your SQL Server is version 2005 to 2008 R2, to ensure the Max Server Memory setting is allowing for some future growth in your environment. Because the setting doesn't encompass the plan cache, even adding an insignificantly-small database could cause over-commit if many different queries are run against it. The setting and counters should be evaluated as part of the change process. For SQL Server 2012, this is less of a concern for the reasons previously mentioned, but it can still be worth checking things out as part of your regular change process.

Finally, try to avoid letting users remote into the SQL box to do regular work or maintenance, as this can use up a tremendous amount of memory. Nearly all tasks can be accomplished remotely using SQL Server Management Studio and remotely/non-interactively using PowerShell. If your administrators' workstations aren't in the same domain as your servers, create a management box on the server domain, and remote into that instead to manage the servers.