Standard Configuration of tempdb (Video)

The tempdb system database is used for many purposes, from materializing temporary tables to storing row version information for snapshot isolation. The default configuration of tempdb may not perform well under many production workloads. In this video, I discuss three important considerations for configuring tempdb for optimal performance, including a more in-depth visualization of why creating more data files can be a big advantage.
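
As a starting point for the multiple-data-files advice in the video, here is a sketch of what the T-SQL looks like. The file names, drive path, sizes, and the choice of four files are all placeholders — size the files equally and pick a file count appropriate for your core count and workload.

```sql
-- Sketch: grow the primary tempdb data file and add three more, so there are
-- four equally-sized files. All names, paths, and sizes below are placeholders.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
```

A restart isn't required for the new files to be used, but the size change to the primary file takes effect immediately only if the file is growing.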

Video Resources:

Read more about sys.dm_os_waiting_tasks
Read more about sys.dm_os_wait_stats
Read more about DBCC SQLPERF
sys.dm_os_waiting_tasks query to find PAGELATCH_* waits
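
For reference, the kind of query linked above looks roughly like this — a sketch that looks for tasks currently waiting on page latches in tempdb (database id 2; `resource_description` is formatted as database:file:page):

```sql
-- Sketch: find tasks currently waiting on PAGELATCH_* waits in tempdb.
SELECT
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE
    wt.wait_type LIKE N'PAGELATCH%' AND
    wt.resource_description LIKE N'2:%';
```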

An advanced option I didn’t mention in the video is to enable Trace Flag 1118 (note: this trace flag is undocumented), which changes the allocation behavior in tempdb to not use mixed extents. You can read more about this setting here.
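
If you do decide to try the trace flag, it can be turned on globally for the running instance like so (to persist it across restarts, add `-T1118` as a startup parameter instead):

```sql
-- Enable TF 1118 instance-wide until the next restart (-1 = global).
DBCC TRACEON (1118, -1);

-- Verify it's enabled.
DBCC TRACESTATUS (1118);
```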

Improve and Manage the Performance of Vendor Databases (Video)

Sometimes third-party applications store their data in SQL Server, and the client is left to manage the database themselves.

In this video, I explore a few options you as a DBA have to manage the performance of the database, and offer suggestions for supported ways to improve the performance of both the database and the SQL Server instance used to contain it.

Registered Servers and Central Management Server: In Action (Video)

In a previous post/video, I introduced Registered Servers and Central Management Server, and showed how to set up the server hierarchy.

In this video, I show you how to use these features to manage groups of servers using query windows and Policy-Based Management, and we see some of the pitfalls that may be encountered along the way.

Please subscribe to the YouTube channel to be notified first when I post a new video, and to see extra content that won’t appear here on the blog.

Allowing Remote Connections to SQL Express (the Right Way) (Video)

Sometimes we need to allow remote connections to a SQL Server Express or Developer edition instance, usually on our local machine. In this video demo, I show you how to allow remote access without resorting to disabling the Windows Firewall entirely, which is an unfortunately common solution that pervades the internet.

Since there are many possible configurations (which is why I say “default” a lot in the video — sorry), and part of what I covered applies to all editions of SQL Server, a much more complete reference can be found here on MSDN.

Is my SQL Server’s memory over-committed?

As our applications’ data grows, so usually does the memory required by SQL Server to efficiently process requests for that data. Sometimes those requirements are more than the host operating system instance can handle, and we don’t find out about it until it’s too late and performance takes a nosedive. In this post, we’ll explore what memory over-commit is and why it’s a bad thing, how to mitigate the problem, and how to help prevent it from occurring in the first place.


It’s pretty obvious that memory over-commit occurs when the amount of memory required by applications exceeds the amount of physical memory available in the host operating system. (This applies equally to both physical and virtual machines. In this post, when I say “host operating system,” I mean an operating system instance that hosts SQL Server, not an operating system instance that hosts virtual machines.)

When the amount of memory required exceeds the amount of physical memory available, Windows uses disk (the page file) as a backing store to satisfy the excess memory requirements. This is why the mechanism is called Virtual Memory: it looks like normal memory to an application, but behind the scenes, Windows is backing it with disk.

How does this happen? Well, first, you’ll notice that I haven’t mentioned anything directly about SQL Server. Virtual Memory is a mechanism of Windows, and so it applies to all applications that run under Windows, including SQL Server. In fact, it’s possible the system is over-committed because of memory requirements from applications other than SQL Server. Usually, though, SQL Server is the largest consumer of memory in a Windows instance, so it’s also usually responsible for causing over-commit problems.

The heart of the issue is controlling the amount of memory SQL Server is allowed to allocate. The only real influence we have over this is the Max Server Memory setting. While that might sound really concrete, the problem is that it… doesn’t actually control the total amount of memory SQL Server can allocate. On SQL Server 2005 to 2008 R2, this setting controls the maximum amount of memory used for the buffer pool only; it doesn’t include other memory pools such as the procedure cache, which can be very significant (gigabytes!) in some scenarios. SQL Server 2012 improves the state of affairs by increasing the scope of what this setting covers. While it’s still not perfect, it’s a welcome improvement that better represents what the setting actually does, and it offers greater control over maximum memory utilization. In any event, the point is that this setting underestimates the amount of memory that’s going to be used (sometimes significantly, as mentioned), which can lead to unexpected over-commit.
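
For reference, the setting itself is changed through `sp_configure` (or the Server Properties dialog in Management Studio). The 12 GB value below is purely a placeholder — the right number depends on total physical memory and what else runs on the box:

```sql
-- Sketch: set Max Server Memory to 12 GB. The value is in MB, and 12288 is
-- only a placeholder; calculate an appropriate value for your server.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure N'max server memory (MB)', 12288;
RECONFIGURE;
```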


The performance implications of backing memory with disk can be crippling: disk can be thousands of times slower than physical memory, particularly in the locations where the Windows page file is usually placed, as we don’t normally put it on our fastest, most expensive storage. Probably the worst-case scenario is a page file placed on a RAID 1 mirror (the typical physical-machine configuration), which simply isn’t meant to handle a huge number of random reads or writes.

In order to detect when memory over-commit is happening, you’ll need to be logging Performance Monitor (PerfMon) counters, as you really won’t see anything directly in SQL Server except that things are very slow (more accurately, the wait time associated with retrieving a page from the buffer pool without a physical I/O will be high). I strongly recommend setting up 24/7 PerfMon logging in your environment, and at some point I’ll write a post or record a demo video of how to set it up.

Below are the key PerfMon counters you’ll want to record to detect and troubleshoot memory over-commit. This, of course, is by no means an exhaustive list of all the counters you should be recording.

  • Paging File(_Total)\% Usage – Not surprisingly, this counter can be a dead giveaway to detect if there are issues. If it’s at any value greater than zero, you need to take a closer look at the other counters to determine if there’s a problem. Sometimes a system will be perfectly fine with a value less than 2-3% (it also depends on the size of the page file), but the higher this counter is, the more of a red flag it is. Also, watch this counter to make sure it’s stable, and not creeping up over time.
  • Memory\Available MBytes – If this number is below ~500 (in the absence of page file usage), you’re in the danger zone of over-commit. It’s recommended to keep at least this much memory available not only for unexpected SQL Server usage, but also to cover the case where administrators need to Remote Desktop into the box for some reason. User sessions take memory, so we need to keep some free for emergencies. I won’t get into the amount of memory to keep free on a SQL Server here, as that’s a discussion in itself. My point here is that if this counter is getting too low (less than ~500), you could start getting in trouble soon. I should note also that if the system is currently over-committed, this counter will reflect the amount of virtual memory provisioned, as it gets counted as available memory. So the system could be over-committed, yet appear to have plenty of available memory — look at the other counters to put the number in context.
  • Physical Disk\Disk Reads/sec and Physical Disk\Disk Writes/sec for the disk that has the page file on it – Normal operations do cause some disk activity here, but when memory over-commit happens, these counters will spike up dramatically from the baseline.
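
On SQL Server 2008 and later, you can also cross-check the PerfMon counters above from inside SQL Server using a couple of memory DMVs. This is just a quick spot-check, not a replacement for continuous PerfMon logging:

```sql
-- Windows' view: available physical memory and overall memory state.
SELECT
    available_physical_memory_kb / 1024 AS available_mb,
    system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- SQL Server's view of its own process memory usage.
SELECT
    physical_memory_in_use_kb / 1024 AS sql_in_use_mb,
    page_fault_count,
    memory_utilization_percentage
FROM sys.dm_os_process_memory;
```

A `memory_utilization_percentage` well below 100 on a busy system can indicate that part of SQL Server's working set has been paged out.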

Since memory over-commit can only happen when the amount of physical memory is exhausted, the system will only become slow after a certain point. In troubleshooting, sometimes a SQL instance (or Windows itself) is restarted, and it fixes the problem for a while, only to return some time later. By now it should be obvious that this happens because after a restart, the SQL Server buffer pool is empty, and there’s no possibility of over-commit until the physical memory is used up again.


Iteratively lower SQL Server’s Max Server Memory setting (or initially set it to a reasonable value), and monitor the performance counters until the system falls back to a stable configuration. Because of the nature of Virtual Memory, Windows can hold on to swapped-out pages for quite a long time, so it’s possible that the counters will stabilize with page file usage at a higher level than normal. That may be okay, as when the pages are swapped back in, they will never be swapped out again, unless the settings on this iteration are still out of whack. If the initial configuration was way off (e.g., the default Max Server Memory setting), you may want to restart the box to start with a clean slate, because the counters will be so far out.

It may seem counter-intuitive to lower the amount of memory SQL Server is able to allocate, but SQL Server internally manages which sets of pages in memory are hot and cold, an insight Windows doesn’t have. This means that by adjusting the Max Server Memory setting down, even though the amount of memory available to SQL Server will be less, it will still be able to perform well by keeping the most active pages in memory and only going to physical disk occasionally for pages that aren’t in the buffer pool, as opposed to potentially going to disk for any memory access.


While over-commit can never truly be prevented — users could potentially run other applications on the SQL box that require lots of memory — what you can put in place is an early-warning system by monitoring the PerfMon counters. Third-party software solutions should be able to help with this, particularly if you manage many servers.

Speaking of other applications, if you have any installed on the SQL box (including the third-party monitoring software I just mentioned), it’s doubly important to monitor the state of affairs, as these are variables out of your control. The Max Server Memory setting and the amount of available memory should be more conservative in this case.

It’s also important, particularly if your SQL Server is version 2005 to 2008 R2, to ensure the Max Server Memory setting allows for some future growth in your environment. Because the setting doesn’t encompass the plan cache, even adding an insignificantly small database could cause over-commit if many different queries are run against it. The setting and counters should be evaluated as part of the change process. For SQL Server 2012, this is less of a concern for the reasons previously mentioned, but it can still be worth checking things out as part of your regular change process.
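
To get a feel for how much memory the plan cache is consuming outside of Max Server Memory's reach on those older versions, a rough sketch:

```sql
-- Sketch: count cached plans and total plan cache size in MB.
SELECT
    COUNT(*) AS cached_plans,
    SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS total_mb
FROM sys.dm_exec_cached_plans;
```

If that total is a meaningful fraction of your Max Server Memory value, budget for it when setting the cap.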

Finally, try to avoid letting users remote into the SQL box to do regular work or maintenance, as this can use up a tremendous amount of memory. Nearly all tasks can be accomplished remotely using SQL Server Management Studio and remotely/non-interactively using PowerShell. If your administrators’ workstations aren’t in the same domain as your servers, create a management box on the server domain, and remote into that instead to manage the servers.