Today we’re going to talk about SQL Server instance stacking.
Right, in production. I’m talking about DEV/TEST environments.
Settle down. If your server is set up correctly and has the resources you want it to have, and you divide your resources up per instance in a few very simple ways, it’s fine. Enough with the pitchforks, the wailing and gnashing of teeth.
Okay, now that that’s out of the way…
Remember our cute little DEV server? So, the way he’s set up is, he’s got 3 SQL Server instances on him, each with its own dedicated SSD, and another dedicated SSD just for
tempdbs. Ideally, we’d have a separate SSD for each instance’s
tempdb, but sadly, motherboards with 3 M.2 or NVMe slots aren’t (weren’t?) in production at the time, at least not for desktop class systems. But I digress.
This is called instance stacking. And yes, it’s a big no-no in production. Mostly because performance troubleshooting is a pain in the arse. But also because it’s more difficult than one would like to divvy up resources like RAM and I/O & network throughput channels. But it’s super simple to set up: you simply run the SQL Server installer 3x, each time creating a unique instance name. Then, at the end of it, your SQL instances are addressable by MachineName\InstanceName, e.g. SQLDEV\Inst1, SQLDEV\Inst2, and SQLDEV\Inst3.
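Once they’re up, you talk to each one the same way you’d talk to any named instance. A quick sketch (assuming Windows auth and that sqlcmd is on the PATH; instance names match our DEV box):

```powershell
# Each stacked instance is addressed as MachineName\InstanceName.
# -E = Windows authentication, -Q = run the query and exit.
sqlcmd -S SQLDEV\Inst1 -E -Q "SELECT @@SERVERNAME;"
sqlcmd -S SQLDEV\Inst2 -E -Q "SELECT @@SERVERNAME;"
sqlcmd -S SQLDEV\Inst3 -E -Q "SELECT @@SERVERNAME;"
```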
Now the time came to create a “QA” environment. Like DEV, it didn’t need to be very performant (that’s a made-up word that consultants like to use, but it’s generally accepted in our industry, so we go with it). And since we had some hardware lying around from a recent “up-gration” (upgrade-migration… okay, now I’m being ridiculous), we said “let’s use that thing!” It was a 2-node cluster setup with shared
DAS storage. For the uninitiated,
DAS is Direct Attached Storage, i.e. an array of disks that you can directly attach to 1 or more servers using whatever interconnect is available on the endpoints (usually
SCSI – which is one of the most fun acronyms to pronounce in IT: “scuzzy”).
DAS is not to be confused with a
SAN, Storage Area Network, which is a super fancy storage array with performance tiers and snapshot technology and de-duplication and all that hotness.
The interesting thing with a cluster is, when you install SQL Server instances, you can’t actually use the same “MachineName” for the 3 different “InstanceName”s. Because in a cluster, the former is actually the “VirtualServerName”, which must be unique per clustered instance, in order to properly configure cluster resources, storage pools, and networks.
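If you’re curious what the installer actually registered, you can peek at the network-name resources with the FailoverClusters PowerShell module. A sketch (run on a cluster node; your output will obviously vary):

```powershell
# List the per-instance network names (i.e. the VirtualServerNames)
# that the clustered SQL installs created.
Import-Module FailoverClusters
Get-ClusterResource |
    Where-Object { $_.ResourceType -eq 'Network Name' } |
    Select-Object Name, OwnerGroup, State
```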
The reason this is interesting is that it contrasts with the stacked-instance setup on a standalone (non-clustered) server. So if you compare our DEV and QA setups side-by-side, it’s a bit odd-ball: instead of
SQLDEV\Inst2, etc., we have instance names like
SQLQA2\Inst2, etc. That makes the ol’ “find and replace” in config files a bit harder. But, at the end of the day, it’s all just names!
Another interesting “gotcha” revolves around SQL 2008R2, which I know shouldn’t be on the short-list of versions to spin up, but unfortunately, a legacy ERP system demands it. Well, it only happened to me with the 2008R2 instance installation, not the 2016 ones, but that’s not to say it couldn’t happen with others. Anyway, after installation, SQL Agent wasn’t working; it wasn’t coming up as a cluster resource. Basically, exactly what was outlined in this timely & detailed article at mssqltips. I won’t restate the fix instructions here, just give it a read! I do want to clarify something, though.
In part of the fix, we use the handy-dandy PowerShell cmdlet
Add-ClusterResourceDependency. In its basic form, it requires 2 arguments, Resource and
Provider. To someone who’s not a cluster expert, this terminology might be a bit confusing.
Resource in this case is the SQL Server Agent, while
Provider is SQL Server itself. But we’re adding a
Dependency, right? Which depends on which? Well, we know that Agent depends on the engine, so,
Resource depends on
Provider. Yes, I know, that’s what the article tells you to do — I just like to understand why.
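For the record, here’s roughly what that looks like. The resource names below are assumptions on my part; yours will match whatever your clustered instance is called, so check with Get-ClusterResource first:

```powershell
# Make SQL Server Agent (the Resource) depend on the SQL Server engine (the Provider).
# Resource/Provider names are illustrative only.
Add-ClusterResourceDependency -Resource 'SQL Server Agent (Inst2)' -Provider 'SQL Server (Inst2)'

# Verify the dependency took:
Get-ClusterResourceDependency -Resource 'SQL Server Agent (Inst2)'
```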
Finally, there’s the question of divvying-up resources to the stacked clustered instances. Now, in a standard cluster, you’ve got your active node and your passive node. But if we’re stacking instances, we might as well split the SQL instances up and take advantage of the compute resources on both nodes. (Storage is still shared; this is a cluster, after all!) The CPUs are no problem — however many instances are stacked on a node, they’ll share the CPU cores pretty cooperatively. Memory is a bit of a different story. We want to take advantage of all the available RAM in the cluster, but…
As you know, you can configure each SQL instance to use a set amount of max server memory. So let’s say each cluster node has 32GB RAM, and we’re stacking 4 SQL instances total (to keep the math easy!). If we split them up among the nodes at 2 each, each instance can use 16GB. But if for some reason a node goes down, and all 4 instances move to 1 node, now they’re fighting for that 32GB! So we should reduce their max-memory settings to 8GB each, instead of 16. But we don’t want to do this manually! Fortunately Aaron Bertrand has an excellent blog post on the subject, with some useful ideas about how to do this dynamically & automatically. The only issue I have with it is that it requires the linked servers to use a highly privileged account (
sysadmin or maybe
serveradmin role) to be able to set that max-server-memory setting. But wait, remember what we said at the beginning? This ain’t production! Who cares about security? (That’s facetious, sort of — in reality, yes, we don’t care as much about security in lower environments, but we should still care a little!)
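To make the arithmetic concrete, here’s a rough sketch of applying the worst-case cap. The instance names, and using Invoke-Sqlcmd from the SqlServer module, are my assumptions, not Aaron’s actual script:

```powershell
# Worst case: every stacked instance fails over onto a single 32GB node.
$nodeRamMB = 32 * 1024
$instances = @('SQLQA1\Inst1', 'SQLQA2\Inst2', 'SQLQA3\Inst3', 'SQLQA4\Inst4')  # assumed names
$capMB     = [math]::Floor($nodeRamMB / $instances.Count)  # 8192 MB apiece

foreach ($inst in $instances) {
    # Needs the SqlServer module and a login allowed to run sp_configure
    # (hence the sysadmin/serveradmin grumble above).
    Invoke-Sqlcmd -ServerInstance $inst -Query @"
EXEC sys.sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', $capMB; RECONFIGURE;
"@
}
```

(In real life you’d subtract an OS reserve before dividing, but 32/4 keeps the math easy.)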
That concludes this week’s adventure! Thanks for reading.