Stupid Defaults

Every big enterprise product has them.  I just happen to pick on SQL because it’s my area of expertise.

This week’s topic was triggered by an amazing quote from Adam Machanic (b|t) in the sqlcommunity Slack group, where he was trying to help someone troubleshoot a slow linked-server query:

That default, I consider to have been checked into the SQL Server codebase directly by Satan.

guy in a Satan costume, laughing
I imagine he looked something like this… Satan, not Adam.  =D

He’s referring, in this case, to the default option of remote proc transaction promotion, or “Enable Promotion of Distributed Transactions for RPC” in the GUI, which is set to True by default.  Admittedly, linked-servers are a dicey area of the technology and not everybody needs them, but when you do need to deal with them, there are some caveats to consider and a few potholes to avoid.

I won’t go into gory details, but the mile-high gist of it is that you should probably change a couple of the default linked-server settings when you’re going from MSSQL to MSSQL (which most of the time, you are): rpc and rpc out both to true, and the above dude to false.  The reasons behind that are subtle, but it boils down to usage patterns:  If you’re purely running basic SELECT statements against the remote server, you can leave this stuff alone; but if you want to do cool stuff like exec MyLinkedServer.RemoteDB.sys.sp_executesql '--some awesome dynamic-sql'​, you’ll want to change these.  (That last bit comes in especially handy if you’re building a dynamic query string, then running it against the remote server to bring in the results to a #temptable to further massage/mangle/munge said data.)

Even though you probably really shouldn’t be doing that in the database (that’s what web server farms are for!).

stack of colorful rings with arrow pointing up
push it up the stack!
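
Anyway, for reference, here’s a minimal sketch of flipping those three options via T-SQL with the built-in sp_serveroption procedure.  The linked-server name "MyLinkedServer" is just a placeholder, and as always, test this somewhere safe first:

--MSSQL-to-MSSQL linked server: allow RPC in both directions...
EXEC sys.sp_serveroption @server = N'MyLinkedServer', @optname = N'rpc', @optvalue = N'true';
EXEC sys.sp_serveroption @server = N'MyLinkedServer', @optname = N'rpc out', @optvalue = N'true';
--...and tell the Satan-approved default to take a hike:
EXEC sys.sp_serveroption @server = N'MyLinkedServer', @optname = N'remote proc transaction promotion', @optvalue = N'false';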

So, what are some other “stupid defaults” in SQL Server?

Every big enterprise product has them.  I just happen to pick on SQL because it’s my area of expertise.  And it’s not even just “defaults”; there are some options which shouldn’t even be a thing — they should be completely and irrevocably in love with Edward removed from the product.  Yet, because the RDBMS tech space is infamously conservative and slow-to-change (the accepted euphemism is “mature“), these options and defaults have remained in the product despite our best attempts to convince MS that they’re heinous.

1. Parallelism settings (server/instance level)

Your servers have many-core CPUs, right?  And you want SQL to utilize those cores to the best of its ability, distributing the many users’ workloads fairly amongst them, yes?  Damn right, you paid $3k or more per core in freaking licensing costs!  “OK”, says SQL Server, “I’ll use all available CPUs for any query with a ‘cost’ over ‘5’“.  (To give context here, in case you’re not aware, ‘5’ is a LOW number; most OLTP workload queries are in the double to triple digits).  “But wait!”, you protest, “I have more than 1 user, obviously, and I don’t want their horrible queries bringing all CPUs to their knees and forcing the 50 other user queries to wait their turn!”

well... tough shit
-SQL Server

Hardly seems fair, no?  But those are the defaults, have been for over a decade, despite how much hardware has evolved since.  Just for completeness, I am talking about the following 2 options, which you can view by right-clicking on the instance in SSMS, or by running sys.sp_configure.  They are max degree of parallelism and cost threshold for parallelism, respectively (# cores to use, and how expensive the query should be to “go parallel” — as opposed to serial/single-threaded, which is perfectly fine for those itty-bitty-teeny-tiny queries that are so quick that 1 CPU core is more than enough to handle them).  We commonly abbreviate the former as maxDOP; the latter is less commonly abbreviated, but because I’m a sucker for acronyms, I like to call it CTFP, or maybe costFP if I’m feeling verbose.

Now obviously you can, and should, change these settings.  “To what?” you ask.  Well, the answer, as always my friend, is “It Depends ©®™“… a perennial favorite of DBAs and consultants alike.  But don’t ask me — there are plenty of people much smarter than I with blog posts on the topic.  If you put a gun to my head, for CTFP, I’d say “pick your favorite number between 50 and 100, start there, and test to see if it makes your server CPUs happy or sad”.  And for maxDOP I’d say “divide your # of CPU cores by 2 (if you have 8 or fewer) or 4 (if you have 16 or more)”.

And if you have somewhere between 9 and 15 CPU cores, don’t ask me, because you’re running some.. interesting hardware.  Figure it out yourself, and get that gun out of my face!

OK, I know 12 cores is probably not an unreasonable config, so.. extrapolating my logic above, divide by.. 3?  Yeah let’s go with that.
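
To put that hand-waving into code, here’s a minimal sketch using sp_configure.  The numbers below follow the rough guidance above for a hypothetical 16-core box, so treat them as starting points to test, not gospel:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'cost threshold for parallelism', 50;  --pick your favorite number between 50 and 100
EXEC sys.sp_configure N'max degree of parallelism', 4;        --e.g. 16 cores / 4
RECONFIGURE;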

squirrel with "ship it" flag
ship it good

2. Auto Close (database option)

It’s at the top of the list in the GUI under Database Properties -> Options.  Yet nobody knows why you would ever enable it.  I once heard a community member theorize that it might have been put into the product back when it was considered a viable “local persistence option for Windows CE/Mobile apps” and it would help w/ resource management on said mobile device by freeing up resources when the DB wasn’t in-use.  Well, we all know how well that product line did in the market (hint: poorly).  There are so many better options for localized data stores in mobile dev, MS isn’t even a blip in the conversation.  (Again, talking local data persistence.)

If we’re talking cloud, MS is a big part of that conversation — Azure is amazing, and a solid competitor to AWS & GCP.

Anyway, if you ever find a SQL DB with the auto_close option enabled, find the person responsible, and slap them with a trout.

stick figure chasing another with purple trout
I will find you, and I will trout-slap you…
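
Speaking of finding them: here’s a quick way to hunt down the offenders and fix them (a minimal sketch; "SomeNeglectedDB" is obviously a placeholder for whatever the first query turns up):

--find any databases with the dreaded option enabled:
SELECT name FROM sys.databases WHERE is_auto_close_on = 1;
--then, for each offender, turn it off:
ALTER DATABASE [SomeNeglectedDB] SET AUTO_CLOSE OFF;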

3. Server Max Memory

Last one for today.  This is something that’s not completely heinous, but it could use a LOT more built-in intelligence during the installation process so that DBAs & SysAdmins wouldn’t need to think about it so much.  SQL Server, by default, sets its max-memory to some-odd-billion MBs (technically the max value of a 32-bit signed int, 2,147,483,647, which the geekier among you have probably memorized), which is of course some-odd-million GBs, which is more than even the most bleeding-edge servers have to date.  Which is fine in theory — you paid a crap-ton of money for this system, it might as well use up all the RAM it can to perform to its potential, right?

all your ram are belong to us
in AD 2101…

Right.  Until you realize, “Oh wait, it’s running inside an OS” (whether that’s Windows or Linux, thanks to the 2016 & 2017 product versions) — and that OS needs some RAM too, to keep itself running!  (Cue the Linux zealots with their “it’s so much more lightweight than Windoze, haha!!1” — yeah, well, you still need some memory space, don’t ya?)

Here’s what I’d like to see, in my ideal world.  During SQL Server installation, it would detect how much RAM is on the system, subtract about 10% or 4-8 GB, leave that for the OS, and use the resulting number as its limit.  Boom, done, nobody has to think about configuring it and checking off another checkbox on their setup checklist.
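
And honestly, the logic isn’t rocket surgery.  Here’s a rough sketch of what that could look like if you ran it yourself right after install; the "10%, clamped to 4-8 GB" reserve is just my rule-of-thumb from above, not any official formula:

--how much RAM does the box have?
DECLARE @totalMB int = (SELECT total_physical_memory_kb / 1024 FROM sys.dm_os_sys_memory);
--reserve roughly 10% for the OS, clamped to the 4-8 GB range:
DECLARE @osReserveMB int = CASE WHEN @totalMB / 10 > 8192 THEN 8192
                                WHEN @totalMB / 10 < 4096 THEN 4096
                                ELSE @totalMB / 10 END;
DECLARE @maxMB int = @totalMB - @osReserveMB;
--and hand the rest to SQL Server:
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', @maxMB;
RECONFIGURE;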

But noooo… The vaunted MSSQL engineers can build all sorts of amazing things into the product, like Query Store, Adaptive Query Processing, and The Artist Formerly Known as Hekaton, but heaven forbid we get a little more intelligence in the installer.  It got a lot better with 2016, when it let you configure tempDB reasonably correctly (multiple files, different locations, etc.), but there’s still a LOT that could use some lurv.

two kitties cuddling
because KITTEHS!!

Do you have a favorite “stupid default” or “horrible setting” related to SQL Server or any other technology that you work with?  Share in the comments!

Reverse Engineering SSAS Reports

MDX is not SQL. It may look like it has SELECT/FROM/WHERE clauses, but god help you if you start drawing parallels to your standard TSQL query.

This is an exercise I had to go through recently, because A) the reports in question were deployed in SSRS but used an SSAS backing, i.e. cubes, and the source queries (MDX) were not stored in source-control, and B) I don’t write MDX queries.

a basic MDX query example with labeled parts
shamelessly borrowed from an excellent article

The outline:

  1. Run Profiler or XEvents against the SSAS server
    1. set to capture “Query Begin” events only, with “Event Subtype = 0” (for MDX query)
    2. optionally, set filter on NTUserName to the dedicated SSRS account (if you have it set up that way)
  2. Run the SSRS report(s) that you want to dive into
  3. For each event in the Trace, copy-paste the MDX query to a new MDX editor window
  4. SSRS parameter substitution happens via some XML at the bottom; but in MDX, the parameters are standard @params like in T-SQL.  So we need to manually substitute our parameter values.
    1. 2 blocks of XML: the “Parameters”, and the “PropertyList” — delete the latter.
    2. In the former, text-replace &amp; with simply & (the member keys come through XML-escaped).
    3. Side-bar: You’ll notice that the MDX parameters are usually inside STRTOMEMBER() or STRTOSET(), which are built-in MDX functions that do exactly what they sound like — parse a string into a dimension’s attribute’s member or set of members.  That’s why they’ll usually have at least 3 .‘s (dots) — Dimension.Attribute.&MemberValue, for example.  I’m grossly oversimplifying that because it’s beyond the scope of this post, but read the docs if you need more gritty details.
  5. For each parameter node:
    1. Copy/cut the <Value> node content (I like to ‘cut’ because it helps me keep track of which ones I’ve done already)
    2. Find-and-Replace @ParameterName with that Value node’s content, surrounded in single-quotes
    3. Example: we have parameter ​@ReportDate (in MDX), corresponding to <Parameter><Name>ReportDate</Name> in XML, with <Value xsi:type="xsd:string">[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]</Value> — where that last bit is a standard SQL datetime literal.
    4. So you replace @ReportDate with '[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]' .
  6. Delete the XML block.
  7. Boom, now you have a valid MDX query that you can run and view results.

Why do this?  Well, it can help you learn MDX from a working example, instead of from super-basic dummy examples.  That’s not always a good learning style — you should still learn the fundamentals of MDX and why it’s so very different from SQL.  Especially if you’ll be responsible for writing and maintaining more than a few MDX queries.  But, in a pinch, if you need to start somewhere, and possibly all that the MDX / overlaying report needs is a slight tweak, this may be enough to get you going.

Lessons learned:

  • Profiler can still be a useful tool, despite some people’s attempts to kill it.
  • MDX is not SQL.  It may look like it has select, from, and where clauses, but god help you if you start drawing parallels to your standard TSQL query.
  • SSRS does parameter-passing in an odd way.
  • SSAS & MDX are fascinating and I need to learn more about them!
the more you know
Canadians, eh?

Quickie: SQL DB Role Members

A typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for…

Just a brief post on adding/removing users (database level users) to/from roles (database level roles).  It’s relevant because several shops are still stuck supporting at least a few 2008 (or hopefully, 2008R2) instances, and there is a key difference between those and newer (2012 & up) versions in the “preferred” method of doing this security task.

security seal for ur protection
He’s wearing a police hat, he must know what he’s doing…
There are reams of documentation and books and articles written about SQL security in general.  That is beyond the scope of this post (and indeed, beyond the scope of any single blog, unless you’re an SME on the subject!).  But a typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for, when they’re planning/deploying a new app against their SQL DB(s).

The “old” method involves calling system stored-procedures, sp_addrolemember and sp_droprolemember, in which you pass the role-name and username.  The “new” method, supported starting with SQL 2012, is to use the command-phrases ALTER ROLE [role] ADD MEMBER [user], and ALTER ROLE [role] DROP MEMBER [user].

The latter is more ‘standard’, while the former is more ‘Microsoft-y’.  I couldn’t easily find whether it’s part of the official ANSI standard or not… that’s an exercise for the reader.  What I find very interesting is that Azure’s data warehouse offerings require the old method.  Of course, hopefully in a DW setting you’re not messing with security nearly as much as in a typical OLTP system, but… yeah.

Does that mean those Azure services are built on top of older SQL engine versions?  Possibly.  MSFT isn’t too open about the deep internals of such tech, but neither is any other cloud vendor, so we can’t really ask them such a question and expect anything more than a blank-stare.  But it is curious, no?

fry not sure if curious or suspicious
Exactly.
Syntax examples:  Let’s add the user foo to the database Bard, in the db_datareader built-in role.  Then we’ll remove him.  (Or her, I guess; “foo” is a pretty gender-neutral name.)  Creating said user is easy, so I’ll start with that, and it’s the same in all supported versions.  You need a server-level login to link it to; if you don’t have one, I’ll show you how to create it first.

Create server-level login:

--preferably, you create a login for an existing AD/Windows account:
CREATE LOGIN [yourdomain\foo] FROM WINDOWS;
--or, you can just create a SQL login (not connected to domain/Windows/ActiveDirectory; also less secure, as discussed here and here)
CREATE LOGIN [foo] WITH PASSWORD = 'foobar';

Create database-level user:

USE Bard;
--if you made the domain/Windows login:
CREATE USER [foo] FOR LOGIN [yourdomain\foo];
--or, if you just made the SQL login:
CREATE USER [foo] FOR LOGIN [foo];

Now the role-membership.

Old way:

  1. Add user to role:
    • exec Bard.sys.sp_addrolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';
  2. Check that it worked:
    • exec Bard.sys.sp_helprolemember
          @rolename = 'db_datareader'
    • It will show something like this:
      db_datareader with member 'foo'
  3. Remove user from role:
    • exec Bard.sys.sp_droprolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';

New way (step 2, the “check”, is the same):

  1. Add user to role:
    • USE Bard;
      ALTER ROLE db_datareader ADD MEMBER [foo];
  2. Check (see above)
  3. Remove user from role:
    • USE Bard;
      ALTER ROLE db_datareader DROP MEMBER [foo];

Yay.

Notice that, because the “old way” is simply executing sys-sp’s, we can actually run it from any database context.  Whereas the “new way” requires you to connect to the database in question.

Note: I am in no way shape or form responsible for you screwing up your database or SQL instance, nor for you getting yelled at by your DBA or security admin or any other form of verbal assault you may incur as a result of running these commands.  But since you need server-admin & database-owner equivalent permissions anyway, you’re probably one of those people already, so you’ll just end up yelling at yourself.

no guarantees
No substitutions, exchanges, or refunds.
Cleanup (just so you don’t muddy your instance/DB up with a silly example user):

USE Bard;
DROP USER [foo];
USE master;
DROP LOGIN [foo];

If you have any questions, feel free to reach out to me!


Adventures in SQL Cluster Stacked Instances

Enough with the pitchforks; this is Test/QA. Here, I talk about 3 gotchas.

Today we’re going to talk about SQL Server instance stacking.

blasphemy meme
Blasphemy of the highest order!

Right, in production.  I’m talking about DEV/TEST environments.

Still, blasphemy!

Settle down.  If your server is set up correctly and has the resources you want it to have, and you divide your resources up per instance in a few very simple ways, it’s fine.  Enough with the pitchforks, the wailing and gnashing of teeth.

okay, okay, calm down
Stop scaring the pandas.

Okay, now that that’s out of the way…

Remember our cute little DEV server?  So, the way he’s set up is, he’s got 3 SQL Server instances on him, each with its own dedicated SSD, and another dedicated SSD just for tempdbs.  Ideally, we’d have a separate SSD for each instance’s tempdb, but sadly, motherboards with 3 M.2 or NVMe slots aren’t (weren’t?) in production at the time, at least not for desktop class systems.  But I digress.

This is called instance stacking.  And yes, it’s a big no-no in production.  Mostly because performance troubleshooting is a pain in the arse.  But also because it’s more difficult to divvy-up resources like RAM and I/O & network throughput channels than one would like.  But it’s super simple to set up — you simply run the SQL Server installer 3x, each time creating a unique instance name.  Then, at the end of it, your SQL instances are addressable by MachineName\InstanceName, e.g. SQLDEV\Foo, SQLDEV\Bar, etc
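
Side note: if you ever lose track of which stacked instance a given connection actually landed on, a quick sanity check never hurts (nothing fancy, just the built-in server properties):

--which instance am I actually on?
SELECT @@SERVERNAME                    AS [WhoAmI],
       SERVERPROPERTY('MachineName')   AS [MachineName],
       SERVERPROPERTY('InstanceName')  AS [InstanceName];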

Now the time came to create a “QA” environment.  Which, like DEV, didn’t need to be very performant (that’s a made-up word that consultants like to use, but it’s generally accepted in our industry so we go with it), and so, since we had some hardware lying around from a recent “up-gration” (upgrade-migration… okay, now I’m being ridiculous), we said “let’s use that thing!”.  It was a 2-node cluster setup with shared DAS storage.  For the uninitiated, DAS is Direct Attached Storage, i.e. an array of disks that you can directly attach to 1 or more servers using whatever interconnect is available on the endpoints (usually SAS, serial-attached SCSI – which is one of the most fun acronyms to pronounce in IT: “scuzzy”).  DAS is not to be confused with a SAN, Storage Area Network, which is a super fancy storage array with performance tiers and snapshot technology and de-duplication and all that hotness.

NAS / SAN / DAS diagram
NAS, SAN, DAS – 3 acronyms, 1 underlying purpose, 3 implementations.

The interesting thing with a cluster is, when you install SQL Server instances, you can’t actually use the same “MachineName” for the 3 different “InstanceName”s.  Because in a cluster, the former is actually the “VirtualServerName”, which must be unique per clustered instance, in order to properly configure cluster resources, storage pools, and networks.

The reason this is interesting, is that it contrasts with stacked instance setup on a standalone server (non-clustered).  So if you compared our DEV and QA setups side-by-side, it’s a bit odd-ball: instead of SQLDEV\Inst1, SQLDEV\Inst2, etc., we have instance names like SQLQA1\Inst1, SQLQA2\Inst2, etc.  That makes the ol’ “find and replace” in config files a bit harder.  But, at the end of the day, it’s all just names!

find-and-replace dialog
One of the handiest tools in an engineer’s toolbox!

Another interesting “gotcha” revolves around SQL 2008R2, which I know shouldn’t be on the short-list of versions to spin up, but unfortunately, a legacy ERP system demands it.  Well, it only happened to me with the 2008R2 instance installation, not the 2016’s, but that’s not to say it couldn’t happen with others.  Anyway, after installation, SQL Agent was not working; it wasn’t coming up as a cluster resource.  Basically, exactly what was outlined in this timely & detailed article at mssqltips.  I won’t restate the fix instructions here, just give it a read!  I do want to clarify something though.

In part of the fix, we use the handy-dandy PowerShell cmdlet Add-ClusterResourceDependency .  In its basic form, it requires 2 arguments, Resource and Provider.  To someone who’s not a cluster expert, this terminology might be a bit confusing.  Resource in this case is the SQL Server Agent, while Provider is SQL Server itself.  But we’re adding a Dependency, right?  Which depends on which?  Well, we know that Agent depends on the engine, so, Resource depends on Provider.  Yes, I know, that’s what the article tells you to do — I just like to understand why.

can you tell me why?
Fry shares my curiosity…

Finally, there’s the question of divvying-up resources to the stacked clustered instances.  Now, in a standard cluster, you’ve got your active node and your passive node.  But if we’re stacking instances, we might as well split the SQL instances up and take advantage of the compute resources on both nodes.  (Storage is still shared; this is a cluster, after all!)  The CPUs are no problem — however many instances are stacked on a node, they’ll share the CPU cores pretty cooperatively.  Memory is a bit of a different story.  We want to take advantage of all the available RAM in the cluster, but…

As you know, you can configure each SQL instance to use a set amount of max. server memory.  So let’s say each cluster node has 32GB RAM, and we’re stacking 4 SQL instances total (to keep the math easy!).  If we split them up among the nodes at 2 each, each instance can use 16GB.  But if for some reason a node goes down, and all 4 instances move to 1 node, now they’re fighting for that 32GB!  So we should reduce their max-memory settings to 8GB each, instead of 16.  But we don’t want to do this manually!  Fortunately Aaron Bertrand has an excellent blog post on the subject, with some useful ideas about how to do this dynamically & automatically.  The only issue I have with it is that it requires the linked-servers to use a highly privileged account (sysadmin or maybe serveradmin role) to be able to set that max-server-memory setting.  But wait, remember what we said at the beginning?  This ain’t production!  Who cares about security?  (That’s facetious, sort of — in reality, yes, we don’t care as much about security in lower environments, but we should still care a little!)
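
If you just want a feel for the math without the full linked-server machinery from Aaron’s post, here’s a bare-bones sketch you’d run on each instance.  @instanceCount is a placeholder for however many instances currently live on the node, which is exactly the part his post figures out dynamically:

DECLARE @instanceCount int = 2;   --placeholder: how many SQL instances share this node right now
DECLARE @totalMB int = (SELECT total_physical_memory_kb / 1024 FROM sys.dm_os_sys_memory);
DECLARE @osReserveMB int = 4096;  --leave a few GB for the OS
DECLARE @perInstanceMB int = (@totalMB - @osReserveMB) / @instanceCount;
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', @perInstanceMB;
RECONFIGURE;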

the “password is incorrect” meme
We all do silly things from time to time..

That concludes this week’s adventure!  Thanks for reading.