This week’s topic was triggered by an amazing quote from Adam Machanic (b|t) in the sqlcommunity Slack group, where he was trying to help someone troubleshoot a slow linked-server query:
That default, I consider to have been checked into the SQL Server codebase directly by Satan.
He’s referring, in this case, to the default option of remote proc transaction promotion, or “Enable Promotion of Distributed Transactions for RPC” in the GUI, which is set to True by default. Admittedly, linked servers are a dicey area of the technology and not everybody needs them, but when you do need to deal with them, there are some caveats to consider and a few potholes to avoid.
I won’t go into gory details, but the mile-high gist of it is that you should probably change a couple of the default linked-server settings when you’re going from MSSQL to MSSQL (which, most of the time, you are): set rpc and rpc out both to true, and the above dude (remote proc transaction promotion) to false. The reasons behind that are subtle, but it boils down to usage patterns: If you’re purely running basic SELECT statements against the remote server, you can leave this stuff alone; but if you want to do cool stuff like exec MyLinkedServer.RemoteDB.sys.sp_executesql '--some awesome dynamic-sql', you’ll want to change these. (That last bit comes in especially handy if you’re building a dynamic query string, then running it against the remote server to bring the results into a #temptable to further massage/mangle/munge said data.) Even though you really shouldn’t be doing that in the database (that’s what web server farms are for!).
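If you’d rather script those changes than click through the GUI, here’s a minimal sketch using sp_serveroption; the linked-server name MyLinkedServer is just a placeholder for illustration:

```sql
-- Assumes a linked server named [MyLinkedServer] already exists;
-- substitute the name from your environment.
EXEC sys.sp_serveroption @server = N'MyLinkedServer',
    @optname = N'rpc', @optvalue = N'true';
EXEC sys.sp_serveroption @server = N'MyLinkedServer',
    @optname = N'rpc out', @optvalue = N'true';
-- The option Satan checked in, a.k.a. "Enable Promotion of
-- Distributed Transactions for RPC" in the GUI:
EXEC sys.sp_serveroption @server = N'MyLinkedServer',
    @optname = N'remote proc transaction promotion', @optvalue = N'false';
```

The changes take effect immediately; no restart needed.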
So, what are some other “stupid defaults” in SQL Server?
Every big enterprise product has them. I just happen to pick on SQL because it’s my area of expertise. And it’s not even just “defaults”; there are some options which shouldn’t even be a thing; they should be completely and irrevocably removed from the product. Yet, because the RDBMS tech space is infamously conservative and slow to change (the accepted euphemism is “mature”), these options and defaults have remained in the product despite our best attempts to convince MS that they’re heinous.
1. Parallelism settings (server/instance level)
Your servers have many-core CPUs, right? And you want SQL to utilize those cores to the best of its ability, distributing the many users’ workloads fairly amongst them, yes? Damn right, you paid $3k or more per core in freaking licensing costs! “OK”, says SQL Server, “I’ll use all available CPUs for any query with a ‘cost’ over ‘5’”. (To give context here, in case you’re not aware, ‘5’ is a LOW number; most OLTP workload queries are in the double to triple digits.) “But wait!”, you protest, “I have more than 1 user, obviously, and I don’t want their horrible queries bringing all CPUs to their knees and forcing the 50 other user queries to wait their turn!”
Hardly seems fair, no? But those are the defaults, and have been for over a decade, despite how much hardware has evolved since. Just for completeness, I am talking about the following 2 options, which you can view by right-clicking on the instance in SSMS, or by running sys.sp_configure. They are max degree of parallelism and cost threshold for parallelism, respectively: the number of cores to use, and how expensive a query should be to “go parallel” (as opposed to serial/single-threaded, which is perfectly fine for those itty-bitty-teeny-tiny queries that are so quick that 1 CPU core is more than enough to handle them). We commonly abbreviate the former as maxDOP; the latter is less commonly abbreviated, but because I’m a sucker for acronyms, I like to call it CTFP, or maybe costFP if I’m feeling verbose.
Now obviously you can, and should, change these settings. “To what?” you ask. Well, the answer, as always, my friend, is “It Depends ©®™”… a perennial favorite of DBAs and consultants alike. But don’t ask me; there are plenty of people much smarter than I with blog posts on the topic. If you put a gun to my head, for CTFP, I’d say “pick your favorite number between 50 and 100, start there, and test to see if it makes your server CPUs happy or sad”. And for maxDOP I’d say “divide your # of CPU cores by 2 (if you have 8 or fewer) or by 4 (if you have 16 or more)”.
And if you have somewhere between 9 and 15 CPU cores, don’t ask me, because you’re running some.. interesting hardware. Figure it out yourself, and get that gun out of my face!
OK, I know 12 cores is probably not an unreasonable config, so.. extrapolating my logic above, divide by.. 3? Yeah let’s go with that.
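Putting those rules of thumb into T-SQL, here’s a sketch for a hypothetical 16-core box; the values 50 and 4 are starting points from the guidance above, not gospel, so test against your own workload:

```sql
-- Both options are "advanced", so expose them first.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- Starting point from the 50-100 range discussed above:
EXEC sys.sp_configure N'cost threshold for parallelism', 50;
-- 16 cores / 4, per the rule of thumb above:
EXEC sys.sp_configure N'max degree of parallelism', 4;
RECONFIGURE;
```

Both take effect immediately after RECONFIGURE, though changing them does flush the plan cache, so maybe don’t do it at 2pm on a Tuesday.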
2. Auto Close (database option)
It’s at the top of the list in the GUI under Database Properties -> Options, yet nobody knows why you would ever enable it. (For the uninitiated: auto_close shuts a database down and releases its resources whenever the last user disconnects, so every new connection pays the cost of spinning it back up.) I once heard a community member theorize that it might have been put into the product back when it was considered a viable “local persistence option for Windows CE/Mobile apps”, where it would help with resource management on said mobile device by freeing up resources when the DB wasn’t in use. Well, we all know how well that product line did in the market (hint: poorly). There are so many better options for localized data stores in mobile dev that MS isn’t even a blip in the conversation. (Again, talking local data persistence.)
If we’re talking cloud, MS is a big part of that conversation — Azure is amazing, and a solid competitor to AWS & GCP.
Anyway, if you ever find a SQL DB with the auto_close option enabled, find the person responsible, and slap them with a trout.
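For the record, hunting down and fixing the offenders is a two-liner (the database name YourDB is a placeholder):

```sql
-- Find any databases with the offending option enabled...
SELECT name
FROM sys.databases
WHERE is_auto_close_on = 1;

-- ...and turn it off (substitute each actual database name):
ALTER DATABASE [YourDB] SET AUTO_CLOSE OFF;
```

Then go find your trout.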
3. Server Max Memory
Last one for today. This is something that’s not completely heinous, but could use a LOT more built-in intelligence during the installation process so that DBAs & SysAdmins didn’t need to think about it so much. SQL Server, by default, sets its max memory to some-odd-billion MBs (technically 2,147,483,647, the max value of a 32-bit signed int, which the more geeky among you have probably memorized), which is of course some-odd-million GBs, more than even the most bleeding-edge servers have to date. Which is fine in theory: you paid a crap-ton of money for this system, so it might as well use up all the RAM it can to perform to its potential, right?
Right. Until you realize that “oh wait, it’s running inside an OS” (whether that’s Windows or Linux, thanks to the 2016 & 2017 product versions), and that OS needs some RAM too, to keep itself running! (Cue the Linux zealots with their “it’s so much more lightweight than Windoze, haha!!1”. Yeah, well, you still need some memory space, don’t ya?)
Here’s what I’d like to see, in my ideal world. During SQL Server installation, it would detect how much RAM is on the system, subtract about 10% or 4-8 GB, leave that for the OS, and use the resulting number as its limit. Boom, done, nobody has to think about configuring it and checking off another checkbox on their setup checklist.
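Until the installer grows that brain, the rule of thumb is easy enough to script yourself after installation. Here’s a sketch that reads total RAM from sys.dm_os_sys_memory and leaves the OS the larger of roughly 10% or 4 GB; the exact reservation is a judgment call, so adjust to taste:

```sql
DECLARE @totalMB int, @osReserveMB int, @sqlMaxMB int;

-- Total physical RAM on the box, in MB.
SELECT @totalMB = total_physical_memory_kb / 1024
FROM sys.dm_os_sys_memory;

-- Reserve the larger of ~10% of RAM or 4 GB for the OS.
SET @osReserveMB = CASE WHEN @totalMB / 10 > 4096
                        THEN @totalMB / 10
                        ELSE 4096 END;
SET @sqlMaxMB = @totalMB - @osReserveMB;

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', @sqlMaxMB;
RECONFIGURE;
```

Boxes running SSRS/SSIS/SSAS or anything else alongside the engine need a bigger reservation than this, obviously.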
But noooo… The vaunted MSSQL engineers can build all sorts of amazing things into the product, like Query Store, Adaptive Query Processing, and The Artist Formerly Known as Hekaton, but heaven forbid we get a little more intelligence in the installer. It got a lot better with 2016, when it let you configure tempDB reasonably correctly (multiple files, different locations, etc.), but there’s still a LOT that could use some lurv.
Do you have a favorite “stupid default” or “horrible setting” related to SQL Server or any other technology that you work with? Share in the comments!