Hacking RedGate DLM Dashboard

This is a work-in-progress, but I thought I’d put it out there because it may be interesting to others who use the tool.

This is a “teaser” in that I’m not done writing the content yet, but I wanted to get it out there as “something I’m working on”.  The problem at hand is that we have a table trigger which is constantly being disabled/enabled by a stored-proc, which in turn is the only acceptable method of updating the columns that the trigger would otherwise forbid from update.  Clear as mud?  K.  Basically, there’s a schema change (“drift”) of one specific type that I want DLM Dashboard to always ignore on this DB.
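If that’s still clear as mud, here’s the pattern in miniature; a hypothetical sketch, with every object name invented:

-- the proc is the ONLY sanctioned path for touching the protected columns
CREATE PROCEDURE dbo.usp_UpdateProtectedColumns
 @Id INT,
 @NewValue VARCHAR(50)
AS
BEGIN
 DISABLE TRIGGER trg_BlockProtectedUpdates ON dbo.MyTable;  -- this DDL event is the "drift" that keeps getting flagged
 UPDATE dbo.MyTable SET ProtectedCol = @NewValue WHERE Id = @Id;
 ENABLE TRIGGER trg_BlockProtectedUpdates ON dbo.MyTable;   -- and so is this one
END;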

Outline:

  1. Basic architecture
    1. RavenDB back-end
    2. SQL db, tables, procs, & triggers (“installation”)
  2. How it does event tracking
    1. Jessica = awesome support agent
  3. Modifying the event-reader to exclude app-names and other things
    1. XML event data
    2. XML indexes to help filter queries
    3. https://www.simple-talk.com/sql/database-administration/getting-started-with-xml-indexes/
  4. Schema-compare files to filter results
  5. Wholesale ignoring of a change-set (if possible)

Some code:

USE RedGate;
-- run the event-reader proc manually
EXEC dbo.RG_SQLLighthouse_ReadEvents;

-- eyeball the raw captured DDL events
SELECT TOP 100 * FROM RedGate.SQLLighthouse.DDL_Events de
ORDER BY de.PostTime;

-- same, but excluding the "noise" app-names
SELECT TOP 100 * FROM RedGate.SQLLighthouse.DDL_Events de
WHERE de.appname NOT IN ('SQLServerCEIP', 'Spotlight Diagnostic Server (Monitoring)')
AND de.appname NOT LIKE 'SQLAgent%'
ORDER BY de.PostTime DESC;

-- replace the system-named clustered PK with a nonclustered one,
-- then cluster on PostTime instead (your system-generated PK name will differ)
ALTER TABLE SQLLighthouse.DDL_Events DROP CONSTRAINT PK__DDL_Even__3213E83FB62EF9A3;

ALTER TABLE SQLLighthouse.DDL_Events ADD CONSTRAINT PK_DDL_Events PRIMARY KEY NONCLUSTERED (id);

CREATE CLUSTERED INDEX CX_DDL_Events_PostTime ON SQLLighthouse.DDL_Events
 (PostTime);

-- supporting nonclustered indexes for the filters we care about
CREATE INDEX IX_DDL_Events_SPID ON RedGate.SQLLighthouse.DDL_Events
 (spid)
 INCLUDE (transaction_id, options, nestlevel);
CREATE INDEX IX_DDL_Events_AppName ON RedGate.SQLLighthouse.DDL_Events
 (appname)
 INCLUDE (client_net_address);

-- primary XML index first, then PATH & PROPERTY secondaries, to help filter on eventdata
CREATE PRIMARY XML INDEX XI_DDL_Events_EventData ON SQLLighthouse.DDL_Events
 ([eventdata]);

CREATE XML INDEX XI_DDL_Events_EventData_PATH ON SQLLighthouse.DDL_Events
 ([eventdata])
USING XML INDEX XI_DDL_Events_EventData
 FOR PATH;

CREATE XML INDEX XI_DDL_Events_EventData_PROPERTY ON SQLLighthouse.DDL_Events
 ([eventdata])
USING XML INDEX XI_DDL_Events_EventData
 FOR PROPERTY;
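And here’s the flavor of query those XML indexes are meant to serve; just a sketch, since the real filter depends on what you’re excluding, and it assumes the standard DDL event schema (EVENT_INSTANCE/ObjectName) plus the hypothetical trigger name from earlier:

-- find events raised by disabling/enabling one specific trigger
SELECT de.id, de.PostTime, de.appname
FROM SQLLighthouse.DDL_Events de
WHERE de.eventdata.exist('/EVENT_INSTANCE[ObjectName="trg_BlockProtectedUpdates"]') = 1
ORDER BY de.PostTime DESC;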

TBD, stay tuned!

Stupid Defaults

Every big enterprise product has them.  I just happen to pick on SQL because it’s my area of expertise.

This week’s topic was triggered by an amazing quote from Adam Machanic (b|t) in the sqlcommunity Slack group, where he was trying to help someone troubleshoot a slow linked-server query:

That default, I consider to have been checked into the SQL Server codebase directly by Satan.

[Image: guy in a Satan costume, laughing]
I imagine he looked something like this… Satan, not Adam.  =D

He’s referring, in this case, to the default option of remote proc transaction promotion, or “Enable Promotion of Distributed Transactions for RPC” in the GUI, which is set to True by default.  Admittedly, linked-servers are a dicey area of the technology and not everybody needs them, but when you do need to deal with them, there are some caveats to consider and a few potholes to avoid.

I won’t go into gory details, but the mile-high gist of it is that you should probably change a couple of the default linked-server settings when you’re going from MSSQL to MSSQL (which, most of the time, you are): rpc and rpc out both to true, and the above-mentioned promotion option to false.  The reasons behind that are subtle, but it boils down to usage patterns:  if you’re purely running basic SELECT statements against the remote server, you can leave this stuff alone; but if you want to do cool stuff like exec MyLinkedServer.RemoteDB.sys.sp_executesql '--some awesome dynamic-sql', you’ll want to change these.  (That last bit comes in especially handy if you’re building a dynamic query string, then running it against the remote server to bring the results into a #temptable for further massaging/mangling/munging.)

Even though you probably really shouldn’t be doing that in the database (that’s what web server farms are for!).

[Image: stack of colorful rings with arrow pointing up]
push it up the stack!
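For reference, here’s roughly what flipping those settings looks like in T-SQL; a sketch, with a made-up linked-server name:

EXEC sp_serveroption @server = 'MyLinkedServer', @optname = 'rpc', @optvalue = 'true';
EXEC sp_serveroption @server = 'MyLinkedServer', @optname = 'rpc out', @optvalue = 'true';
-- aka "Enable Promotion of Distributed Transactions for RPC" in the GUI
EXEC sp_serveroption @server = 'MyLinkedServer', @optname = 'remote proc transaction promotion', @optvalue = 'false';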

So, what are some other “stupid defaults” in SQL Server?

And it’s not even just “defaults”; there are some options which shouldn’t even be a thing — they should be completely and irrevocably (in love with Edward) removed from the product.  Yet, because the RDBMS tech space is infamously conservative and slow to change (the accepted euphemism is “mature”), these options and defaults have remained in the product despite our best attempts to convince MS that they’re heinous.

1. Parallelism settings (server/instance level)

Your servers have many-core CPUs, right?  And you want SQL to utilize those cores to the best of its ability, distributing the many users’ workloads fairly amongst them, yes?  Damn right, you paid $3k or more per core in freaking licensing costs!  “OK”, says SQL Server, “I’ll use all available CPUs for any query with a ‘cost’ over ‘5’“.  (To give context here, in case you’re not aware, ‘5’ is a LOW number; most OLTP workload queries are in the double to triple digits).  “But wait!”, you protest, “I have more than 1 user, obviously, and I don’t want their horrible queries bringing all CPUs to their knees and forcing the 50 other user queries to wait their turn!”

well... tough shit
-SQL Server

Hardly seems fair, no?  But those are the defaults, and have been for over a decade, despite how much hardware has evolved since.  Just for completeness, I am talking about the following 2 options, which you can view by right-clicking on the instance in SSMS, or by running sys.sp_configure.  They are max degree of parallelism and cost threshold for parallelism, respectively (# of cores to use, and how expensive a query should be to “go parallel” — as opposed to serial/single-threaded, which is perfectly fine for those itty-bitty-teeny-tiny queries that are so quick that 1 CPU core is more than enough to handle them).  We commonly abbreviate the former as maxDOP; the latter is less commonly abbreviated, but because I’m a sucker for acronyms, I like to call it CTFP, or maybe costFP if I’m feeling verbose.

Now obviously you can, and should, change these settings.  “To what?” you ask.  Well, the answer, as always, my friend, is “It Depends ©®™”… a perennial favorite of DBAs and consultants alike.  But don’t ask me — there are plenty of people much smarter than I with blog posts on the topic.  If you put a gun to my head, for CTFP, I’d say “pick your favorite number between 50 and 100, start there, and test to see if it makes your server CPUs happy or sad”.  And for maxDOP I’d say “divide your # of CPU cores by 2 (if you have 8 or fewer) or by 4 (if you have 16 or more)”.

And if you have somewhere between 9 and 15 CPU cores, don’t ask me, because you’re running some.. interesting hardware.  Figure it out yourself, and get that gun out of my face!

OK, I know 12 cores is probably not an unreasonable config, so.. extrapolating my logic above, divide by.. 3?  Yeah let’s go with that.
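For completeness, the gun-to-my-head numbers above translate to a couple of sp_configure calls; a sketch, using an 8-core box as the example:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;  -- pick your favorite number between 50 and 100
EXEC sys.sp_configure 'max degree of parallelism', 4;        -- 8 cores / 2
RECONFIGURE;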

[Image: squirrel with “ship it” flag]
ship it good

2. Auto Close (database option)

It’s at the top of the list in the GUI under Database Properties -> Options.  Yet nobody knows why you would ever enable it.  I once heard a community member theorize that it might have been put into the product back when it was considered a viable “local persistence option for Windows CE/Mobile apps” and it would help w/ resource management on said mobile device by freeing up resources when the DB wasn’t in-use.  Well, we all know how well that product line did in the market (hint: poorly).  There are so many better options for localized data stores in mobile dev, MS isn’t even a blip in the conversation.  (Again, talking local data persistence.)

If we’re talking cloud, MS is a big part of that conversation — Azure is amazing, and a solid competitor to AWS & GCP.

Anyway, if you ever find a SQL DB with the auto_close option enabled, find the person responsible, and slap them with a trout.

[Image: stick figure chasing another with purple trout]
I will find you, and I will trout-slap you…
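To aid the hunt, here’s a quick way to find (and fix) offenders; the database name in the ALTER is obviously a stand-in:

-- find databases with AUTO_CLOSE enabled
SELECT d.name
FROM sys.databases d
WHERE d.is_auto_close_on = 1;

-- then put a stop to it
ALTER DATABASE [OffendingDB] SET AUTO_CLOSE OFF;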

3. Server Max Memory

Last one for today.  This is something that’s not completely heinous, but could use a LOT more built-in intelligence during the installation process so that DBAs & SysAdmins didn’t need to think about it so much.  SQL Server, by default, sets its max-memory to some-odd-billion MBs (technically it’s the max value of a 32-bit int, which the more geeky among you have probably memorized), which is of course some-odd-million GBs, which is more than even the most bleeding-edge servers have to date.  Which is fine in theory — you paid a crap-ton of money for this system, it might as well use up all the RAM that it can to perform to its potential, right?

[Image: all your ram are belong to us]
in AD 2101…

Right.  Until you realize “Oh wait, it’s running inside an OS” (whether that’s Windows or Linux, thanks to the 2016 & 2017 product versions), and that OS needs some RAM too, to keep itself running!  (Cue the Linux zealots with their “it’s so much more lightweight than Windoze, haha!!1” — yeah, well, you still need some memory space, don’a ya?)

Here’s what I’d like to see, in my ideal world.  During SQL Server installation, it would detect how much RAM is on the system, subtract about 10% or 4-8 GB, leave that for the OS, and use the resulting number as its limit.  Boom, done, nobody has to think about configuring it and checking off another checkbox on their setup checklist.
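Until that ideal installer exists, the manual version is one more sp_configure call; a sketch, assuming a 64 GB box and the leave-4-8-GB-for-the-OS rule above:

-- 64 GB total, minus 8 GB for the OS, leaves 56 GB (57344 MB) for SQL Server
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;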

But noooo… The vaunted MSSQL engineers can build all sorts of amazing things into the product, like QueryStore, Adaptive Query Processing, and The Artist Formerly Known as Hekaton, but heaven forbid we get a little more intelligence in the installer.  It got a lot better with 2016, when it let you configure tempDB reasonably correctly (multiple files, different locations, etc.), but there’s still a LOT that could use some lurv.

[Image: two kitties cuddling]
because KITTEHS!!

Do you have a favorite “stupid default” or “horrible setting” related to SQL Server or any other technology that you work with?  Share in the comments!

Reports for Auditors

This is how code quality dies…

Look, I get it, audits and auditors are a necessary evil.  But the sheer volume of crappy code, hack-y scripts, man-hours, and alcohol dedicated to meeting the often arbitrary, sometimes ridiculous, and possibly even downright obscene requirements of said auditors is staggering.  There are entire product suites dedicated to helping you “report on” or “meet/pass” audits of various kinds.  And thank god for those.  Because without them, we’d all be slightly more insane than we already are.

I’m not crazy; my mother had me tested.  And my rubber duck agrees.

Here’s an example.

In one of our main applications, at the database level (SQL), we’ve implemented poor-man’s data-change-tracking via table triggers.  In a past audit, there was a “flag” on an unauthorized data change.  As a result, an “exception report” was created, which basically produces a daily CSV dump of those changes and emails it to a few managers.  Now, because the tracking table includes “who” as well as “when” & “what”, and there are certain privileged accounts that can be ignored for auditing, and because there are dozens of application users whose app-controlled changes are also logged in said tracking table (and need to be ignored), the “report” writer at the time decided it was a wonderful (read: terrible) idea to hard-code a list of usernames into the query that produces this data-dump.

That is horrible.  Never do that!

[Image: using hard-coded values is bad]
Mr. Garrison knows…
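(For the record, the eventual fix, described below, boils down to swapping the hard-coded IN list for a maintained exclusion table; a sketch, with every name invented:)

-- instead of:  WHERE t.Username NOT IN ('alice', 'bob', 'svc_app1', ...)
SELECT t.*
FROM dbo.DataChangeTracking t            -- hypothetical trigger-fed tracking table
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.AuditExcludedUsers x        -- refreshed periodically from AD groups / departments
    WHERE x.Username = t.Username
);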

This query, the job that scheduled it, the data that it produced, and the emails that it sent were subsequently filed away and silently forgotten about.  Until we did some environment cleanup and discovered “Oh look, here’s a thing that’s not doing anything” (because the hard-coded list of users was now woefully out of date), and the same managers who had received the reports for years past gave the OK to disable the job!

What could possibly go wrong?

Then the next audit season comes around.  And guess who comes running back to the DBA having changed his mind?  Why yes, that very same manager!

[Image: i don’t always say i told you so]
Stay shortsighted, my friends…

So I tell him what we need to do to fix it — remove the hard-coded list, replace it with an AD group or at least a look-up table that can be periodically updated with “current” users in the desired departments/teams.  We do that, and the CSV dump goes from zero to huge.  As in, too huge for email.  So we try to pare it down.  We get it to a manageable size.  Then we get ready to deploy the changes and confirm the desired email recipients.

  • Boss: “Oh it doesn’t really matter, we just ignore the emails anyway… it’s just to satisfy an audit requirement.”
  • Me: “So why did we even bother fixing it?”
  • Boss: “Well, it was broken!”
  • Me: “…”

I’d like the last hour of my life back please.

[Image: stress reduction kit: bang head here]
It works, I sw…

Here’s another fun one.

The ERP system has a rickety tack-on audit-trail system involving some SQL Agent Jobs and an obscenely large data repository for storing all the changes that the ERP system users make every hour of every day.  Thankfully, somebody along the way was prescient enough to partition those storage tables using a partitioned view scheme.  But apparently they were NOT smart enough to realize that the clustered index needs to be the datetime column.  Yes, that column was in the primary key, but the PK was not clustered (nor should it have been, since the actual unique keys were GUIDs; this is a prime example of a case where the clustering key should be different and separate from the PK).

Obviously enough, in a date-driven partitioned view scheme, that date does need to be involved in the PK also.  But again, the best clustered index is just that date, because A) it’s the partition divider, and B) it’s always your queries’ main filter-predicate (which is a fancy way of saying “the thing in your JOIN/WHERE clauses”).
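In table form, the shape we wanted looks something like this; a sketch with invented names, showing one member table of the partitioned view:

-- unique key is a GUID, but the clustering (and partitioning) key is the date
CREATE TABLE dbo.AuditTrail_2017 (
    AuditId       UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    ChangeDate    DATETIME NOT NULL,
    ChangePayload NVARCHAR(MAX) NULL,  -- stand-in for the real "what changed" columns
    CONSTRAINT PK_AuditTrail_2017
        PRIMARY KEY NONCLUSTERED (AuditId, ChangeDate),
    CONSTRAINT CK_AuditTrail_2017_Range  -- the partition divider for the view
        CHECK (ChangeDate >= '20170101' AND ChangeDate < '20180101')
);
CREATE CLUSTERED INDEX CX_AuditTrail_2017_ChangeDate
    ON dbo.AuditTrail_2017 (ChangeDate);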

Once again, audit season comes around, and someone needs those reports driven by these audit-trail repos.  And guess what?  They’re slow as molasses.

[Image: you don’t say]
Nicolas Cage is apparently quite the flexible meme fodder…

So we get to spend another few hours applying the correct clustered indexes, waiting for them to rebuild, testing the queries for the report, and finally running the report itself.  Oh and don’t forget the down-time of the jobs, and having to “catch up” with all the changes that were made in the ERP system while we were working on all this.

Is it 5 o’clock yet?

In conclusion.

Audits suck.  But please, stop producing crappy code in response.  It just makes the next guy/gal that comes after you even more frustrated than he/she already is with the whole process.  Practice the “boy scout principle” — leave things at least a little better than you found them.  And if you’re forced to produce a ridiculous “exception report” that nobody will ever read, spend as little time on it as possible while still making it less heinous than the last terribly designed monstrosity.

[Image: oh my, look, it’s beer-o-clock]
That clock might be slightly ahead… but we’re gonna go with it.