Clean Code, the SQL: Part 2: Electric Boogaloo

After the previous discussion about nested views, encapsulation & abstraction, I’d like to write about duplication specifically: the distinction between actual lines of code being duplicated, versus functional duplication. The latter is not OK, but the former is generally acceptable when it’s boilerplate code, or when it’s done, again, in the name of performance and efficiency.


So, to expand on last week’s “Encapsulation & Abstraction” segment: the conversation with one of my favorite developers went something like this.

Developer:

While I agree that there’s some over-reliance on nested views, the reason they get implemented a lot is because there’s a particular problem they seem to easily solve: how to encapsulate business-data rules without violating DRY.

Let’s say we have a biz-rule for a “core segment” of data.  For simplicity’s sake, let’s call that rule “Widget A consists of a Widget record and a WidgetSupplement record joined by WidgetID, where Widget.WidgetType is ‘foo’.”  So it seems obvious to create a view WidgetFooComplete, which pulls in the data from both tables and applies the type condition.  This creates a sort of “atomic building block” of data, which can be consumed by apps & data-access methods repeatedly & consistently.
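
For illustration, that “building block” view might look something like this sketch (the table, key, and type names come from the example above; the extra column is an assumption):

CREATE VIEW dbo.WidgetFooComplete
AS
SELECT w.WidgetID
     , w.WidgetType
     , ws.SupplementalInfo   --hypothetical column, for illustration
FROM dbo.Widget w
JOIN dbo.WidgetSupplement ws
    ON ws.WidgetID = w.WidgetID
WHERE w.WidgetType = 'foo';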

Now, most downstream apps will use that WidgetFooComplete data in its entirety (or nearly).  But let’s say there’s a hot new app that needs some more data about the Widgets, and it needs to go out to the WidgetMoarProperties table.  The natural inclination is to incorporate our existing “building block” view, WidgetFooComplete, into a new view for this app & its dependencies, and call it WidgetFooMoarComplete.

But what’s the alternative?  If we re-create the JOIN/WHERE conditions on the base-tables in this new view, it violates DRY and makes possible future refactoring difficult if that biz-rule changes.
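
To make the trade-off concrete, here’s a sketch of both options (only the object names from the example are real; the columns are assumptions):

--Option 1: nest the existing building-block view
CREATE VIEW dbo.WidgetFooMoarComplete
AS
SELECT wfc.WidgetID, wfc.WidgetType, wfc.SupplementalInfo, wmp.MoarStuff   --MoarStuff is hypothetical
FROM dbo.WidgetFooComplete wfc
JOIN dbo.WidgetMoarProperties wmp
    ON wmp.WidgetID = wfc.WidgetID;

--Option 2 (the alternative): re-create the JOIN/WHERE on the base tables, violating DRY
CREATE VIEW dbo.WidgetFooMoarComplete
AS
SELECT w.WidgetID, w.WidgetType, ws.SupplementalInfo, wmp.MoarStuff
FROM dbo.Widget w
JOIN dbo.WidgetSupplement ws ON ws.WidgetID = w.WidgetID
JOIN dbo.WidgetMoarProperties wmp ON wmp.WidgetID = w.WidgetID
WHERE w.WidgetType = 'foo';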

Admittedly, most modern data-access technologies make it easier to create these “building blocks” of joined data entities.  And sometimes those biz-rules belong in the app’s lower layers, but this can lead to writing lots of little disparate queries/db-calls for what should have been one atomic operation.  That can be a maintenance headache, as could dozens (hundreds) of tailored stored-procs for every data-access scenario.

So it seems like nested views can have their place, but “deep” nesting is usually troublesome.  And to prevent the “slippery slope” effect, we have to practice diligence.

Me:

That’s pretty spot-on.  DBAs tend to criticize them (nested views) as a practice in general because of the tendency to over-use and over-rely on them, and because of that slippery slope, where “a little” use turns into “a lot”, and leads to troubleshooting headaches.  And generalizations are just that.

To take some examples in-hand: simple entity relationships, especially when biz-critical, should be A) obvious, and B) documented.  Unified views can serve this purpose, but should only be used where appropriate — i.e. to load an object that gets passed around/up the app stack.  They’re great “atomic building blocks” when you actually need the entire block of data.  But when you don’t — say in a stored-proc that’s doing some data flow operation and only needs a small subset of that data block — it’s probably better to get the relationship logic from the view and copy-paste it (but hopefully not all of it!), while omitting the stuff that’s not needed.

The main reason for this is usually index tuning.  If we’ve crafted some indexes to meet certain query patterns in certain troublesome procs, we want those procs to use those indexes, not just do a full table scan because they’re using a nested view which does SELECT *.
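
A quick hypothetical to illustrate: if a proc only needs the WidgetIDs of type ‘foo’, querying the base table directly lets it use a narrow index, instead of dragging in everything behind the view (all names here are made up):

--an index crafted for this query pattern
CREATE NONCLUSTERED INDEX IX_Widget_WidgetType
    ON dbo.Widget (WidgetType)
    INCLUDE (WidgetID);

--inside the troublesome proc: repeat the relationship logic, but take only what's needed
SELECT w.WidgetID
FROM dbo.Widget w
WHERE w.WidgetType = 'foo';   --vs. SELECT * FROM dbo.WidgetFooComplete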

When we get to more complex business rules, we need to up our diligence game and be more mindful of dependency checking when planning for a rule change.  Proc comment-headers can be helpful here, as can tools that search thru SQL object meta-data and code-bases to produce dependency chains.

The main point is, duplication tends to be OK when it’s not functional duplication, i.e. when the SQL code is more-or-less similar in some places but it’s not exactly the same because the purpose (responsibility) of that module/stored-proc is not the same.

You’re right in that the “31-flavors of tailored procs for data-access” is a big maintenance headache, and sometimes that trumps even the performance concerns.  Again it’s about balance — we have to be mindful of both the biz-rule-maintenance concerns and the performance concerns.

Developer:

I figured.  Sometimes I see DBAs criticize developers’ work without seeming to understand that it doesn’t always come from sloppiness or laziness (although sometimes it does!).  Often, we’re trying to thread that needle of performance vs. maintainability.  In Dev-land, “lazy” is good in the sense of aiming for simplified logic, for ease of both maintenance and understanding.  Painstakingly tailoring each data-access call (stored-proc), while good for performance, is kinda opposite of that.  But, admittedly, we do fall back on SELECT * all too easily.

Mostly, we try to avoid code duplication because it leads to heavier maintenance overhead.  When some modules may perform similar operations, functionally, they will often re-use the same “core” logic, which we in turn encapsulate into its own ‘thing’.  But in SQL modules, as you say, that’s not always performant, so it’s definitely a tightrope-walk.

The “Clean Code” school of thought says, if it’s obvious, it’s “self-documenting”.  I don’t always agree with it, but it comes from maintenance concerns again.  We don’t like situations where someone tweaks the code but doesn’t update the comments, and you end up with misleading comments.  Unfortunately, it does come down to diligence again, and even “good” developers will easily fall back to rarely including comments just to avoid this situation.  Of course, another potential pitfall of supposedly self-documenting code is, what’s “obvious” to one person isn’t necessarily so to everyone else!

(We both enjoy writing, can you tell?)  =P

So basically we agreed to “moderation in all things” and exchanged Buddha statues and sang Kum-Bay-Yah.  I enjoyed the exchange because it really got us both thinking more deeply about topics and areas of our business/app environment where things were in better/worse shape than others.

(image: developer-DBA harmony Buddha. Yay collaboration!)

To conclude this part.  You will continue to see DBAs rant and rail against nested views and other “sins against SQL”, but:  Developers, don’t take it personally — we’re just trying to eke the most performance-per-$3k-core-license out of our precious servers, and spend less time chasing the white rabbit down the nested-views-hole.  And DBAs, go easy on your Devs — they still outnumber you, and they can whip out a complete web-app using the hottest JavaScript framework and a cloud-of-the-month service faster than you can tune a server.  Everybody’s valuable, and everybody works toward the same goal: solving the business’s problems thru technology.

Moving on…

Part 3: Misusing & Abusing Datatypes

Because I’m getting long-winded again, let’s wrap up with a final “Clean SQL Code” topic that’s short & sweet.

Well, not really.  There are entire presentations dedicated to this topic.  But I’ll try to keep it condensed.

A date is not a datetime is not a time is not a time interval.  Okay?  For the third time, stop interchanging them!  Yes I know, SQL Server is a bit behind some other RDBMS platforms when it comes to this stuff.  Sorry, I don’t work for Microsoft.  I just deal with their tech.

Deep breaths…

More to the point, know your data.  Understand that there can be consequences to repeatedly casting types, or losing precision during conversion, sometimes compounding with each step.  Yes I know, we all love loosely-typed (sometimes stringly-typed) languages like JS & Python.  Those are wonderful tools for certain jobs/problems.  Again, be mindful and know your flows.

(image: the first image search result for “flow meme”. I’m not sure what’s more disturbing: the fact that it was the first result, or the fact that it’s actually quite appropriate.)

Thanks for reading, as always!

Clean Code, the SQL

As a developer, DBA, or hybrid “DbDev”, you’re often tasked with writing or improving the stored procedures which house that complex logic.  And that’s my topic today: being clean about your SQL code.

Get it? It’s just too punny! … Ok I’m done.

The Coding Blocks guys did a series of episodes about the perennial favorite Clean Code book.  If you haven’t subscribed to their podcast…

(image: “do it, do it now!” -Arnold. What are you waiting for?!?!)

And it’s a great book, no doubt. But those guidelines for application code are not 100% directly applicable to database code.

(image: wait… what? Srsly?)

Let’s back up a second. Why? That sounds rather counter-intuitive, no?  Ok, more context. See, the traditional (“legacy”?) app consists of about 3 layers, the bottom one being the database. And it’s usually relational, and usually responsible for far more than simple data access and persistence.  Read: complex business rules and process logic.  Data flow, not just getters and setters.

So that means, as a developer, DBA, or hybrid “DbDev”, you’re often tasked with writing or improving the stored procedures which house that complex logic.  And that’s my topic today: being clean about your SQL code.

Part 1: Comments

There’s a fairly famous quote from the book about comments:

Comments are always failures.

He’s using hyperbole, but for a purpose.  While his views on comments may be extreme, most programmers tend to realize the core essence of that chapter, which is that comments only serve to express something in plain English that the code has failed to express clearly enough to be easily and immediately understood.

With SQL scripts, and in particular with stored-procedures, I’m taking a somewhat opposite stance:

Comments are always appreciated, even if they’re potentially outdated or inaccurate.

There are two types of comments in SQL, the --inline and the /* block */.  Different people have their preferred flavors of block — sometimes it’s just several lines prefaced with the double-dash --.  And that’s fine, whatever floats your comment-boat.

(image: the HMS Commentus comment-boat. I made my own!! Mostly copied from an example at http://www.asciiworld.com/-Boats-.html)

In particular, I always encourage a comment block at the top of every stored-proc & other user-defined programmable objects (functions, types, etc.).  Just a small example for illustration:

/*
Location: Server.Database
Author: NateTheDBA
Created: 2012-12-21
Description: Gets users who have not logged in since the given date.
Consumers: MyCoolAppName, MyReportServer
Revisions:
2015-05-15, Nate: removed archive (never used after archive-date)
2017-06-07, Nate: fixed formatting for blog post
*/
CREATE PROCEDURE GetUsersNotLoggedInSince
    @SinceDate datetime2
AS
BEGIN
    --some clever stuff goes here...
END

“But wait”, you say, “what about source control?”  Yes, all your programmable objects (and even, arguably, your reference data) should be in source control.  There are tool-vendors aplenty to help you with that.  But guess what?  Budgets.  Time & effort.  Oh, did I mention, legacy legacy legacy?  Yes, dear reader, the average business has years (decades) of organically evolved relational databases and processes.  Are you the guy or gal to swoop in on your unicorn and seamlessly convert their entire data tier infrastructure to a beautiful DevOps pipeline with shiny rainbows and kittens for all?  No?  Okay then.  Baby-steps.

(image: a DevOps unicorn, with a DBA cleaning up after it. Not that I’m bitter or anything…)

Yes, my procs are in source control.  It’s called “daily automated script-out-objects-to-files which are then committed to SVN”.  It’s not built-in to SSMS.  Which means that I, or another DBA, or a potential consultant, or a Dev who gets enlisted to help improve a proc that runs for hours when it should only take minutes, would be inconvenienced by the extra trip to a separate tool/system to fetch some change-history just for context.  And really, that’s all this is for — CONTEXT.  We like to know what it is we’re working on when we start to work on it, without having to traverse a change-tree or go bug 3 other people who “might” have touched it last.  I’m not asking for a detailed log of every single time someone touched the thing; just give me the overview, the milestones and significant changes to functionality/features/scope so that I have a jump-off point for troubleshooting/testing/reasoning about it.

“But wait”, you say again, “shouldn’t your name be a sufficient description of what the proc does?”  Sure, in theory.  Until you have dependencies which need that name to stay the same even after an update or logic-change.  Like reports.  Or data-connected Excel workbooks.  Which are used daily by managers, who will come yelling at you if their worksheets suddenly stop functioning.

end rant

Back to comments in general.  The reason they’re helpful (besides documentation-headers for objects) is that they provide context and explain intent.  Half the time, my job as a DBA is improving or fixing someone else’s code.  Therefore, I want to see, in plain English, what it is they’re trying to accomplish, notes about attempts and failures, and the like.  Yes, I could have a discussion with them.  And I will.  But if I’m working on it asynchronously and they’ve moved on to something else, or our hours are different, I want those little nuggets of context and intent to be right there in the script, because that’s where I’m working!

What about queries that get passed-down from the app to the DB?  ORMs don’t support pre-pending a comment to their data calls, do they?  I wish.  Maybe some do, I haven’t researched it, but I know for sure that LINQ doesn’t.  But then again, when I’m using a query-capture tool (like DMVs, Profiler, X-events, or a vendor monitoring tool), ORM queries are so painfully obvious in comparison to hand-crafted SQL that I can usually spot them from a mile away, and go bother the app-devs for some context & conversation.  If you’re one of the poor unfortunate souls who still passes ad-hoc generated SQL statements down thru ODBC to your DB, then sure, a little comment won’t hurt anybody.

(image: you poor unfortunate soul. It’s sad, but true…)

So do your DBAs a favor, comment your SQL code, at least in terms of programmable database objects and ad-hoc scripts.  I promise, it’ll make them hate you less.  It might even make you love yourself more, because 3 months down the road when you revisit that proc, and you need to remember what it was for and why you did it that way, you’ll see it right there in your very own writing!  (OK, typing.)

Part 2: SRP, Encapsulation, and Abstraction

A bit of paraphrase of one of the book’s key points:

A reusable module (function, method) should do one thing, and do it well.

Also, the DRY principle:

Don’t repeat yourself.

When building SQL modules, we’re usually concerned with performance and accuracy, over abstraction and composability.  Therefore, repeating oneself is not necessarily a bad thing, when done for the right reasons.  Diligence is a big factor here — if there’s a non-trivial relationship between some entities that you’re repeating in several places, and you know that it could become a maintenance headache if that relationship’s definition has to change later, do as much as possible to mitigate the risk of dependency/consistency-loss.  This can be documentation, comments, and/or building that relationship into a view.

(image: make time for due diligence. It’s important.)

The latter brings up an interesting topic, one which I had a lively discussion about with a colleague recently (he’s a developer, and a dang good one) — nested views.  Because inevitably, the encapsulation of those relationships & business-rules into things like views or ITVFs (inline table-valued functions) can and will lead to nesting those objects into other objects.  And troubleshooting many-level-nested views is a particularly frustrating exercise; in fact, they’re what some DBAs call one of the “deadly sins of SQL”.  But there are perfectly valid reasons and uses for them, sometimes, and I really enjoyed the discussion thread we had on it, so I’ll have to expand on that in another post.

Anyway, I’m already getting long-winded and well over 1k words, so I’ll wrap it up for now, and continue this topic next week.

Thanks for reading, stay tuned!

Reports for Auditors

This is how code quality dies…

Look, I get it, audits and auditors are a necessary evil.  But the sheer volume of crappy code, hack-y scripts, man-hours, and alcohol dedicated to meeting the often arbitrary, sometimes ridiculous, and possibly even downright obscene requirements of said auditors, is staggering.  There are entire product suites dedicated to helping you “report on” or “meet/pass” audits of various kinds.  And thank god for those.  Because without them, we’d all be slightly more insane than we already are.

I’m not crazy; my mother had me tested.  And my rubber duck agrees.

Here’s an example.

In one of our main applications, at the database level (SQL), we’ve implemented poor-man’s data-change-tracking via table triggers.  In a past audit, there was a “flag” on an unauthorized data change.  As a result, an “exception report” was created, which basically produces a daily CSV dump of those changes and emails it to a few managers.  Now, the tracking table includes “who” as well as “when” & “what”; certain privileged accounts can be ignored for auditing; and there are dozens of application users whose app-controlled changes are also logged in said tracking table (and need to be ignored).  So the “report” writer at the time decided it was a wonderful (read: terrible) idea to hard-code a list of usernames into the query that produces this data-dump.

That is horrible.  Never do that!

(image: using hard-coded values is bad. Mr. Garrison knows…)

This query, the job that scheduled it, the data that it produced, and the emails that it sent were subsequently filed away and silently forgotten about.  Until we did some environment cleanup and discovered “Oh look, here’s a thing that’s not doing anything” (because the hard-coded list of users was now woefully out of date), and the same managers who had received the reports for years gave the OK to disable the job!

What could possibly go wrong?

Then the next audit season comes around.  And guess who comes running back to the DBA having changed his mind?  Why yes, that very same manager!

(image: “I don’t always say I told you so…” Stay shortsighted, my friends…)

So I tell him what we need to do to fix it — remove the hard-coded list, replace it with an AD group or at least a look-up table that can be periodically updated with “current” users in the desired departments/teams.  We do that, and the CSV dump goes from zero to huge.  As in, too huge for email.  So we try to pare it down.  We get it to a manageable size.  Then we get ready to deploy the changes and confirm the desired email recipients.
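
The shape of that fix, sketched with made-up names, is to swap the hard-coded IN-list for a maintained look-up table:

--before (the horror): hard-coded usernames, instantly stale
--WHERE ct.ChangedBy NOT IN ('jsmith', 'mjones', 'svc_app1', ...)

--after: a look-up table that can be periodically updated without touching the query
CREATE TABLE dbo.AuditIgnoredUsers (UserName sysname PRIMARY KEY);

SELECT ct.ChangedBy, ct.ChangedAt, ct.ChangeDetail   --hypothetical tracking-table columns
FROM dbo.ChangeTracking ct
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.AuditIgnoredUsers iu
    WHERE iu.UserName = ct.ChangedBy
);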

  • Boss: “Oh it doesn’t really matter, we just ignore the emails anyway… it’s just to satisfy an audit requirement.”
  • Me: “So why did we even bother fixing it?”
  • Boss: “Well, it was broken!”
  • Me: “…”

I’d like the last hour of my life back please.

(image: stress-reduction kit: bang head here. It works, I sw…)

Here’s another fun one.

The ERP system has a rickety tack-on audit-trail system involving some SQL Agent Jobs and an obscenely large data repository for storing all the changes that the ERP system users make every hour of every day.  Thankfully, somebody along the way was prescient enough to partition those storage tables using a partitioned view scheme.  But apparently they were NOT smart enough to realize that the clustered index needs to be the datetime column.  Yes, it was in the primary key, but that PK was not clustered (nor should it have been, since the actual unique keys were GUIDs; but this is a prime example of making the clustering key different and separate from the PK).

Obviously enough, in a date-driven partitioned view scheme, that date does need to be involved in the PK also.  But again, the best clustered index is just that date, because A) it’s the partition divider, and B) it’s always your queries’ main filter-predicate (which is a fancy way of saying “the thing in your JOIN/WHERE clauses”).
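
In code, the fix looks roughly like this (table and column names are hypothetical): keep the GUID-based PK for uniqueness, but make the date the clustered index.

--one member table of the partitioned view
CREATE TABLE dbo.AuditTrail_2017Q1 (
    AuditID    uniqueidentifier NOT NULL,
    AuditDate  datetime NOT NULL,
    ChangeData nvarchar(max) NULL,
    --the partitioning column has to be in the PK, but the PK stays nonclustered
    CONSTRAINT PK_AuditTrail_2017Q1 PRIMARY KEY NONCLUSTERED (AuditID, AuditDate),
    --the partition-range check that makes the partitioned view work
    CONSTRAINT CK_AuditTrail_2017Q1_Date CHECK (AuditDate >= '20170101' AND AuditDate < '20170401')
);

--the clustered index is just the date: partition divider & main filter-predicate
CREATE CLUSTERED INDEX CIX_AuditTrail_2017Q1_Date
    ON dbo.AuditTrail_2017Q1 (AuditDate);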

Once again, audit season comes around, and someone needs those reports driven by these audit-trail repos.  And guess what?  They’re slow as molasses.

(image: “You don’t say!” Nicolas Cage is apparently quite the flexible meme fodder…)

So we get to spend another few hours applying the correct clustered indexes, waiting for them to rebuild, testing the queries for the report, and finally running the report itself.  Oh and don’t forget the down-time of the jobs, and having to “catch up” with all the changes that were made in the ERP system while we were working on all this.

Is it 5 o’clock yet?

In conclusion.

Audits suck.  But please, stop producing crappy code in response.  It just makes the next guy/gal that comes after you even more frustrated than he/she already is with the whole process.  Practice the “boy scout principle” — leave things at least a little better than you found them.  And if you’re forced to produce a ridiculous “exception report” that nobody will ever read, spend as little time on it as possible while still making it less heinous than the last terribly designed monstrosity.

(image: oh my, look, it’s beer-o-clock. That clock might be slightly ahead… but we’re gonna go with it.)

Dates, Date-pickers, and the Devil

When a date range, or time period, is specified in SQL, it’s easiest, clearest, and most concise, to use a “greater-than-or-equal-to Period-Start, and less-than Next-Period-Start” logic. Mathematically speaking, we are defining the range as closed on the left, open on the right.

This is a bit rant-y, but… indulge me.  I’ve been writing/refactoring a lot of old reporting queries.  And, like most reports, they deal with dates and datetimes — as parameters, boundaries, or where/join predicates.  I also got way too intense with a recent SSC (SQLServerCentral) post, which fueled the fire even more.

(image: fluffy angry puppy. I’m so cute and ANGRY!)

SQL Server is very good at handling temporal datatypes and calculations against them.  We’ve got functions like dateadd, datediff, and datepart, and datatypes like datetime2, datetimeoffset, and datetime.  It supports all sorts of format conversions if you need to display them in various ways.

..even though that should be left to the presentation layer!

Here’s the issue.  Well, there are several issues, but we only have time for a few.

Here’s the first problem

Report users don’t understand the “end of a time period” problem.  I don’t have a good name for it; others might call it the “Day plus one” problem or the “Less than date” problem.  What do I mean by this?  Well, let’s back up a bit, to DBA Commandment #6, “Thou shalt not use between with datetimes.”  In order to first understand the issue, we have to understand why this is a commandment.

When a date range, or time period, is specified in SQL, it’s easiest, clearest, and most concise, to specify it like so: @TheDate >= @StartOfPeriod and @TheDate < @StartOfNextPeriod.  Mathematically speaking, we’re defining the range as “closed on the left, open on the right”.  In other words, Min <= X < Max.

The reason we do this with datetimes is found right there in the name of the datatype — it has (or can have) a time component!

(image: stone tablets with Roman numerals to 10. There are probably more than 10, but it’s a good starting point…)

Let’s talk examples

Say you’d like to report on the month of March 2017.  How do you determine if your data-points (stored as datetime or, hopefully, datetime2) are within that period, that month?  Well sure, you could write where month(MyDateColumn) = 3 and year(MyDateColumn) = 2017.

NO.  That is horrible, don’t do that.

It’s not SARGable and renders your index on that column useless.  (You do have an index on it, don’t you? No? Make one!)  Okay, let’s stick with something SARGable.  How about MyDateColumn between '20170301' and '2017-03-31T23:59:59.999'?  (You did read this post about using culture-neutral datetime literals, right?)  But wait!  If your data is a datetime, it’s not actually that precise — your literal gets rounded up to 20170401 and you’re now including dates from April 1st (at midnight)!

Oh that’ll never happen… until it does.
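
You can see the rounding for yourself; this is just datetime’s 1/300-of-a-second precision at work:

SELECT CAST('2017-03-31T23:59:59.999' AS datetime);
--returns 2017-04-01 00:00:00.000, rounded up into April!
SELECT CAST('2017-03-31T23:59:59.997' AS datetime);
--returns 2017-03-31 23:59:59.997, the last representable tick of March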

Second problem

Many developers and report-writers assume that the values in their data will never be within the typical “1 second before midnight” or “1/300th of a second before midnight” escape window of your “3/31/2017 23:59:59.997” bounding value.  But can you guarantee that?  Didn’t think so.  Worse, if you use the .999 fraction as given in the 2nd example, you’re either “more” or “less” correct, and nobody can actually tell you which way that pendulum swings because it depends on the statistical likelihood of your data having actual literal “midnight” values vs. realistic (millisecond-y, aka “continuous”) values.  Sure, if you’re storing just a date, these things become a lot less complicated and more predictable.

But then why aren’t you storing it as an actual date, not a datetime!?

So what’s the right answer?

As I said: “greater than or equal to Start, and less than End”, where End is the day after the end of the period, at midnight (no later!).  Hence, MyDateColumn >= '20170301' and MyDateColumn < '20170401'.  Simple, yes?

(image: keep calm and keep it simple. KCKS)

But wait, there’s more!

I mentioned “date-pickers” in the title.  When it comes to UX, date-pickers are a sore subject, and rightly so — it’s difficult to truly “get it right”.  On a “desktop-ish” device (i.e. something with a keyboard), it may be easiest on the user to give them a simple text-box which can handle various formats and interpret them intelligently — this is what SSRS does.  But on mobile devices, you often see those “spinner” controls, which are a pain in the arse when you have to select, say, your birth date and the “Year” spinner starts at 2017.  #StopIt

I mean, I’m not that old, but spinning thru a few decades is still slower than just typing 4 digits on my keyboard — especially if your input-box is smart enough to flip my keyboard into “numeric only” mode.

Another seemingly popular date-picker UX is the “calendar control”.  Oh gawd.  It’s horrible!  Clicking thru pages and pages of months to find and click (tap?) on an itty bitty day box, only to realize “Oh crap, that was the wrong year… ok let me go back.. click, click, tap..” ad nauseam.

(image: stop-it sign. #StopIt again)

The point here is, use the type of date-picker that’s right for the context.  If it’s meant to be a date within a few days/weeks of today, past/future — OK, spinner or calendar is probably fine.  If it’s a birth date or something that could reasonably be several years in the past or future, just give me a damn box.  (Heck, I’ll take a series of 3 boxes, M/D/Y or Y/M/D, as long as they’re labeled and don’t break when I omit the leading-zero from a single-digit month #!)  If there’s extra pre-validation logic that “blocks out” certain dates (think bill-payer calendars or Disneyland annual-pass blackout-days), that probably needs to be a calendar too.

..just make sure it’s responsive on a mobile device.

And in all cases, pass that “ending date” to your SQL queries in a consistent, logical, sensible manner.  For reporting, where the smallest increment of a period is 1 day, that probably means automagically “adding 1 day” to their given end-date, because the end-user tends to think in those terms.  I.e. if I say “show me my bank activity from 1/1/2017 to 1/31/2017”, I really mean “through the end of the month“, i.e. the end of the day of 1/31.  So your query is going to end up wanting the end-date parameter to be 2/1/2017, because it’s using the correct & consistent “greater than or equal to start, and less than start-of-next” logic.
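
In proc terms, that might look like this sketch (the proc, parameter, and column names are all assumptions):

CREATE PROCEDURE dbo.GetBankActivity
    @StartDate date,
    @EndDate   date   --the user-facing "through" date, e.g. 2017-01-31
AS
BEGIN
    SET NOCOUNT ON;
    --translate the inclusive end-date into the exclusive upper bound
    DECLARE @UpperBound datetime2 = DATEADD(DAY, 1, CAST(@EndDate AS datetime2));

    SELECT t.TransactionID, t.TransactionDate, t.Amount
    FROM dbo.BankTransactions t
    WHERE t.TransactionDate >= @StartDate
      AND t.TransactionDate <  @UpperBound;
END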

(image: context, consistency, clarity. The 3 C’s)

Final thoughts

I know it’s not easy to explain to business folks, and it’s not easy to implement correctly.  But it’s important.  The >= & < logic is clear, concise, and can be used consistently regardless of underlying datatype.  You just need to adjust your presentation layer (whether that’s SSRS parameters or a .NET date-picker) to convey the intent to the user: either “show/enter the last day of the month, but translate it to the next day before feeding it to the query/proc”, or “make them enter the next day (the day after the end of the month/period) and understand the less-than logic”.  I’m more inclined to the first, but it depends on your audience.

Thanks for reading, and happy date-ing!

Reverse Engineering SSAS Reports

MDX is not SQL. It may look like it has SELECT/FROM/WHERE clauses, but god help you if you start drawing parallels to your standard TSQL query.

This is an exercise I had to go through recently, because A) the reports in question were deployed in SSRS but used an SSAS backing, i.e. cubes, and the source queries (MDX) were not stored in source-control, and B) I don’t write MDX queries.

(image: a basic MDX query example with labeled parts, shamelessly borrowed from an excellent article)

The outline:

  1. Run Profiler or XEvents against the SSAS server
    1. set to capture “Query Begin” events only, with “Event Subtype = 0” (for MDX query)
    2. optionally, set filter on NTUserName to the dedicated SSRS account (if you have it set up that way)
  2. Run the SSRS report(s) that you want to dive into
  3. For each event in the Trace, copy-paste the MDX query to a new MDX editor window
  4. SSRS parameter substitution happens via some XML at the bottom; but in MDX, the parameters are standard @params like in T-SQL.  So we need to manually substitute our parameter values.
    1. 2 blocks of XML: the “Parameters”, and the “PropertyList” — delete the latter.
    2. In the former, text-replace &amp; with simply &.
    3. Side-bar: You’ll notice that the MDX parameters are usually inside STRTOMEMBER() or STRTOSET(), which are built-in MDX functions that do exactly what they sound like — parse a string into a dimension’s attribute’s member or set of members.  That’s why they’ll usually have at least 3 .‘s (dots) — Dimension.Attribute.&MemberValue, for example.  I’m grossly oversimplifying that because it’s beyond the scope of this post, but read the docs if you need more gritty details.
  5. For each parameter node:
    1. Copy/cut the <Value> node content (I like to ‘cut’ because it helps me keep track of which ones I’ve done already)
    2. Find-and-Replace @ParameterName with that Value node’s content, surrounded in single-quotes
    3. Example: we have parameter @ReportDate (in MDX), corresponding to <Parameter><Name>ReportDate</Name> in XML, with <Value xsi:type="xsd:string">[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]</Value> — where that last bit is a standard SQL datetime literal.
    4. So you replace @ReportDate with '[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]'.
  6. Delete the XML block.
  7. Boom, now you have a valid MDX query that you can run and view results.
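
Here’s roughly what that substitution looks like, on a made-up cube (SSRS-generated MDX typically wraps the parameter in STRTOMEMBER with the CONSTRAINED flag):

--before: as captured from the trace (not executable until the parameter is substituted)
SELECT [Measures].[Widget Count] ON COLUMNS
FROM [WidgetCube]
WHERE ( STRTOMEMBER(@ReportDate, CONSTRAINED) )

--after manual substitution (and deleting the XML block):
SELECT [Measures].[Widget Count] ON COLUMNS
FROM [WidgetCube]
WHERE ( STRTOMEMBER('[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]', CONSTRAINED) )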

Why do this?  Well, it can help you learn MDX from a working example, instead of from super-basic dummy examples.  That’s not always a good learning style — you should still learn the fundamentals of MDX and why it’s so very different from SQL.  Especially if you’ll be responsible for writing and maintaining more than a few MDX queries.  But, in a pinch, if you need to start somewhere, and possibly all that the MDX / overlaying report needs is a slight tweak, this may be enough to get you going.

Lessons learned:

  • Profiler can still be a useful tool, despite some people’s attempts to kill it.
  • MDX is not SQL.  It may look like it has select, from, and where clauses, but god help you if you start drawing parallels to your standard TSQL query.
  • SSRS does parameter-passing in an odd way.
  • SSAS & MDX are fascinating and I need to learn more about them!
(image: the more you know. Canadians, eh?)

Quickie: SQL DB Role Members

A typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for…

Just a brief post on adding/removing users (database level users) to/from roles (database level roles).  It’s relevant because several shops are still stuck supporting at least a few 2008 (or hopefully, 2008R2) instances, and there is a key difference between those and newer (2012 & up) versions in the “preferred” method of doing this security task.

(image: security seal, for ur protection. He’s wearing a police hat, he must know what he’s doing…)

There are reams of documentation and books and articles written about SQL security in general.  That is beyond the scope of this post (and indeed, beyond the scope of any single blog, unless you’re an SME on the subject!).  But a typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for, when they’re planning/deploying a new app against their SQL DB(s).

The “old” method involves calling system stored-procedures, sp_addrolemember and sp_droprolemember, in which you pass the role-name and username.  The “new” method, supported starting with SQL 2012, is to use the command-phrases ALTER ROLE [role] ADD MEMBER [user], and ALTER ROLE [role] DROP MEMBER [user].

The latter is more ‘standard‘, while the former is more ‘Microsoft-y‘.  I couldn’t easily find whether it’s part of the official ANSI standard or not… that’s an exercise for the reader.  What I find very interesting is that Azure’s data warehouse offerings require the old method.  Of course, hopefully in a DW setting you’re not messing with security nearly as much as a typical OLTP system, but… yeah.

Does that mean those Azure services are built on top of older SQL engine versions?  Possibly.  MSFT isn’t too open about the deep internals of such tech, but neither is any other cloud vendor, so we can’t really ask them such a question and expect anything more than a blank-stare.  But it is curious, no?

(image: Fry, not sure if curious or suspicious. Exactly.)

Syntax examples:  Let’s add the user foo to the database Bard, in the db_datareader built-in role.  Then we’ll remove him.  (Or her, I guess; “foo” is a pretty gender-neutral name.)  Creating said user is easy, so I’ll start with that, and it’s the same in all supported versions.  You need a server-level login to link it to; if you don’t have one, I’ll show you how to create it first.

Create server-level login:

--preferably, you create a login for an existing AD/Windows account:
CREATE LOGIN [yourdomain\foo] FROM WINDOWS;
--or, you can just create a SQL login (not connected to domain/Windows/ActiveDirectory; also less secure, as discussed here and here)
CREATE LOGIN [foo] WITH PASSWORD = 'foobar';

Create database-level user:

USE Bard;
--if you made the domain/Windows login:
CREATE USER [foo] FOR LOGIN [yourdomain\foo];
--or, if you just made the SQL login:
CREATE USER [foo] FOR LOGIN [foo];

Now the role-membership.

Old way:

  1. Add user to role:
    • exec Bard.sys.sp_addrolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';
  2. Check that it worked:
    • exec Bard.sys.sp_helprolemember
          @rolename = 'db_datareader'
    • It will show something like this:
      db_datareader with member 'foo'
  3. Remove user from role:
    • exec Bard.sys.sp_droprolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';

New way (step 2, the “check”, is the same):

  1. Add user to role:
    • USE Bard;
      ALTER ROLE db_datareader ADD MEMBER [foo];
  2. Check (see above)
  3. Remove user from role:
    • USE Bard;
      ALTER ROLE db_datareader DROP MEMBER [foo];

Yay.

Notice that, because the “old way” is simply executing sys-sp’s, we can actually run it from any database context, whereas the “new way” requires you to connect to the database in question.

Note: I am in no way, shape, or form responsible for you screwing up your database or SQL instance, nor for you getting yelled at by your DBA or security admin or any other form of verbal assault you may incur as a result of running these commands.  But since you need server-admin & database-owner equivalent permissions anyway, you’re probably one of those people already, so you’ll just end up yelling at yourself.

(image: no guarantees. No substitutions, exchanges, or refunds.)

Cleanup (just so you don’t muddy your instance/DB up with a silly example user):

USE Bard;
DROP USER [foo];
USE master;
DROP LOGIN [foo];

If you have any questions, feel free to reach out to me!

 

Little Gotchas

If the caller of our stored-procedure literally passes NULL as the parameter value, we might have a problem!

A large part of most DBA/DBD’s daily job is writing & maintaining stored-procedures.  In SQL Server or other RDBMSs, stored-procs (“SP’s”, “procs”, however you like to abbreviate), serve as one of the building-blocks of your overlaying applications and day-to-day operations, including maintenance and automation.

(image: a sprocket. This is a sprocket, not to be confused with a sproc, which is really just a proc.)

Today, something struck me, and I was both shocked and comforted by the fact that this hadn’t really “come back to bite me in the arse“, as the proverbial saying goes.  But first, some context.

When we declare our proc signature with our parameters, we of course give them datatypes, and often default values — the parameter value that is assumed & used upon execution when the caller (operator, application, agent job, etc.) calls said proc without passing a value to that parameter.  So we create our proc like so:

CREATE PROCEDURE dbo.MyProc
    @MyParam BIT = 0
AS
BEGIN
    SET NOCOUNT ON;
END

That way, users can call it like so, and assume some correct default behavior:

EXEC dbo.MyProc;

Coincidentally, that CREATE line is part of a typical “boilerplate” snippet or template which I use to create procs with “create if not exists, else alter” logic and a nice header-comment-block, which I’ll publish on my GitHub or Gist shortly, so I can show it here.  I know that MS recently added DROP IF EXISTS support to the language, but frankly, I like to keep procs intact if they exist because it’s easier not to have to remember/re-apply their metadata, such as security (grants/deny’s, certificate signatures, etc.) and extended properties.  Wake me up when they add true CREATE OR ALTER syntax!  Oh snap, they did… in 2016 SP1.  Anyway.
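
For the impatient, the general shape of that “create if not exists, else alter” pattern is below (a minimal sketch, not the full template):

--create a stub only if the proc doesn't exist yet, then ALTER in the real body;
--this preserves existing grants, signatures, and extended properties across deployments
IF OBJECT_ID('dbo.MyProc', 'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.MyProc AS BEGIN SET NOCOUNT ON; END');
GO
ALTER PROCEDURE dbo.MyProc
    @MyParam BIT = 0
AS
BEGIN
    SET NOCOUNT ON;
    --real body goes here
END
GO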

Now for the “catch”, the gotcha.

(image: In programming/software-dev/IT-systems, “gotcha” has a specific meaning. Thanks Wikipedia!)

If the caller says exec dbo.MyProc, that’s great — they didn’t pass a parameter value, so the execution uses the default value (0) and off we go.  However, if the caller is so malicious as to literally pass NULL, we might have a problem!  Because let’s say that @MyParam value is used in a JOIN predicate or an IN (SELECT...) block, or even a CASE expression.  We won’t get an actual error; SQL Server is smart enough to jump over the syntactical variations required for equivalence checking (i.e. Column1 = 0 vs. Column1 is NULL) when it interprets/compiles the stored-procedure.  But what we’re probably going to get is unexpected or unknown behavior.
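
A tiny self-contained demo of that “unexpected behavior” (no error, just quietly empty results):

DECLARE @MyParam BIT = NULL;
SELECT COUNT(*) AS MatchingRows
FROM (VALUES (CAST(0 AS BIT)), (CAST(1 AS BIT))) AS t(Column1)
WHERE t.Column1 = @MyParam;
--returns 0: "Column1 = NULL" evaluates to UNKNOWN for every row, so nothing matches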

(image: warning, assumptions ahead. It seemed worth re-using a classic…)

And really, it all comes back to those nasty things called assumptions.  See, as the proc author, we’re assuming that our @MyParam will always be a 0 or 1, because it’s a BIT, and we gave it a default value, right?  Sure, maybe in another language, but this is T-SQL!  NULL is a separate and distinct thing, a valid value for any datatype, and must be accounted for and treated as such.  It can get especially dicey when you have a NOT IN (SELECT...) block that ends up as an empty set, which suddenly morphs the outer query into a “without a WHERE clause” beast, and… well, you can guess the rest.
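
Here’s that empty-set scenario in a runnable nutshell (the tables and data are made up):

DECLARE @MyParam BIT = NULL;

DECLARE @Orders TABLE (OrderID int, CustomerID int);
INSERT @Orders VALUES (1, 10), (2, 20), (3, 30);

DECLARE @BlockedCustomers TABLE (CustomerID int, IsActive bit);
INSERT @BlockedCustomers VALUES (20, 1);

--with @MyParam NULL, the subquery's filter is UNKNOWN for every row, so it returns nothing;
--and NOT IN (empty set) is true for everything, including the "blocked" customer
SELECT o.OrderID
FROM @Orders o
WHERE o.CustomerID NOT IN (
    SELECT b.CustomerID
    FROM @BlockedCustomers b
    WHERE b.IsActive = @MyParam
);
--returns orders 1, 2, AND 3, as if there were no WHERE clause at all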

So what do we do about it?  Well, we can add a “check parameter values” block to the top of our procedure where we either throw an error, or set the NULL value back to a default.

Examples:

--option 1: reject NULL outright and tell the caller to fix it
IF (@MyParam IS NULL) RAISERROR ('@MyParam cannot be NULL; try again.', 15, 1);
--option 2: quietly coerce NULL back to the default
IF (@MyParam IS NULL) SET @MyParam = 0;

We could also work on the internal proc logic to account for NULL values and “eliminate the guesswork” (i.e. prevent unexpected behavior) by actually having logical branches/conditions which “do something” if the parameter is NULL.  Then, at least we know what our proc will do if that infamous caller does exec MyProc @MyParam = NULL.  Yay!  But that sounds like a lot of work.  Maybe.

Or maybe it’s worthwhile because you actually want NULL to be treated differently than all other parameter values, and then, hey, you’ve already spent the time on that logic, so you’re done!
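
If NULL really does deserve its own behavior, make that explicit (a sketch, with hypothetical names):

IF (@MyParam IS NULL)
BEGIN
    --the behavior we deliberately chose for NULL: no filter at all
    SELECT OrderID FROM dbo.Orders;
END
ELSE
BEGIN
    SELECT OrderID FROM dbo.Orders
    WHERE IsRush = @MyParam;   --hypothetical BIT column
END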

(image: what if I told you… NULL does not equal NULL. But NULL does not NOT equal NULL, either!  Crap, somebody give me the red pill…)

I hope this helps somebody else avoid the same assumptions.