Follow-up: Cribbage “15’s Counter”

The actual method involves joining 5 copies of the table together, with each right-side table including only cards with higher ID values than the table to its left.


To be honest, my T-SQL Tuesday puzzle was a bit of a last-minute idea, which is why I didn’t have a solution ready-made. But, dear reader, you’re in luck! I have one now.

The code is over here in Gist. You can read thru it, but since the final query — the actual “answer” — is kinda ugly, let me explain my thought process.

Modeling is Important

Even when I’m putting together a silly little demo script like this, I feel that good habits and fundamentals are important. You never know what future developer might read it, copy-paste it, and say to themselves “Cool, I’m gonna follow this example when I do this other thing over here!” So you’ll see my formatting preferences, naming convention (though I must admit, I argued with myself over whether to pluralize the table names or not!), and correctly allocated Primary Keys. And since we’re modeling a card deck, even though I didn’t need to store the ‘NumValue’ (which is what you’d use for a straight/run, where the Jack is 11, Queen is 12, etc.), I did anyway.

Now, when we set up our “Hands”, we’re going to use two ‘PlayerNum’s, just so we can test two different hands at the same time. Cribbage can be played with 3 or 4 players, but we’re keeping this simple. Also, I could have built the hands more aesthetically, i.e. by selecting from Cards using PtValue and Suit, but again, I was trying to script quickly, so I just used the IDs that I knew from the previous query (the “full deck”). And again, there’s a “little extra” tidbit, the ‘IsCut’ indicator — we won’t be using that right now. If you’re still not sure what that means, go read the rules.

The Method

At the end of the original post, I mentioned loops and cursors as possible routes to a solution. That may still be true, but I decided to challenge myself to avoid them. Not because they’re “always bad”, as popular media would have you believe; they’re just often an indicator that a developer isn’t thinking in set-theory when they probably should be.

Let’s start with some basic principles. You have 5 cards in your hand. It takes a minimum of two cards to make 15 (examples include Jack+5, 6+9, etc.), and up to a maximum of… you guessed it, five cards. So we need to check all combinations of any two, three, four, or five cards. We cannot re-use a card within the same combination; and putting the same three cards in a different order, for example, does NOT count as a separate combo (another ‘15’).

So as you start to think about these rules, and if you’ve been around data for a while, especially data with identity values, you might have a little light-bulb. “Aha! I know how to do that. We can simply order the combos by the ID value, and that way we won’t allow duplicates!” And that’s kinda what I did, by enforcing the JOIN predicates that every subsequent derived-table have a ‘CardID’ greater than the prior one. But I’m getting ahead of myself.

The actual method here involves JOINing 5 copies of the table together, mainly just on PlayerNum, but also, as I said, with each right-side table including only cards with higher ID values than the left-side. In this way, we ensure that we’re not allowing the same cards to be “joined” to each other, i.e. we’re removing them from the right-side tables.

And finally, we have four OR’d conditions: simply, “do any of those combinations add up to 15, by the cards’ PtValue?” These are echoed in the CASE expression in the SELECT line, where we want to essentially “show the combo”, i.e. tell you what cards make up the ‘15’. (Again, for style’s sake, we have an ELSE, but we don’t really need it because it’ll never actually happen.)
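If it helps to see the shape of that logic outside of SQL, here’s a rough Python sketch of the same idea (this is not the Gist code, just an illustration): the ascending loop indexes play the role of the ascending CardID predicates, and the sum check at each depth plays the role of the OR’d conditions.

```python
def find_fifteens(vals):
    """Enumerate 2- to 5-card combos summing to 15, counting each exactly once.

    Each nested loop starts past the previous index, just like each JOINed
    copy of the table only includes higher CardIDs; that's what prevents
    re-using a card or counting a re-ordered combo as a separate '15'.
    """
    fifteens = []
    n = len(vals)  # always 5 in cribbage: the 4-card hand plus the cut
    for a in range(n):
        for b in range(a + 1, n):
            if vals[a] + vals[b] == 15:
                fifteens.append((a, b))
            for c in range(b + 1, n):
                if vals[a] + vals[b] + vals[c] == 15:
                    fifteens.append((a, b, c))
                for d in range(c + 1, n):
                    if vals[a] + vals[b] + vals[c] + vals[d] == 15:
                        fifteens.append((a, b, c, d))
                    for e in range(d + 1, n):
                        if sum(vals[i] for i in (a, b, c, d, e)) == 15:
                            fifteens.append((a, b, c, d, e))
    return fifteens

# The example hand from the puzzle post: 3h, 6s, 6d, 9c, cut 3c
print(find_fifteens([3, 6, 6, 9, 3]))  # five combos, i.e. 10 points
```

One key difference: the SQL version gets all of this in a single pass via LEFT JOINs and OR’d predicates, instead of literal nesting; but the “strictly increasing ID” trick is identical.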

Now, it does look kinda ugly. It’s not very extensible — meaning, if you wanted to scale it up to find the ’15’s in a 6- or 7-card hand, or you wanted to look for other kinds of combos (like ’18’s or ’27’s), you’d end up re-writing a good portion of it, or at least copy-pasting a lot. Fortunately for us, Cribbage is fairly simple in this regard — your hand is always the same size, and you only ever care about ’15’s.

(Well, and pairs, 3- and 4-of-a-kinds, straights, flushes, knobs, etc., but again, read the rules if you’re curious. We kept this very simple by limiting ourselves to just one small fraction of the game mechanics.)

The cool thing about this sample, though, at least to me, is that you’re already set up to build on it if you want to try out other Cribbage mechanics. Or even other card games, if you just use the base Suits & Cards.

What Did We Learn?

What’s the point of a puzzle like this? Well, besides introducing you to a fantastic card game, if you didn’t already know about it. The point is to make your brain think in a different way than usual. Are any of us programming card games using a SQL back-end? Probably not. (Although an in-memory equivalent like SQLite or something might be viable!) But the next time you have a “combinations problem” with some real-world data, you might wonder if a method like this could come in handy. Or at least, if it could work out better than a double-nested-loop. =)

PS: I believe, instead of the LEFT JOINs, we could have used OUTER APPLYs. We’d move the conditions from the JOINs into the inner WHERE clause of each derived table, i.e. “this ID > previous ID” and “PlayerNums are equal”. If you’re curious, try it out!

T-SQL Tuesday #114: A Puzzle

One of the main things a new cribbage player needs to learn is how to easily spot the combos that make ‘a 15’ (the ways to combine cards to add up to a numeric value of 15). Let’s do that with SQL!

It’s that time again! The 2nd Tuesday of the month, T-SQL Tuesday. This month’s invitation is on the lighter side, which is nice, and it comes from Matthew McGiffen (b | t). The theme is “Puzzle Party!” And I’m going to cheat, since it’s getting horribly late already and I’m lacking in inspiration.

So, I propose a puzzle! Which you must solve using SQL. Then I’ll post my own solution in a day or two. Bwahahaha.

I actually really wanted to do a Sudoku solver, but @SQLRnnr beat me to it. By a few years. =P   I might still work on that when I’m bored, just to have a standby for another blog post. Maybe we’ll compare notes.

But for now…

Do You Even Cribbage, Bro?

If you’ve never heard of the card game cribbage, it might sound weird. When you read the rules, it sounds even weirder. Legend has it that it was invented by drunk Englishmen in a pub. Reality is actually not that far off. It’s also heavily played by Navy submariners, and that’s how it was passed down in my family.

There are already many great mobile & web versions of the game, and it will quickly become obvious to anyone who’s tried to program a card game before, that a query language like T-SQL is NOT suited (omg see what I did there?) to the task. However, we can probably come up with a small sub-task of the game that’s acceptable for our purposes.

Enter: the hand scorer. There’s a nice example of a finished product here. The input would be a set of 5 ‘cards’ — the ‘hand’ has 4, and the ‘cut’ adds 1 more, used as part of each player’s hand in scoring (like community property). A ‘card’ is simply an alphanumeric value — 1-10 plus JQK (which are ‘worth’ 10 for arithmetic, but can be used like normal for ‘straights’ aka ‘runs’) — and a ‘suit’ (heart, spade, diamond, club). Think for a moment on how you’d store that as a data structure.
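(If you want a head start on that thought experiment, here’s one minimal sketch in Python terms; the SQL answer will look different, and every name here is just my own invention.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Card:
    rank: str  # '1' through '10', or 'J', 'Q', 'K'
    suit: str  # 'h', 's', 'd', 'c'

    @property
    def points(self) -> int:
        """Arithmetic worth: face cards count as 10."""
        return 10 if self.rank in ('J', 'Q', 'K') else int(self.rank)

    @property
    def run_order(self) -> int:
        """Position for straights/runs: J=11, Q=12, K=13."""
        return {'J': 11, 'Q': 12, 'K': 13}.get(self.rank) or int(self.rank)

hand = [Card('3', 'h'), Card('6', 's'), Card('6', 'd'), Card('9', 'c')]
cut = Card('J', 'd')
print(sum(c.points for c in hand + [cut]))  # 3+6+6+9+10 = 34
```

Notice the two different “values” per card; that distinction (arithmetic worth vs. run position) is exactly why the relational model later stores both a PtValue and a NumValue.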

The output, then, is a single numeric value, the ‘score’. But how do you score? You look for the following: combinations of any numeric values that add up to 15; pairs, 3-of-a-kinds, or 4-of-a-kinds; straights (suit does not matter); a flush, if all 4 ‘hand’ cards are the same suit (and a bonus point if the ‘cut’ card matches as well). And then there’s a funky thing where you get an extra point if you have a Jack that matches the suit of the ‘cut’ card. o_@

Dude… What?

Wow, that sounds complicated, no? Let’s make it simpler. One of the main things a new cribbage player needs to learn is how to easily spot the combos that make ‘a 15’ (the ways to combine cards to add up to a numeric value of 15). For each ‘15’ you make, you score 2 points. That sounds pretty feasible in SQL, right?

For starters, we don’t really care about suit anymore. But we do need some way to distinguish the cards from each other. This is a single-deck game, so you’re never going to have more than 4 of the same number; never more than one of the same card (like the Ace of Spades). And when you’re counting combinations (combinations, not permutations, since order doesn’t matter), you can’t use the same card twice. So let’s still use the suits for card distinction; I’ll just suffix the number with an ‘h’, ‘s’, ‘d’, or ‘c’.

We also don’t care about differentiating a 10 or J/Q/K, since they’re all just worth 10, numerically. So your ‘input’ can just consist of five numbers between 1 and 10. Cool? Just find the ’15’s!

Example:

  • Your hand is 3h, 6s, 6d, 9c, and the ‘cut’ is 3c.
  • Combos for ’15’: 6s+9c, 6d+9c, 3h+3c+9c, 3h+6s+6d, 3c+6s+6d.

That’s five unique combos, for a total of 10 points! Good job, that’s a bit better than the average hand. In cribbage lingo, you’d say it like so: “fifteen two, fifteen four, fifteen six, fifteen eight, and fifteen ten.” Or if you’re playing with more experienced players, you’d abbreviate to simply “two four six eight ten”.

In “normal” programming land, we’d probably use a loop and some branching logic. What will we do in SQL? A loop, a cursor, or something more (or less!) elegant? You decide!
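(For a baseline, the “normal programming land” version really is just a loop with some branching. A quick hypothetical Python take, using the example hand above; the SQL version is your puzzle.)

```python
from itertools import combinations

# Point values of the example hand: 3h, 6s, 6d, 9c, and the cut 3c
hand = [3, 6, 6, 9, 3]

# Check every subset of 2 to 5 distinct cards; combinations() works on
# positions, so the two 3s and the two 6s stay distinct cards.
fifteens = [combo
            for size in range(2, len(hand) + 1)
            for combo in combinations(range(len(hand)), size)
            if sum(hand[i] for i in combo) == 15]

print(len(fifteens), "fifteens =", 2 * len(fifteens), "points")  # 5 fifteens = 10 points
```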

I’ll come up with something solution-y soon. Enjoy! ❤

cribbage board close-up of winning peg and partial hand
Red won by 2 points! Close game.

SQL Style Guide – a Rebuttal

Both the original article, and my responses, are guidelines, meant to help you start a conversation with your own team about how you can get better at writing code.

In what will surely be a controversial post, I give my take on some of the major points of this “SQL style guide” that made the rounds on Hacker News / Reddit a few years ago.  Hey, I never claimed to be a source of breaking news.

stamp of controversy
Feels so empty, without me…

Now remember kids, these are opinions — everyone’s entitled to theirs, and it doesn’t mean you’re right or wrong.  As the author repeatedly points out, consistency among a team / project / environment is more important than anything else.  Both the original article, and my responses, are guidelines, written by people with some experience (guided by others with more experience) to help you start a conversation with your own team about how you can get better at writing code.  Because that’s what we’re paid to do, among other things.

Basics

I agree with most of the points here:

  • Consistent descriptive names (I would add “concise” too — autocomplete/intellisense has come a long way, but we’re still typing stuff sometimes)
  • White space & indentation
  • Datetime literals – hail Saint Bertrand
  • Comments – you, dear reader, know my thoughts already
  • Some OOP principles should be avoided because they usually lead to poor performance
  • Hungarian notation is best left in the ’80s where it belongs

Something I don’t abide by:

  • Sticking to the ‘standard’ and avoiding vendor-specific functions/features

Some brief justification for my rejection:

Database code portability is largely a myth — it sounds great, in theory, but most teams/projects never actually do it.  If your data persistence layer is ever slated for migration to a new tech-stack, it’s going to require a massive overhaul anyway, in which those vendor-specific functions/code-bits will be the least of your worries.  More likely, you’ll be swapping-out pieces of the data layer in small chunks to move into other storage tech, like a NoSQL store or a DocumentDB or something; and eventually over time, the whole original data layer will have been moved, and the concern over SQL code portability will be moot.

Furthermore, database vendors give you these features/functions for a reason — they’ve found that it greatly enhances their product and the productivity of developers who work with it.  So why not take advantage?

Finally, if your application is ‘cutting-edge’ enough that ALL db access is done via ORM or some kind of repository layer in the code-base… guess what?  You don’t have this problem in the first place!  Because the database is a dumb state storage mechanism, containing little to no actual code whatsoever (stored procs, functions, etc.).  So, port away!

now that's what i call edgy
I can almost feel the edgy-ness…

Other basic issues:

  • CamelCase (actually TitleCase) is pretty standard in a lot of DB schemas, and I see nothing wrong with it.  Despite my love of underscores (snake_case), it does make for more awkward typing.
  • Plural or singular entity names should match the convention preferred by your overlaying ORM, if you’re at the point of DB design; most of the time, though, you’re working with a DB that you’ve inherited and you have no control over entity naming anyway, so stop whining about it and get on with life.
  • Leading vs. trailing commas: I prefer leading, but the arguments against it can sound convincing (not just his, but in general in the tech community).  My experience leans toward being more likely to futz with the middle-to-end of a list than the beginning (1st item), thus making the leading commas more likely to help; but that’s just me.  Also, thanks to an awesome member of the Coding Blocks Slack, a point in my favor: source code comparison (diff) tools will only show the one changed line instead of two, if you’ve had to add to the end of the column list.

Naming Conventions

Yes, please avoid reserved keywords, replace spaces with underscores (or use TitleCase to avoid having spaces), and use concise yet meaningful table aliases when you’re writing queries.  I still remember, when I first started working at my current company, literally gasping and cursing under my breath when I found that some databases actually had a space in the name.

Beyond that, the article goes a bit too deep in the weeds for me, especially the whole “known suffixes” thing — because isn’t that just Hungarian notation on the other end?  How about names that make it intuitive, such as IsActive for a bit flag, or LineNumber or RecordSequence for a sequential integer that’s not auto-generated (not an identity value), or @NumMonths as a parameter for a stored-proc that indicates how many months of reporting to fetch?  Common sense should rule the day, not arcane prefix/suffix conventions that will never be documented or enforced.

White Space

This whole notion of a “river” feels strange and awkward.  It’s made worse by the fact that some clauses’ keywords are “too large” for the “standard” river width (which is the width of the SELECT keyword, ish), such as group by and left join.  Plus, I’ve yet to see truly excellent tooling support for this kind of style (be it VSCode, Visual Studio, SSMS, SQL Prompt, or other styling tools / auto-formatters).  Given that I still largely write my code without continuous re-style-on-the-fly assistance, I find this hard to digest.

Side-bar: big kudos to the author for pointing me at this typography article, which challenged my long-ingrained writing preference of double-spacing between sentences.  Even now, I do it while I write this post, knowing it’s wrong — I can’t help myself!

For similar reasons, JOINs and sub-queries don’t need to be “on the other side of the river” — since I’m actually saying “there is no river”, what I mean is, don’t indent those JOINs or sub-queries excessively.  In the FROM clause, the JOINed tables are just as important as the first one, so I don’t see the need to reduce their importance by putting them so far to the right.  And please for the love of all things holy, stop putting the JOIN predicates in-line (on the same line) after the joined table — put the ON conditions to their own line and indent it!

Sub-queries are a strange animal, and I won’t dive deep on them here.  Basically, I try to style them the same as I would normally, just indented to the context of their scope; and within JavaScript-style parentheses — meaning, open-paren on the preceding line (or its own line), then the body, then close-paren on its own line to finish.

Special note about the “Preferred formalities” section

BETWEEN is mostly evil.  I’m not saying you should never use it; just be very clear about why you’re using it, and only use it with discrete-valued types (DATE, INT), NOT with continuous (or conceptually/nearly-continuous) value types (DATETIME, REAL/NUMERIC/DECIMAL).
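To make the danger with continuous types concrete, here’s the classic boundary bug sketched in Python rather than T-SQL (the dates are made up, but the arithmetic is the same): an inclusive BETWEEN-style range for “the month of June” quietly picks up rows stamped exactly at midnight on July 1st, while the half-open pattern (>= start AND < next-period) does not.

```python
from datetime import datetime

rows = [
    datetime(2019, 6, 30, 23, 59, 59),  # clearly June
    datetime(2019, 7, 1, 0, 0, 0),      # clearly NOT June
]
start = datetime(2019, 6, 1)
next_month = datetime(2019, 7, 1)

# BETWEEN '20190601' AND '20190701' -- inclusive on BOTH ends
between = [r for r in rows if start <= r <= next_month]

# WHERE col >= '20190601' AND col < '20190701' -- the safe half-open range
half_open = [r for r in rows if start <= r < next_month]

print(len(between), len(half_open))  # 2 1
```

(And no, ending the BETWEEN at ‘23:59:59’ doesn’t save you either, once sub-second precision enters the picture.)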

The UNION operator is often misused, usually because UNION ALL is preferred (and is what you really meant anyway), but a blanket statement to “avoid it” misses the point of why it exists in the first place.  Likewise, temporary tables (#temptables) are wonderful tools when used properly and in moderation, but flagrant overuse can lead to what I call #tempocalypse (which means you’re hammering your TempDB so hard that its underlying storage system screams for mercy).

cry for help cheeto puffs
Like Cheeto Puffs, your TempDB has become bloated and full of cheeze.

Misnamed section “Create syntax”

What he really does here is expound upon table and database design principles.  This does not belong in a “Style Guide”; it probably belongs in a “Design Guide”, but because the relational database as a technology is so mature (yes, that means ‘old’) by this point, most of this feels completely unnecessary and redundant.  And again, you’re working with inherited designs over 90% of the time, where you don’t get to make these decisions, because they were made for you by your predecessors.  If you are so lucky as to be designing a relational model from scratch, look for advice from the tried-and-true architects of the trade.

I do echo and reiterate his advice to look at other data storage tech for things that are not ideally served by an RDBMS, such as EAV models, document storage, key-value storage, text search, etc.  There is no shortage of NoSQL tech to meet those needs, and they will do a much better job of it than you or I could in designing a hacked-up SQL model that somewhat does the job but falls over flat when faced with scaling up to a larger size or heavier workload.

In Conclusion

As I said at the beginning, these are opinions.  I applaud the author’s effort, and the fact that he actually got something together, made it open-source on GitHub, asked for feedback, and actually got traction with several communities discussing it.  That’s more than I can say for myself at this point!

I hope that this spurs some discussion on your database development team, and perhaps helps you work toward making a consistent style guide for your own environment.  Even if it’s just water-cooler talk, it’s always good to remember why we write code: so that others can read it.  (Sure, it’s also to make stuff happen, i.e. “make the computer do things”, but that’s a side-effect — your main goal should always be readability and clarity.)

Do you have some comments, thoughts, disagree? Leave me a comment! I’d love to hear from you.  🙂

needs-moar-drama
Drama-Cat is bored…

Quickie: Read & Search the SQL Error Log in AWS RDS

Go check out your AWS RDS SQL Error Logs!  See what tickles your curiosity.

Today’s post is brought to you by an unplanned AWS RDS outage, the desire to search its SQL Error Logs for events during the target time-frame, and the horrendously inefficient interface which AWS gives you in the GUI console for reading said logs.

it's log from blammo (ren & stimpy)
A coworker of mine used to sing this a lot. Now I know what it’s actually from.

Short Outline

  1. Even the dedicated ‘admin’ user that you create for your instance doesn’t have permission to read the error logs via the SSMS GUI nor with xp_readerrorlog.  You can, however, use sp_readerrorlog.  That’s with an ‘s’.
  2. The parameters here are quite arcane, namely @1, @2, @3, @4.  Unfortunately, none of them allow you to filter on a time-span (those are the 5th and 6th parameters of the underlying xp, which we don’t have access to, as per #1).
  3. My solution involves a #temptable, a loop of insert / exec commands, and then whatever queries you desire to search/filter/analyze the results.  I also throw in a conversion to local time (from UTC, which is what the AWS servers use).

Details and The Why

You can check out the script; it’s short enough to embed, IMHO, so I’ll just leave it right here.

Lines 25-26 are particularly interesting to me, and they only work with SQL 2016 and up.  I got the idea from this StackOverflow answer.  You can chain two AT TIME ZONE commands together to convert a given datetime value from one zone to another.  The reason you still need the CONVERT is because the output of the AT TIME ZONE command is always a datetimeoffset type, which, while quite useful in its own right, has its quirks, and doesn’t serve our purposes for ease-of-readability.
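For the curious, the equivalent of that double AT TIME ZONE hop, sketched in Python with the standard zoneinfo module (the target zone here is just an example; plug in your own): first declare the naive value’s zone, then actually convert it, then drop the offset for readability (that last step is what the CONVERT is doing).

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# sp_readerrorlog hands back naive datetimes that are really UTC
raw = datetime(2019, 7, 12, 9, 30)

as_utc = raw.replace(tzinfo=ZoneInfo("UTC"))                # first AT TIME ZONE: label it
local = as_utc.astimezone(ZoneInfo("America/Los_Angeles"))  # second one: actually shift it
readable = local.replace(tzinfo=None)                       # the CONVERT: plain datetime again

print(readable)  # 2019-07-12 02:30:00 (PDT is UTC-7 in July)
```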

If you’re not running 2016, at least in RDS, you’ve got nearly no excuse.  The upgrade process is vastly simpler with RDS than with traditional on-prem servers.  Although, I did run into my share of snags with it recently, which I’ll blog about later.

You should plug in whatever values suit your need & environment — the @NumLogFiles and @StartDate & @EndDate.  I used 2-2:30am, because… well, that’s always when those damn outages seem to happen, ain’t it?

2am friday alert that load on aws rds server is critical
I’ve never actually had a pager, but I hear awful things. =P

As I mentioned, “the Why” is basically because AWS RDS limits your permissions (even as an admin) in some key ways, and one of those limitations prevents you from reading the error logs in the more “normal” ways — SSMS GUI, xp_readerrorlog, etc.  And the interface given to read the logs in the AWS console GUI is quite a sad-panda.  They offer a wrapper proc rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1, which really only serves the purpose of naming the parameters more nicely; under the hood it’s doing the exact same thing.

Of course we can’t prove that, because it’s encrypted, but the results are identical.

So there you have it.  Go check out your AWS RDS SQL Error Logs!  See what tickles your curiosity.  =)

T-SQL Tuesday #104: Code You’d Hate to Live Without

And that’s where we could use a little encouragement, I think — another DBA saying “Yay, it’s not just me!” makes it all worthwhile.

It’s that time of the month again!  Bert‘s fantastic invitation complete with YouTube vid is tempting indeed.  There are so many wonderful community scripts and tools for SQL Server DBAs.  But, in an interesting twist, he specifically asks us about something self-written or self-created.  And I’ll admit, I sometimes have trouble being exceptionally ‘proud’ of my home-brewed stuff.  Sure, I’ve blogged and GitHub‘d, and hopefully those have helped someone along the way.  Most of the time, though, those are specifically suited to a given use-case.

The more difficult code to share, I think, is the stuff that we keep in our “daily-grind” “Get-Stuff-Done” folders.  Scripts that we’ve tweaked and commented sporadically throughout the years, finding this edge-case or that tweak for a given scenario, most of which ends up being pretty environment-specific.  And that’s where we could use a little encouragement, I think — another DBA saying “Hey, it’s not just me!”, or “I knew there was another guy/gal out there who always has to do this kind of thing!”.  Thus, here I will attempt to show off something along those lines and see what you kind folks think.

Where Does it Hurt?

No, I don’t mean the famous wait-stats queries by Paul Randal (those are awesome).  I mean, what are the top few areas of your SQL environment where you’re always questioning something, or being asked to find some answer that’s not immediately obvious?  For me, there are 3.

Replication

Transactional replication is, in a word, ‘brittle’.  It works well when it’s working, and usually you don’t have to think about it.  But when you do need to check on it, or change anything about it (god forbid), or troubleshoot it (more like shoot it, amirite?), it feels a bit like trying to mess with a half-played Jenga stack.  Meaning, you might be just fine, but you might send the whole thing crashing down and have to rebuild it all.

peanut-brittle-from-the-joy-of-baking
As brittle as this stuff. But not nearly as delicious.

I won’t go into the whole troubleshooting aspect here, that’s too much scope.  But there’s a question that can often come up, especially from Devs or Biz-Analysts, and that is: “Hey, is TableX being replicated?”  And rather than subject my poor eyeballs to the replication properties GUI, I just run a little query, which reads from the distribution database and the actual database that’s being published (the ‘publisher’ db), and tells me a list of tables & columns that are being replicated.

Here it is.  Be sure to replace [dbName] with your actual published DB name.  Like StackOverflow or WideWorldImporters, or AdventureWorks <shudder>.

Report Subscriptions (SSRS)

Another question I’m often asked is, “Hey, did ReportX run?”  What they really mean is, “Did ReportX‘s email subscription get sent to BigWigUserY?”  We have an unholy amount of SSRS reports with email subscriptions.  And because I don’t care to bloat my inbox by BCC’ing myself with every single one, I rely on the users to speak up when they don’t receive something they’re expecting.

“This is a terrible idea,” you say.  “Never trust the users!”

Yes, well, such is life.  If you have a better, lazier solution, I’m all ears.

So here’s a little script I wrote to query the ReportServer database (which MS will tell you is “officially unsupported”, snore).  If I know some keyword from the report title, or the supposed recipient’s email address, this will tell me if it ran successfully or not — LastRun, LastStatus.  A couple other useful bits: where it lives, ReportPath, and what its SQL Agent Job’s Name is, JobName.

That last bit is peculiar.  The JobName looks like a GUID (because it is, because SSRS just loves GUIDs), but it’s also the actual name of the agent job, which you can use to re-run said job — via exec msdb.dbo.sp_start_job — if the failure wasn’t systemic.  As I often do.

Disk Space

Last but not least, everybody’s favorite topic to try and forget about or pawn-off to the SysAdmins.  “How much space are those databases / data files / log files eating up?”  Well, mister suddenly-cares-about-disk-space-but-is-OK-with-storing-all-domain-users’-iTunes-music-libraries-on-the-central-fileshare-along-with-their-documents-because-that’s-what-we’ve-always-done.  {True story.}  Let me tell you.

keep calm and release the bitter
It’s healthy, I swear!

This script has a lot of comments and commented-out lines because I will tweak it depending on what I need to see.  Sometimes it’s “show me the DBs where the logical filename doesn’t follow my preferred pattern” (the DB name with ‘_data’ or ‘_log’ after it); or perhaps “Only show me files over 5GB with a lot of free space” when I’m itching to shrink something.

“Never shrink your files!  (In production!)”

Yes, thank you, knee-jerk reactionary.  I’m dealing with servers in lower environments, usually, with this one.  😉

What do you think?

I’d love to hear your feedback in the comments!  Happy Tuesday!!  =)

Credit where credit is due.

I did not magically come up with these all by myself.  They’re pieced together from many a StackOverflow answer and/or other community blog post or contribution that I’ve since forgotten.  I usually store a link to the source in these kinds of things, when it’s true copy-pasta, but in these cases, I added enough of my own tweaks & style that I no longer tracked the original linkage.  If you see anything that looks familiar, please do tell me where to give kudos and shout-outs!  😀

Quickie: Use of APPLY Operator

There are many great use-cases for the T-SQL APPLY operator, and if you haven’t tried it yet, I encourage you to check out Kevin Feasel’s excellent presentation on it here.  Today, I want to demonstrate a particularly interesting use-case (at least, to me!) that I found for it while tackling a real business problem — for reporting, specifically, which is what I spend a lot of my time dealing with lately.

The Setup

We have a “history of estimated market values” table, which stores, for some arbitrary dates based largely on user input, values (prices) of our proverbial Widgets.  In other words, we can’t guarantee we have every date in there for a given time period.  Over on side B, we have a “sales” table, which tells us when a given Widget was sold and for how much.  The business question, simply enough, is “When a given Widget was sold, how accurate was its ‘estimated market value’ at the time?”  Or, put another way, “How closely did the sale price of that Widget match our estimated market value?”

we don't make widgets
Yeah, yeah, we’re all tired of the Widget business. If only we could come up with a better term for a generic product to use in discussions of business problem samples.

The Tools

I used two interesting bits of TSQL to accomplish this, the main one being our lovely APPLY operator.  OUTER APPLY, in this case, because I still wanted the rows from “sales” even if we couldn’t find a closely-matching “market value” record.

If you haven’t used this operator yet, think of it conceptually like a JOIN to a sub-query, but with the added value that you can reference the outer tables’ columns within it.  CROSS APPLY is like INNER JOIN, and OUTER APPLY is like LEFT JOIN.

Sorry, RIGHT JOIN fans, no love for you here.  You’re just too weird.

My other little trick involved using TOP(1) in the apply’d sub-query, with a non-standard ORDER BY clause.  As I mentioned, within the APPLY‘s body, you can reference the outer table’s columns.  So I’m easily able to compare the SaleDate (outer table) with the EstimateDate (inner query).  I want “the closest EstimateDate to the SaleDate​”, which means I want the row where the difference between those two dates is the smallest.  Which means making use of our friend DATEDIFF.  So let’s try:

ORDER BY DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate)

Do you see the problem yet?  If I get a negative value, i.e. my SaleDate is way before my EstimateDate, say -100 days, that’s the top 1 row.  I don’t want that!  Let’s try our old friend from many many math classes ago, Mr. Absolute Value.

ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate))

That’s better!  Now I have the top 1 “closest to my sale date” row from MktValueHist.
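In procedural terms, what the apply’d sub-query does per sales row is just “take the history row with the smallest absolute date gap”. A tiny Python analog, if that helps it click (made-up data; column names borrowed from the example):

```python
from datetime import date

sale_date = date(2018, 6, 15)
market_value_history = [
    {"EstimateDate": date(2018, 3, 1),  "MarketValue": 90},
    {"EstimateDate": date(2018, 6, 20), "MarketValue": 105},  # only 5 days off
    {"EstimateDate": date(2018, 9, 30), "MarketValue": 120},
]

# TOP(1) ... ORDER BY ABS(DATEDIFF(day, SaleDate, EstimateDate)), in spirit:
closest = min(market_value_history,
              key=lambda h: abs((h["EstimateDate"] - sale_date).days))

print(closest["MarketValue"])  # 105
```

The abs() is doing the same job as ABS(): without it, an estimate 100 days *before* the sale would “win” over one 5 days after it.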

All Together Now

I’ll put up a Gist with repro/demo code soon.  Here’s the meat of it, as a sample select, for convenience:

SELECT Sales.WidgetID, Sales.WidgetName, Sales.Price, Sales.SaleDate
    , mvh.MarketValue, mvh.EstimateDate
    , Accuracy = some_made_up_mathy_thing_here
FROM Sales
OUTER APPLY (
    SELECT TOP(1) mValHist.MarketValue, mValHist.EstimateDate
    FROM MarketValueHistory mValHist
    WHERE mValHist.WidgetID = Sales.WidgetID
    ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, mValHist.EstimateDate))
) mvh
WHERE Sales.SaleDate >= '20180101'
There, my completely fabricated yet totally-based-in-reality example of how to get 2018’s Widget Sale Prices with corresponding “closest to the sale-date” Market Value Estimate from our history table.  You could even throw in some fancy math expression for “accuracy”, if you felt like it.  Like maybe “relative difference“.  =)

Note: For simplicity, we’re dealing with “whole dates”, i.e. just date.  Not datetime or datetime2.  If your date values do include times, you’d want to change the datediff interval to something more appropriate, like second or millisecond.

cross apply explained vs inner join and correlated-subquery
Forgive the graininess, it was from a YouTube screen-cap.  In an Oracle presentation, of all things.  And I feel like they meant to say “Data exposition“, not “explosion”, given the latter can have a negative connotation.  But.. hey, it’s better than a poorly drawn stick-figure or an inappropriate meme, right?  RIGHT?

And that’s how APPLY and ABS() saved my bacon today.  Hooray!

Have an interesting use-case for APPLY?  Or perhaps an alternative approach to a similar problem?  I’d love to hear from you!  Drop me a comment, a tweet, or a LinkedIn msg.  Happy summer!


PS: I’d also recommend checking out Steve Stedman’s useful “Join types poster”, if for no other reason than to have something concrete to use when explaining those concepts to your fellow cube-dwellers.  It has cross/outer apply on the 2nd page, in its more commonly used pattern: the TVF (table valued function) relationship.

PPS: Fine, have your meme…

apply yourself, from breaking bad
I’ve never seen the show, so I don’t get it, but it was one of the first Google Image results for “apply yourself meme”. Enjoy that.

A SQL “Whodunnit” Trigger

Triggers aren’t bad, if used for the right reasons.  Here we look at an “audit-trail” use-case.

Inspired by a brief conversation in the #CodingBlocks community Slack: A short discussion and example of a “who-dunnit” (“who done it”, a colloquialism for a murder-mystery type thing) trigger, to find out what user is doing deletions against a certain table.

the cast of CSI LV
The original, or nothing.

The Background Check

Let’s name our hypothetical database CSI.  In it, we have a table, dbo.Victims, where data seems to be deleted at random times.  As we all know, this is impossible — computers never do anything truly randomly, much less RDBMSes.

Insert witty counter-example here.  You know you have one.

So we want to find out who’s doing these deletions.  One DBA says, “Hey I got an idea… Let’s put an after delete trigger on the table!”  Another DBA says “I abhor triggers; let’s log sp_WhoIsActive every 5 seconds to try to catch the suspect ‘in-the-act’.”

Both approaches have their merits, and neither is that uncommon.  However, the latter is much more regularly blogged about, so I’m going to present the former, because it kinda helped remind me of a few things that I hadn’t used in a while.  I’d also argue that the latter is much less of a “guaranteed capture”, since you’re gambling pretty liberally on the fact that the delete transaction will even last that long; it’s statistically more likely that you’ll miss it.

The Setup

Here’s a SQL snippet that shows a basic after delete trigger created on our dbo.Victims table.  Notice the use of the special Deleted table reference — this is a “temporary, memory-resident” table according to the Docs, and it holds all the records that were/are-about-to-be deleted from the target table.
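Since the actual snippet lives in the repo, here’s a minimal sketch of the shape of such a trigger.  The aud.Evidence logging table and its columns are assumptions for illustration, not the exact objects from the GitHub code:

```sql
-- A basic AFTER DELETE audit trigger.  For every deleted row, log who
-- did it (in both database and server contexts) and when.
CREATE TRIGGER dbo.trg_Victims_AfterDelete
ON dbo.Victims
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO aud.Evidence (VictimID, DeletedAtUtc, DbUser, ServerLogin, OriginalLogin)
    SELECT d.VictimID
         , SYSUTCDATETIME()     -- when the deed was done
         , USER_NAME()          -- database-context user
         , SUSER_SNAME()        -- server-level login
         , ORIGINAL_LOGIN()     -- login before any impersonation
    FROM Deleted AS d;          -- the special pseudo-table of deleted rows
END;
```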

I feel like it used to be called a “temporal table”, but that now refers to a new feature in SQL Server 2016, where SQL keeps a hidden history-tracking copy of your table that you can reference like a time-machine; which, incidentally, almost* negates the need for such things as these triggers we’re talking about, but that’s another topic for another time.

*(The ‘almost’ is because temporal tables don’t tell you “WHO”, which is our primary motivator here.)

The interesting bits are how we identify our suspect, our ‘killer’ if you will.  See, we not only want to know who they are in the database context, we also (and likely, more importantly) want to know who they are at the server level context.  And just in case they’re impersonating another login, we want to check that too.

So we actually have a lot of options here.  There’s CURRENT_USER or USER_NAME(), for the DB context user.  Then we have SUSER_SNAME(), SUSER_NAME(), SYSTEM_USER, and ORIGINAL_LOGIN() for the server context.  If you’re curious, you could also get things like @@SPID (server session id), SUSER_ID() (server login id), and SESSION_USER (database session user).
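A quick way to see how those functions differ is to run them side-by-side in a query window, once as yourself and once while impersonating another login.  A sketch (column aliases are my own naming):

```sql
-- Compare the database-context vs server-context identity functions.
-- Run this normally, then again inside EXECUTE AS, and watch which
-- values change and which (ORIGINAL_LOGIN) stay put.
SELECT CURRENT_USER       AS DbContextUser
     , USER_NAME()        AS DbUserName
     , SUSER_SNAME()      AS ServerLoginName
     , SYSTEM_USER        AS SystemUserName
     , ORIGINAL_LOGIN()   AS LoginBeforeImpersonation
     , @@SPID             AS ServerSessionId;
```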

ORIGINAL_LOGIN() may be the most potentially interesting, especially if we want to write our trigger with elevated (impersonated) permissions to be able to write to the logging table that we’ve set up to capture its detective-work.  I did not need it for this example, but it’s worth keeping in mind.

why dont you take a seat over there
We’ve been watching you…

The Sting

So we’ve got our evidence table, we’ve got our detective trigger, now we just need a suspect.  Thankfully we can test it out first, to make sure our operation will succeed when the real perp comes along.  We can do this, of course, by impersonation.  Or by using different SSMS query-windows with different logins — your choice.

Our faux-suspect’s login name is DummySuspect.  We map him to the db_datawriter and db_datareader roles in our CSI database — we know the real perp at least has write permission on the table dbo.Victims, otherwise he/she wouldn’t be able to delete those poor victim rows!  And we’re probably the db_owner, which is fine.  Let’s call our own login SergeantX.

Now we can pretend to be DummySuspect and execute a DELETE against CSI.dbo.Victims, and make sure it writes to our auditing table, which we called aud.Evidence.
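That impersonation test might look something like this.  The key value and column names are hypothetical stand-ins, not the repo’s exact data:

```sql
-- Step into DummySuspect's shoes (requires IMPERSONATE permission),
-- do the dirty deed, then step back out and inspect the evidence.
EXECUTE AS LOGIN = 'DummySuspect';

DELETE FROM CSI.dbo.Victims
WHERE VictimID = 1;   -- hypothetical row to "murder"

REVERT;               -- back to our own login (SergeantX)

SELECT * FROM CSI.aud.Evidence;  -- should name DummySuspect as the perp
```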

Yes, in practice, we’d probably want to put our Evidence table in a separate database, to really ensure those pesky Suspects can’t update it or delete from it, i.e. “cover their tracks” — here, I’ve settled for simply using a different schema, to keep the example workable.  Otherwise we’d have to deal with cross-DB permissions and such, which goes beyond the scope of one little blog post.

Ready?  Let’s try it!

The Proof

Go on over to the GitHub repo and check out the code.  There are 3 easy steps.  ‘Step 1’: create the objects, including a new DB to house them called CSI.  You’ll see the trigger in there as well.  Then you can try ‘Step 2’, where I impersonate DummySuspect and delete a row from Victims, and then check the Evidence log when done.  And finally, ‘Step 3’ is a similar test, but assumes that you’ve actually connected that SSMS window/tab/query AS DummySuspect instead of impersonating him (or her!).  After you’ve done that, check out aud.Evidence again to make sure it logged the 2nd delete.

And there you have it.  A simple example of how to write and test an after delete trigger that writes the action info to a separate auditing table for investigation.

Hope you enjoyed!  Leave a comment here or on GitHub; I welcome all feedback.

One more thing…

Apparently I’ve been fork’d!  Someone on GitHub liked my take on the Nested Set Model so much that they decided to pull it into their own library and have a play with it.  Yay!!  😀   Whoever you are, thank you and good luck building with it.  Enjoy!

someone forked my sql nested set model on github
Mr. Murray, thanks and have fun!