The EAVil Cycle, Part 2

In which we discuss more about the EAV model and some of its merits and pitfalls.


continued from last week…

The Ugly (aka the “Wow really?!?”)

You’ll see this ‘creep’ even in product-catalog systems as mature as Amazon’s. If you search for (geeky as we are) graphics cards, and use the product attribute filters in the left pane to narrow it down, you’ll find that some correctly have their memory type (GDDR5, etc.) listed, while others may not. If you’re really unfortunate, there will be two semi-redundant attribute-sets that you’ll have to “juggle” between to really get at what you want. TVs, for example, may have both an “HDR support” (yes/no) attribute and an “HDR type” (standard, ultra, etc.) attribute — I’m kinda pulling those out of my arse for example’s sake, but you get the point.

Why does this happen? Because at some level, humans are still responsible for ‘tagging’ those products (for lack of a better word). And as much encouragement and guidance as the ‘admin software’ may give them, they can (and do) still decide at times to side-step that guidance and say “Nope, I know better; make me this new thing!”

But isn’t that a problem with nearly all data-driven systems? Sure, of course it is. Yet with a model as flexible as EAV, the problem is intensely magnified by the fact that it’s made so easy to do — to ‘extend’.

so preoccupied with whether or not you could, you didn't stop to think if you should
It’s probably not the exact quote, for you pedants, but it’s close enough for government blog work.

And unfortunately, the biggest contributor to this problem is the lack of development-time and forethought given to the administration, or management, of the data. You see, this problem could be largely assuaged if the admin-toolset were the FIRST thought and priority in the roadmap. But so often, that thought comes LAST, if at all. So sure, your product feature tagging system looks great, it’s flexible and your customers love it. But you’re throwing tickets over the wall to your data team every time a requirement or use-case changes, or when you need to fix a data-quality problem caused by the users not knowing there was already a “Widget Type” before creating their new “Widget Kind” tag, or misspelling “Is Wierd” because English is weird and has more exceptions to the “I before E” rule than not.

Does this problem go away with a dedicated search-index or NoSQL technology like Elasticsearch or (shudder) MongoDB? Of course not! If anything, it may be worse. Maybe. But wait, those systems make it easier to de-dupe and manage redundancy & data quality, don’t they? Probably. I can’t speak from experience myself, but I’ve heard good things. Once again, it all comes down to the effort you’re willing to invest in the system. If you make data quality a priority, you’ll be happier with the experience. If you don’t, well you’re just another amateur data scientist complaining about dirty non-standardized/non-validated address fields, aren’t ya?  =P

I joke with the data scientists, of course. What they do is awesome. They just tend to often re-invent the wheel of data-cleansing/data-wrangling that we DBAs have been doing for a few decades, because they didn’t know the right questions to ask or the right place to look. We need to get better at working together WITH them, not ‘for’ or ‘against’ them.

ninja cat riding a unicorn with laser-eyes
How the data scientists see themselves…

The Why or When (aka “Is it a decent model for this?”)

The long-story-short version is, consider your business and your data. Try to plan for the future, and anticipate potential changes and growth. It’s not easy, and we never “get it right the first time”. But we can try.

When your attributes are fairly static, and you know that you can tightly control them, you might consider a more rigid model. Something with a handful of lookup tables referenced by the main product entity. This is advantageous for performance and management, at the expense of scalability and extensibility.
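To put a rough shape on that, here’s a minimal sketch of the “rigid” flavor (table and column names are purely illustrative): a couple of lookup tables, plain foreign-key columns on the product entity, and a schema change every time a new attribute shows up.

CREATE TABLE dbo.MemoryType (
    MemoryTypeID  int IDENTITY(1,1) PRIMARY KEY,
    Name          varchar(50) NOT NULL UNIQUE    -- 'GDDR5', 'GDDR6', etc.
);

CREATE TABLE dbo.HdrType (
    HdrTypeID  int IDENTITY(1,1) PRIMARY KEY,
    Name       varchar(50) NOT NULL UNIQUE
);

CREATE TABLE dbo.Product (
    ProductID     int IDENTITY(1,1) PRIMARY KEY,
    Name          varchar(200) NOT NULL,
    MemoryTypeID  int NULL REFERENCES dbo.MemoryType (MemoryTypeID),
    HdrTypeID     int NULL REFERENCES dbo.HdrType (HdrTypeID)
    -- every new attribute means a new column (and usually a new lookup table)
);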

When you literally need to support on-the-fly extension, and you’re absolutely married to SQL (i.e. not ready to venture out into NoSQL land just yet), the EAV model may fit the bill. Aaron’s article, and the comments therein, present some fairly valid and reasonable implementation suggestions to make it a little more palatable. Just beware the date — that was written back in 2009. Before we had such things as Elasticsearch and its ilk. I’d heavily encourage the consideration of purpose-built data-stores for this sort of thing, if you have any hope of scaling-out.
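For contrast, the classic EAV shape is just a few skinny tables. Again, a hedged sketch with invented names, not a prescription:

CREATE TABLE dbo.Attribute (
    AttributeID  int IDENTITY(1,1) PRIMARY KEY,
    Name         varchar(100) NOT NULL UNIQUE    -- 'Memory Type', 'HDR Support', ...
);

CREATE TABLE dbo.AttributeValue (
    AttributeValueID  int IDENTITY(1,1) PRIMARY KEY,
    AttributeID       int NOT NULL REFERENCES dbo.Attribute (AttributeID),
    Value             varchar(200) NOT NULL      -- 'GDDR5', 'Yes', ...
);

CREATE TABLE dbo.ProductAttributeValue (
    ProductID         int NOT NULL REFERENCES dbo.Product (ProductID),   -- the product entity sketched above
    AttributeValueID  int NOT NULL REFERENCES dbo.AttributeValue (AttributeValueID),
    PRIMARY KEY (ProductID, AttributeValueID)
    -- a brand-new attribute or value is just an INSERT; no schema change required
);

That last comment is the whole selling point, and, as covered above, the whole trap.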

Other tools in your toolbox can help with this, too. For example, consider an in-memory data-grid for super-fast reads. The vast majority of data-access to these attributes & values is going to be reading, using it to filter & slice & dice a data-set. You can pay the small performance cost (e.g. write to the underlying SQL database) on the rare occasion when a write/change needs to occur.

In Conclusion

Proving the age-old rule of “Just because you CAN, doesn’t mean you SHOULD”, the EAV model is sometimes okay and sometimes not. You need to understand your business and your data to make that call. And you need to consider the magnitude of effort that may be involved in pivoting from one model to another. Unfortunately, in many cases, that part overshadows the rest, and the show business must go on.

queen the show must go on
You’re welcome again, ears.

Still, I encourage you to at least think about it, and be ready with that knowledge of pros/cons when the time is right to discuss it with stakeholders.

The EAVil Cycle

In which we discuss the EAV model and some of its merits and pitfalls.

EAV, or Entity-Attribute-Value, is a data model that’s been around the block. It’s typically injected into a relational database at some point during the overall application/architecture life-cycle, somewhere between when the team realizes that they’ve got way too many lookup tables for a “main business entity” thing, and when they finally make the shift into polyglot data stores.

Wake me up when that actually happens, successfully.

I’m not going to rehash aging internet arguments here, nor bore you with replicated diagrams that you could just as easily look up on Google Images. No, instead, I’m going to tell you why this model is not great, why it’s not bad, and how you should go about deciding if it’s not wrong for you.

Yes, I did negate all those adjectives on purpose. Mostly because nobody really enjoys working with these structures, regardless of how they’re implemented; customers and business stakeholders are ALWAYS CHANGING the requirements and the attributes in question. But, we press on.


PS: Postgres is looking really freakin’ cool these days.

the good the bad and the ugly
Another Clint Eastwood pic… apologies if you’re not a fan. =P

The Good (aka “Not Bad”)

Proponents tell us that this model is easily searchable and easy to administer. The “searchable” bit is true, especially when you have the attribute-values pre-defined and don’t rely on end-user text-entry. But that’s true of basically any data model. The key here is that all attribute-values are effectively in one “search index”. But wait, don’t we have purpose-built search indexes nowadays? (Hint: see Elasticsearch.) This will come up again later.
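Concretely, “searching” an EAV store tends to look like this sketch (assuming a generic Product/Attribute/Value layout like the one sketched earlier; names invented for illustration), where every filter is just another probe into the same value table:

-- e.g. "show me products where Memory Type = GDDR5"
SELECT p.ProductID, p.Name
FROM dbo.Product p
JOIN dbo.ProductAttributeValue pav ON pav.ProductID = p.ProductID
JOIN dbo.AttributeValue av ON av.AttributeValueID = pav.AttributeValueID
JOIN dbo.Attribute a ON a.AttributeID = av.AttributeID
WHERE a.Name = 'Memory Type'
  AND av.Value = 'GDDR5';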

Administerable? Administrable? Administratable? Damn you English! Anyway. Yes, again, if you’re fairly confident in your business users’ ability to effectively track and de-dupe (de-duplicate) incoming requirements/requests using their own brains/eyeballs and the admin tool-set that you build for them.

Oh, right, did I mention that? You have to build the admin app. Because you do NOT want to be writing ad-hoc SQL queries every time a new attribute requirement comes in. (Still, it’s better than making schema changes for new req’s, as I’ll discuss in a bit.)

Mainly, though, the biggest ‘pro’ of this model is that your business requirements, i.e. your attributes and the values they’re allowed to contain, can be flexible. The model allows a theoretically infinite amount of customization to suit your needs; though in practice, as Allen writes in the CodingBlocks article, you do run up against some pretty hard scalability hurdles right-quick. So in practice, you might want to consider more horizontally-scalable data stores, or (God help you) try scaling-out your SQL databases. (Spoiler-alert: big money big money!)

shut up and take my money
Millions of query-bucks…

The Bad (aka the “Not Great”)

Which brings me to the first ‘con’. Performance. If you’re modeling this in a relational DB, and you expect it to scale well, you’re probably overly optimistic. Or very small. (If the latter, great! But you don’t always want to be small, right? Right!)

Don’t get me wrong; you can make it work half-decent with good indexing and sufficient layers of abstraction (i.e. don’t write a “kitchen-sink view” that’s responsible for pivoting/piecing-together all the attributes for a product straight outta SQL). But again, is it really the right tool for the job?
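For the record, the “kitchen-sink view” I’m telling you not to write looks something like this (same illustrative Product/Attribute/Value names as earlier): one conditional aggregate per attribute, growing forever.

CREATE VIEW dbo.vw_ProductKitchenSink
AS
SELECT p.ProductID, p.Name
    , MemoryType = MAX(CASE WHEN a.Name = 'Memory Type' THEN av.Value END)
    , HdrSupport = MAX(CASE WHEN a.Name = 'HDR Support' THEN av.Value END)
    -- ...and another CASE line for every attribute anyone ever dreams up...
FROM dbo.Product p
LEFT JOIN dbo.ProductAttributeValue pav ON pav.ProductID = p.ProductID
LEFT JOIN dbo.AttributeValue av ON av.AttributeValueID = pav.AttributeValueID
LEFT JOIN dbo.Attribute a ON a.AttributeID = av.AttributeID
GROUP BY p.ProductID, p.Name;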

Momentary digression. SQL Server, or more generally, the relational database, has long been touted as the “Swiss army knife” of IT; we’ve thrown it at so many problems of different size and shape, that we’ve almost lost track of what it’s actually very GOOD at. Hint: it’s about relationships and normalization.

Another argument against it seems to be data integrity and enforcement. I can understand that, but again, with some clever software overlay and user-guidance, this can become almost a non-issue. But remember, your developers are the ones building said software. So that effort needs to be considered.

The Ugly (to be continued…)

The biggest problem, and quite a legit one, is ‘creep’ — both scope and feature. See, the inherent flexibility in the model will almost encourage data managers to be less careful and considerate when deciding when to add an attribute or value-set, and how to govern the data-set as a whole.

creep wish i was special
No, not THAT creep. But, you’re welcome ears.

Stay tuned for more…

Keeping Track of SQL Scripts with SVN


Inspired by a Slack conversation with a friend & former colleague, and building off last week’s T-SQL Tuesday post:  How do I keep track of my SQL scripts?

The answer shouldn’t surprise anybody at this point: version control!

Side-note:  I really should try using SSMS Solutions/Projects like Kenneth Fisher advocates, but I just haven’t gotten a chance yet.  It would probably help with organization.

Well, perhaps you’re still in for a surprise.  Because, you see, as much as I love using GitHub and Gist for my blog code samples, much of that love is lost in the daily grind of fix-it-tickets, BI building blocks, report development, and performance troubleshooting that comprises my business as usual.  So forgive me if I haven’t made the leap to Git like all the cool kids are doing.  I’m talking about a locally stored (i.e. on my very own hard-drive), locally managed, only-ever-have-one-user, repository of SQL scripts.  All it really needs to do is track changes on a daily (ish) basis.

That’s what good ol’ Apache SVN is for!  Yes, boys & girls, it’s still around.  Consider it one of the many enduring foundational technologies that we tend to take for granted, or forget about, in the ever-present onslaught of the “new and shiny”.

But fortunately for us luddites, there are a couple GUI tools out there that make it feel less like the ’90s green-screen.  Enter VisualSVN Server (free edition) and TortoiseSVN (also free).

Eeww, yuck!  A non-distributed VCS??  What are you, some sort of wild animal!?  The impudence!

the audacity -the grinch
The unmitigated GALL!!

Okay, calm down there sparky.  Remember my requirements?  They’re only about 5 sentences up.  Go back and read them again.  Breathe.  We good?  Good.

Another reason it doesn’t need to be distributed or ‘cloudy’ or web-based is that these scripts often contain private company I.P., so data governance demands that they stay within company IT space.  And sure, there are private repos and ways to keep the stuff locked-down within a GitHub or similar, but again I ask, why?  The first and primary requirement of my VCS is that it stays the heck out of my way when I’m working.  I’m sure someone will counterpoint me, and I’d love to hear it.  But for now, we’re keepin’ it local.

Getting Set Up

The first step, if it’s not obvious, is to install VisualSVN Server — it already contains the binaries for Apache SVN, so you don’t need to worry about that.  It’ll ask you where to stick your repos and your backups; you can see an example of mine here:

visualSVN server config screen
Normally you’d point Repos at a network location, but since we’re setting this up as a local-only server, just pick a drive/directory that’s easy to remember.

Once it’s done, let it start up the manager GUI.  Next step is to create a new repo:

VisualSVN server repo create new
You can also use the “Create new repository…” link located a bit down the intro page.

I called mine “SQL_Scripts”.  Because, as my blog’s tagline says, “why yes, I do like underscores.”  When you go thru the steps, it’s easiest to leave everything at the default settings — that way you don’t need to futz with permissions or anything.

One ‘gotcha’ to note: in a corporate domain setting, you should set the server name to your machine’s FQDN, e.g. MYWORKSTATION.company.com.  This may be done for you when you install, but I’m not 100% sure, so it’s worth checking — you can right-click on the VisualSVN Server (local) node in the left pane and go to Properties, to the Network tab, and verify it.

VisualSVN server properties Network tab
Just to be sure!

Next, install Tortoise SVN, or your favorite SVN client.  I chose Tortoise because it’s easy to use, includes some very useful File Explorer options (right-click menu goodies), and supports standard command-line interaction just like vanilla SVN.  We’re going to use said CLI interaction in just a bit!

Import (or is it Export?)

I almost always have trouble remembering which option is for use with a non-empty folder of “here’s a bunch of files that I want to dump into the repo to start with”, vs. “here’s an empty folder where I want to pull down the contents of an existing repo”.  Fortunately, Tortoise yells at you if you try to do the latter — which is Export — into a non-empty folder.  So we want to Import.  Assuming you have a folder where all your SQL scripts live already, right-clicky and say “Tortoise SVN .. Import.”

TortoiseSVN import dialog
You can use the file:/// notation or the https:// address, either should work.

You can verify that it worked by switching back to your VisualSVN Server for a moment, refreshing it, and seeing that the repo’s contents are now.. your happy files!

But wait… the folder I imported into the repo doesn’t have the pretty little icon overlays showing me that it’s in SVN… Why?

Don’t panic.  We have one more step to go.

Right-clicky again!  On the same folder you imported into SVN.  You now want to “SVN Checkout…”, which will essentially mark all your files as “versioned”, because it will see that they all match exactly what’s already in the repo (because you just imported them a few moments ago).

There’s an important ‘gotcha’ here, if you named your repo something other than the folder name that’s serving as your repository root.  By default, it will try to pull the repo’s contents into a sub-folder of the same name as the repo.  In the example below, that’d be “Workspace\PersonalScripts”, instead of just “Workspace”, which is where I want it to go.  This has happened to me more than once.  Check it out:

SVN checkout with folder names
Notice that the repo name is different than my root (source/destination) folder name — that’s OK, as long as I set it correctly here in this dialog.

Pull the trigger, and you’ll see all your files getting ‘Versioned’, like so.

svn checkout finished
It’s really just comparing the repo contents with the folder contents and saying “OK, got it, we’re now tracking this file for version-control.”

Yay?  Yay.  Now for the cool part, where you can stop thinking about all of this and just let the machines do their work.

Automatic Nightly Check-In

Windows Task Scheduler, specifically.  Let’s be honest, if you’re reading this far, you’re not really a CLI jockey.  And you’re certainly not a Linux geek — you could have had all this set up and done with about 5 lines of bash, I’m sure.  Party on, Wayne.  Us Garths still need a bit of help.

I’m not going to do a walk-thru screen-shot montage of Task Scheduler; it’s pretty idiot-proof if you read & understand the dialogs.  Here are the key bits, the commands that you’ll actually want to enter as the actions.  We have two actions: first “add all new files to the repo”, followed by “commit everything new & changed”.  Schedule it however often you’d like; mine happens nightly at 8pm, when I’m about 99.9% sure I won’t be touching work.  Oh, make sure to set it to “Run whether user is logged on or not”.

The actions are both “Start a program”, and said program will be the SVN executable, which should be (by default) located at C:\Program Files\TortoiseSVN\bin\svn.exe.  Here are the arguments, respectively (1st action, then 2nd), subject to personalization of course!

add --depth infinity --quiet "C:\Users\Documents\Your\Scripts\Folder" --force --username YourUsername

commit --depth infinity --message "daily" "C:\Users\Documents\Your\Scripts\Folder" --force --username YourUsername

Commence Laziness!

And that about does it!  Now, without any need to remember any command-line syntax, or to touch a right-click menu again, your SQL scripts are being silently versioned and committed every night to your very own local repository.  If you ever forget when or what you did last week or last month, or need to “blame” yourself (which is short-hand for “show me all the changes from this part of this file so I can see what led to its current state”)… now you can.

PS: If you’re using a fantastic 3rd party diff/compare tool like I am, be sure to plug it into the TortoiseSVN options for diff-viewing/comparison.  While their built-in interface ain’t too bad, BeyondCompare and other similar tools are even better.

Thanks for reading!

Quickie: Read & Search the SQL Error Log in AWS RDS


Today’s post is brought to you by an unplanned AWS RDS outage, the desire to search its SQL Error Logs for events during the target time-frame, and the horrendously inefficient interface which AWS gives you in the GUI console for reading said logs.

it's log from blammo (ren & stimpy)
A coworker of mine used to sing this a lot. Now I know what it’s actually from.

Short Outline

  1. Even the dedicated ‘admin’ user that you create for your instance doesn’t have permission to read the error logs via the SSMS GUI or with xp_readerrorlog.  You can, however, use sp_readerrorlog.  That’s with an ‘S’.
  2. The parameters here are quite arcane, namely @1, @2, @3, @4.  Unfortunately, none of them allow you to filter on a time-span (those are the 5th and 6th parameters of the underlying xp, which we don’t have access to, as per #1).
  3. My solution involves a #temptable, a loop of insert / exec commands, and then whatever queries you desire to search/filter/analyze the results.  I also throw in a conversion to local time (from UTC, which is what the AWS servers use).

Details and The Why

You can check out the script; it’s short enough to embed, IMHO, so I’ll just leave it right here.
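In sketch form (not the exact script; the temp-table shape, the sample dates, and the ‘Central Standard Time’ target zone are placeholders to adjust for your environment):

DECLARE @NumLogFiles int      = 5,
        @StartDate   datetime = '2018-07-20 02:00',   -- sample window; see below
        @EndDate     datetime = '2018-07-20 02:30';

IF OBJECT_ID('tempdb..#ErrLog') IS NOT NULL
    DROP TABLE #ErrLog;

CREATE TABLE #ErrLog (
    LogDate     datetime,
    ProcessInfo nvarchar(50),
    LogText     nvarchar(4000)
);

-- loop over the log files, dumping each one into the temp table
DECLARE @i int = 0;
WHILE @i < @NumLogFiles
BEGIN
    INSERT INTO #ErrLog (LogDate, ProcessInfo, LogText)
        EXEC sp_readerrorlog @i, 1;    -- @1 = log file number, @2 = 1 for the SQL error log
    SET @i += 1;
END

-- convert from UTC (what the AWS servers use) to local time, then filter on the window
SELECT  x.LocalLogDate, e.ProcessInfo, e.LogText
FROM    #ErrLog e
CROSS APPLY (
    SELECT LocalLogDate = CONVERT(datetime,
        e.LogDate AT TIME ZONE 'UTC' AT TIME ZONE 'Central Standard Time')
) x
WHERE   x.LocalLogDate BETWEEN @StartDate AND @EndDate
ORDER BY x.LocalLogDate;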

The double time-zone conversion is particularly interesting to me, and only works with SQL 2016 and up.  I got the idea from this StackOverflow answer.  You can chain two AT TIME ZONE commands together to convert a given datetime value from one zone to another.  The reason you still need the CONVERT is because the output of the AT TIME ZONE command is always a datetimeoffset type, which, while quite useful in its own right, has its quirks, and doesn’t serve our purposes for ease-of-readability.

If you’re not running 2016, at least in RDS, you’ve got nearly no excuse.  The upgrade process is vastly simpler with RDS than with traditional on-prem servers.  Although, I did run into my share of snags with it recently, which I’ll blog about later.

You should plug in whatever values suit your need & environment — the @NumLogFiles and @StartDate & @EndDate.  I used 2-2:30am, because… well, that’s always when those damn outages seem to happen, ain’t it?

2am friday alert that load on aws rds server is critical
I’ve never actually had a pager, but I hear awful things. =P

As I mentioned, “the Why” is basically because AWS RDS limits your permissions (even as an admin) in some key ways, and one of those limitations prevents you from reading the error logs in the more “normal” ways — SSMS GUI, xp_readerrorlog, etc.  And the interface given to read the logs in the AWS console GUI is quite a sad-panda.  They offer a wrapper proc rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1 , which really only serves the purpose of naming the parameters more nicely; under the hood it’s doing the exact same thing.

Of course we can’t prove that, because it’s encrypted, but the results are identical.

So there you have it.  Go check out your AWS RDS SQL Error Logs!  See what tickles your curiosity.  =)

T-SQL Tuesday #104: Code You’d Hate to Live Without

And that’s where we could use a little encouragement, I think — another DBA saying “Yay, it’s not just me!” makes it all worthwhile.

It’s that time of the month again!  Bert‘s fantastic invitation complete with YouTube vid is tempting indeed.  There are so many wonderful community scripts and tools for SQL Server DBAs.  But, in an interesting twist, he specifically asks us about something self-written or self-created.  And I’ll admit, I sometimes have trouble being exceptionally ‘proud’ of my home-brewed stuff.  Sure, I’ve blogged and GitHub‘d, and hopefully those have helped someone along the way.  Most of the time, though, those are specifically suited to a given use-case.

The more difficult code to share, I think, is the stuff that we keep in our “daily-grind” “Get-Stuff-Done” folders.  Scripts that we’ve tweaked and commented sporadically throughout the years, finding this edge-case or that tweak for a given scenario, most of which ends up being pretty environment-specific.  And that’s where we could use a little encouragement, I think — another DBA saying “Hey, it’s not just me!”, or “I knew there was another guy/gal out there who always has to do this kind of thing!”.  Thus, here I will attempt to show off something along those lines and see what you kind folks think.

Where Does it Hurt?

No, I don’t mean the famous wait-stats queries by Paul Randal (those are awesome).  I mean, what are the top few areas of your SQL environment where you’re always questioning something, or being asked to find some answer that’s not immediately obvious?  For me, there are 3.

Replication

Transactional replication is, in a word, ‘brittle’.  It works well when it’s working, and usually you don’t have to think about it.  But when you do need to check on it, or change anything about it (god forbid), or troubleshoot it (more like shoot it, amirite?), it feels a bit like trying to mess with a half-played Jenga stack.  Meaning, you might be just fine, but you might send the whole thing crashing down and have to rebuild it all.

peanut-brittle-from-the-joy-of-baking
As brittle as this stuff. But not nearly as delicious.

I won’t go into the whole troubleshooting aspect here, that’s too much scope.  But there’s a question that can often come up, especially from Devs or Biz-Analysts, and that is: “Hey, is TableX being replicated?”  And rather than subject my poor eyeballs to the replication properties GUI, I just run a little query, which reads from the distribution database and the actual database that’s being published (the ‘publisher’ db), and tells me a list of tables & columns that are being replicated.

Here it is.  Be sure to replace [dbName] with your actual published DB name.  Like StackOverflow or WideWorldImporters, or AdventureWorks <shudder>.
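In sketch form (simplified to the publisher-side metadata; the full version also reads from the distribution DB, as mentioned above):

USE [dbName];   -- your published database
GO
SELECT  Publication = p.name,
        Article     = a.name,
        TableName   = OBJECT_SCHEMA_NAME(a.objid) + '.' + OBJECT_NAME(a.objid),
        ColumnName  = c.name
FROM    dbo.syspublications p
JOIN    dbo.sysarticles a        ON a.pubid = p.pubid
JOIN    dbo.sysarticlecolumns ac ON ac.artid = a.artid
JOIN    sys.columns c            ON c.object_id = a.objid AND c.column_id = ac.colid
ORDER BY TableName, ColumnName;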

Report Subscriptions (SSRS)

Another question I’m often asked is, “Hey, did ReportX run?”  What they really mean is, “Did ReportX‘s email subscription get sent to BigWigUserY?”  We have an unholy amount of SSRS reports with email subscriptions.  And because I don’t care to bloat my inbox by BCC’ing myself with every single one, I rely on the users to speak up when they don’t receive something they’re expecting.

“This is a terrible idea,” you say.  “Never trust the users!”

Yes, well, such is life.  If you have a better, lazier solution, I’m all ears.

So here’s a little script I wrote to query the ReportServer database (which MS will tell you is “officially unsupported”, snore).  If I know some keyword from the report title, or the supposed recipient’s email address, this will tell me if it ran successfully or not — LastRun, LastStatus.  A couple other useful bits: where it lives, ReportPath, and what its SQL Agent Job’s Name is, JobName.
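The general shape is a join across the standard ReportServer catalog tables, roughly like this sketch (not the exact script; the keyword and email filters are placeholders):

USE ReportServer;   -- or whatever your report server database is named
GO
SELECT  ReportPath = c.[Path],
        ReportName = c.Name,
        LastRun    = s.LastRunTime,
        LastStatus = s.LastStatus,
        JobName    = CONVERT(varchar(40), rs.ScheduleID),        -- doubles as the SQL Agent job name
        Recipients = CONVERT(nvarchar(max), s.ExtensionSettings) -- XML blob holding the TO/CC addresses
FROM    dbo.Subscriptions s
JOIN    dbo.[Catalog] c       ON c.ItemID = s.Report_OID
JOIN    dbo.ReportSchedule rs ON rs.SubscriptionID = s.SubscriptionID
WHERE   c.Name LIKE '%SomeKeyword%'
   OR   CONVERT(nvarchar(max), s.ExtensionSettings) LIKE '%bigwiguser@company.com%';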

That last bit is peculiar.  The JobName looks like a GUID (because it is, because SSRS just loves GUIDs), but it’s also the actual name of the agent job, which you can use to re-run said job — via exec msdb.dbo.sp_start_job — if the failure wasn’t systemic.  As I often do.

Disk Space

Last but not least, everybody’s favorite topic to try and forget about or pawn-off to the SysAdmins.  “How much space are those databases / data files / log files eating up?”  Well, mister suddenly-cares-about-disk-space-but-is-OK-with-storing-all-domain-users’-iTunes-music-libraries-on-the-central-fileshare-along-with-their-documents-because-that’s-what-we’ve-always-done.  {True story.}  Let me tell you.

keep calm and release the bitter
It’s healthy, I swear!

This script has a lot of comments and commented-out lines because I will tweak it depending on what I need to see.  Sometimes it’s “show me the DBs where the logical filename doesn’t follow my preferred pattern” (the DB name with ‘_data’ or ‘_log’ after it); or perhaps “Only show me files over 5GB with a lot of free space” when I’m itching to shrink something.
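One way to get at the same answers is plain sys.master_files math, along these lines (not the script itself; free-space checks need FILEPROPERTY run per database, which this sketch skips):

SELECT  DatabaseName = d.name,
        LogicalName  = mf.name,
        FileType     = mf.type_desc,
        SizeMB       = mf.size / 128.0,     -- size is stored in 8KB pages
        PhysicalName = mf.physical_name
FROM    sys.master_files mf
JOIN    sys.databases d ON d.database_id = mf.database_id
-- WHERE mf.size / 128.0 > 5 * 1024                            -- e.g. only files over 5GB
-- WHERE mf.name NOT IN (d.name + '_data', d.name + '_log')    -- e.g. logical names off my preferred pattern
ORDER BY SizeMB DESC;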

“Never shrink your files!  (In production!)”

Yes, thank you, knee-jerk reactionary.  I’m dealing with servers in lower environments, usually, with this one.  😉

What do you think?

I’d love to hear your feedback in the comments!  Happy Tuesday!!  =)

Credit where credit is due.

I did not magically come up with these all by myself.  They’re pieced together from many a StackOverflow answer and/or other community blog post or contribution that I’ve since forgotten.  I usually store a link to the source in these kind of things, when it’s true copy-pasta, but in these cases, I added enough of my own tweaks & style that I no longer tracked the original linkage.  If you see anything that looks familiar, please do tell me where to give kudos and shout-outs!  😀

Quickie: Use of APPLY Operator

There are many great use-cases for the T-SQL APPLY operator, and if you haven’t tried it yet, I encourage you to check out Kevin Feasel’s excellent presentation on it here.  Today, I want to demonstrate a particularly interesting use-case (at least, to me!) that I found for it while tackling a real business problem — for reporting, specifically, which is what I spend a lot of my time dealing with lately.

The Setup

We have a “history of estimated market values” table, which stores, for some arbitrary dates based largely on user input, values (prices) of our proverbial Widgets.  In other words, we can’t guarantee we have every date in there for a given time period.  Over on side B, we have a “sales” table, which tells us when a given Widget was sold and for how much.  The business question, simply enough, is “When a given Widget was sold, how accurate was its ‘estimated market value’ at the time?”  Or, put another way, “How closely did the sale price of that Widget match our estimated market value?”

we don't make widgets
Yeah, yeah, we’re all tired of the Widget business. If only we could come up with a better term for a generic product to use in discussions of business problem samples.

The Tools

I used two interesting bits of TSQL to accomplish this, the main one being our lovely APPLY operator.  OUTER APPLY, in this case, because I still wanted the rows from “sales” even if we couldn’t find a closely-matching “market value” record.

If you haven’t used this operator yet, think of it conceptually like a JOIN to a sub-query, but with the added value that you can reference the outer tables’ columns within it.  CROSS APPLY is like INNER JOIN, and OUTER APPLY is like LEFT JOIN.

Sorry, RIGHT JOIN fans, no love for you here.  You’re just too weird.

My other little trick involved using TOP(1) in the apply’d sub-query, with a non-standard ORDER BY clause.  As I mentioned, within the APPLY‘s body, you can reference the outer table’s columns.  So I’m easily able to compare the SaleDate (outer table) with the EstimateDate (inner query).  I want “the closest EstimateDate to the SaleDate​”, which means I want the row where the difference between those two dates is the smallest.  Which means making use of our friend DATEDIFF.  So let’s try:

ORDER BY DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate)

Do you see the problem yet?  If I get a negative value, i.e. my SaleDate is way before my EstimateDate, say -100 days, that’s the top 1 row.  I don’t want that!  Let’s try our old friend from many many math classes ago, Mr. Absolute Value.

ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate))

That’s better!  Now I have the top 1 “closest to my sale date” row from MktValueHist.

All Together Now

I’ll put up a Gist with repro/demo code soon.  Here’s the meat of it, as a sample select, for convenience:

SELECT Sales.WidgetID, Sales.WidgetName, Sales.Price, Sales.SaleDate
    , mvh.MarketValue, mvh.EstimateDate
    -- , Accuracy = some_made_up_mathy_thing_here   (placeholder; see note below)
FROM Sales
OUTER APPLY (
    SELECT TOP(1) mValHist.MarketValue, mValHist.EstimateDate
    FROM MarketValueHistory mValHist
    WHERE mValHist.WidgetID = Sales.WidgetID
    ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, mValHist.EstimateDate))
) mvh
WHERE Sales.SaleDate >= '20180101'

There, my completely fabricated yet totally-based-in-reality example of how to get 2018’s Widget Sale Prices with corresponding “closest to the sale-date” Market Value Estimate from our history table.  You could even throw in some fancy math expression for “accuracy”, if you felt like it.  Like maybe “relative difference“.  =)

Note: For simplicity, we’re dealing with “whole dates”, i.e. just date.  Not datetime or datetime2.  If your date values do include times, you’d want to change the datediff interval to something more appropriate, like second or millisecond.

cross apply explained vs inner join and correlated-subquery
Forgive the graininess, it was from a Youtube screen-cap.  In an Oracle presentation, of all things. And I feel like they meant to say “Data exposition“, not “explosion”, given the latter can have negative connotation.  But.. hey, it’s better than a poorly drawn stick-figure or an inappropriate meme, right?  RIGHT?

And that’s how APPLY and ABS() saved my bacon today.  Hooray!

Have an interesting use-case for APPLY?  Or perhaps an alternative approach to a similar problem?  I’d love to hear from you!  Drop me a comment, a tweet, or a LinkedIn msg.  Happy summer!


PS: I’d also recommend checking out Steve Stedman’s useful “Join types poster“, if for no other reason than to have something concrete to use when explaining those concepts to your fellow cube-dwellers.  It has cross/outer apply on the 2nd page in their more commonly used pattern, the TVF (table valued function) relationship.

PPS: Fine, have your meme…

apply yourself, from breaking bad
I’ve never seen the show, so I don’t get it, but it was one of the first Google Image results for “apply yourself meme”. Enjoy that.

A SQL “Whodunnit” Trigger

Triggers aren’t bad, if used for the right reasons.. Here we look at an “audit-trail” use-case.

Inspired by a brief conversation in the #CodingBlocks community Slack: A short discussion and example of a “who-dunnit” (“who done it”, a colloquialism for a murder-mystery type thing) trigger, to find out what user is doing deletions against a certain table.

the cast of CSI LV
The original, or nothing.

The Background Check

Let’s name our hypothetical database CSI.  In it, we have a table, dbo.Victims, where it seems like data is being randomly deleted at random times.  As we all know, this is impossible — computers never do anything truly randomly, much less RDBMSes.

Insert witty counter-example here.  You know you have one.

So we want to find out who’s doing these deletions.  One DBA says, “Hey I got an idea… Let’s put an after delete trigger on the table!”  Another DBA says “I abhor triggers; let’s log sp_WhoIsActive every 5 seconds to try to catch the suspect ‘in-the-act’.”

Both approaches have their merits, and neither is that uncommon.  However, the latter is much more regularly blogged about, so I’m going to present the former, because it kinda helped remind me of a few things that I hadn’t used in a while.  I’d also argue that the latter is much less of a “guaranteed capture”, since you’re gambling pretty liberally on the fact that the delete transaction will even last that long; it’s statistically more likely that you’ll miss it.

The Setup

Here’s a SQL snippet that shows a basic after delete trigger created on our dbo.Victims table.  Notice the use of the special Deleted table reference — this is a “temporary, memory-resident” table according to the Docs, and it holds all the records that were/are-about-to-be deleted from the target table.

I feel like it used to be called a “temporal table”, but that now refers to a new feature in 2016, where SQL keeps a hidden history-tracking copy of your table that you can reference like a time-machine; which, incidentally, almost* negates the need for such things as these triggers we’re talking about, but that’s another topic for another time.

*(The ‘almost’ is because temporal tables don’t tell you “WHO”, which is our primary motivator here.)

The interesting bits are how we identify our suspect, our ‘killer’ if you will.  See, we not only want to know who they are in the database context, we also (and likely, more importantly) want to know who they are at the server level context.  And just in case they’re impersonating another login, we want to check that too.

So we actually have a lot of options here.  There’s CURRENT_USER or USER_NAME(), for the DB context user.  Then we have SUSER_SNAME(), SUSER_NAME(), SYSTEM_USER, and ORIGINAL_LOGIN() for the server context.  If you’re curious, you could also get things like @@SPID (server session id), SUSER_ID() (server login id), and SESSION_USER (database session user).

ORIGINAL_LOGIN() may be the most potentially interesting, especially if we want to write our trigger with elevated (impersonated) permissions to be able to write to the logging table that we’ve set up to capture its detective-work.  I did not need it for this example, but it’s worth keeping in mind.
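Putting those pieces together, a bare-bones version of such a trigger might look like this; the aud.Evidence table and its columns here are assumptions for illustration, and the real code lives in the GitHub repo mentioned below:

CREATE TRIGGER dbo.trg_Victims_AfterDelete
ON dbo.Victims
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO aud.Evidence (EventTime, DbUser, ServerLogin, OriginalLogin, SessionID, VictimID)
    SELECT  SYSDATETIME(),      -- when it happened
            CURRENT_USER,       -- database-context user
            SUSER_SNAME(),      -- server-level login (possibly an impersonated one)
            ORIGINAL_LOGIN(),   -- the login that actually made the connection
            @@SPID,             -- server session id
            d.VictimID          -- assumed key column on dbo.Victims
    FROM    Deleted AS d;       -- the special table holding the rows being deleted
END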

why dont you take a seat over there
We’ve been watching you…

The Sting

So we’ve got our evidence table, we’ve got our detective trigger, now we just need a suspect.  Thankfully we can test it out first, to make sure our operation will succeed when the real perp comes along.  We can do this, of course, by impersonation.  Or by using different SSMS query-windows with different logins — your choice.

Our faux-suspect’s login name is DummySuspect.  We map him to the db_datawriter and db_datareader roles in our CSI database — we know the real perp at least has write permission on the table dbo.Victims, otherwise he/she wouldn’t be able to delete those poor victim rows!  And we’re probably the db_owner, which is fine.  Let’s call our own login SergeantX.

Now we can pretend to be DummySuspect and execute a DELETE against CSI.dbo.Victims, and make sure it writes to our auditing table, which we called aud.Evidence.
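Something along these lines, in other words; the key column and value are made up, and the aud.Evidence columns match the assumptions in the trigger sketch above:

USE CSI;
GO
EXECUTE AS LOGIN = 'DummySuspect';              -- pretend to be the suspect
DELETE FROM dbo.Victims WHERE VictimID = 42;    -- key/value invented for the demo
REVERT;                                         -- back to being SergeantX

SELECT TOP (10) *                               -- did the trigger catch it?
FROM aud.Evidence
ORDER BY EventTime DESC;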

Yes, in practice, we’d probably want to put our Evidence table in a separate database, to really ensure those pesky Suspects can’t update it or delete from it, i.e. “cover their tracks” — here, I’ve settled for simply using a different schema, to keep the example workable.  Otherwise we’d have to deal with cross-DB permissions and such, which goes beyond the scope of one little blog post.

Ready?  Let’s try it!

The Proof

Go on over to the GitHub repo and check out the code.  There are 3 easy steps – ‘Step 1’, create the objects, including a new DB to house them called CSI.  You’ll see the trigger in there as well.  Then you can try ‘Step 2’, where I impersonate DummySuspect and delete a row from Victims, and then check the Evidence log when done.  And finally, ‘Step 3’ is a similar test, but assumes that you’ve actually connected that SSMS window/tab/query AS DummySuspect instead of impersonating him (or her!).  After you’ve done that, check out aud.Evidence again to make sure it logged the 2nd delete.

And there you have it.  A simple example of how to write and test an after delete trigger that writes the action info to a separate auditing table for investigation.

Hope you enjoyed!  Leave a comment here or on GitHub; I welcome all feedback.

One more thing…

Apparently I’ve been fork‘d!  Someone on GitHub liked my take on the Nested Set Model so much that they decided to pull it into their own library and have a play with it.  Yay!!  😀   Whoever you are, thank you and good luck building with it.  Enjoy!

someone forked my sql nested set model on github
Mr. Murray, thanks and have fun!