SQL Style Guide – a Rebuttal

Both the original article, and my responses, are guidelines, meant to help you start a conversation with your own team about how you can get better at writing code.


In what will surely be a controversial post, I give my take on some of the major points of this “SQL style guide” that made the rounds on Hacker News / Reddit a few years ago.  Hey, I never claimed to be a source of breaking news.

stamp of controversy
Feels so empty, without me…

Now remember kids, these are opinions — everyone’s entitled to theirs, and it doesn’t mean you’re right or wrong.  As the author repeatedly points out, consistency among a team / project / environment is more important than anything else.  Both the original article, and my responses, are guidelines, written by people with some experience (guided by others with more experience) to help you start a conversation with your own team about how you can get better at writing code.  Because that’s what we’re paid to do, among other things.

Basics

I agree with most of the points here:

  • Consistent descriptive names (I would add “concise” too — autocomplete/intellisense has come a long way, but we’re still typing stuff sometimes)
  • White space & indentation
  • Datetime literals – hail Saint Bertrand
  • Comments – you, dear reader, know my thoughts already
  • Some OOP principles should be avoided because they usually lead to poor performance
  • Hungarian notation is best left in the ’80s where it belongs

Something I don’t abide by:

  • Sticking to the ‘standard’ and avoiding vendor-specific functions/features

Some brief justification for my rejection:

Database code portability is largely a myth — it sounds great, in theory, but most teams/projects never actually do it.  If your data persistence layer is ever slated for migration to a new tech-stack, it’s going to require a massive overhaul anyway, in which those vendor-specific functions/code-bits will be the least of your worries.  More likely, you’ll be swapping-out pieces of the data layer in small chunks to move into other storage tech, like a NoSQL store or a DocumentDB or something; and eventually over time, the whole original data layer will have been moved, and the concern over SQL code portability will be moot.

Furthermore, database vendors give you these features/functions for a reason — they’ve found that it greatly enhances their product and the productivity of developers who work with it.  So why not take advantage?

Finally, if your application is ‘cutting-edge’ enough that ALL db access is done via ORM or some kind of repository layer in the code-base… guess what?  You don’t have this problem in the first place!  Because the database is a dumb state storage mechanism, containing little to no actual code whatsoever (stored procs, functions, etc.).  So, port away!

now that's what i call edgy
I can almost feel the edgy-ness…

Other basic issues:

  • CamelCase (actually TitleCase) is pretty standard in a lot of DB schemas, and I see nothing wrong with it.  Despite my love of underscores (snake_case), it does make for more awkward typing.
  • Plural or singular entity names should match the convention preferred by your overlaying ORM, if you’re at the point of DB design; most of the time, though, you’re working with a DB that you’ve inherited and you have no control over entity naming anyway, so stop whining about it and get on with life.
  • Leading vs. trailing commas: I prefer leading, but the arguments against it can sound convincing (not just his, but in general in the tech community) — my experience leans toward being more likely to futz with the middle-to-end of a list than the beginning (1st item), thus making the leading commas more likely to help, but that’s just me. Also, thanks to an awesome member of the Coding Blocks Slack, a point in my favor is that source code comparison (diff) tools will only show the one changed line instead of two, if you’ve had to add to the end of the column list.

Naming Conventions

Yes, please avoid reserved keywords, replace spaces with underscores (or use TitleCase to avoid having spaces), and use concise yet meaningful table aliases when you’re writing queries.  I still remember, when I first started working at my current company, literally gasping and cursing under my breath when I found that some databases actually had a space in the name.

Beyond that, the article goes a bit too deep in the weeds for me, especially the whole “known suffixes” thing — because isn’t that just Hungarian notation on the other end?  How about names that make it intuitive, such as IsActive for a bit flag, or LineNumber or RecordSequence for a sequential integer that’s not auto-generated (not an identity value), or @NumMonths as a parameter for a stored-proc that indicates how many months of reporting to fetch?  Common sense should rule the day, not arcane prefix/suffix conventions that will never be documented or enforced.

White Space

This whole notion of a “river” feels strange and awkward.  It’s made worse by the fact that some clauses’ keywords are “too large” for the “standard” river width (which is the width of the SELECT keyword, ish), such as group by and left join.  Plus, I’ve yet to see truly excellent tooling support for this kind of style (be it VSCode, Visual Studio, SSMS, SQL Prompt, or other styling tools / auto-formatters).  Given that I still largely write my code without continuous automatic re-style-on-the-fly styling assistance, I find this hard to digest.

Side-bar: big kudos to the author for pointing me at this typography article, which challenged my long-ingrained writing preference of double-spacing between sentences.  Even now, I do it while I write this post, knowing it’s wrong — I can’t help myself!

For similar reasons, JOINs and sub-queries don’t need to be “on the other side of the river” — since I’m actually saying “there is no river”, what I mean is, don’t indent those JOINs or sub-queries excessively.  In the FROM clause, the JOINed tables are just as important as the first one, so I don’t see the need to reduce their importance by putting them so far to the right.  And please for the love of all things holy, stop putting the JOIN predicates in-line (on the same line) after the joined table — put the ON conditions to their own line and indent it!
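To make that concrete, here’s roughly how I like to lay it out (table names are made up for illustration):

SELECT  o.OrderID
      , o.OrderDate
      , c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID   -- ON predicate on its own indented line
LEFT JOIN dbo.Shippers AS s
    ON s.ShipperID = o.ShipperID
WHERE o.OrderDate >= '2018-01-01';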

Sub-queries are a strange animal, and I won’t dive deep on it here.  Basically, I try to style them the same as I would normally, just indented to the context of their scope; and within Javascript-style parentheses — meaning, open-paren on the preceding line (or its own line), then the body, then close-paren on its own line to finish.
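For example (again with made-up tables):

SELECT  w.WidgetID
      , w.WidgetName
FROM dbo.Widgets AS w
WHERE w.WidgetID IN (
    SELECT s.WidgetID
    FROM dbo.Sales AS s
    WHERE s.SaleDate >= '2018-01-01'
)
ORDER BY w.WidgetName;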

Special note about the “Preferred formalities” section

BETWEEN is mostly evil.  I’m not saying you should never use it; just be very clear about why you’re using it, and only use it with discrete valued types (DATE, INT), NOT with continuous (or conceptually/nearly-continuous) value types (DATETIME, REAL/NUMERIC/DECIMAL).
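A quick illustration of the trap (dbo.Orders and OrderDateTime are made up):

-- BETWEEN is inclusive on both ends, so this also catches rows stamped exactly
-- at midnight on July 1st, which is probably not what you meant for a "June" report.
SELECT COUNT(*)
FROM dbo.Orders
WHERE OrderDateTime BETWEEN '2018-06-01' AND '2018-07-01';

-- Safer for continuous types: a half-open range with >= and <.
SELECT COUNT(*)
FROM dbo.Orders
WHERE OrderDateTime >= '2018-06-01'
  AND OrderDateTime <  '2018-07-01';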

The UNION operator is often misused, usually because UNION ALL is preferred (and is what you really meant anyway), but a blanket statement to “avoid it” misses the point of why it exists in the first place.  Likewise, temporary tables (#temptables) are wonderful tools when used properly and in moderation, but flagrant overuse can lead to what I call #tempocalypse (which means you’re hammering your TempDB so hard that its underlying storage system screams for mercy).

cry for help cheeto puffs
Like Cheeto Puffs, your TempDB has become bloated and full of cheeze.

Misnamed section “Create syntax”

What he really does here is expound upon table and database design principles.  This does not belong in a “Style Guide”; it probably belongs in a “Design Guide”, but because the relational database as a technology is so mature (yes, that means ‘old’) by this point, most of this feels completely unnecessary and redundant.  And again, you’re working with inherited designs over 90% of the time, where you don’t get to make these decisions, because they were made for you by your predecessors.  If you are so lucky as to be designing a relational model from scratch, look for advice from the tried-and-true architects of the trade.

I do echo and reiterate his advice to look at other data storage tech for things that are not ideally served by an RDBMS, such as EAV models, document storage, key-value storage, text search, etc.  There is no shortage of NoSQL tech to meet those needs, and they will do a much better job of it than you or I could in designing a hacked-up SQL model that somewhat does the job but falls over flat when faced with scaling up to a larger size or heavier workload.

In Conclusion

As I said at the beginning, these are opinions.  I applaud the author’s effort, and the fact that he actually got something together, made it open-source on GitHub, asked for feedback, and actually got traction with several communities discussing it.  That’s more than I can say for myself at this point!

I hope that this spurs some discussion on your database development team, and perhaps helps you work toward making a consistent style guide for your own environment.  Even if it’s just water-cooler talk, it’s always good to remember why we write code: so that others can read it.  (Sure, it’s also to make stuff happen, i.e. “make the computer do things”, but that’s a side-effect — your main goal should always be readability and clarity.)

Do you have some comments, thoughts, disagree? Leave me a comment! I’d love to hear from you.  🙂

needs-moar-drama
Drama-Cat is bored…

Quickie: Read & Search the SQL Error Log in AWS RDS

Go check out your AWS RDS SQL Error Logs!  See what tickles your curiosity.

Today’s post is brought to you by an unplanned AWS RDS outage, the desire to search its SQL Error Logs for events during the target time-frame, and the horrendously inefficient interface which AWS gives you in the GUI console for reading said logs.

it's log from blammo (ren & stimpy)
A coworker of mine used to sing this a lot. Now I know what it’s actually from.

Short Outline

  1. Even the dedicated ‘admin’ user that you create for your instance doesn’t have permission to read the error logs via the SSMS GUI or with xp_readerrorlog.  You can, however, use sp_readerrorlog.  That’s with an ‘S’.
  2. The parameters here are quite arcane, namely @1, @2, @3, @4.  Unfortunately, none of them allow you to filter on a time-span (those are the 5th and 6th parameters of the underlying xp, which we don’t have access to, as per #1).
  3. My solution involves a #temptable, a loop of insert / exec commands, and then whatever queries you desire to search/filter/analyze the results.  I also throw in a conversion to local time (from UTC, which is what the AWS servers use).

Details and The Why

You can check out the script; it’s short enough to embed, IMHO, so I’ll just leave it right here.
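If the embed doesn’t want to load for you, here’s a rough sketch of the idea (the time zone name and parameter values are just examples, not necessarily what the original script uses):

DECLARE @NumLogFiles int       = 5
      , @StartDate   datetime2 = '2018-07-20 02:00'
      , @EndDate     datetime2 = '2018-07-20 02:30'
      , @i           int       = 0;

IF OBJECT_ID('tempdb..#ErrorLog') IS NOT NULL DROP TABLE #ErrorLog;
CREATE TABLE #ErrorLog (
      LogDate     datetime
    , ProcessInfo nvarchar(100)
    , LogText     nvarchar(max)
);

-- sp_readerrorlog only reads one log file at a time, so loop (0 = current file):
WHILE @i < @NumLogFiles
BEGIN
    INSERT INTO #ErrorLog (LogDate, ProcessInfo, LogText)
    EXEC sys.sp_readerrorlog @i, 1;   -- 1 = SQL Server error log (2 = Agent log)
    SET @i += 1;
END;

-- RDS logs in UTC; chain AT TIME ZONE (SQL 2016+) to convert, then CONVERT to a
-- plain datetime2 for readability. Swap in your own time zone name.
SELECT  x.LogDateLocal
      , e.ProcessInfo
      , e.LogText
FROM #ErrorLog AS e
CROSS APPLY (
    SELECT CONVERT(datetime2(0),
           e.LogDate AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time')
) AS x (LogDateLocal)
WHERE x.LogDateLocal >= @StartDate
  AND x.LogDateLocal <  @EndDate
ORDER BY x.LogDateLocal;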

Lines 25-26 are particularly interesting to me, and only work with SQL 2016 and up.  I got the idea from this StackOverflow answer.  You can chain two AT TIME ZONE commands together to convert a given datetime value from one zone to another.  The reason you still need the CONVERT is because the output of the AT TIME ZONE command is always a datetimeoffset type, which, while quite useful in its own right, has its quirks, and doesn’t serve our purposes for ease-of-readability.

If you’re not running 2016, at least in RDS, you’ve got nearly no excuse.  The upgrade process is vastly simpler with RDS than with traditional on-prem servers.  Although, I did run into my share of snags with it recently, which I’ll blog about later.

You should plug in whatever values suit your need & environment — the @NumLogFiles and @StartDate & @EndDate.  I used 2-2:30am, because… well, that’s always when those damn outages seem to happen, ain’t it?

2am friday alert that load on aws rds server is critical
I’ve never actually had a pager, but I hear awful things. =P

As I mentioned, “the Why” is basically because AWS RDS limits your permissions (even as an admin) in some key ways, and one of those limitations prevents you from reading the error logs in the more “normal” ways — SSMS GUI, xp_readerrorlog, etc.  And the interface given to read the logs in the AWS console GUI is quite a sad-panda.  They offer a wrapper proc rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1, which really only serves the purpose of naming the parameters more nicely; under the hood it’s doing the exact same thing.

Of course we can’t prove that, because it’s encrypted, but the results are identical.

So there you have it.  Go check out your AWS RDS SQL Error Logs!  See what tickles your curiosity.  =)

T-SQL Tuesday #104: Code You’d Hate to Live Without

And that’s where we could use a little encouragement, I think — another DBA saying “Yay, it’s not just me!” makes it all worthwhile.

It’s that time of the month again!  Bert‘s fantastic invitation complete with YouTube vid is tempting indeed.  There are so many wonderful community scripts and tools for SQL Server DBAs.  But, in an interesting twist, he specifically asks us about something self-written or self-created.  And I’ll admit, I sometimes have trouble being exceptionally ‘proud’ of my home-brewed stuff.  Sure, I’ve blogged and GitHub‘d, and hopefully those have helped someone along the way.  Most of the time, though, those are specifically suited to a given use-case.

The more difficult code to share, I think, is the stuff that we keep in our “daily-grind” “Get-Stuff-Done” folders.  Scripts that we’ve tweaked and commented sporadically throughout the years, finding this edge-case or that tweak for a given scenario, most of which ends up being pretty environment-specific.  And that’s where we could use a little encouragement, I think — another DBA saying “Hey, it’s not just me!”, or “I knew there was another guy/gal out there who always has to do this kind of thing!”.  Thus, here I will attempt to show off something along those lines and see what you kind folks think.

Where Does it Hurt?

No, I don’t mean the famous wait-stats queries by Paul Randal (those are awesome).  I mean, what are the top few areas of your SQL environment where you’re always questioning something, or being asked to find some answer that’s not immediately obvious?  For me, there are 3.

Replication

Transactional replication is, in a word, ‘brittle’.  It works well when it’s working, and usually you don’t have to think about it.  But when you do need to check on it, or change anything about it (god forbid), or troubleshoot it (more like shoot it, amirite?), it feels a bit like trying to mess with a half-played Jenga stack.  Meaning, you might be just fine, but you might send the whole thing crashing down and have to rebuild it all.

peanut-brittle-from-the-joy-of-baking
As brittle as this stuff. But not nearly as delicious.

I won’t go into the whole troubleshooting aspect here, that’s too much scope.  But there’s a question that can often come up, especially from Devs or Biz-Analysts, and that is: “Hey, is TableX being replicated?”  And rather than subject my poor eyeballs to the replication properties GUI, I just run a little query, which reads from the distribution database and the actual database that’s being published (the ‘publisher’ db), and tells me a list of tables & columns that are being replicated.

Here it is.  Be sure to replace [dbName] with your actual published DB name.  Like StackOverflow or WideWorldImporters, or AdventureWorks <shudder>.
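In case the embedded script is being shy, here’s a rough publisher-side sketch of the same idea (the full version also reads from the distribution database, as mentioned above):

USE [dbName];   -- your published database

SELECT  pub.name                  AS PublicationName
      , art.name                  AS ArticleName
      , OBJECT_NAME(art.objid)    AS TableName
      , col.name                  AS ColumnName
FROM dbo.syspublications   AS pub
JOIN dbo.sysarticles       AS art ON art.pubid = pub.pubid
JOIN dbo.sysarticlecolumns AS ac  ON ac.artid  = art.artid
JOIN sys.columns           AS col ON col.object_id = art.objid
                                 AND col.column_id = ac.colid
ORDER BY TableName, col.column_id;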

Report Subscriptions (SSRS)

Another question I’m often asked is, “Hey, did ReportX run?”  What they really mean is, “Did ReportX‘s email subscription get sent to BigWigUserY?”  We have an unholy amount of SSRS reports with email subscriptions.  And because I don’t care to bloat my inbox by BCC’ing myself with every single one, I rely on the users to speak up when they don’t receive something they’re expecting.

“This is a terrible idea,” you say.  “Never trust the users!”

Yes, well, such is life.  If you have a better lazier solution I’m all ears.

So here’s a little script I wrote to query the ReportServer database (which MS will tell you is “officially unsupported”, snore).  If I know some keyword from the report title, or the supposed recipient’s email address, this will tell me if it ran successfully or not — LastRun, LastStatus.  A couple other useful bits: where it lives, ReportPath, and what its SQL Agent Job’s Name is, JobName.
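A rough approximation of it (the keyword and email values are placeholders, and since ReportServer is “unsupported”, column names can shift a bit between SSRS versions):

USE ReportServer;

DECLARE @ReportKeyword  nvarchar(100) = N'Sales'
      , @RecipientEmail nvarchar(200) = N'bigwig@company.com';

SELECT  c.Name                             AS ReportName
      , c.[Path]                           AS ReportPath
      , s.[Description]                    AS SubscriptionDescription  -- usually includes the recipients
      , s.LastStatus
      , s.LastRunTime                      AS LastRun
      , CAST(rs.ScheduleID AS varchar(36)) AS JobName   -- same GUID as the SQL Agent job's name
FROM dbo.Subscriptions  AS s
JOIN dbo.[Catalog]      AS c  ON c.ItemID = s.Report_OID
JOIN dbo.ReportSchedule AS rs ON rs.SubscriptionID = s.SubscriptionID
WHERE c.Name LIKE '%' + @ReportKeyword + '%'
   OR s.[Description] LIKE '%' + @RecipientEmail + '%'
ORDER BY s.LastRunTime DESC;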

That last bit is peculiar.  The JobName looks like a GUID (because it is, because SSRS just loves GUIDs), but it’s also the actual name of the agent job, which you can use to re-run said job — via exec msdb.dbo.sp_start_job — if the failure wasn’t systemic.  As I often do.

Disk Space

Last but not least, everybody’s favorite topic to try and forget about or pawn-off to the SysAdmins.  “How much space are those databases / data files / log files eating up?”  Well, mister suddenly-cares-about-disk-space-but-is-OK-with-storing-all-domain-users’-iTunes-music-libraries-on-the-central-fileshare-along-with-their-documents-because-that’s-what-we’ve-always-done.  {True story.}  Let me tell you.

keep calm and release the bitter
It’s healthy, I swear!

This script has a lot of comments and commented-out lines because I will tweak it depending on what I need to see.  Sometimes it’s “show me the DBs where the logical filename doesn’t follow my preferred pattern” (the DB name with ‘_data’ or ‘_log’ after it); or perhaps “Only show me files over 5GB with a lot of free space” when I’m itching to shrink something.
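A trimmed-down, per-database sketch of it (the thresholds in the commented-out filter are just examples):

-- Run in the context of the database you care about:
SELECT  DB_NAME()                                  AS DatabaseName
      , f.name                                     AS LogicalName
      , f.physical_name                            AS PhysicalName
      , f.type_desc                                AS FileType
      , CAST(f.size / 128.0 AS decimal(18,2))      AS SizeMB
      , CAST((f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0
             AS decimal(18,2))                     AS FreeMB
FROM sys.database_files AS f
-- e.g. "only show me files over 5GB with a lot of free space":
-- WHERE f.size / 128.0 > 5 * 1024
--   AND (f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0 > 1024
ORDER BY SizeMB DESC;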

“Never shrink your files!  (In production!)”

Yes, thank you, knee-jerk reactionary.  I’m dealing with servers in lower environments, usually, with this one.  😉

What do you think?

I’d love to hear your feedback in the comments!  Happy Tuesday!!  =)

Credit where credit is due.

I did not magically come up with these all by myself.  They’re pieced together from many a StackOverflow answer and/or other community blog post or contribution that I’ve since forgotten.  I usually store a link to the source in these kind of things, when it’s true copy-pasta, but in these cases, I added enough of my own tweaks & style that I no longer tracked the original linkage.  If you see anything that looks familiar, please do tell me where to give kudos and shout-outs!  😀

Quickie: Use of APPLY Operator

There are many great use-cases for the T-SQL APPLY operator, and if you haven’t tried it yet, I encourage you to check out Kevin Feasel’s excellent presentation on it here.  Today, I want to demonstrate a particularly interesting use-case (at least, to me!) that I found for it while tackling a real business problem — for reporting, specifically, which is what I spend a lot of my time dealing with lately.

The Setup

We have a “history of estimated market values” table, which stores, for some arbitrary dates based largely on user input, values (prices) of our proverbial Widgets.  In other words, we can’t guarantee we have every date in there for a given time period.  Over on side B, we have a “sales” table, which tells us when a given Widget was sold and for how much.  The business question, simply enough, is “When a given Widget was sold, how accurate was its ‘estimated market value’ at the time?”  Or, put another way, “How closely did the sale price of that Widget match our estimated market value?”

we don't make widgets
Yeah, yeah, we’re all tired of the Widget business. If only we could come up with a better term for a generic product to use in discussions of business problem samples.

The Tools

I used two interesting bits of TSQL to accomplish this, the main one being our lovely APPLY operator.  OUTER APPLY, in this case, because I still wanted the rows from “sales” even if we couldn’t find a closely-matching “market value” record.

If you haven’t used this operator yet, think of it conceptually like a JOIN to a sub-query, but with the added value that you can reference the outer tables’ columns within it.  CROSS APPLY is like INNER JOIN, and OUTER APPLY is like LEFT JOIN.

Sorry, RIGHT JOIN fans, no love for you here.  You’re just too weird.

My other little trick involved using TOP(1) in the apply’d sub-query, with a non-standard ORDER BY clause.  As I mentioned, within the APPLY‘s body, you can reference the outer table’s columns.  So I’m easily able to compare the SaleDate (outer table) with the EstimateDate (inner query).  I want “the closest EstimateDate to the SaleDate​”, which means I want the row where the difference between those two dates is the smallest.  Which means making use of our friend DATEDIFF.  So let’s try:

ORDER BY DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate)

Do you see the problem yet?  If I get a negative value, i.e. my SaleDate is way before my EstimateDate, say -100 days, that’s the top 1 row.  I don’t want that!  Let’s try our old friend from many many math classes ago, Mr. Absolute Value.

ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate))

That’s better!  Now I have the top 1 “closest to my sale date” row from MktValueHist.

All Together Now

I’ll put up a Gist with repro/demo code soon.  Here’s the meat of it, as a sample select, for convenience:

SELECT Sales.WidgetID, Sales.WidgetName, Sales.Price, Sales.SaleDate
    , mvh.MarketValue, mvh.EstimateDate
    , Accuracy = some_made_up_mathy_thing_here
FROM Sales
OUTER APPLY (
    SELECT TOP(1) mValHist.MarketValue, mValHist.EstimateDate
    FROM MarketValueHistory mValHist
    WHERE mValHist.WidgetID = Sales.WidgetID
    ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, mValHist.EstimateDate))
) mvh
WHERE Sales.SaleDate >= '20180101'

There, my completely fabricated yet totally-based-in-reality example of how to get 2018’s Widget Sale Prices with corresponding “closest to the sale-date” Market Value Estimate from our history table.  You could even throw in some fancy math expression for “accuracy”, if you felt like it.  Like maybe “relative difference“.  =)

Note: For simplicity, we’re dealing with “whole dates”, i.e. just date.  Not datetime or datetime2.  If your date values do include times, you’d want to change the datediff interval to something more appropriate, like second or millisecond.

cross apply explained vs inner join and correlated-subquery
Forgive the graininess, it was from a Youtube screen-cap.  In an Oracle presentation, of all things. And I feel like they meant to say “Data exposition“, not “explosion”, given the latter can have negative connotation.  But.. hey, it’s better than a poorly drawn stick-figure or an inappropriate meme, right?  RIGHT?

And that’s how APPLY and ABS() saved my bacon today.  Hooray!

Have an interesting use-case for APPLY?  Or perhaps an alternative approach to a similar problem?  I’d love to hear from you!  Drop me a comment, a tweet, or a LinkedIn msg.  Happy summer!


PS: I’d also recommend checking out Steve Stedman’s useful “Join types poster“, if for no other reason than to have something concrete to use when explaining those concepts to your fellow cube-dwellers.  It has cross/outer apply on the 2nd page in their more commonly used pattern, the TVF (table valued function) relationship.

PPS: Fine, have your meme…

apply yourself, from breaking bad
I’ve never seen the show, so I don’t get it, but it was one of the first Google Image results for “apply yourself meme”. Enjoy that.

A SQL “Whodunnit” Trigger

Triggers aren’t bad, if used for the right reasons.. Here we look at an “audit-trail” use-case.

Inspired by a brief conversation in the #CodingBlocks community Slack: A short discussion and example of a “who-dunnit” (“who done it”, a colloquialism for a murder-mystery type thing) trigger, to find out what user is doing deletions against a certain table.

the cast of CSI LV
The original, or nothing.

The Background Check

Let’s name our hypothetical database CSI.  In it, we have a table, dbo.Victims, where data seems to be getting deleted at random times.  As we all know, this is impossible — computers never do anything truly randomly, much less RDBMSes.

Insert witty counter-example here.  You know you have one.

So we want to find out who’s doing these deletions.  One DBA says, “Hey I got an idea… Let’s put an after delete trigger on the table!”  Another DBA says “I abhor triggers; let’s log sp_WhoIsActive every 5 seconds to try to catch the suspect ‘in-the-act’.”

Both approaches have their merits, and neither is that uncommon.  However, the latter is much more regularly blogged about, so I’m going to present the former, because it kinda helped remind me of a few things that I hadn’t used in a while.  I’d also argue that the latter is much less of a “guaranteed capture”, since you’re gambling pretty liberally on the fact that the delete transaction will even last that long; it’s statistically more likely that you’ll miss it.

The Setup

Here’s a SQL snippet that shows a basic after delete trigger created on our dbo.Victims table.  Notice the use of the special Deleted table reference — this is a “temporary, memory-resident” table according to the Docs, and it holds all the records that were/are-about-to-be deleted from the target table.
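In sketch form (aud.Evidence is the auditing table I’ll get to in a minute, and VictimID is a hypothetical key column), it looks something like this:

CREATE SCHEMA aud;
GO

CREATE TABLE aud.Evidence (
      EvidenceID    int IDENTITY(1,1) PRIMARY KEY
    , DeletedAtUtc  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    , DbUser        sysname   NOT NULL
    , ServerLogin   sysname   NOT NULL
    , OriginalLogin sysname   NOT NULL
    , VictimID      int       NOT NULL   -- hypothetical key column of dbo.Victims
);
GO

CREATE TRIGGER dbo.trg_Victims_AfterDelete
ON dbo.Victims
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO aud.Evidence (DbUser, ServerLogin, OriginalLogin, VictimID)
    SELECT  USER_NAME()        -- database context user
          , SUSER_SNAME()      -- server login
          , ORIGINAL_LOGIN()   -- in case someone is impersonating another login
          , d.VictimID
    FROM Deleted AS d;         -- the special Deleted table holds the removed rows
END;
GO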

I feel like it used to be called a “temporal table”, but that now refers to a new feature in 2016, where SQL keeps a hidden history-tracking copy of your table that you can reference like a time-machine; which, incidentally, almost* negates the need for such things as these triggers we’re talking about, but that’s another topic for another time.

*(The ‘almost’ is because temporal tables don’t tell you “WHO”, which is our primary motivator here.)

The interesting bits are how we identify our suspect, our ‘killer’ if you will.  See, we not only want to know who they are in the database context, we also (and likely, more importantly) want to know who they are at the server level context.  And just in case they’re impersonating another login, we want to check that too.

So we actually have a lot of options here.  There’s CURRENT_USER or USER_NAME(), for the DB context user.  Then we have SUSER_SNAME(), SUSER_NAME(), SYSTEM_USER, and ORIGINAL_LOGIN() for the server context.  If you’re curious, you could also get things like @@SPID (server session id), SUSER_ID() (server login id), and SESSION_USER (database session user).

ORIGINAL_LOGIN() may be the most potentially interesting, especially if we want to write our trigger with elevated (impersonated) permissions to be able to write to the logging table that we’ve set up to capture its detective-work.  I did not need it for this example, but it’s worth keeping in mind.

why dont you take a seat over there
We’ve been watching you…

The Sting

So we’ve got our evidence table, we’ve got our detective trigger, now we just need a suspect.  Thankfully we can test it out first, to make sure our operation will succeed when the real perp comes along.  We can do this, of course, by impersonation.  Or by using different SSMS query-windows with different logins — your choice.

Our faux-suspect’s login name is DummySuspect.  We map him to the db_datawriter and db_datareader roles in our CSI database — we know the real perp at least has write permission on the table dbo.Victims, otherwise he/she wouldn’t be able to delete those poor victim rows!  And we’re probably the db_owner, which is fine.  Let’s call our own login SergeantX.

Now we can pretend to be DummySuspect and execute a DELETE against CSI.dbo.Victims , and make sure it writes to our auditing table, which we called  aud.Evidence.
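Something like this, if you go the impersonation route (using the sketch objects from earlier):

USE CSI;
GO

EXECUTE AS LOGIN = 'DummySuspect';
    DELETE FROM dbo.Victims
    WHERE VictimID = 1;        -- again, VictimID is a hypothetical key column
REVERT;
GO

-- Did our detective catch it?
SELECT *
FROM aud.Evidence
ORDER BY DeletedAtUtc DESC;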

Yes, in practice, we’d probably want to put our Evidence table in a separate database, to really ensure those pesky Suspects can’t update it or delete from it, i.e. “cover their tracks” — here, I’ve settled for simply using a different schema, to keep the example workable.  Otherwise we’d have to deal with cross-DB permissions and such, which goes beyond the scope of one little blog post.

Ready?  Let’s try it!

The Proof

Go on over to the GitHub repo and check out the code.  There are 3 easy steps – ‘Step 1’, create the objects, including a new DB to house them called CSI.  You’ll see the trigger in there as well.  Then you can try ‘Step 2’, where I impersonate DummySuspect and delete a row from Victims, and then check the Evidence log when done.  And finally, ‘Step 3’ is a similar test, but assumes that you’ve actually connected that SSMS window/tab/query AS DummySuspect instead of impersonating him (or her!).  After you’ve done that, check out aud.Evidence again to make sure it logged the 2nd delete.

And there you have it.  A simple example of how to write and test an after delete trigger that writes the action info to a separate auditing table for investigation.

Hope you enjoyed!  Leave a comment here or on GitHub; I welcome all feedback.

One more thing…

Apparently I’ve been fork‘d!  Someone on GitHub liked my take on the Nested Set Model so much that they decided to pull it into their own library and have a play with it.  Yay!!  😀   Whoever you are, thank you and good luck building with it.  Enjoy!

someone forked my sql nested set model on github
Mr. Murray, thanks and have fun!

T-SQL Tuesday #99: Counting Rows the Less-Hard-Way

We can get our row count, and min & max date values, without ever touching the actual source table!

This month’s invite courtesy of Aaron Bertrand (B | T), whose “bad habits” blog series still inspires many an impassioned debate or engaging discussion on a regular basis among DBAs & Developers alike.

And yes, I’m taking the easier of the two “dealer’s choice” choices — the SQL focused one.  (I’m not big on sharing/blogging personal stuff, at least not here; I may one day start another blog for that, or perhaps just occasionally post more #off-topic stuff, but for now you’ll have to be content with my stories of vehicle troubles and the occasional movie-geekery).

So, without further ado…

By the way, what is ‘ado’ and why should there be no further of it?

art-vandelay-importer-exporter
Accidentally apropos on many levels…

Counting Rows in Really Big Tables

Previously touched on here, tables of unusual size (TOUSes) can be tricky.  You don’t want to lock them up for a long period of time, but you often need to gather information about them (such as row count, size, range of values) to perform some kind of operational maintenance with/on them.  In particular, Aaron’s post on “counting rows the hard way” inspired me to look into this a bit more and try to come up with a clever-ish way of finding out some basic “shape of data” info without actually querying (scanning) the table itself.

To start with, it’s actually really simple to get the total row-count from a few system catalog views — Aaron’s already shown you that, so I won’t repeat.  My interest is more in questions like “How many rows match a where-clause?” or “What are the min & max values for thatColumn?”

For this post, I’ll be focusing on a particular kind of table — the “history” or “transaction” table.  The idea here is that you have a record of “every time some event happens in/to some entity”.  A very common example is audit-trail tables, which I’ve been dealing a lot with lately.  Another common example is a “transaction history” table, such as, in our new favorite MSSQL demo database WideWorldImporters, the table Warehouse.StockItemTransactions.  It’s the 2nd largest table in the db at 260-some-thousand rows.  {The largest is a multi-million-row beast that is actually the system-versioned aka temporal table behind a “normal” table; I might build a phase-2 example around this, but not today.}  So, while our queries won’t be super slow, we’ll get enough of an idea of what’s bad & good from measuring our IO stats (with SET STATISTICS IO ON).

TL;DR: The demo script is available here; the headers below correspond to the comment-lines of the same name, but I’ve left enough commentary in the SQL itself to keep the average reader on-track, so feel free to check it out ahead of time.  But do keep reading at some point!  :o)

A. Gathering Some Intel

First up, we have good ol’ sys.sp_spaceused.  This gives you some sizing info about the table, including its row count.  There’s a “disclaimer” circulating out there that it’s not “up to the millisecond accurate”, i.e. it might not have the most current row count if someone else is in the middle of an insert operation or whatnot.  But for all intents & purposes, you can consider it truth.
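For example, against our demo table:

-- Returns: name, rows, reserved, data, index_size, unused
EXEC sys.sp_spaceused N'Warehouse.StockItemTransactions';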

Then you have the “hard ways” that people typically use — and that Aaron, again, covered just fine in his post on the subject, so I won’t spend any more time there.

But what if I want to count rows matching a where clause (a predicate)?  And in dealing with a typical history/transaction table, the predicate is almost always “between such and such dates”.  (Of course we won’t literally use the between operator, as we have been chastised severely; we know best to use >= and < !)  Also, I want to know the MIN and MAX of said dates in the table.  Lord knows we’re gonna be doing some table-scanning.
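Concretely, the naive version of what I’m after looks like this (the date column in our demo table is TransactionOccurredWhen, more on that name in a second; the date range is just an example):

SET STATISTICS IO ON;

-- "How many rows are in this date range?"
SELECT COUNT(*) AS RowsInRange
FROM Warehouse.StockItemTransactions
WHERE TransactionOccurredWhen >= '2015-01-01'
  AND TransactionOccurredWhen <  '2016-01-01';

-- "What's the full range of dates in the table?"
SELECT  MIN(TransactionOccurredWhen) AS MinDate
      , MAX(TransactionOccurredWhen) AS MaxDate
FROM Warehouse.StockItemTransactions;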

B. Ok, Let’s Try an Index

In their benevolent wisdom, the SQL deities decided not to give us an index on WideWorldImporters.Warehouse.StockItemTransactions.TransactionOccurredWhen.

BTW, how’s that for a verbose column name?  What, TransactionDate not good enough?  I suppose it is datetime2 after all, but still…

So we create an index on it, to see if that helps our poor “count where dates” query.  And behold, it does!  We’ve cut our # of logical reads down by about 90% (from 1900 to 200, if you’re following along in the script).  That’s fantastic, but… we can do better.  Because if the table is, say, 500 million rows instead of 260k, that’s about 400,000 logical reads, which.. could definitely suck.
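For reference, the index in question is just a plain nonclustered one on that column (the name here is my own choice; the script’s may differ):

CREATE NONCLUSTERED INDEX IX_StockItemTransactions_TransactionOccurredWhen
    ON Warehouse.StockItemTransactions (TransactionOccurredWhen);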

C. The Better Way

Again, the script has an ode to Aaron’s query on sys.partitions/tables to get the row-count from the meta-data.  Then the real fun begins.

There’s a system DMV (well, technically a ‘DMF‘ – dynamic management function) called sys.dm_db_stats_histogram, which takes the table’s object_id and a stats_id as arguments (for an index’s statistics, the stats_id is the same as the index_id).  It gives you, obviously enough, the statistics histogram of the statistics object corresponding to that index.  We want to store its output in a temp-table (or even a real table — go nuts!) so we can query it some more.

-- For example, if our new index is index_id 7:
SELECT * FROM sys.dm_db_stats_histogram(OBJECT_ID('Warehouse.StockItemTransactions'), 7);

So we create our #StatsHist table (“hist” being an abbreviation for “histogram”, not “history”, though in retrospect that’s probably not worth the possible confusion), and we populate it with the meta-data from Warehouse.StockItemTransactions and its new index that we just created (on TransactionOccurredWhen).  Poof!  We have an easy way of showing min/max values in that column!  Well… almost.  We have to convert the variant datatype to an understandable & aggregate-able (probably a made-up word.. aggregable? aggregatable?) type.  So we add a column range_hk_proper of type datetime2 and populate it with the converted values of range_high_key from the stats-output.
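Boiled down, the flow looks something like this (index_id 7 is illustrative; the real #StatsHist also stores schema & table names, and since the histogram comes from a statistics sample, treat the counts as close estimates rather than gospel):

IF OBJECT_ID('tempdb..#StatsHist') IS NOT NULL DROP TABLE #StatsHist;

SELECT  sh.object_id
      , sh.stats_id
      , sh.step_number
      , sh.range_high_key
      , sh.range_rows
      , sh.equal_rows
      , CONVERT(datetime2, sh.range_high_key) AS range_hk_proper
INTO #StatsHist
FROM sys.dm_db_stats_histogram(OBJECT_ID('Warehouse.StockItemTransactions'), 7) AS sh;

-- Min / max / row-count, without touching the source table:
SELECT  MIN(range_hk_proper)          AS MinDate
      , MAX(range_hk_proper)          AS MaxDate
      , SUM(range_rows + equal_rows)  AS ApproxRowCount
FROM #StatsHist;

-- And the "count where date-range" flavor:
SELECT SUM(range_rows + equal_rows) AS ApproxRowsInRange
FROM #StatsHist
WHERE range_hk_proper >= '2015-01-01'
  AND range_hk_proper <  '2016-01-01';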

There!  Now we’re cookin’ with gas.  Our min/max/count query, and our “count where date-range” query, run in mere milliseconds, without ever touching the actual source table.  So we don’t lock it up or block anybody else from writing to it, even in the most pessimistic isolation levels.

Except when you created that index we needed on the date column.

Yes, I know.  What we’re hoping is that the tables we deal with in the “real world” already have such an index that we can take advantage of.  If not, well, that’s what maintenance windows are for.  And you better believe you’re gonna need that index sooner or later.

cooking-with-mustard-gas
Been a while since I used a Family Guy meme….

Where To Next?

Ostensibly, this whole thing could probably be turned into a stored-proc so you could run it “on demand” for any table that had a date or datetime column which you wanted to get such information about.  It’d have to do a lot of error-checking, of course — it wouldn’t work if you don’t have such a column, and if there’s no index on it, and probably a myriad of other ‘gotchas’ that I’m not thinking of at the moment.  But I did try to lay the groundwork for improvement. #StatsHist stores schema & table name too, so if you felt like turning it into a mini-data-warehouse holding a BUNCH of stat-histograms for a whole mess of tables, you could definitely do that.  And then you could run some basic analytics on it — min/max/avg, counts by year/month/day, etc.

Sounds like fun, no?  ;o)

T-SQL Tuesday #98: Orphaned Users Redux

It’s that time again!  The first #Tsql2sday of 2018.  Thanks to the Blobeater for this month’s invite: “your technical challenges conquered”.

Because I’m already ridiculously late, I have a short one.  This is about orphaned users — you know, when you restore a database and its users aren’t mapped to the server logins that they should be or used to be.

orphan-movie-poster
Not that kind of orphan… pretty decent movie tho!

The typical solution is sp_change_users_login with the auto_fix or update_one option.  But guess what?  Yep, that’s deprecated.  By the way, did you know that it also has a report option?  Apparently that’s got some bugs…ish?  Try it sometime and see — compare the output of sys.sp_helpuser where the ‘LoginName’ column is null, with sp_change_users_login 'report'.  Preferably on a DB you’ve restored from another server.  😉

So what’s the correct solution?  ALTER USER [theUser] WITH LOGIN = [theLogin].  Simple, no?  Let’s get more general.  Could we come up with a half-decent way to apply this kind of fix dynamically?  Well sure, the nice folks at DBATools have already solved that problem.  And that’s great, really.  But just in case that doesn’t work… ^_^

One of the many things I love about SQL Prompt is the right-click option to “Script as INSERT” (from the results grid).  This is a quick & easy way to build a temp-table for the results of an exec statement so you can do the ol’ insert #tmp exec sys.sp_blah!  Then we can query the list of DB users for the null LoginNames and write a little set of queries to fix them!  Sound good?
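In sketch form, that plan looks something like this (sp_helpuser’s result columns can vary a bit by version, so double-check them before trusting the insert-exec):

IF OBJECT_ID('tempdb..#DbUsers') IS NOT NULL DROP TABLE #DbUsers;
CREATE TABLE #DbUsers (
      UserName      sysname
    , RoleName      sysname       NULL
    , LoginName     sysname       NULL
    , DefDBName     sysname       NULL
    , DefSchemaName sysname       NULL
    , UserID        int           NULL
    , [SID]         varbinary(85) NULL
);

INSERT INTO #DbUsers
EXEC sys.sp_helpuser;

-- Generate the fix statements for the orphans (assumes a same-named login exists):
SELECT 'ALTER USER ' + QUOTENAME(UserName)
     + ' WITH LOGIN = ' + QUOTENAME(UserName) + ';' AS FixStatement
FROM #DbUsers
WHERE LoginName IS NULL
  AND UserName NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys');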

UPDATE: Behold the code!

Happy Tuesday!

PS: Coincidentally, today’s (Thursday) SQL Server Central newsletter featured a very similar post by a gentleman over at Madeira Data.  Go check it out, it’s another great solution to this problem!  And while you’re at it, get the SQL Server Radio podcast (created by a couple guys from the same company) – it’s a terrific addition to your iTunes library.