Dates, Date-pickers, and the Devil

When a date range, or time period, is specified in SQL, it’s easiest, clearest, and most concise to use “greater-than-or-equal-to Period-Start, and less-than Next-Period-Start” logic. Mathematically speaking, we are defining the range as closed on the left, open on the right.

This is a bit rant-y, but… indulge me.  I’ve been writing and refactoring a lot of old reporting queries.  And, like most reports, they deal with dates and datetimes — as parameters, boundaries, or where/join predicates.  I also got way too intense with a recent SSC (SQLServerCentral) post, which fueled the fire even more.

fluffy-angry-puppy
I’m so cute and ANGRY!

SQL Server is very good at handling temporal datatypes and calculations against them.  We’ve got functions like dateadd, datediff, and datepart; datatypes like datetime, datetime2, and datetimeoffset; and so on.  It supports all sorts of format conversions if you need to display them in various ways.

..even though that should be left to the presentation layer!

Here’s the issue.  Well, there are several issues, but we only have time for a few.

Here’s the first problem

Report users don’t understand the “end of a time period” problem.  I don’t have a good name for it; others might call it the “day plus one” problem or the “less than date” problem.  What do I mean by this?  Well, let’s back up a bit, to DBA Commandment #6, “Thou shalt not use between with datetimes.”  To understand the issue, we first have to understand why this is a commandment.

When a date range, or time period, is specified in SQL, it’s easiest, clearest, and most concise to specify it like so: @TheDate >= @StartOfPeriod and @TheDate < @StartOfNextPeriod.  Mathematically speaking, we’re defining the range as “closed on the left, open on the right”.  In other words, Min <= X < Max.

The reason we do this with datetimes is found right there in the name of the datatype — it has (or can have) a time component!

stone-tablets-with-roman-numerals-to-10
There are probably more than 10, but it’s a good starting point…

Let’s talk examples

Say you’d like to report on the month of March 2017.  How do you determine if your data-points (stored as datetime or, hopefully, datetime2) are within that period, that month?  Well sure, you could write where month(MyDateColumn) = 3 and year(MyDateColumn) = 2017

NO.  That is horrible, don’t do that.

It’s not SARGable and renders your index on that column useless.  (You do have an index on it, don’t you? No? Make one!)  Okay, let’s stick with something SARGable.  How about MyDateColumn between '20170301' and '2017-03-31T23:59:59.999'?  (You did read this post about using culture-neutral datetime literals, right?)  But wait!  If your data is a datetime, it’s not actually that precise — your .999 literal gets rounded up to '20170401' and you’re now including rows from April 1st (at midnight)!
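Don’t take my word for it; here’s a quick way to watch the rounding happen (plain T-SQL, safe to run anywhere):

--datetime is only precise to ~1/300th of a second (ticks end in .000, .003, .007),
--so a .999 literal silently rounds UP to midnight of the NEXT day:
SELECT CAST('2017-03-31T23:59:59.999' AS datetime);  --returns 2017-04-01 00:00:00.000
SELECT CAST('2017-03-31T23:59:59.997' AS datetime);  --returns 2017-03-31 23:59:59.997, unchanged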

Oh that’ll never happen… until it does.

Second problem

Many developers and report-writers assume that the values in their data will never fall within the typical “1 second before midnight” or “1/300th of a second before midnight” escape window of your “3/31/2017 23:59:59.997” bounding value.  But can you guarantee that?  Didn’t think so.  Worse, if you use the .999 fraction as given in the 2nd example, you’re either “more” or “less” correct, and nobody can actually tell you which way that pendulum swings, because it depends on how likely your data is to contain literal “midnight” values versus realistic (millisecond-y, aka “continuous”) values.  Sure, if you’re storing just a date, these things become a lot less complicated and more predictable.

But then why aren’t you storing it as an actual date, not a datetime!?

So what’s the right answer?

As I said: “greater than or equal to ‘Start’, and less than ‘End’”, where ‘End’ is the day after the end of the period, at midnight (no later!).  Hence, MyDateColumn >= '20170301' and MyDateColumn < '20170401'.  Simple, yes?
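Here’s a fuller sketch of the March 2017 example.  The table and column names are placeholders, and the boundaries are computed instead of hard-coded, so the same pattern works for any month:

DECLARE @Start datetime2 = '20170301';
DECLARE @End   datetime2 = DATEADD(MONTH, 1, @Start);  --start of the NEXT period, i.e. April 1st

SELECT COUNT(*)
FROM dbo.MyTable                  --hypothetical table
WHERE MyDateColumn >= @Start
  AND MyDateColumn <  @End;       --closed on the left, open on the right

And it behaves identically whether MyDateColumn is a date, datetime, or datetime2, which is exactly the point.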

keep calm and keep it simple
KCKS

But wait, there’s more!

I mentioned “date-pickers” in the title.  When it comes to UX, date-pickers are a sore subject, and rightly so — it’s difficult to truly “get it right”.  On a “desktop-ish” device (i.e. something with a keyboard), it may be easiest on the user to give them a simple text-box which can handle various formats and interpret them intelligently — this is what SSRS does.  But on mobile devices, you often see those “spinner” controls, which is a pain in the arse when you have to select, say, your birth date and the “Year” spinner starts at 2017.  #StopIt

I mean, I’m not that old, but spinning thru a few decades is still slower than just typing 4 digits on my keyboard — especially if your input-box is smart enough to flip my keyboard into “numeric only” mode.

Another seemingly popular date-picker UX is the “calendar control”.  Oh gawd.  It’s horrible!  Clicking thru pages and pages of months to find and click (tap?) on an itty-bitty day box, only to realize “Oh crap, that was the wrong year… ok let me go back.. click, click, tap..” ad nauseam.

stop-it-sign
#StopIt again

The point here is, use the type of date-picker that’s right for the context.  If it’s meant to be a date within a few days/weeks of today, past/future — OK, spinner or calendar is probably fine.  If it’s a birth date or something that could reasonably be several years in the past or future, just give me a damn box.  (Heck, I’ll take a series of 3 boxes, M/D/Y or Y/M/D, as long as they’re labeled and don’t break when I omit the leading-zero from a single-digit month #!)  If there’s extra pre-validation logic that “blocks out” certain dates (think bill-payer calendars or Disneyland annual-pass blackout-days), that probably needs to be a calendar too.

..just make sure it’s responsive on a mobile device.

And in all cases, pass that “ending date” to your SQL queries in a consistent, logical, sensible manner.  For reporting, where the smallest increment of a period is 1 day, that probably means automagically “adding 1 day” to their given end-date, because the end-user tends to think in those terms.  I.e. if I say “show me my bank activity from 1/1/2017 to 1/31/2017”, I really mean “through the end of the month“, i.e. the end of the day of 1/31.  So your query is going to end up wanting the end-date parameter to be 2/1/2017, because it’s using the correct & consistent “greater than or equal to start, and less than start-of-next” logic.
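In code, that translation is a one-liner at the top of your query or proc.  A sketch, with made-up parameter and table names:

DECLARE @EndDatePicked date = '20170131';   --what the user chose: “through the end of 1/31”
DECLARE @EndExclusive datetime2 = DATEADD(DAY, 1, CAST(@EndDatePicked AS datetime2));

SELECT *
FROM dbo.BankActivity             --hypothetical
WHERE ActivityDate >= '20170101'
  AND ActivityDate <  @EndExclusive;  --catches every moment of 1/31, and nothing after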

context-consistency-clarity
The 3 C’s

Final thoughts

I know it’s not easy to explain to business folks, and it’s not easy to implement correctly.  But it’s important.  The >= & < logic is clear, concise, and can be used consistently regardless of underlying datatype.  You just need to adjust your presentation layer (whether that’s SSRS parameters or a .NET date-picker) to convey the intent to the user: either “show/enter the last day of the month, but translate to the next day to feed the query/proc”, or “make them enter the next day (the day after the end of the month/period) and understand the ‘less than’ logic”.  I’m more inclined toward the first, but it depends on your audience.

Thanks for reading, and happy date-ing!

Reverse Engineering SSAS Reports

MDX is not SQL. It may look like it has SELECT/FROM/WHERE clauses, but god help you if you start drawing parallels to your standard TSQL query.

This is an exercise I had to go through recently, because A) the reports in question were deployed in SSRS but used an SSAS backing, i.e. cubes, and the source queries (MDX) were not stored in source-control, and B) I don’t write MDX queries.

a basic MDX query example with labeled parts
shamelessly borrowed from an excellent article

The outline:

  1. Run Profiler or XEvents against the SSAS server
    1. set to capture “Query Begin” events only, with “Event Subtype = 0” (for MDX query)
    2. optionally, set filter on NTUserName to the dedicated SSRS account (if you have it set up that way)
  2. Run the SSRS report(s) that you want to dive into
  3. For each event in the Trace, copy-paste the MDX query to a new MDX editor window
  4. SSRS parameter substitution happens via some XML at the bottom; but in MDX, the parameters are standard @params like in T-SQL.  So we need to manually substitute our parameter values.
    1. 2 blocks of XML: the “Parameters”, and the “PropertyList” — delete the latter.
    2. In the former, text-replace &amp; with simply &.
    3. Side-bar: You’ll notice that the MDX parameters are usually wrapped in STRTOMEMBER() or STRTOSET(), which are built-in MDX functions that do exactly what they sound like — parse a string into a dimension’s attribute’s member, or set of members.  That’s why they’ll usually contain at least 3 dots — Dimension.Attribute.&MemberValue, for example.  I’m grossly oversimplifying that because it’s beyond the scope of this post, but read the docs if you need more gritty details.
  5. For each parameter node:
    1. Copy/cut the <Value> node content (I like to ‘cut’ because it helps me keep track of which ones I’ve done already)
    2. Find-and-Replace @ParameterName with that Value node’s content, surrounded in single-quotes
    3. Example: we have parameter ​@ReportDate (in MDX), corresponding to <Parameter><Name>ReportDate</Name> in XML, with <Value xsi:type="xsd:string">[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]</Value> — where that last bit is a standard SQL datetime literal.
    4. So you replace @ReportDate with '[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]'.  (See the before/after sketch just below this outline.)
  6. Delete the XML block.
  7. Boom, now you have a valid MDX query that you can run and view results.
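To make steps 4 and 5 concrete, here’s a hypothetical before-and-after.  The cube, measure, and dimension names are invented, but the shape mimics what SSRS typically generates:

--before: as captured from the trace, still parameterized
SELECT [Measures].[Sales Amount] ON COLUMNS
FROM [SalesCube]
WHERE STRTOMEMBER(@ReportDate, CONSTRAINED)

--after: @ReportDate manually replaced with the quoted <Value> content
SELECT [Measures].[Sales Amount] ON COLUMNS
FROM [SalesCube]
WHERE STRTOMEMBER('[Some Dimension].[Some Attribute].[Some sub-attribute].&[2017-05-01T00:00:00]', CONSTRAINED)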

Why do this?  Well, it can help you learn MDX from a working example, instead of from super-basic dummy examples.  That’s not always a good learning style — you should still learn the fundamentals of MDX and why it’s so very different from SQL.  Especially if you’ll be responsible for writing and maintaining more than a few MDX queries.  But, in a pinch, if you need to start somewhere, and possibly all that the MDX / overlaying report needs is a slight tweak, this may be enough to get you going.

Lessons learned:

  • Profiler can still be a useful tool, despite some people’s attempts to kill it.
  • MDX is not SQL.  It may look like it has select, from, and where clauses, but god help you if you start drawing parallels to your standard TSQL query.
  • SSRS does parameter-passing in an odd way.
  • SSAS & MDX are fascinating and I need to learn more about them!
the-more-you-know
Canadians, eh?

Quickie: SQL DB Role Members

A typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for…

Just a brief post on adding/removing users (database-level users) to/from roles (database-level roles).  It’s relevant because several shops are still stuck supporting at least a few 2008 (or hopefully, 2008R2) instances, and there is a key difference between those and newer (2012 & up) versions in the “preferred” method of doing this security task.

security seal for ur protection
He’s wearing a police hat, he must know what he’s doing…
There are reams of documentation and books and articles written about SQL security in general.  That is beyond the scope of this post (and indeed, beyond the scope of any single blog, unless you’re an SME on the subject!).  But a typical part of a DBA’s work-week might involve the occasional DB user-role-membership management, so I hope this helps the lone-wolf DBAs out there and/or the developers who need to know what to ask for, when they’re planning/deploying a new app against their SQL DB(s).

The “old” method involves calling system stored-procedures, sp_addrolemember and sp_droprolemember, in which you pass the role-name and username.  The “new” method, supported starting with SQL 2012, is to use the command-phrases ALTER ROLE [role] ADD MEMBER [user], and ALTER ROLE [role] DROP MEMBER [user].

The latter is more ‘standard’, while the former is more ‘Microsoft-y’.  I couldn’t easily find whether ALTER ROLE is part of the official ANSI standard or not… that’s an exercise for the reader.  What I find very interesting is that Azure’s data-warehouse offerings require the old method.  Of course, hopefully in a DW setting you’re not messing with security nearly as much as in a typical OLTP system, but… yeah.

Does that mean those Azure services are built on top of older SQL engine versions?  Possibly.  MSFT isn’t too open about the deep internals of such tech, but neither is any other cloud vendor, so we can’t really ask them such a question and expect anything more than a blank-stare.  But it is curious, no?

fry not sure if curious or suspicious
Exactly.
Syntax examples:  Let’s add the user foo to the database Bard, in the db_datareader built-in role.  Then we’ll remove him.  (Or her, I guess; “foo” is a pretty gender-neutral name.)  Creating said user is easy, so I’ll start with that; it’s the same in all supported versions.  You need a server-level login to link the user to; if you don’t have one, I’ll show you how to create that first.

Create server-level login:

--preferably, you create a login for an existing AD/Windows account:
CREATE LOGIN [yourdomain\foo] FROM WINDOWS;
--or, you can just create a SQL login (not connected to domain/Windows/ActiveDirectory; also less secure, as discussed here and here)
CREATE LOGIN [foo] WITH PASSWORD = 'foobar';  --must satisfy the Windows password policy, unless you also specify CHECK_POLICY = OFF

Create database-level user:

USE Bard;
--if you made the domain/Windows login:
CREATE USER [foo] FOR LOGIN [yourdomain\foo];
--or, if you just made the SQL login:
CREATE USER [foo] FOR LOGIN [foo];

Now the role-membership.

Old way:

  1. Add user to role:
    • exec Bard.sys.sp_addrolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';
  2. Check that it worked:
    • exec Bard.sys.sp_helprolemember
          @rolename = 'db_datareader'
    • It will show a row like: DbRole = db_datareader, MemberName = foo (plus the member’s SID)
  3. Remove user from role:
    • exec Bard.sys.sp_droprolemember
          @rolename = 'db_datareader'
          , @membername = 'foo';

New way (step 2, the “check”, is the same):

  1. Add user to role:
    • USE Bard;
      ALTER ROLE db_datareader ADD MEMBER [foo];
  2. Check (see above)
  3. Remove user from role:
    • USE Bard;
      ALTER ROLE db_datareader DROP MEMBER [foo];

Yay.

Notice that, because the “old way” simply executes system sp’s, we can actually run it from any database context, whereas the “new way” requires you to connect to the database in question.
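On a related note: if you’d rather not eyeball sp_helprolemember’s output, the same “check” can be done against the catalog views (2005 & up).  Just an alternative, not a third “way”:

USE Bard;
SELECT r.name AS RoleName, m.name AS MemberName
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r ON r.principal_id = drm.role_principal_id
JOIN sys.database_principals AS m ON m.principal_id = drm.member_principal_id
WHERE r.name = 'db_datareader';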

Note: I am in no way shape or form responsible for you screwing up your database or SQL instance, nor for you getting yelled at by your DBA or security admin or any other form of verbal assault you may incur as a result of running these commands.  But since you need server-admin & database-owner equivalent permissions anyway, you’re probably one of those people already, so you’ll just end up yelling at yourself.

no guarantees
No substitutions, exchanges, or refunds.
Cleanup (just so you don’t muddy your instance/DB up with a silly example user):

USE Bard;
DROP USER [foo];
USE master;
DROP LOGIN [foo];

If you have any questions, feel free to reach out to me!

How to Total Cars for Fun and Profit

Insurance is a wonderful thing…

This is a story.  And it’s longer than my usual post, so get comfy.  I may be stretching my own rules, but I swear, I’ll tie it back to databases… somehow! Let’s get started.

Back in December 2016, I was in an accident in my 2011 Mazda 3. Ironically, I was driving home from filling up with gas, plus I’d just had some maintenance done the month before. These things are ironic because the car was a total loss. “Totaled”, in layman’s terms. It means the damage was such that the insurance company would rather pay off the market value of the car, than pay for the repairs. Or, put another way, it means that the cost of repairs would be within nominal range of the vehicle’s value. Short version, I’m not getting the car back. Oh, and that gas fill-up and mechanic bill? Money down the drain.

dumping-money-down-the-toilet
I haz a bucket…

FYI, I was just fine – so the car and its safety-system did its job, protecting me from harm. Safety is an important part of choosing a car, kids… remember that.

safety-first

Anyway, the car gets towed off to a local yard, I have a day or so to collect my possessions from it, and then they tow it somewhere else to the “salvage yard”, where it becomes somebody else’s property and problem. Insurance sends me the check for the value of the car. I was actually worried that it wouldn’t be enough to cover the loan balance, because I was still making payments AND I’d refinanced midway thru the original loan term, lowering the interest rate and payment but also extending the term. But, thankfully, the car’s value was above the loan’s balance, so I was able to fully pay it off and still pocket some cash.

Unfortunately, I didn’t have “loss of use coverage” on the policy – meaning, no free rental car. So I pay for it, for a little while. Fortunately, around the same time, my parents were getting ready to dump their old 2000 Honda Accord for a newer one, so we started talking and they offered it to us as a gift. (Did I ever mention how awesome my parents are? They are!) Excellent.

Actually, this is the same car that I used to drive around in high school, so it’s got some memories.

Including, of all things, my very first accident! You’ll see why this is ironic in a few paragraphs.

irony-alert
Someone call Ms. Morissette…

Now, this car is what we call a “beater”. It’s 17 years old, it’s been through the wringer, it’s got 190k miles on it; but hey, it’s a freakin’ Honda. It’ll last another 50k at least, if maintained properly. And it has been – faithful oil changes, scheduled maintenance and beyond. But it’s not the safest vehicle on the road; the e-brake light comes on sporadically, and the airbag warning light is always on, so we don’t actually know if the airbags (particularly the passenger side) will work. So we need to start shopping for a new vehicle, at least for the wife.

I won’t go into car-shopping here; it’s a pain in the arse unless you use courtesy buying services like those offered thru your credit-union or some kind of “club membership” or whatever. Long story short, we got a 2017 Hyundai Elantra, in black; not brand-new, but used with only 3k miles on it (apparently the previous owners had buyer’s remorse).

black-hyundai-elantra
This is “Tigress”, or “Tigz” for short.

I will digress just for a second about black cars. They look pretty slick, even though they do show dirt a bit more than gray/silver (which is what the Mazda was). But you know what makes them look super-duper slick? Those “legacy” CA gold-on-black license plates. I convinced myself that I had to get those. Until I went to the DMV and found out that they’re $40 initially plus an extra $50/year on your registration fees. Jesus H… I get that they need to make money, but that’s ridiculous. Srsly. Ding the people that want those silly vanity plates, because I understand it adds a lot of processing/tracking overhead and makes the data (see? I told you I’d tie it back!) more complex. But this is the same old metal made by the same old prison inmates with a different coat of paint. Don’t pretend it actually costs anything extra for you to make them and pass them out. Anyway. Back to the story.

ca-plates-regular-vs-legacy
because it’s SOOO much harder to make black metal than white metal…

So we get the car, the wife’s driving it home. She’s heat-sensitive, and it’s been a pretty long, warm day. She ends up passing out for a second and veering off the road to the right shoulder, which is a small dirt embankment into some bushes and trees. Fortunately enough, she realizes what is happening and she’s able to bring the car to a gentle stop without actually hitting anything. So the car’s only real damage is some scrapes & bruises on the front end, a bit of scratching on the sides, and some dings to the under-carriage-panel. Now, me being the savvy consumer that I am, I’ve already added it to our insurance policy and added rental coverage. The new car goes into the shop before we’ve even had it a day, but we get a free rental while it’s there. (Ford Fusion – I like it alright, but the wife hates it; she has a bit of an anti-Ford bias.) Insurance covers about 1.5k damage for the bodywork, it gets done, we get it back in less than 2 weeks. Yay.

Alright, here’s where it gets fun… er, interesting.

oh-it-gets-better
I don’t know why, but this is one of my favorite lines of his from this movie.

That was all back in December/January.

March rolls around, and I’m driving the Honda home from work. There’s a sudden pile-up of stopping cars in my lane and I can’t stop in time, so I run into the SUV in front of me. Fairly low speed, nobody panics, we pull over and start exchanging info and pictures. Now, my bumper is nearly detached, and my hood is quite scrunched in at the point of impact. This is because I hit his tow hitch, which stuck out from the rest of his rear body quite a bit. So even though he literally has a 1-inch scratch on his bumper, I’m looking at significant damage. But it seems mostly superficial, so I figure, well, I might not even need insurance, and he certainly doesn’t care enough to report it unless I do, so he leaves it up to me.

He helps me rope-up the bumper so it doesn’t fall off (he was such a nice guy, no joke!), and I start driving the rest of the way home. Quite a distance, mind you (I have a 60 mile total commute). After a little while, I start seeing smoke coming from under the hood. Fortunately it’s white, not black, so I know I’m not in terribly immediate danger. But I pull off to a gas station and take a look. Well, I can’t actually open the hood due to the scrunchy-ness, but I peer inside and see that there’s a significant bit of frame damage, and the radiator looks hurt. Sure enough, it would turn out that that was the biggest problem – the radiator (and compressor) would need to be replaced, and the frame around it needed repairing/re-welding.

old-car-on-fire
Not quite…

This is not a small job. I take it to a body shop first, but as they look inside and see what I saw, they know it’s beyond their scope, so they send me next-door to a full-service mechanic & repair shop. Next day, he gives me the estimate: $2.7k. Now, about this time, I’m talking with the insurance reps. I know they’re going to want to total this car – its KBB value is literally just over $2k, and these repairs are significantly more than that. What I was trying to ask them, and never got a straight answer to, was whether we could file the claim for a lower amount, by asking for the mechanical repairs only. Remember, this car is a “beater”. We don’t really care how it looks; we just need it to run. And the shop was kind enough to provide that “bare-bones” estimate as well – only about $750.

honda-wreck-damage-picture
You can see where the tow-hitch rammed thru the bumper/grill in towards the frame; the blue arrow points to the frame inside that got the brunt of it.

But then my insurance adjuster did two things that were very insightful & much appreciated.

I have to give a shout-out to Safeco here, because throughout all of this, they’ve been immensely helpful and easy to deal with. (Even though I mentioned not answering my question in the paragraph above, as you’ll see, that was really my own fault for not understanding the process, and it was a moot point anyway!) So if you’re in the market for a new insurance policy, definitely check ‘em out.

First, because this shop was not an official “authorized partner”, she couldn’t accept their estimates as gospel; but, she could offer this newer “pilot” program whereby any shop (or even the customer) could submit pictures of the vehicle and the damage, and, provided enough detail and the right angles, a 3rd party estimator could assess the damage and estimate the cost. Great!

Second, she heard me out as I explained the concern with totaling the car, and understood that I really wanted to keep the car after simply getting it mechanically sound. But, she clarified, because I had collision coverage on this car, they (the insurance company) literally “owed me” the full cost of those repairs or the vehicle value, whichever is lower. So in fact, I would be doing myself a disservice and actually losing money if I tried to simply file the claim for the lower “bare-bones” amount, just to avoid the total-loss.

Instead, she explained, what you can do is keep the vehicle, even after it’s been declared “totaled“.

There’s a process and paperwork to this, and it involves the DMV, obviously. But because the insurance policy will still pay me the value of the vehicle, I should have more than enough to get the minimum repairs done and pocket the rest. Yay!

smiley-with-money
cha-ching!

Now, the process. The CA DMV has done a fairly decent job of documenting this, but it’s still unclear (at least to me) what the order of operations is. There are 5 things you need:

  1. Salvage title (which is different than the regular title, aka pinkslip)
    • DMV form REG 343, which you fill out yourself
  2. Salvage certificate
    • REG 488c, which you also fill out yourself
  3. Owner retention of salvage vehicle
    • REG 481, which your insurance company completes & sends to the DMV
  4. Brake & light inspection (to make sure it meets road safety standards)
    • Certificates are printed & given to you by the inspecting shop
  5. Full vehicle inspection (again, safety & compliance)
    • REG 31, which is completed by DMV personnel only

Number 4 can be done by many authorized 3rd-party shops, most of which also do smog tests and such things, so they’re not hard to find. The rest are DMV forms, as noted above. (#5 can be done either by the DMV or by CHP; but, CHP has quite a narrow list of “accepted” vehicles which they’ll inspect for this purpose, and honestly their appointment “system” for trying to get them done is horrendous, so it’s easiest to let the DMV do it.) But again, what’s the order in which I should do these things? Well, let me tell you!

joker-just-let-me-finish
I’m trying!

First, you get that payout check from your insurance, and you get the repairs done. Then you take the car to a brake/light inspection place (#4 above), and get that “certificate” (much like a smog certificate, it’s an “official” record that says your vehicle passed this test). Actually, if the vehicle hasn’t been smog-checked recently, you probably need that too. Mine was just done in 2016 so it wasn’t necessary.

Ooh! Another database tie-in. Okay, we all know a car’s VIN is like the primary key of the DMV’s vehicle database, right? Plate #s you can change, but the VIN is etched in stone… er, steel. But they’re largely sequential – so two 2000 Honda Accords are going to have mostly the same characters in their VINs, up to the last, say, 2-6 numbers (ish… I’m nowhere near knowledgeable enough about the system, I’m just guessing based on my observation of what happened to me). So when the paperwork comes back from the insurance, it ends up with the wrong VIN, off by 3 #s at the end. But I don’t realize this until I check with the DMV as I’m filing the accident report.

Also, you can use online services to look up a VIN and find the basic info about it, but again, because I had such similar VINs (my correct one, and the insurance’s one from the papers), both turned up the same descriptions, down to the body style and trim level (4-door sedan, LX, if you’re curious). The only way we actually found the mistake was that the DMV was looking up “ownership” info based on the VIN, and when the agent read me the name on file, I was like “whodat?”, since it wasn’t me or my father. Then I went back to my pinkslip and checked it there, as well as on the car’s door-panel.

The lesson here is, always double-check your VIN when filing paperwork, especially with the DMV. Moving on.

vin-number-atm-machine
STOP adding redundant words to acronyms phrases!

Before you go further, you need to actually make sure that the insurance and/or the salvage yard has officially notified the DMV of the vehicle being a “total loss”. (See #3 in the list.) In my case, they hadn’t – it had only been a month (between the actual payout and the first time I went to the DMV). So I have to check again before I go back.

But, since I was there, I made the DMV agent answer all my questions and specify exactly what I needed to do to complete this process, and the order in which to do it. Which is why I’m now writing this and sharing with you!

Once that notification is done, the DMV will have record of the vehicle being a “total loss”, or “salvage”. Then you can make a new DMV appointment, go in, and get #5 and #1-2 done all at the same time, in that order. I.e., go to the “inspection” or “inspector” side first, have them do the inspection and fill out the form (REG 31). Then go to the appointment line and take all your paperwork to the agent that calls you. So that’s your “inspection-passed” form (REG 31), your salvage title form (REG 343), your salvage certificate form (REG 488c), and your brake & light certificates. If you have a copy of the insurance co’s REG 481, might as well bring that too! You also need your license plates – you have to “surrender” them, which means turn them in and get new ones (not that same day, obviously – I think they still mail them to the DMV and you have to go pick them up… but I’ll find out soon).

austin-powers-dmv-live-dangerously
Life on the edge, man!

Finally, to add a little icing on this crap-cake. I was driving the Hyundai to work, literally the next day, and I got rear-ended by another driver who wasn’t paying attention at a red-light. Again, super low speed, minor damage, but, another visit to the body shop for that poor black Elantra, and another week with a rental car. (Hyundai Santa Fe this time, which is actually quite nice, and if we need a small/mid SUV in the future, I’d definitely consider it; but due to my commute, we swapped for another Ford Fusion, this time the hybrid model, which again, I enjoyed, but the wife did not. Hey, you win some, you lose some.)

crap-cake-smiley
Frankly, 95% of Google image search results for “crap cake” were gross and offensive, but this one was almost cute.

So that’s the story of how we totaled two cars (and damaged one car twice) in less than 4 months.

And that’s the reason I’m now taking a van-pool at least 2 days a week.

I’d always been a fairly safe & cautious driver, but I’ll admit, this long commute had turned me into a bit of a road-rager. Impatient would be the polite term. After all this, I’m back to my old cautious slow & steady ways… for the most part. I still get little flashes of panic when I go by the intersection where the Mazda wreck happened, and I’m always reminding the wife to stay cool and drink her water. She’s never had that happen before, and never felt like it since, so I’m sure it was a one-time fluke, but still.. the DMV wants her to re-test to get her license back, even after her doctors cleared her to drive. That’s a whole other topic, for another time. I will note that none of these incidents were due to cell-phone use, so at least we’re not guilty of that particular vice.

keep-calm-and-drive-safely

Thanks for reading!

Now, go out there and DRIVE SAFE.

DBA Holy Wars Part 2

Battle 4: GUIDs vs Identities

This is an oldie but goody.  A) Developers want their apps to manage the record identifiers, but DBAs want the database to do it.  B) Developers prefer abstracting the identity values out of sight/mind, DBAs know that occasionally (despite your best efforts to avoid it) your eyeballs will have to look at those values and visually connect them with their foreign key relationships while troubleshooting some obscure bug.

but-wait-theres-more-billy-mays
there’s ALWAYS more…

But there’s more to it than that.  See, none of those arguments really matter, because there are easy answers to those problems.  The real core issue lies with the lazy acceptance of GUI/designer defaults, instead of using a bit of brainpower to make a purposeful decision about your Primary Key and your Clustered Index.

Now wait a minute Mr. DBA, aren’t those the same thing?

NO!  That’s where this problem comes from!

A good Clustered Index is: narrow (fewer bytes), unique (or at least, highly selective), static (not subject to updates), and ever-increasing (or decreasing, if you really want).  NUSE, as some writers have acronym’d it.  A GUID fails criteria ‘N’ and ‘E’.  However, that’s not to say a GUID isn’t a fine Primary Key!  See, your PK really only needs to be ‘U’; and to a lesser extent, ‘S’.  See how those don’t overlap each other?  So sure, use those GUIDs, make them your PK.  Just don’t let your tool automagically also make that your CX (Clustered indeX).  Spend a few minutes making a conscious effort to pick a different column (or couple columns) that meet more of these requirements.

For example, a datetime column that indicates the age of each record.  Chances are, you’re using this column in most of your queries on this table anyway, so clustering on it will speed those up.
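Here’s what that looks like in practice: a minimal sketch with invented names, where the GUID keeps its PK job but the clustered index goes to the datetime.

CREATE TABLE dbo.Orders
(
    OrderId      uniqueidentifier NOT NULL CONSTRAINT DF_Orders_Id DEFAULT NEWID(),
    CreatedAt    datetime2        NOT NULL CONSTRAINT DF_Orders_Created DEFAULT SYSUTCDATETIME(),
    CustomerName nvarchar(100)    NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderId)   --'U' (and 'S'): a perfectly fine PK
);
CREATE CLUSTERED INDEX CX_Orders_CreatedAt ON dbo.Orders (CreatedAt);  --narrow, static, ever-increasing

(CreatedAt isn’t strictly unique, so SQL Server quietly adds a uniquifier; that still beats the random page-splits you’d get clustering on the GUID.)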

Most of the time, though, if your data model is reasonably normalized and you’re indexing your foreign keys (because you should!), your PKs & CX’s will be the same.  There’s nothing wrong with that.  Just be mindful of the trade-offs.

Battle 5: CSV vs TAB

bluray-vs-hddvd-fight
Who doesn’t love a good format-war?

Often, we have to deal with data from outside sources that gets exchanged via “flat files”, i.e. text files that represent a single monolithic table of data.  Each line is a row, and within each line, each string between each delimiting character is a column value.  So the question is, which is easier to deal with as that delimiter: comma, or tab?

String data values often have commas in them, so usually, the file also needs a “quoting character”, i.e. something that surrounds the string values so that the reader/interpreter of the file knows that anything found inside those quotes is all one value, regardless of any commas found within it.

But tabs are bigger.. aren’t they?  No, they’re still just 1 byte (or 2, in Unicode).  So that’s a non-argument.  Compatibility?  Every program that can read and automatically parse a .csv can just as easily do so with a .tab, even if Windows Explorer’s file icon & default-program handler would lead you to believe otherwise.

I recently encountered an issue with BCP (a SQL command-line utility for bulk copying data into / out of SQL server), where the csv was just being a pain in the arse. I tried a tab and all was well! I’m sure it was partially my fault but regardless, it was the path of least resistance.
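For reference, the bcp flavor of that swap is tiny.  With -c, the field terminator already defaults to a tab, while CSV needs it overridden (server, database, and table names here are placeholders):

bcp MyDb.dbo.MyTable out C:\temp\data.tab -c -T -S MyServer       --tab-delimited (the -c default)
bcp MyDb.dbo.MyTable out C:\temp\data.csv -c -t, -T -S MyServer   --comma-delimited

(-T means Windows/trusted authentication; use -U/-P for SQL auth.)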

Battle 6: designers vs scripting

no-wizard-allowed
Wizards are usually good, but in this case, they’re lazy and bad for you…

This should be a no-brainer. There is absolutely no excuse for using the table designer or any other wizardy GUIs for database design and maintenance, unless you’re just learning the ropes. And even then, instead of pressing ‘OK’, use the ‘Script’ option to let SSMS generate a T-SQL script to perform whatever actions you just clicked-thru.  Now yes, admittedly those generated scripts are rarely a shining example of clean code, but they get the job done, even with some unnecessary filler and fluff.  Learn the critical bits and try to write the script yourself next time – and sure, use the GUI-to-script to double-check your work, if you still need to.

Confession: I still use the GUI to create new SQL Agent Jobs. It’s not that I don’t know how to script it, it’s just that there are so many non-intuitive parameters to those msdb system-sp’s that I usually have to look them up, thereby spending the time I would have otherwise saved.

Bonus round: the pronunciation of “Data”

its-data-not-data
Call me “big Data” one more time…

Dah-tuh, or Day-tuh?  Or, for the 3 people in the world who can actually read those ridiculous pronunciation glyphs, /ˈdeɪtə/ or /ˈdætə/ ?  It’s a question as old as the industry itself… or maybe not.  Anecdotally, it seems like most data professionals, and people in related industries, tend to say “day-tuh”; while those in the media and generally less technical communities tend to say “dah-tuh”.  (Where the first syllable is the same vowel-sound as in “dad” or “cat”.)  This likely means that the latter is more popular, but the former is more industrially accepted.

In either case, it doesn’t really matter, because at the end of the day, we’re talking about the same thing.  So if some dogmatic DBA or pedantic PHB tries to correct your pronunciation, tell ’em to stop being so persnickety and get on with the task at hand!

Until next time…

Little Gotchas

If the caller of our stored-procedure literally passes NULL as the parameter value, we might have a problem!

A large part of most DBA/DBD’s daily job is writing & maintaining stored-procedures.  In SQL Server or other RDBMSs, stored-procs (“SP’s”, “procs”, however you like to abbreviate), serve as one of the building-blocks of your overlaying applications and day-to-day operations, including maintenance and automation.

sprocket
This is a sprocket, not to be confused with a sproc, which is really just a proc.

Today, something struck me, and I was both shocked and comforted by the fact that this hadn’t really “come back to bite me in the arse“, as the proverbial saying goes.  But first, some context.

When we declare our proc signature with our parameters, we of course give them datatypes, and often default values — the parameter value that is assumed & used upon execution when the caller (operator, application, agent job, etc.) calls said proc without passing a value to that parameter.  So we create our proc like so:

CREATE PROCEDURE dbo.MyProc
    @MyParam BIT = 0
AS
BEGIN
    SET NOCOUNT ON;
END

So that users are allowed to call it like so, and assume some correct default behavior:

EXEC dbo.MyProc;

Coincidentally, that CREATE line is part of a typical “boilerplate” snippet or template which I use to create procs with “create if not exists, else alter” logic and a nice header-comment-block, which I’ll publish on my GitHub or Gist shortly, so I can show it here.  I know that MS recently added DROP IF EXISTS support to the language, but frankly, I like to keep procs intact if they exist because it’s easier not to have to remember/re-apply their metadata, such as security (grants/deny’s, certificate signatures, etc.) and extended properties.  Wake me up when they add true CREATE OR ALTER syntax!  Oh snap, they did… in 2016 SP1.  Anyway.

Now for the “catch”, the gotcha.

gotcha-programming-wikipedia-def
In programming/software-dev/IT-systems, “gotcha” has a specific meaning.  Thanks Wikipedia!

If the caller says exec dbo.MyProc, that’s great — they didn’t pass a parameter value, so the execution uses the default value (0) and off we go.  However, if the caller is so malicious as to literally pass NULL, we might have a problem!  Because let’s say that @MyParam value is used in a JOIN predicate or an IN (SELECT...) block, or even a CASE expression.  We won’t get an actual error; comparing a value to NULL with = is perfectly legal syntax, it just evaluates to UNKNOWN rather than true, because NULL demands its own form of equivalence check (i.e. Column1 = 0 vs. Column1 IS NULL).  So what we’re probably going to get is unexpected or unknown behavior.

warning-assumptions-ahead
It seemed worth re-using a classic…

And really, it all comes back to those nasty things called assumptions.  See, as the proc author, we’re assuming that our @MyParam will always be a 0 or 1, because it’s a BIT, and we gave it a default value, right?  Sure, maybe in another language, but this is T-SQL!  NULL is a separate and distinct thing, a valid value for any datatype, and must be accounted for and treated as such.  It can get especially dicey when you have a NOT IN (SELECT...) block that ends up as an empty-set, which suddenly morphs the outer query into a “without a WHERE clause” beast, and.. well, you can guess the rest.
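A trivial demonstration of that “unexpected behavior”: no error, just a quietly wrong answer.

DECLARE @MyParam BIT = NULL;   --the “malicious” caller’s value
SELECT CASE WHEN @MyParam = 0 THEN 'do the default thing'
            WHEN @MyParam = 1 THEN 'do the other thing'
            ELSE 'neither?! both comparisons evaluated to UNKNOWN' END;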

So what do we do about it?  Well, we can add a “check parameter values” block to the top of our procedure where we either throw an error, or set the NULL value back to a default.

Examples:

IF (@MyParam IS NULL) BEGIN RAISERROR('@MyParam cannot be NULL; try again.', 15, 1); RETURN; END  --RAISERROR below severity 20 doesn’t halt execution by itself, hence the RETURN
IF (@MyParam IS NULL) SET @MyParam = 0;

We could also work on the internal proc logic to account for NULL values and “eliminate the guesswork” (i.e. prevent unexpected behavior) by actually having logical branches/conditions which “do something” if the parameter is NULL.  Then, at least we know what our proc will do if that infamous caller does exec MyProc @MyParam = NULL.  Yay!  But that sounds like a lot of work.  Maybe.

Or maybe it’s worthwhile because you actually want NULL to be treated differently than all other parameter values, and then, hey, you’ve already spent the time on that logic, so you’re done!

what-if-i-told-you-null-does-not-equal-null
But NULL does not NOT equal NULL, either!  Crap, somebody give me the red pill…

I hope this helps somebody else avoid the same assumptions.

Adventures in SQL Cluster Stacked Instances

Enough with the pitchforks; this is Test/QA. Here, I talk about 3 gotchas.

Today we’re going to talk about SQL Server instance stacking.

blasphemy-300
Blasphemy of the highest order!

Right, in production.  I’m talking about DEV/TEST environments.

Still, blasphemy!

Settle down.  If your server is set up correctly and has the resources you want it to have, and you divide your resources up per instance in a few very simple ways, it’s fine.  Enough with the pitchforks, the wailing and gnashing of teeth.

okay-okay-calm-down
Stop scaring the pandas.

Okay, now that that’s out of the way…

Remember our cute little DEV server?  So, the way he’s set up is, he’s got 3 SQL Server instances on him, each with its own dedicated SSD, and another dedicated SSD just for tempdbs.  Ideally, we’d have a separate SSD for each instance’s tempdb, but sadly, motherboards with 3 M.2 or NVMe slots aren’t (weren’t?) in production at the time, at least not for desktop class systems.  But I digress.

This is called instance stacking.  And yes, it’s a big no-no in production.  Mostly because performance troubleshooting is a pain in the arse.  But also because it’s more difficult to divvy-up resources like RAM and I/O & network throughput channels than one would like.  But it’s super simple to set up — you simply run the SQL Server installer 3x, each time creating a unique instance name.  Then, at the end of it, your SQL instances are addressable by MachineName\InstanceName, e.g. SQLDEV\Foo, SQLDEV\Bar, etc.

Now the time came to create a “QA” environment.  Which, like DEV, didn’t need to be very performant (that’s a made-up word that consultants like to use, but it’s generally accepted in our industry so we go with it), and so, since we had some hardware lying around from a recent “up-gration” (upgrade-migration… okay, now I’m being ridiculous), we said “let’s use that thing!”.  It was a 2-node cluster setup with shared DAS storage.  For the uninitiated, DAS is Direct Attached Storage, i.e. an array of disks that you can directly attach to 1 or more servers using whatever interconnect is available on the endpoints (usually SAS, serial-attached SCSI, which is one of the most fun acronyms to pronounce in IT: “scuzzy”).  DAS is not to be confused with a SAN, Storage Area Network, which is a super fancy storage array with performance tiers and snapshot technology and de-duplication and all that hotness.

NAS-SAN-DAS-diagram
NAS, SAN, DAS – 3 acronyms, 1 underlying purpose, 3 implementations.

The interesting thing with a cluster is that when you install SQL Server instances, you can’t actually use the same “MachineName” for the 3 different “InstanceName”s, because in a cluster, the former is actually the “VirtualServerName”, which must be unique per clustered instance in order to properly configure cluster resources, storage pools, and networks.

The reason this is interesting, is that it contrasts with stacked instance setup on a standalone server (non-clustered).  So if you compared our DEV and QA setups side-by-side, it’s a bit odd-ball: instead of SQLDEV\Inst1, SQLDEV\Inst2, etc., we have instance names like SQLQA1\Inst1, SQLQA2\Inst2, etc.  That makes the ol’ “find and replace” in config files a bit harder.  But, at the end of the day, it’s all just names!

find-and-replace-diallog
One of the handiest tools in an engineer’s toolbox!

Another interesting “gotcha” revolves around SQL 2008R2, which I know shouldn’t be on the short-list of versions to spin up, but unfortunately, a legacy ERP system demands it.  Well, it only happened to me with the 2008R2 instance installation, not the 2016s, but that’s not to say it couldn’t happen with others.  Anyway, after installation, SQL Agent was not working; it wasn’t coming up as a cluster resource.  Basically, exactly what was outlined in this timely & detailed article at mssqltips.  I won’t restate the fix instructions here, just give it a read!  I do want to clarify something, though.

In part of the fix, we use the handy-dandy PowerShell cmdlet Add-ClusterResourceDependency.  In its basic form, it requires 2 arguments, Resource and Provider.  To someone who’s not a cluster expert, this terminology might be a bit confusing.  Resource in this case is the SQL Server Agent, while Provider is SQL Server itself.  But we’re adding a Dependency, right?  Which depends on which?  Well, we know that Agent depends on the engine, so: Resource depends on Provider.  Yes, I know, that’s what the article tells you to do — I just like to understand why.

can-you-tell-me-why
Fry shares my curiosity…
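So, for our 2008R2 scenario, the call ends up looking something like this.  The resource names below are illustrative; pull the real ones from Get-ClusterResource first:

Import-Module FailoverClusters
Get-ClusterResource   #find the exact names of your Agent & engine resources
Add-ClusterResourceDependency -Resource 'SQL Server Agent (INST1)' -Provider 'SQL Server (INST1)'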

Finally, there’s the question of divvying-up resources to the stacked clustered instances.  Now, in a standard cluster, you’ve got your active node and your passive node.  But if we’re stacking instances, we might as well split the SQL instances up and take advantage of the compute resources on both nodes.  (Storage is still shared; this is a cluster, after all!)  The CPUs are no problem — however many instances are stacked on a node, they’ll share the CPU cores pretty cooperatively.  Memory is a bit of a different story.  We want to take advantage of all the available RAM in the cluster, but…

As you know, you can configure each SQL instance to use a set amount of max server memory.  So let’s say each cluster node has 32GB RAM, and we’re stacking 4 SQL instances total (to keep the math easy!).  If we split them up among the nodes at 2 each, each instance can use 16GB.  But if for some reason a node goes down, and all 4 instances move to 1 node, now they’re fighting for that 32GB!  So we should reduce their max-memory settings to 8GB each, instead of 16.  But we don’t want to do this manually!  Fortunately, Aaron Bertrand has an excellent blog post on the subject, with some useful ideas about how to do this dynamically & automatically.  The only issue I have with it is that it requires the linked-servers to use a highly privileged account (sysadmin or maybe serveradmin role) to be able to set that max-server-memory setting.  But wait, remember what we said at the beginning?  This ain’t production!  Who cares about security?  (That’s facetious, sort of — in reality, yes, we don’t care as much about security in lower environments, but we should still care a little!)

when-i-forget-my-password-is-incorrect
We all do silly things from time to time..
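One last note on that max-memory knob: under the hood, it’s just sp_configure.  E.g., to pin one instance at the 8GB worst-case figure from above:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;   --8GB: the 4-instances-on-1-node worst case
RECONFIGURE;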

That concludes this week’s adventure!  Thanks for reading.