T-SQL Tuesday #122: Impostors!

Be humble, but be confident in your own expertise. Realize that you will sometimes be the teacher, and sometimes the student.

Howdy folks! It’s been a while. My last post was in October of last decade (har har)! I’ve been busy with other projects, work, and life in general. 🙂

Now, this month’s invitation is hosted by John Shaulis. I’m always excited to see someone in the #SQLcommunity that I’ve never heard of before. Looks like he’s familiar with the same WordPress templates as I am. Heh. I like it.

Side-note, I’ve been planning to buy my own domain and remove the ads here for a better reading experience (and more professional feel), so hang in there… it’s happening soon!

And now without further ado.

Who, Me?

Everybody reading this knows what impostor syndrome is. I won’t bore you with that. Fact is, if you’re in tech, you either have it or you know someone who does. If neither, well you’re probably a narcissist surrounded by other narcissists and you might wanna look for another job. Heh heh.

oh, get a job? just get a job? why don't I strap on my job helmet, get into a job cannon, and fire off into job land, where jobs grow on jobbies!
I think this is from Rick and Morty, but I’ve never seen the show.

Yes, You!

I’ll share a personal and ongoing example of where I’m constantly feeling “not good enough at my job”. It’s about server migrations. You know, where you take a running production SQL Server instance, and you have to move it to new hardware/infrastructure. Maybe it’s a physical (bare metal) cluster to a virtualized environment. Maybe it’s just one VM to another VM on a newer platform. It typically involves taking a maintenance window and convincing business stakeholders that the “downtime” is worthwhile.

Now surely we’ve all heard of DBATools by now, right? (If not, go there right now and check ’em out!) They were BUILT to do migrations. Yet I’ve not been able to successfully use them as the whole and ONLY toolset for that purpose. Let me clarify: I’ve used pieces of the framework (such as the copy logins / users command, and others) to HELP me in the migration project, but I’ve never been able to simply fire off Start-DbaMigration, go get coffee, and watch as things magically fall into place.

dog in a room on fire, casually drinking coffee, saying "this is fine."
It’s working EXACTLY AS IT SHOULD!

Couldn’t Be!

Even when I’ve built a beautiful check-list, lined up all my scripts, scheduled things via Agent Jobs, and double-checked names and permissions and networks and drive letters and shares, something still inevitably goes wrong. Maybe it’s just that a database refuses to go into READ_ONLY mode when I ask it to. Maybe it’s that a whole series of jobs get lost because they were in the wrong category. Maybe user permissions don’t come over on this database because I didn’t sacrifice a chicken on the night of the blood-moon while the wind blew north-east.

Surely, by this point in my 10+ years as a database professional, 4 years at this particular company and role, I would be able to do this with my eyes closed. With no hiccups or misfires. Right? RIGHT??

murphy's law: if anything can go wrong, it will go wrong.
Full plaque samples, which are quite entertaining, can be seen here & here, to name a few.

Then Who?

Wrong. Murphy’s Law is a real thing. More than that, things CHANGE. The environment is always evolving to fit the business needs. The technology is always just a little ahead of my own knowledge-base. And every SQL instance in this environment is, unfortunately, still, a ‘pet‘. Not a ‘cattle‘ (cow?). They’re each carefully and fearfully constructed to suit a specific set of application and business needs, each with their own nuances and barely-documented dependencies.

Plus, it’s not like we’re doing these migrations very often. Anything you don’t do on a constant basis, at least a few times per week, tends to sink back toward the bottom of the mental stack. You forget the details and gotchas because your brain needs to keep the more important and relevant things near the top: whatever you’re working on “right now”. This is normal, at least in my mind.

DevOps stole the cookie from the cookie jar!

Just hire a DevOps Engineer to solve all your problems. Then everything will be all sunshine and rainbows and unicorns. All your servers will be cattle, replaceable little containers that are allowed to fail because there’s always another one waiting to spin-up and replace it. Not like your DATA is important or anything. It’s only the lifeblood of the entire company.

Not really sure where I was going with this. I love DevOps, don’t get me wrong, and it does have a part to play in making the traditional RDBMS platform more agile, but the devilish details and difficulties therein are ALWAYS overlooked/brushed-off in popular discourse, and it’s got me a little jaded.

I mostly wanted to finish the song I’d started with the title-blocks. =P

N.

But srsly. Instead of feeling guilty about your impostor syndrome, just own it. Be humble, but be confident in your own expertise. Realize that you will sometimes be the teacher, and sometimes the student. And hopefully more often the latter, because there’s exponentially more to learn every year we evolve on this crazy spinning rock we call Earth.

live as if you were to die tomorrow. learn as if you were to live forever. -Mahatma Gandhi
Good advice in general.

Dealing with Time Zones in SQL

In my sample script, I have a “transaction table” where I insert a few events for each office, with the ‘original date’ in PST, and I show how to easily convert that to each office’s local time, or vice-versa.

I’ve ranted about times, datetimes, and the like before. As most programmers & IT pros do, I loathe time zones with a mild passion. As one of my favorite #SQLCommunity members quips:

daylight savings time was actually created by a government works project to ensure that programmers could forever write tedious conditional logic in their date-based queries [and code].

Bert Wagner

Thankfully, in SQL Server 2016, Microsoft heard our outcries and gave us an easier way to convert things between time zones in a somewhat sensible manner.

Resources:

  • The at time zone keyphrase
  • The system metadata table that drives it
  • A good blog post that explains essentially what I’m about to (because it’s great to have different perspectives and unique voices!)
  • Two StackOverflow answers (one, two) on the topic
  • Bert’s succinct and fun post about the same topic, with video!
give me utc or give me death
This is Patrick Henry. He famously said “Give me liberty or give me death!”. Now you know.

A Use-Case

In fact, I’m not even going to repeat what 2 (and many more, I’m sure) people have already said. Go check out their posts above! But since you’re here, this is how I use it for my reporting environment.

To start with, the transactional data is in PST/PDT — i.e. Pacific Time with DST fluctuation. Yes, it’s horrible. No, I don’t know what happens to events or jobs at 2am on the “Fall Back” date, or between 2am and 3am on the “Spring Forward” date. No, I can’t change it right now. Stop whining.

Now, I have offices in Paris (France), Hong Kong, and Beijing (China). These are 3 different “time zones”, but only 2 different offsets: China and Hong Kong are in the same bucket, namely UTC +08:00. More on that later.

So I have my OfficeLocation lookup table:

Office  | TimeZone
--------|-------------------------------
Paris   | 'Central Europe Standard Time'
Beijing | 'China Standard Time'
HK      | 'China Standard Time'

(Again, see below for why we can’t call HK’s zone “Hong Kong Time” like most websites/APIs would assume.)
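If you want to play along without grabbing the full script, a minimal sketch of that lookup table might look like the following (the schema details here are my own assumptions; the real version lives in the Gist linked below):

CREATE TABLE dbo.OfficeLocation (
    Office   varchar(50) NOT NULL PRIMARY KEY,
    TimeZone sysname     NOT NULL  --must match a name known to sys.time_zone_info
);

INSERT INTO dbo.OfficeLocation (Office, TimeZone) VALUES
    ('Paris',   'Central Europe Standard Time'),
    ('Beijing', 'China Standard Time'),
    ('HK',      'China Standard Time');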

Now, the cool thing about this is, we can pull those strings into a variable, or use them straight from the table, to convert our PST/PDT times to the appropriate zone.

Here’s a variable example:

DECLARE @TimeZoneStr sysname; --"sysname" is just nvarchar(128)
SELECT @TimeZoneStr = TimeZone
FROM OfficeLocation
WHERE Office = 'Paris'

DECLARE @MyTimeNow datetime = GETDATE();
DECLARE @TimeInParis datetime;
SELECT @TimeInParis = @MyTimeNow
AT TIME ZONE 'Pacific Standard Time' --Converts to datetimeoffset
AT TIME ZONE @TimeZoneStr --Shifts it to Paris time

PRINT ('The time now in Paris is ' + CONVERT(varchar(30), @TimeInParis, 121));

And here’s an example using the field straight from the table.

SELECT Office
    , [Time in my location] = GETDATE()
    , [Time in remote office location] = CONVERT(datetime, GETDATE()
        AT TIME ZONE 'Pacific Standard Time' 
        AT TIME ZONE ol.TimeZone)
FROM OfficeLocation ol

See the Gist for a full-fledged sample script. In it, I have a “transaction table” where I insert a few events for each office, with the ‘original date’ in PST, and I show how to easily convert that to each office’s local time, or vice-versa.
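And the “vice-versa” direction is the same two-step dance with the zones swapped. A quick sketch (the sample timestamp is made up):

DECLARE @ParisTime datetime = '2020-01-14 09:00:00'; --a Paris-local timestamp

SELECT [Time in Paris] = @ParisTime
    , [Time in PST/PDT] = CONVERT(datetime, @ParisTime
        AT TIME ZONE 'Central Europe Standard Time' --tag it as Paris time
        AT TIME ZONE 'Pacific Standard Time') --shift it back to Pacific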

working with time is easy -jon skeet
Well YEAH, if you’re frickin’ JON SKEET! =P
But srsly, go watch his presentation and read his stuff.

Gotchas

Here’s the major catch. The information available to YOUR instance of SQL Server is pulled from that server’s Windows Registry hive. No, I’m not making this up. So if that box doesn’t know about, say, ‘Hong Kong Standard Time’, and you try to use that in your SQL statement.. you’re hosed.

And yes, that is a real example from my own experience.
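Before you bet a report on a given zone name, it’s worth a quick sanity-check against the system metadata table mentioned in the resources above, something like:

SELECT name, current_utc_offset, is_currently_dst
FROM sys.time_zone_info
WHERE name LIKE '%Hong Kong%' --spoiler: probably zero rows
    OR name LIKE '%China%';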

This article shows the “Windows standard format” time zone list. As you can see, they merged some zones with others because.. they felt like it? But apparently Central Europe Standard Time, Central European Standard Time, and Romance Standard Time (all UTC +01:00) were completely necessary to keep separate. Go figure.

In my use-case above, then, I couldn’t actually store the string Hong Kong Time, because my SQL instances (hence my Windows Servers) don’t know what that is. Thankfully, at least for this decade, it doesn’t look like Hong Kong and China will diverge in terms of their geopolitical directions, and we’re safe to assume that HKT = CST (China Standard Time, not to be confused with US Central Time!).

In another example, the typical go-to site for timezone questions says Japan observes “Japan Standard Time”. Obviously enough. But Microsoft, in their infinite wisdom, decided to call that “Tokyo Standard Time”. Go figure again.

It also kinda makes you wonder.. how does this work on SQL on Linux? No such thing as “the registry” there. I’m sure there’s an internal OS data-store that houses time-zone info, of course. Heck, they might even be better at it than Windows. But it makes you think.

world time zone map
Disclaimer: this is probably already out-of-date. And it’s from Wikipedia.

So What?

If you’re not already running SQL 2016 or upward, this should give you yet another compelling reason to upgrade. Seriously.

And don’t do what I did and attempt to store a “business locations with time-zone offsets” table that you have to remind yourself to update manually every 6 months. You will inevitably forget, and it won’t support any sensible manner of long-term historical reporting.

More to the point, don’t try to implement dynamic time-zone logic and calendaring yourself, in general. Because trust me, you’re not gonna get it right. Use the built-in tools, use the community resources, and be smart.

That’s all for today! ❤

T-SQL Tuesday #119: Change of Mind

Get up, come on get down with the SPACES!

Because I’m super late (as usual), about to go to bed (because I’ll be commuting tomorrow), and lazy (aren’t we all?), this will be a quickie.

This month’s #tsqltuesday is hosted by the amazing Alex Yates, who asks us to discuss something about which we’ve changed our minds over the course of our career (or some subset of time therein).

I’m really excited to read some of the submissions I’ve skimmed on the Twitter feed so far, such as Oracle vs. MS-SQL and the importance of diversity. After all, what else am I gonna do while I sit in the vanpool for 3 hours? (round-trip, thankfully, not one-way!)

Tabs vs. Spaces

Oh my! Them’s fightin’ words. Even back in the early days of this very blog, I wrote about my preference for tabs. But now.. *gasp*.. I’m down with the spaces!

blasphemy-300
Blasphemy of the highest order!

Why, you ask?

Well, partially because I’ve changed some of my overall T-SQL coding style preferences and methods of construction. When I learned the Alt-Shift select method (block selection, aka vertical selection) in SSMS, it definitely set me on a track away from tabs. Now I don’t go all cray-cray with vertically aligned sections/clauses/etc. too much, but I will say that in certain instances, it’s made the query much easier to read. And in such instances, spaces definitely trump tabs for ease-of-use with said vertical-alignment efforts.

If you’re not sure what I’m talking about (because, admittedly, it’s hard to write about and much easier to show visually), just search Youtube for an example of SSMS block-select tricks.
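Or, here’s a tiny contrived illustration (the table and column names are made up); this kind of vertical alignment only lines up reliably when it’s spaces all the way down:

SELECT  c.CustomerID
      , c.CustomerName
      , o.OrderDate
      , o.TotalAmount
FROM    dbo.Customer      AS c
JOIN    dbo.CustomerOrder AS o
    ON  o.CustomerID = c.CustomerID
WHERE   o.OrderDate >= '2019-01-01';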

And this change happened within the last 4 years, so I still find old stored-procs that I’ve written that have the tabs, and I chuckle slightly to myself as I Ctrl-K-Y (that’s the Red Gate SQLPrompt shortcut to ‘format this code in my current style’) and make my modifications.

Miscellaneous Little Things

I’ve developed some other preferences, too, which contradict some of my old formative-years’ habits. For example, I used to write my TSQL in pure lowercase. I now prefer the ANSI-CAPS for language constructs and keywords, but if I ever need to write dynamic-SQL, it goes in lowercase.

caps-lock-not-always-necessary
I even re-used old images, instead of finding new ones. LAAAZZZYY!! :O)

Some of these habits come from Aaron Bertrand and other SQL-community big-name bloggers. Like preferring CONVERT over CAST, or changing from trailing-commas to leading-commas. (Although he may have flipped on that again, I can’t remember.) While others just kinda happened organically. Like, two tabs for the ON line under each JOIN (the join predicate) just felt silly after a while, so I reverted to one. I used to be a stickler for forcibly quoting identifiers that collided with language keywords (if you have a column named Date or Value, you best be puttin’ them square-brackets around those suckers: [Date], [Value]), but now.. honestly, I don’t care enough to bother. Unless you do something really heinous, like timestamp. =P
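A quick contrived sample of a few of those preferences side by side (object names are invented):

DECLARE @sql nvarchar(max);

--ANSI-CAPS keywords, leading commas, one tab for the join predicate
SELECT t.TableName
    , t.RowCountEstimate
    , [timestamp] = t.LastModified --okay, THIS one still gets brackets =P
FROM dbo.TableInventory AS t
JOIN dbo.SchemaInventory AS s
    ON s.SchemaID = t.SchemaID;

--dynamic SQL goes lowercase
SET @sql = N'select count(*) from dbo.TableInventory;';
EXEC sys.sp_executesql @sql;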

Anyway, that’s all I have for now. There are much more important things that I could, and should have, written about, but as I said, and as always, I’m already late to the party. ‘Til next time! ❤

T-SQL Tuesday 118: Fantasy Feature

It really shouldn’t be this difficult. But it is. And that’s why we get paid. Still, it’d be nice if load-testing were easier, wouldn’t it?

Aka “hey look at us Microsoft, we want stuff!!” Because they shut down Connect and its replacement (Azure Feedback aka rebranded UserVoice) is awful. Just plain terrible. In unrelated news, I’ve never been an MVP.. wonder why? 🙄😜

Anyhoo, this month’s invitation is brought to us by the lovely and talented Kevin Chant. He asks us to fantasize about SQL Server. No, not like that Erik, get your mind out of the gutter.

And I don’t apologize. At all.

This IS a complaint. But hopefully it’s also an idea for those who are better at building stuff than me to.. ya know.. build stuff.

And shout-out to my favorite fellow blogger Shane (@SOZDBA) who’s too polite for his own good. ❤

Load Testing is HARD.

Too hard. So hard that nobody does it. At least not productively, efficiently, or willingly. About the only time I can personally point to where I’ve actually buckled down and done something roughly comparable to a true load test (which I’ll talk about in a minute) was when I was forced to by my managers, to prove that a hardware environment upgrade hadn’t gone awry and that our servers truly were running at least as well as, if not better than, before.

But guess what? We never really proved anything conclusively. We had inklings, feelings, warm fuzzies.. okay maybe a DiskSpd output file or two.. which indicated that things were “mostly probably pretty good and kinda sorta better.”

Why? BECAUSE IT’S FREAKING HARD.

What is “True Load Testing”?

a graphic representing software load testing
Something like that… maybe?

Glad you asked. Simply put, it’s the ability to execute these 3 steps, easily and efficiently, with minimal configuration overhead and without needing to pore agonizingly over tomes of docs:

  1. Capture and store a SQL server workload — i.e. ALL the transactions run against an instance in a given time frame — AND performance metrics from said instance while said workload was running.
  2. Run that captured workload against another SQL server instance, and capture THE SAME performance metrics.
  3. Compare results from each set of gathered performance metrics, with a concise, easy to understand rating system that tells you which instance ran better and why.

Now, could we argue that the “capturing” of #1 adds some overhead to the instance? Sure! So I’m fine with NOT gathering the performance metrics during the same window as the workload-capture. Put it off to step 2, where we replay said workload against 1 or more instances and measure the performance on them. So we could replay the same workload on the original instance, and a new one, and we’d have our two sets of measurements to compare.
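For what it’s worth, here’s a rough sketch of what step 1’s “capture” could look like with an Extended Events session (the session and file names are made up, and a real capture would need more events, actions, and filters):

CREATE EVENT SESSION [WorkloadCapture] ON SERVER
ADD EVENT sqlserver.sql_batch_completed,
ADD EVENT sqlserver.rpc_completed
ADD TARGET package0.event_file (SET filename = N'WorkloadCapture.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [WorkloadCapture] ON SERVER STATE = START;
--...let it run for the window you care about, then:
ALTER EVENT SESSION [WorkloadCapture] ON SERVER STATE = STOP;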

Got me? Good.

The Plumbing is There

I know. I know what you’re screaming at your monitor/screen right now. “But Nate, that’s exactly what Distributed Replay is for!!”

Bruh, have you even USED Distributed Replay?!? It’s way too complicated to set up, let alone manage and operate. Remember what I said about tomes of docs? Yeah. ANGTFT.

That’s the sad part. Microsoft has built up the plumbing and scaffolding for all of this over the past few decades. But we’ve yet to see that final layer, that chrome polish and finishing touch that makes the user go “Ahhh, now THAT was an educational and enjoyable experience!”

red easy button
Staples’ trademark be damned!

Obviously when it comes to performance metrics we’ve got a huge wealth of knowledge in the system DMVs. Great! Now let’s condense and simplify those into like 4 key ratings of your instance, for those of us who aren’t Paul Randal or Glenn Berry.
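For example, one of those ratings could start from something as simple as top waits. A bare-bones sketch (the usual published scripts filter out a much longer list of benign waits):

SELECT TOP (10)
      wait_type
    , wait_time_ms / 1000.0 AS wait_time_sec
    , signal_wait_time_ms / 1000.0 AS signal_wait_sec
    , waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP') --trim a few benign waits
ORDER BY wait_time_ms DESC;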

Oh, 3rd party monitoring products you say? Sure! Great! Love em. Do they do what I just said? Nope. Because “it depends.” Anybody else sick of hearing that?

It Really Shouldn’t Be This Difficult

But it is. And that’s why we get paid. Because no matter how many cloud services Azure & AWS try to shove down our throats, the reality is that enterprises will continue to rely on human engineers to prove (or disprove) that NewFancyServerX is better than OldCrappyServerA for running YourTerribleSqlWorkloadZ.

Because we can’t architect perfection. And we live in the real world where business decisions and financial constraints have an actual measurable impact on our technology stack choices and roadmaps. So I’m not saying it’s inexcusable that we don’t have this — this easy, measurable, understandable toolset for performance-load-testing — yet. I’m just saying it’s mildly annoying. And perhaps a little frustrating.

With that, I think I’ve written two angry rant-y posts in a row, so I do apologize to you, dear reader (but not, and never, to Microsoft). I’ll leave you with this cute picture of my dog being ridiculous, because it always makes me smile. Til next time!

husky dog laying on back in silly position
doggo being doggo

Intermission: Update Fatigue

Just try to be conscious of how inconvenient it is to be constantly asked for updates all the time.

Wow it’s been a while! My apologies dear reader. July and August came and went far too quickly. While I try to cobble together part 2 of my replication post, allow me a short interim rant.

Software updates are a fact of life

Sure, I get it. Everybody wants to keep their apps up-to-date and patched against all these vulnerabilities and exploits that the forces of evil come up with every day. Fine. Or the eager developers want to release new features that marketing (ugh, marketing) promised to stakeholders. Whatever.

oprah yelling "you get an update and you get an update"
EVERYBODY GETS AN UPDAAAAAAAAAATE!

Can we all admit that we’re getting just a little sick and tired of it? I mean seriously. Seems like every damn day something yells at you from your phone or your tablet or your laptop or your watch or your smart-TV or your talking refrigerator (well, hopefully not, but I’m sure it happens) wanting a new update.

And I work in the freakin industry, for god’s sake! I KNOW these updates are generally for the best and a good idea to install sooner rather than later. But it still makes me grumpy.

Yes, we’re all Agile and DevOps-y and Unicorn-y

And all those other silly buzzwords. That’s great. Really, I’m not suggesting we go backward. There’s no arguing that, as a general function of the evolution of the software development lifecycle and the push for better build-test-release-deploy-operate-feedback-repeat pipelines, overall software quality and user-experience has improved.

yeah science bitch
Because reasons.

Yet, sometimes, it’s super inconvenient. How many of us have bemoaned an unintentional Windows update that sucks up hours of our productivity time just because we didn’t know enough or pay enough attention to catch the “do this later” option? If it was even given!

Another example. iTunes had been begging me for weeks to update my phone’s OS, whenever I plugged it into the laptop just for charging (sure, I could skip the USB port and plug into a plain power adapter, but again, convenience!). So I finally let it, thinking “Oh this’ll only take a few minutes”. 15 minutes later, late to catch my vanpool ride from work… You get the picture. And why? Because Apple just HAD to give me all these new features.. that.. wait for it.. ONLY apply to iPhone X’s and above! (I have an 8+). Hmm. Something seems maybe not quite ideally efficient here.

Yeah yeah, platform consistency blah blah unified codebase blah blah. Spare me. They have the resources to make this a smarter, more bespoke process. But that’s not the point.

Even now, at this moment, Red Gate’s SQL Prompt (and I love this tool, don’t get me wrong) is asking me to update it from 9.5.14 to 9.5.15. Does it give me any features or fixes that I really care about? Doubtful. Does it bug me every time I start up SSMS? Yep. Can I dismiss it or say “remind me later” or “skip this version”? Of course! So at least they’ve given me that courtesy.

So what IS your point?

You ask me that a lot, don’t you?

All I ask is that developers, in general, be more conscious of how inconvenient it is to be asked to update their apps all the time. Architect things in such a way that back-end fixes and improvements are de-coupled from the UX/front-end. As much as possible. Obviously this isn’t always feasible, and sometimes you literally do need to fix the UX. Great! But with more careful, thoughtful design, this should be far less frequent.

yo dawg i heard you like windows updates
You can’t go wrong with the classics.

‘Should’, of course, being the operative word. We’re still human. We still design and create systems with human assumptions and human error. I get it. Believe me, my code is FAR from perfect. If I had to put out a fix for every stored-procedure bug as soon as it was found, by a user-base of any more than just myself and my dozen developers, I’d go insane. (-er.) Fortunately, those don’t require people to download an update package and wait for it to install. 😉

Anyway. Hope you enjoyed this rant. Now go update your apps and tools because they’re important. And probably vulnerable to some new zero-day exploit that’s going to take over your system and steal your cookies and bitcoins. =P

Replication “Just Trust Me”

For what seems like years, I’ve bemoaned the fact that SQL Transactional Replication doesn’t come with a “Just Trust Me” option. I’ll explain more about what I mean in a moment. The other thing I’ve complained about is that there’s no “Pause” button, which isn’t entirely accurate, since obviously you could just stop the distribution and subscription agents. But specifically what I mean is, it’s not easy to ‘put it on hold so you can make some schema changes to one of the tables that’s being replicated’, and then easily “Resume” it after you’re done with said changes.

Well, I’m happy to say that now I have both of these tools/methodologies in my arsenal!

Quick level-set: If you’ve been living under a virtual rock, SQL replication is an old-hat “tried-and-true” method of producing readable copies of your data on other SQL servers, whether for reporting or DR. It’s not an HA technology per se, although I suppose you could use it for that if you were feeling adventurous. It’s more for “I need a reasonably up-to-date copy of my data ‘over there’ so I can run reports / crappy user-formed / EF-generated queries against it without slowing down my production OLTP system.”

Yes, I did just take a pot-shot at Entity Framework. #DealWithIt

i don't always break replication but when i do it drives me to drink
But not Dos Equis. That stuff is terrible. =P

Why?

The word that comes to most DBA’s minds when they think of replication is ‘brittle’. And for good reason — when it breaks, it breaks hard, and you’re often left trying to pick up the pieces while wondering how much worse it could be if you just started over from scratch (i.e. dropped all the replications and re-created them). Which, honestly, sometimes is easier. But not if you have a large volume of data, and certainly not if that data is indexed and you don’t want your apps to experience a performance-crisis!

Now, because this post has been sitting in my ‘Drafts’ area for far too long, I’m going to break this up into 2 parts, so I can get something out the door. In part 1, I’ll briefly explain each of the key components of the process. In part 2, I’ll dive into a little more step-by-step detail.

Primary resources that went into this: docs, article1, article2, article3. And my very own dba.SE answer, where I apparently went through a similar process back in 2016 and subsequently forgot about it (mostly).

Key 1: Sync-Type

TL;DR: the “Just Trust Me” option is specifying @sync_type = 'none' when you create the subscription with sys.sp_addsubscription. Huge thanks to @garethn in the SQL Community Slack.

Sidebar: if you haven’t yet joined the SQL Community Slack, WHAT ARE YOU WAITING FOR?!?!? DO IT, DO IT NOW!!!

Ahnold ‘teh Governator’

@sync_type = 'replication support only' may be applicable in some scenarios as well, but I’m not 100% clear on the difference / use-cases at the moment. More to come later, hopefully.
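For reference, the 'none' call looks roughly like this (the server and database names are hypothetical). You run it on the publisher, after you’ve already put the schema and data in place on the subscriber yourself, because that’s the whole point of “just trust me”:

EXEC sys.sp_addsubscription
      @publication       = N'MyPub'
    , @subscriber        = N'SubscriberServer'
    , @destination_db    = N'MyDatabase'
    , @subscription_type = N'Push'
    , @sync_type         = N'none'; --no snapshot, no (re)initialization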

Key 2: Script Publication Procs

Protip: sys.sp_scriptpublicationcustomprocs @publication = 'PublicationName' generates the internal repl-procs that control the table creations/updates on the subscriber. You run this ‘script’ command on the publisher, then get the results (the script it generates), copy-paste to a new SQL file, and run on the subscriber.

This has come in handy on several recent occasions, wherein I had to either swap tables behind-the-scenes due to a PK change, or make a column & index change that involved truncation. Using the “stop, shuffle, start” method, which I’ll get into in part 2, I’m able to tell the subscriber “Hey, the definition of this table has changed, you need to grab these new repl-procs so you can handle it correctly!”
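In practice the flow is just two steps (publication name hypothetical):

--1) on the PUBLISHER: generate the custom repl-proc definitions
EXEC sys.sp_scriptpublicationcustomprocs @publication = N'MyPub';

--2) copy the script it returns into a new query window connected to the SUBSCRIBER database, and run it there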

Key 3: Publication Properties

In order to tell our publication that “We’re gonna be making some changes, don’t panic!”, we want to turn OFF 2 properties (assuming they’re true, which they likely are by default) using sys.sp_changepublication @publication='MyPub'. The properties are 'allow_anonymous' and 'immediate_sync', and you simply append the arguments to the proc call like so: @property='allow_anonymous', @value='false' / @property='immediate_sync', @value='false'.

Later, after we’re all done with our under-the-hood changes, we’ll want to turn them back on, in reverse order: first enable 'immediate_sync', then 'allow_anonymous'. Cool? Don’t ask me why; DBAs much smarter than I have decreed it so.
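Concretely, that looks something like this (publication name hypothetical):

--before the schema changes: turn both properties OFF
EXEC sys.sp_changepublication @publication = N'MyPub', @property = N'allow_anonymous', @value = N'false';
EXEC sys.sp_changepublication @publication = N'MyPub', @property = N'immediate_sync', @value = N'false';

--...do the under-the-hood work...

--afterward: turn them back ON, in reverse order
EXEC sys.sp_changepublication @publication = N'MyPub', @property = N'immediate_sync', @value = N'true';
EXEC sys.sp_changepublication @publication = N'MyPub', @property = N'allow_anonymous', @value = N'true';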

OMG, remember Xena Warrior Princess? Holy wow that’s some nostalgia for ya.

Honorable Mention: Pull Subscriptions

In one instance, I was using a PULL subscription (as opposed to PUSH). I had to re-start the Distribution agent (on the subscriber) twice for it to work (to start actually synchronizing). It STILL shows as ‘Uninitialized Subscription’ in the repl-monitor, though. Kinda annoying.

Pull subscriptions can be nice because they shift the burden to the subscriber DB, so that your publisher (master, primary, whatever you wanna call it) doesn’t get too bogged-down. But as always, there are trade-offs. Check out this handy little comparison guide on the topic from a fellow DBA blogger.

That’s all for now; stay tuned for more as I go into detail about how I used these in what scenarios. Thanks for reading! ❤

Facepalms Per Hour

My current velocity is sometimes measured in FPH – facepalms per hour.

This is a rant. Fair warning.

I guess the new ‘Millennial’ colloquialism for “grumpy” or “sarcastic” is “salty”. So I’m feeling extra salty this week. For several reasons. One, it’s audit season. Two, I had to churn out about a dozen new reports in the span of 4 days because the manager who was supposed to be tracking that project dropped the ball and forgot they were due by the end of this month until… yeah, last Friday. Wheeeeee!

Thus, I decided, my current ‘velocity’ (a SCRUM/DevOps term for “how much work are you getting done”) shall be measured in FPH – Facepalms Per Hour. Currently I’m at 3. Earlier this week I was approaching the double-digits, when the lovely report consumers kept thinking of “just one more little thing” they forgot about until after I’d delivered the ‘final’ product.

‘Final’ actually being a meaningful adjective in this context approximately NEVER.

facepalm original picard
The original gangsta.

Change Logs

How best to describe this scenario while still maintaining separation of “real job” from “blog land”… Hrm. So let’s say we have a CRM, like most companies. This stores customers, among other things, in a database. And since it also stores sales transactions and financials, it’s heavily audited — it has a lot of change-tracking mechanisms.

Now, auditors come along and want a report of some specific type of change over time. I happily oblige. Then… PANIC! And not at the disco. “What are all these changes to these customers by these users who don’t have permission to make said changes?!?”

K, calm down sparky. Try not to sound the alarm; auditors are a sensitive bunch.

Turns out, those changes are, in a word, “fake”. You see, there’s this background “customer sync” process that keeps them up to date with another part of the CRM where the actual changes were made. But, because it’s written poorly, it thinks that ANY field change, even just the Name or Address (which a lot more CSRs, i.e. customer service reps, have permission to change, because, you know, that’s their job), constitutes a change to the ENTIRE customer record on the other end. So the change tracker logs a change to every single field on the receiving end of that sync process, even though nothing really changed on the source side except maybe one or two fields.

With me so far? Great. So now the question is, “Well, can we get a report that doesn’t show those ‘fake’ changes?” But wait, it has to be “system generated” and you’re not allowed to “filter” or “add special exceptions” to it, because it still needs to be audit-able.

So what you’re saying is, give me a report that shows me what I care about, but you’re not allowed to change the logic behind said report.

Riiiiiiight.

So I give them a new report. I don’t explain how the sausage is made, I just make it and serve it up. “But why is this different from the original report?”

double facepalm

Well, do you want the audit-able answer, or the real answer? The audit-able answer is, “We made a system change that allowed us to prevent the ‘fake’ changes from being logged incorrectly.”

The real answer is, “B*tch, I AM the system!” — meaning yes, I excluded those with some hacky logic, and you need to stop asking questions about it.

Anyway. Change Logs are super fun.

Reports

Speaking of reporting. I could really go on for pages about how terrible and broken this whole system of “request-based report development” is. But it’s frankly all we have right now. Until there’s sufficient business buy-in to the concept of agile data warehousing and collaborative cross-functional data modeling, shit just comes in one funnel and goes out another with a little sparkle spackled to it. And we call it a report.

Example, you say? Sure! Let’s say we run a special sale on certain types of widgets every quarter. We want to track how these ‘specials’ perform — do they increase our sales of those widgets? By what factor, compared to the other not-on-sale widgets? Can we trend this over several quarters?

Oh but wait. The data structures that govern widget pricing and time-span-based sale pricing, and the logic that relates customer orders to what pricing structure they used at the time of ordering, are awful, terrible, and change every time there’s a new quarterly promotional sale.

So you’re saying you want a report that trends sales of widgets based on arbitrarily changing promotional pricing as compared to other widgets that may or may not be subject to ‘normal’ pricing during that same time period, all without a simple definitive data-point that says “This is a Quarterly Promo sale, and That is Not.”

elrond's facepalm
Mister Anderson… I mean, wait.

Let’s try to get at the root of the problem, shall we? The business doesn’t seem to understand that the way they implement promo-sales is detrimental to long-term/comparative reporting. The data model makes this harder, not easier. Can we perhaps put some heads together and come up with a compromise that both A) makes more business sense, and B) improves the data model to be a bit more intuitive?

In Closing…

What’s your FPH? What causes you to facepalm on a regular basis? Let me know in the comments!  :o)