Blog

Off-Topic: A Short Story



The wifey has been obsessively bingeing Netflix’s “13 Reasons Why” recently.  It’s a fantastic show that addresses real teen issues in a respectful yet thought-provoking way.  It made me want to reminisce a bit about my own high school years, and really try to think about why and how it wasn’t all that bad.  And don’t get me wrong; I understand that my experience is probably not noteworthy, and I actually count myself fairly lucky to have had, essentially, an unremarkable four years.  It’s not that being unremarkable should be a goal, nor that I even encourage it; it’s just that, for me, it served the purpose of avoiding big drama and simply getting me where I wanted to go — even if I had no idea where that was going to be.

So I’ve put together a sample story – a “chapter”, if you will – from what I hope will eventually become a memoir of sorts, a “story of my life” to one day pass down to our kids.  If you remember high school, and especially if you were a band kid, I hope you’ll get a kick out of it.


Chapter 3 – Band

High school band, specifically marching band, was a great experience, and a suitable alternative to sports.  I was terrible at sports.  My younger brother had proven decent at baseball in little league, but none of that talent made its way to me.  (He didn’t take it any further, either, so I don’t feel bad about it.)  It worked out that, after freshman year, marching band counted as phys-ed. credit, so I never had to take another P.E. class after the first one; I did anyway, but that’s another story.

Trumpet was my instrument.  Had been since 5th grade, at my father’s encouragement; he’d played the French horn back in his day.  I wasn’t that great at it — never made “first chair” (which means you’re the best at your particular instrument) or had any solos, but I toed the middle line satisfactorily.  I’d tried French horn before, but I never quite got the hang of it.  It’s a strange instrument, as brass goes, in that you actually need to use your second hand to hold and muffle the flared bell to produce subtle tone effects.  Trumpet’s a little simpler — you just purse your lips and blow, and press a row of three valves to control note progression.  Having braces didn’t help; in fact, the position of the mouthpiece on the lips coincided exactly with the brace brackets.  But with a combination of inner-lip calluses and sheer will, I made it work.

I always admired and envied the “rock stars” of the band, especially the trumpet players who could hit those super-high notes with such ease.  There were two guys in particular — Jared and Mark. Mark was a junior, a lanky rude-boy (fan of ska & jazz) with spiky hair and a contagiously good attitude.  Jared was a no-nonsense senior who’d seen and done it all, making a great section leader.

Editor’s note: said Wifey should skip the next paragraph.  =P

And then there was our junior leader, Nicole.  Ooh boy let me tell you.  Picture a hot summer morning out on the football field for marching practice, icy water bottles being used to cool off sun-soaked sweat-beaded skin, and a tall tan teenage Cali-girl in short shorts and a rolled up tank top, telling us young’uns what to do and where to go.  Can I get a 2-syllable ‘day-umn’?  Yes, that first year of marching band was quite the eye-popper.

In order to truly appreciate this story, you need a basic understanding of the way high school marching band works.  It’s in the fall, or first semester of school, to coincide with football.  While we supported and played at some home games, our biggest commitments were “tournaments”.  These are competitions hosted by various large high schools, which invite a number of other schools in to display their marching bands’ “field shows”: basically a series of songs played while marching into various formations that look like shapes and figures from above.  Each band is judged on both its musical and visual performance.  The color guard, a small team of girls (usually, at least in those days), performs along with the band, waving colorful flags and banners and doing some choreographed dancing on & around the field.  Think of them as cheerleaders, but more elegant: replace the pom-poms with twirlers and the mini-skirts with more flowy, dress-like outfits (sometimes… there were definitely other schools who pushed the sex-appeal angle much more with their own color guard).

You also have to understand that, unlike a sports team, the band didn’t have locker rooms.  So essentially, the buses were our locker rooms.  We did probably 5 to 10 events in a given season, only one of which was our own self-hosted tournament, so we were on the road a lot — at least, it seemed like a lot to me.  The bus was our changing room for putting on our uniforms, our break area for chatting and hanging out between the performances and the awards, and our celebration circle (or, in worse times, our den of commiseration).  Different people put up varying degrees of protest or privacy — some had to be in the very back with complete coverage and makeshift curtains made from spare shirts or towels, while others were happy to flaunt their undergarments to most of their peers, probably in an effort to tease and woo the opposite sex.  I was somewhere in the middle (as usual); I hid behind the seat-back and kept it quick & subtle, but I also tended to wear a regular tee-shirt underneath the uniform.  The aforementioned Katrina (of my previous chapter) was always around to cast a flirty glance or suggest an extra spray of her favorite cologne to make the stank more bearable.

A small side-note. Our school colors were brown and gold — the Golden Bears — but this made an absolutely horrible color scheme for uniforms. The regular ones were a brown base with gold and white trim, but they never quite got the hue far enough away from ‘shit brown’. The alternate uniforms were a little better, having a white base with gold and brown trim, but of course, they got dirty much faster, so we didn’t wear them as often as I would have liked. I do hope they’ve come to their senses and changed up the color scheme, or at least tweaked the uniforms so that they don’t remind spectators so much of human waste. Thankfully the color guard’s uniform colors were more friendly, being of a teal & fuchsia variety.

Finally, the third key concept is that each band is in a “class”, which is a ranking system based on size (the number of band members). Generally, the larger, and richer, high schools — in our area, Rancho Bernardo, Poway, and a couple others from the wealthy areas of greater San Diego — had the biggest bands and were thus in the highest class, AAA.  We had historically been in AA (just below the top), and had, from what I heard in passing from the seniors, a decent ‘win’ history.  Depending on the size and attendees of a tournament, we could have even defaulted down to the same class as the others involved; i.e. if nobody else was above ‘A’, we’d compete as ‘A’ too.  But a strange thing was happening with our high school: we were growing in number, but not necessarily in skill or in booster dollars.  The class system also hadn’t been updated in a while — basically anything over 150 members was AAA, but those big rich bands I mentioned before tended to be in the 300s.  So unfortunately, we were basically “forced up” into the AAA class by our larger numbers, but we were still way outgunned and out-funded by those that had long held the candles in that high hall.

Now, having said all that, my first year in marching band was one of the most exhilarating, and that’s largely due to our first and only “sweeps” win in one of the first tournaments of the year.  A sweeps win is when your band wins the highest trophies both in its class and in the tournament overall.  Looking back, there must have been a perfect storm of coincidences that led to it.  This was a relatively small tournament; none of those big rich bands attended, and we ended up being the largest one there.  I think it was hosted by Orange something-or-other high school.  The bus ride was a bit longer than most, maybe an hour or so.  Our uniforms were freshly pressed, having not been worn yet that season; and we’d barely finished mastering our show (the music and marching steps/positions, i.e. the choreography).

There was something in the air that night.

We arrived in the late afternoon, not too long before our turn was scheduled. We changed on the buses and lined up to take the field.  It was cool and temperate that evening, not too cold, but not warm enough to cause a sweat.  Perfect marching weather.  The emcee called out, “Tuh-MEC-you-la Valley High!”, and we took the grass.  It was well maintained for a small school; no big potholes or divots, clean and even yard-lines.  Our fearless leader, ‘H’ we called him — short for Mr. Hrbacek (her-ba-check) — took the conductor’s stand, counted it down, and the crisp snap of the snare drums meant it was on.

Our set was a big-band/swing theme, including “Moonlight Serenade” and “Sentimental Journey”.  We’d memorized pages upon pages of marching positions and music for this. Practiced dozens of hours — “sectionals” for an hour after school, those sweaty Saturday mornings, and every chance we could get at a field during class — it felt like hundreds.  Our feet were sure, our instruments were on-key and in-tempo, and we pulled it off, all the way to that final high note and conclusive closing drum beat.

The percussionists were always my favorite, even if I’d never admit it.  They were the driving beat that kept us all going, and the catching energy that fueled our desire to win.  Yeah, the brassy solos and deep booms of the tubas were great — hell, you’ve got to be a ridiculously strong dude (or dudette) to lug one of those bad boys around and march in tempo — but those drums made it all mesh together into something more than the sum of its parts.

So we left the field knowing that we’d given it our all.  Yeah, we weren’t perfect; there were a few missteps and misplaced notes here and there, but we covered them up and soldiered on.  Then we took to the bus-changing-rooms once more, traded our uniforms for our street clothes, and gathered in the bleachers for the award announcements.

This was before the post-millennial days of “everybody’s a winner, everybody deserves a trophy”, but perhaps band culture was a bit ahead of its time, because almost everybody did get some kind of trophy.  That may have been due to the smaller size of this tournament, as I mentioned before.  Anyway, as with most competition awards, they worked their way up from the bottom to the top.  I wasn’t aware of this at the time, which left me quite confused as to why my elder band-mates were cheering progressively louder every time the announcer called some other school’s name.  Obviously (now), it meant that we were toward the top.

The announcer makes his way to the final three awards – best musical performance, best visual performance, and the granddaddy of them all, “the tournament award”.  He calls the first.  “Best Musical Performance… Temecula Valley High!”  Loud but muffled cheers from our band as the director and seniors try to shush everybody.  “Best Visual Performance… Temecula Valley High!”  Louder cheers from our mates as they struggle to contain themselves.  “And the Tournament Award goes to… Temecul–”

We erupt with elation before he can even finish the word.  Hoots and hollers, whoops and whistles.  Our director walks up to humbly accept the giant trophy, which I’m sure looked a lot bigger to us back then than it really was.  The stands empty of the competing bands as we make our way back to the buses.  The air is absolutely electric; high-fives and kudos abound, even among the flautists and the rest of the woodwinds, who are, for those of you unfamiliar with band sub-cliques, the quietest and most reserved of the bunch.  As we settle into our seats and prepare for the drive home, from a boom-box in the back of the bus come those timeless strains of Brian May’s guitar and Freddie Mercury’s piercing vocals.  “Weeeee.. are the chaaaampions, my friennnd. Nooo time for looosers, cuz weee are the chaaampions… of the Woooooorld.”  The adults try to quiet us down, but this kind of celebration isn’t so easily subdued.  A few of the seniors try to explain that we got lucky, that we did ok but we mostly won because we outclassed the other bands.  And we knew, in the back of our minds, that it wasn’t always going to be this way; that jocks would still laugh at us and popularity queens would still snub us; that we’d be coming back on Monday to loads of schoolwork, and to the pressures and insecurities that go with high school life — particularly if you’re a band geek.

But damn if we weren’t gods in that moment.

And then, as the saying goes, it was all downhill from there.  That’s not quite fair, I suppose.  Heck, maybe I don’t give the old coot (H, that is) enough credit; perhaps he carefully planned this strategy of giving us an easy win to hit us with a taste of that sweet drug of victory, so that we’d stick around and keep trying harder, week after week, year after year, to replicate it.  Friggin’ brilliant, perhaps.  It never quite happened, as I said; we were hopelessly outclassed by those infamous high-society bands with their own logo-painted trailers and catered meals and mysteriously shiny pristine instruments that never seemed to fade.  Those top 3 award spots that I mentioned, well – let’s just say we got real tired of hearing the name “Rancho Bernardo”.  Over, and over, and over again.

The tournament that we hosted ourselves came towards the end of the season.  It was a nice break from the competition because, even though we had to perform – twice – we weren’t being judged.  So it gave those rock-star trumpet players time to show off their solo bits in a less subtle way.  In the first performance of the day, Jared actually popped out of line formation and did a half-kneel toward the crowd as he belted out those four crisp high notes – but in doing so, he flubbed just a bit, and he got crap for it later from H. and Mark.  Thus, at the night performance, he stayed in position, but absolutely nailed those notes, complete with a little trill-up and doo-wah.  There were a lot of bands here, more than almost any tournament we’d been to, it seemed.  I wondered why, but I’d come to realize later, after learning a bit of regional geography, that we were a convenient mid-way location between Orange and San Diego counties, so it made sense that those bigger schools wanted to come battle each other on the marching field without driving over 2 hours to either one’s hometown.

As the rest of the school years dragged on, I would always look back fondly at that first exhilarating victory.  There was nothing quite like it.  Along with the occasional cleavage-peek on the bus, the weeks of pizza and coke on the road, and that Saturday-morning navel-gazing at practice, it was enough to get me hooked for four solid seasons.  I even convinced my parents to buy me a letterman’s jacket with the band letter in junior year.  But the biggest adventures were yet to come.

T-SQL Tuesday #102: Giving Back

This month hosted by Mr. Major (scribnasium / @RileyMajor), whose past #tsql2sday post I may have missed but I love it all the same (BeyondCompare FTW!).  It’s about giving back to your community — mostly technical, i.e. the #SQLFamily / MS Data Platform community, but it can be about any community you’re a part of.

For starters, that’s what we blog for, at least partly.  I mostly blog about things I’ve learned or found interesting in my work, so that I don’t forget them later!  Of course, I’m always happy to learn that it helps someone else out there with a similar problem.  But there’s so much more to a community ecosystem than that.

SQL Saturday

SQL Saturdays are the perfect example of this.  I’ve been to 3 so far – 2 at Orange County, and 1 at San Diego.  I have to call out OC’s organizers, Ted Stathakis & .. the other guy.  Crap, I forgot his name.  He’s awesome though.  Srsly.  I talked with Ted a bit after a session at the last one and he truly is as nice as he is busy – 3 boys, all kinds of volunteer work, AND a full time job as the BI Director at Del Taco!  Wow!

I mean, it’s no Taco Bell, but still, Del Taco!  (Sorry, I had to..)  =P

I really want to volunteer to help at one of these.  I’m not sure what I’d do, but I know when the time comes, it’ll be worth it.

Lunch & Learns

To get my feet wet, I hosted a couple “lunch & learns” at my company with my Business Analyst and Development teams.  We talked about some basics, ranted about formatting a bit, tried to say goodbye to old habits from the SQL 2005 days (hello, date type!), and even dived into a few juicy things like performance-testing with IO stats and why you should never use scalar or multi-statement table-valued functions.  We also had a couple heart-to-hearts about DevOps and what it means in our environment.  (Hint: we’ve got a LOOONG way to go.)

At some point I’m going to scrub my slides and post them on SlideShare or something.  I just have to remove any company-specific info first.  ;o)

As with most teaching efforts, it helped me learn (and re-learn) some things myself!  I had to read some documentation to get wording exactly right, and I built a few playground DBs on my local machine to try out different concepts & examples.  Plus, it forced me to justify WHY I held certain opinions or notions on things, and in doing so, realize my own mistakes of the past.  Thus, I became a better DBA just by reinforcing some good practices and updating my own assumptions.

[Image: the-more-you-know] Yay learning!

Generosity

If there’s one thing I’ve learned from the many tributes to our late #SQLFamily member SQLSoldier, it’s the importance of being generous with your time.  Whether that means answering the #sqlhelp tag on Twitter, participating in the SQL Community Slack, answering StackOverflow / DBA.StackExchange questions, or just taking a few moments to help someone at your workplace with a tech problem — it all makes a difference.  I need to be better about this, as far as my “online presence” goes.  In person, I’m always eager to help, but because I work remotely most of the time (yay!), it’s a bit of a moving target — the in-office days get packed with meetings and critical face-time (not FaceTime), so the peer-to-peer stuff is often postponed or relegated to Slack.

However, there’s a flip-side to this.  In being generous, we can’t forget to prioritize ourselves and our families.  I’m always being asked “Why are you working so much!?” — well, I could try to explain the difference between ‘work work’ and ‘tech community involvement’ and ‘self-betterment / career planning’ but… yeah.

Sorry, honey, be done in a sec!  =)

Anyway..

I encourage you, dear reader, to give back to your own communities, in whatever form that takes — tech/online, real-life, etc.  I promise that it will benefit you in some new & unexpected way.  Plus, whenever you make a real solid connection with someone, that’s worth its weight in gold.

[Image: “it’s not what you know, it’s who you know”] ORLY? Yes, RLY!

A SQL “Whodunnit” Trigger

Triggers aren’t bad, if used for the right reasons.  Here we look at an “audit trail” use case.

Inspired by a brief conversation in the #CodingBlocks community Slack: a short discussion and example of a “whodunnit” (“who done it”, a colloquialism for a murder-mystery type thing) trigger, to find out what user is doing deletions against a certain table.

[Image: the cast of CSI: Las Vegas] The original, or nothing.

The Background Check

Let’s name our hypothetical database CSI.  In it, we have a table, dbo.Victims, where it seems like data is being deleted at random times.  As we all know, this is impossible — computers never do anything truly at random, much less RDBMSes.

Insert witty counter-example here.  You know you have one.

So we want to find out who’s doing these deletions.  One DBA says, “Hey I got an idea… Let’s put an after delete trigger on the table!”  Another DBA says “I abhor triggers; let’s log sp_WhoIsActive every 5 seconds to try to catch the suspect ‘in-the-act’.”

Both approaches have their merits, and neither is that uncommon.  However, the latter is much more regularly blogged about, so I’m going to present the former, because it kinda helped remind me of a few things that I hadn’t used in a while.  I’d also argue that the latter is much less of a “guaranteed capture”, since you’re gambling pretty liberally on the fact that the delete transaction will even last that long; it’s statistically more likely that you’ll miss it.

The Setup

Here’s a SQL snippet that shows a basic after delete trigger created on our dbo.Victims table.  Notice the use of the special Deleted table reference — this is a “temporary, memory-resident” table according to the Docs, and it holds all the records that were/are-about-to-be deleted from the target table.
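The full version lives in the GitHub repo mentioned in “The Proof” below; here’s a minimal sketch of the same idea, assuming an aud.Evidence table whose exact column list is my own invention (the real one may differ):

-- Assumes the aud schema already exists:  CREATE SCHEMA aud;
-- Hypothetical audit table; these column names are illustrative only.
CREATE TABLE aud.Evidence
(
    EvidenceID    int IDENTITY(1,1) PRIMARY KEY,
    DeletedAtUtc  datetime2(3) NOT NULL DEFAULT (SYSUTCDATETIME()),
    DbUser        sysname NOT NULL,   -- database-context user
    ServerLogin   sysname NOT NULL,   -- server-context login
    OriginalLogin sysname NOT NULL,   -- login before any impersonation
    VictimID      int NULL            -- hypothetical key of the deleted row
);
GO

CREATE TRIGGER dbo.trg_Victims_AuditDelete
ON dbo.Victims
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- The Deleted pseudo-table holds the row(s) being removed.
    INSERT INTO aud.Evidence (DbUser, ServerLogin, OriginalLogin, VictimID)
    SELECT USER_NAME(), SUSER_SNAME(), ORIGINAL_LOGIN(), d.VictimID
    FROM Deleted AS d;
END;
GO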

I feel like it used to be called a “temporal table”, but that term now refers to a new feature in SQL Server 2016, where SQL keeps a hidden history-tracking copy of your table that you can reference like a time-machine; which, incidentally, almost* negates the need for such things as these triggers we’re talking about, but that’s another topic for another time.

*(The ‘almost’ is because temporal tables don’t tell you “WHO”, which is our primary motivator here.)

The interesting bits are how we identify our suspect, our ‘killer’ if you will.  See, we not only want to know who they are in the database context, we also (and likely, more importantly) want to know who they are at the server level context.  And just in case they’re impersonating another login, we want to check that too.

So we actually have a lot of options here.  There’s CURRENT_USER or USER_NAME(), for the DB context user.  Then we have SUSER_SNAME(), SUSER_NAME(), SYSTEM_USER, and ORIGINAL_LOGIN() for the server context.  If you’re curious, you could also get things like @@SPID (server session id), SUSER_ID() (server login id), and SESSION_USER (database session user).
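If you just want to eyeball these side by side, a quick sanity-check query (safe to run in any query window) does the trick:

-- Compare the various "who am I" functions in one shot.
SELECT  CURRENT_USER     AS db_context_user,   -- same as USER_NAME()
        SUSER_SNAME()    AS server_login,
        ORIGINAL_LOGIN() AS original_login,    -- survives EXECUTE AS impersonation
        SESSION_USER     AS db_session_user,
        @@SPID           AS session_id;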

ORIGINAL_LOGIN() may be the most potentially interesting, especially if we want to write our trigger with elevated (impersonated) permissions to be able to write to the logging table that we’ve set up to capture its detective-work.  I did not need it for this example, but it’s worth keeping in mind.

[Image: “why don’t you take a seat over there”] We’ve been watching you…

The Sting

So we’ve got our evidence table, we’ve got our detective trigger, now we just need a suspect.  Thankfully we can test it out first, to make sure our operation will succeed when the real perp comes along.  We can do this, of course, by impersonation.  Or by using different SSMS query-windows with different logins — your choice.

Our faux-suspect’s login name is DummySuspect.  We map him to the db_datawriter and db_datareader roles in our CSI database — we know the real perp at least has write permission on the table dbo.Victims, otherwise he/she wouldn’t be able to delete those poor victim rows!  And we’re probably the db_owner, which is fine.  Let’s call our own login SergeantX.

Now we can pretend to be DummySuspect and execute a DELETE against CSI.dbo.Victims , and make sure it writes to our auditing table, which we called  aud.Evidence.
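Roughly, the test looks like this; the key value and column in the WHERE clause are placeholders, so use whichever row you seeded:

-- Pretend to be the suspect, commit the "crime", then drop the disguise.
EXECUTE AS LOGIN = 'DummySuspect';
    DELETE FROM CSI.dbo.Victims
    WHERE VictimID = 1;   -- hypothetical key value
REVERT;

-- Did the trigger catch it?
SELECT * FROM CSI.aud.Evidence;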

Yes, in practice, we’d probably want to put our Evidence table in a separate database, to really ensure those pesky Suspects can’t update it or delete from it, i.e. “cover their tracks” — here, I’ve settled for simply using a different schema, to keep the example workable.  Otherwise we’d have to deal with cross-DB permissions and such, which goes beyond the scope of one little blog post.

Ready?  Let’s try it!

The Proof

Go on over to the GitHub repo and check out the code.  There are 3 easy steps – ‘Step 1’, create the objects, including a new DB to house them called CSI.  You’ll see the trigger in there as well.  Then you can try ‘Step 2’, where I impersonate DummySuspect and delete a row from Victims, and then check the Evidence log when done.  And finally, ‘Step 3’ is a similar test, but assumes that you’ve actually connected that SSMS window/tab/query AS DummySuspect instead of impersonating him (or her!).  After you’ve done that, check out aud.Evidence again to make sure it logged the 2nd delete.

And there you have it.  A simple example of how to write and test an after delete trigger that writes the action info to a separate auditing table for investigation.

Hope you enjoyed!  Leave a comment here or on GitHub; I welcome all feedback.

One more thing…

Apparently I’ve been fork‘d!  Someone on GitHub liked my take on the Nested Set Model so much that they decided to pull it into their own library and have a play with it.  Yay!!  😀   Whoever you are, thank you and good luck building with it.  Enjoy!

[Image: someone forked my SQL Nested Set Model on GitHub] Mr. Murray, thanks and have fun!

Favorite TSQL Tuesday #101 Posts

 

Since I didn’t even come close to making it in time for this month’s T-SQL Tuesday, I figured I’d highlight my 5 favorite posts from the community, and then share a few of my own tips/tools.

I use a Central Management Server too, and although I don’t often use it to run a query against multiple instances, it’s definitely a handy built-in feature to take advantage of.  A minor downside is that it only supports Windows Authentication (not SQL logins), so I can’t use it for my AWS RDS instances, nor for the CMS server itself — those I have to keep stored in my local “Registered Servers” section.  Another tool for running queries against multiple instances, with a good deal more flexibility, is Red Gate’s MultiScript, though it’s not free.  ;o)

Ethervane Echo, a clipboard manager and history-remember-er, is similar to something I use called Clipboard Fusion — in fact, it might even be better.  And who doesn’t love dbatools and dbachecks?  If you’re not using them yet, don’t wait; start getting into PowerShell today by at least trying out some of the ‘get’ cmdlets from dbatools.

Telegraf looks absolutely stunning, as a monitoring system.  It does take some setup work and some maintenance, but it’d be a great branch-out learning opportunity to touch a few different technologies that a traditional SQL DBA might not normally think of.  Hats off to the people behind that, and may it continue to grow.

Leave it to Bert to go “outside the box” a bit, with these tools that help you be a better presenter and collaborator.  I use BeyondCompare, which is similar to WinMerge (tho, again, not free); I’ve fallen in love with its features that go beyond file diff/merge, but it’s nice to have a free option whenever I’m not on my main machine.

This is a broad sweeping post but it captures a LOT of what the community is and should be.  We’re inclusive, we want people to participate, grow & learn from each other, and ultimately advance their careers.  Tons of useful gems in here, from the Slack workspace to the event links to the networking advice.  Excellent stuff; go read it.

Honorable mention:

The SQL DB Modeler beta looks really interesting as an alternative to traditional big-$$$ tools like ER/Studio & Erwin.  If only I wasn’t stuck in brown-field legacy data models 95% of the time… =D

And finally, although they’ve probably been mentioned a few times already, pastetheplan and statisticsparser are two amazingly simple tools from the Brent Ozar folks that make sharing and comparing query performance so much easier.  My M.O. is to use PasteThePlan links in a dba.stackexchange post so that others can easily see the graphical execution-plan to offer feedback; while I use StatisticsParser to compare between A/B-testing runs of a stored-proc I’m trying to refactor & improve.

Where are all the Women?

This is partly inspired by the recent #StackOverflowPodcast episode in which Jon Skeet and some of the Stack Overflow women talk about the feminist movement and what it means to them.  It’s also partly because my wife is out of the house, and she wanted me to do some painting while she was away.  And well… paint has to dry.

[Image: man watching paint dry] It’s so exciting!

So I’m not going to even “go there” in terms of the movement in general, where I stand, or anything really deep, because it’s a huge iceberg of a topic and I can’t do anywhere near justice to it.  Even the discussion above was pretty basic & high-level, but they give lots of links in the show-notes to truly deep-dive into, if you feel like it.

One burning question, essentially, boils down to this:

Why is there still a striking disproportion of females in the Data Professionals career space (and, more broadly, the IT/Dev space in general)?

And like so many questions I ask here, I won’t actually have an answer for you (that would be too easy, wouldn’t it?).  Instead, I’ll simply reflect on my own (very limited) experience in working with women in tech, and hopefully get your thoughts in the comments as well!

Early Colleagues

In a very small software shop, there were actually no women for a time.  But shortly before I was hired, they brought on a husband & wife team – two developers who were excellent at what they did, and had, surprisingly, broken the stereotype of “spouses can never work together”.  She was so easy to work with, and I think my experience here helped me at least start to understand some of the nuances around “women in tech” and what it all meant.  Basically, she wanted to be treated just like “one of the guys” — which, I understand now, is, in itself, an anti-feminist phrase, but back then I wouldn’t have known — and it reflects the culture of the time, which is that this was a team of mostly male developers and we were still “finding our way” on the long trail of equality in the workplace.

So what this meant, in practical terms, was a couple things:

  • No bias for job-assignments of things like documentation, task management, or communication-centric tasks.  While yes, she was actually quite good at these, and would later become the de-facto PM and Scrum-master for us, it was understood (and probably stated) that this was not “because she was female”, this was because she was the best at it.  But again, is that specifically because she’s a woman?  I don’t think so.
  • Addressing the group as “you guys” was perfectly acceptable.
  • Pay was equal – at least between the equivalent roles & seniority levels (e.g. her & her spouse).  You don’t typically share or discuss salaries among peers, but we knew, and our bookkeeper ensured, that this was true.  Because if it wasn’t, someone would’ve had some words about it.

Also, there were a few positive aspects of the culture that helped make it more equality-apparent, which I think were just byproducts of the quality of people hired.  We didn’t do “dirty jokes” or have sexist (even unintentionally) discussions, nor did we engage in gossip or any kind of “just the guys” activities.  We really just did the work and kept it professional, and any time we were outside the office together, it was almost always shop-talk.  I think that’s the nature of a startup — you really don’t have time for anything else, any of the “fluff” or crud that spawns from idle hands & minds.

But it wasn’t all roses & sunshine.

[Image: It’s Not All Roses and Sunshine Right Now, But I’m Working On It]

A New Female Developer Candidate

After that dev moved on, we knew we had to replace her.  And the company workload was pivoting a bit, so our candidate criteria weren’t the same as those of her position.  But putting it that way makes it sound like we were either specifically looking for someone different, or that we had moved somebody else into her position and now had a completely different role to fill.  Neither is the case, really; with a startup that’s organically growing and shifting, you don’t get the luxury of well-defined roles.  You basically need what the business and the team needs at the time, and that becomes your reality, until it’s not, and your team pivots again to fill the new mold, learning & growing along the way.

So anyway, we were hiring for a somewhat nebulous developer position.  And one of the candidates we saw was female.  We did not end up hiring her — unfortunately, in my opinion.  That’s not to say the candidate we did hire was bad; he was great too, that’s just not relevant here.  After her interview, the discussions we had were interesting.  And I think it would have been greatly beneficial if the previous dev (the woman I talked about above & who had left recently) could have been present to offer her insight into the hiring process; but she, understandably, was not available & already busy with her new stuff.

This new candidate had a good deal of “embedded systems programming” background, which was interesting because it was not at all what our software was about, but in hindsight, probably could have proved valuable in making our SDLC processes leaner & more efficient.  She also had great general tech skills and was a quick learner.  But ultimately the reasons not to hire came down to the dissimilarity of background vs our product, AND her personality as we perceived it — in a word, she was “nervous” and “not confident”.

This is a big failure, in terms of equality/feminism.

And as I said, this is all purely hindsight.  None of us realized at the time what we actually meant.  But that’s no excuse, just history.  So let’s unpack that a bit.  Or, in other words…

[Image: duh] Ya think?!?

DUH!!

Of course she was nervous!  She was A) in an interview, and B) surrounded by men.  It’s not like we said anything or acted in a way that was actually misogynistic; we’d like to think we’d learned how to be open & equality-centric enough that anybody would feel welcome and able to talk about their experience and why they’re a good fit for the job & the company.  We didn’t even have much of a “culture” to speak of — it’s not like we were a big enough team to even have cliques or established norms, we just kinda discussed our work, did the work, collaborated on the work, and went home at the end of the day to our families/friends.  However, in the same breath, we DID have a “culture”, in the sense that we were a small tight-knit team (while in the office) with a set of personalities that, so far, worked very well together; and on a small team, personality-compatibility is important.

Anyway, here’s the crux.  We didn’t even recognize that what we were saying was, underneath, an anti-equality statement:

She should have been more self-confident than the average male candidate, in an interview, in order to meet our expectations.

Now obviously, if you ask the hiring manager (aka owner/CEO/president/founder) of the company and the HR person, they’ll agree (rightfully so) that she was not hired due to the gap in technical experience & the fact that her skills did not fit with what we needed.  And this is true, as I said before; we were doing web-based software in ASP.NET, with a SQL back-end, and none of those were at the top of her skill-set.  So I’m not self-flagellating for us passing on her as a candidate.  (Again, the person we did hire worked out just fine.)

I’m acknowledging, and apologizing for, the fact that we elevated her (completely understandable) personality/disposition to an artificially high importance in our discussion about the candidate.

That, I think, is what an equality-minded leader would try to make sure we avoid next time.  If she had had very similar experience and skills to the next candidate, we should have certainly hired her.  And if I were to retroactively predict the past-future (breaking all kinds of grammatical rules in the process), had she been hired, she’d have “come out of her shell” and gotten along swimmingly with the team & the work.

But again, this is all ancient history by now; it’s not like we can go back and change our decisions.  We just need to be conscious of them and learn from them.

[Image: “much to learn, you still have”] “But I’m trying…” “Don’t make me say the line again.”

To Be Continued…

This is getting quite long (~1400 words), so I’ll break here and save the rest for another post.  In which I’ll actually address the initial question, as well as my current experience related to female colleagues & a larger workplace.  Stay tuned!

TSQL Tuesday #100 – Predictions for 2026

Yeah, so I missed the boat by a few days… okay, a week.  That’s pretty much my M.O.  This month’s T-SQL Tuesday #100 is hosted by the author of sp_WhoIsActive and the creator of T-SQL Tuesday himself, the legendary, the incomparable, Adam Machanic.

[Image: applause, please] You ain’t never had a friend like the SQL blogger community ;D

The Year is 2026

Despite IT’s best efforts to kill the relational database, it’s still alive and kicking.  Sure, it’s mostly in the cloud, and we’ve largely solved the problems of scalability, availability, and “traditional” maintenance, but the DBA still plays a critical role in the IT organization.  He/she is more of an architect and an automator, someone who understands the business and development needs as they relate to data — its storage, availability, security, and performance — and can leverage cohesive data platform technologies to provide those services and meet those needs.  But the fundamental issue of data quality still haunts even the best environments, because at the end of the day, when you rely on a human to enter text into a field, you’re gonna get garbage… er, inconsistency.  Thus, we’re still fighting that fight, if only to appease our “data scientists” and machine-learning models so that they stop whining about it.

SQL Server itself has evolved.  After realizing that it was pretty silly to bolt-on a hacky “graph db” component to what is, at its core, a relational engine, MS broke that off into its own product, “Microsoft GraphDB Server”.  But the good news is, SQL & GraphDB talk to each other seamlessly; in fact all of the data-platform products integrate and inter-operate much more smoothly than 10 years ago.

We finally have a single unified CE (Cardinality Estimator), which is intelligent enough to know which paths/plans to use for a given query, so we don’t need to mess with those awful trace-flags anymore.  Indexes and Statistics are all but self-maintaining; the DBA rarely has to step in and mess with them.  Part of the reason for this is that SQL Server yells at you if you try to make a GUID the clustering-key, or other such nonsense.  =D

Columnstore is everywhere; traditional row-store (b-tree) indexes barely exist.  JSON storage & indexing inside SQL Server is much better, but it’s still preferable to use a document-store DB if you can.  Hierarchical structures (not to be confused with graphs) are easily implemented and supported, without having to resort to old hacky models.  And user-defined functions (all types) perform nearly on-par with stored procedures.

They’ve replaced sp_who and sp_who2 with the code from sp_WhoIsActive, and made SSMS Activity Monitor suck less & actually be semi-useful as a basic first-response monitor.  Profiler was officially killed off, and XEvents has come into general widespread usage — largely because MS finally dedicated some hard-core dev time to improving its GUI & making it much easier to use.  Native Intellisense finally works, and works well, for all but the most obscure/weird things, and is much less chatty in terms of network traffic to/from the server.

And finally.  FINALLY.  Each database has its own TempDB.

[Image: “and there was much rejoicing… yay”] We’d only been asking for it for… 10 years?

T-SQL Tuesday #99: Counting Rows the Less-Hard-Way

We can get our row count, and min & max date values, without ever touching the actual source table!

This month’s invite comes courtesy of Aaron Bertrand (B | T), whose “bad habits” blog series still inspires many an impassioned debate or engaging argument… er, discussion on a regular basis among DBAs & Developers alike.

And yes, I’m taking the easier of the two “dealer’s choice” choices — the SQL-focused one.  (I’m not big on sharing/blogging personal stuff, at least not here; I may one day start another blog for that, or perhaps just occasionally post more #off-topic stuff, but for now you’ll have to be content with my stories of vehicle troubles and the occasional movie-geekery.)

So, without further ado…

By the way, what is ‘ado’ and why should there be no further of it?

[Image: Art Vandelay, importer/exporter] Accidentally apropos on many levels…

Counting Rows in Really Big Tables

Previously touched on here, tables of unusual size (TOUSes) can be tricky.  You don’t want to lock them up for a long period of time, but you often need to gather information about them (such as row count, size, range of values) to perform some kind of operational maintenance with/on them.  In particular, Aaron’s post on “counting rows the hard way” inspired me to look into this a bit more and try to come up with a clever-ish way of finding out some basic “shape of data” info without actually querying (scanning) the table itself.

To start with, it’s actually really simple to get the total row-count from a few system catalog views — Aaron’s already shown you that, so I won’t repeat.  My interest is more in questions like “How many rows match a where-clause?” or “What are the min & max values for thatColumn?”

For this post, I’ll be focusing on a particular kind of table — the “history” or “transaction” table.  The idea here is that you have a record of “every time some event happens in/to some entity”.  A very common example is audit-trail tables, which I’ve been dealing a lot with lately.  Another common example is a “transaction history” table, such as, in our new favorite MSSQL demo database WideWorldImporters, the table Warehouse.StockItemTransactions.  It’s the 2nd largest table in the db at 260-some-thousand rows.  {The largest is a multi-million-row beast that is actually the system-versioned aka temporal table behind a “normal” table; I might build a phase-2 example around this, but not today.}  So, while our queries won’t be super slow, we’ll get enough of an idea of what’s bad & good from measuring our IO stats (with SET STATISTICS IO ON).

TL;DR: The demo script is available here; the headers below correspond to the comment-lines of the same name, but I’ve left enough commentary in the SQL itself to keep the average reader on-track, so feel free to check it out ahead of time.  But do keep reading at some point!  :o)

A. Gathering Some Intel

First up, we have good ol’ sys.sp_spaceused.  This gives you some sizing info about the table, including its row count.  There’s a “disclaimer” circulating around out there that it’s not “up-to-the-millisecond accurate”, i.e. it might not have the most current row count if someone else is in the middle of an insert operation or whatnot.  But for all intents & purposes, you can consider it truth.
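For reference, it’s a one-liner:

-- Row count plus reserved/data/index sizes, straight from the metadata.
EXEC sys.sp_spaceused N'Warehouse.StockItemTransactions';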

Then you have the “hard ways” that people typically use — and that Aaron, again, covered just fine in his post on the subject, so I won’t spend any more time there.

But what if I want to count rows matching a where clause (a predicate)?  And in dealing with a typical history/transaction table, the predicate is almost always “between such and such dates”.  (Of course we won’t literally use the between operator, as we have been chastised severely; we know best to use >= and < instead!)  Also, I want to know the MIN and MAX of said dates in the table.  Lord knows we’re gonna be doing some table-scanning.
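For the record, the naive table-touching versions of those two questions would look something like this (the date values are just for illustration, and the column itself gets introduced properly in the next section):

SET STATISTICS IO ON;

-- "How many rows fall in a date range?"  (>= and <, not BETWEEN)
SELECT COUNT(*) AS RowsInRange
FROM   Warehouse.StockItemTransactions
WHERE  TransactionOccurredWhen >= '2015-01-01'
  AND  TransactionOccurredWhen <  '2016-01-01';

-- "What are the min & max values of that column?"
SELECT MIN(TransactionOccurredWhen) AS MinDate,
       MAX(TransactionOccurredWhen) AS MaxDate
FROM   Warehouse.StockItemTransactions;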

B. Ok, Let’s Try an Index

In their benevolent wisdom, the SQL deities decided not to give us an index on WideWorldImporters.Warehouse.StockItemTransactions.TransactionOccurredWhen.

BTW, how’s that for a verbose column name?  What, TransactionDate not good enough?  I suppose it is datetime2 after all, but still…

So we create an index on it, to see if that helps our poor “count where dates” query.  And behold, it does!  We’ve cut our # of logical reads down by about 90% (from 1900 to 200, if you’re following along in the script).  That’s fantastic, but… we can do better.  Because if the table is, say, 500 million rows instead of 260k, that’s about 400,000 logical reads, which… could definitely suck.
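The index itself is nothing fancy; something along these lines (the name is my own choice, the script’s may differ):

-- Narrow nonclustered index on the date column, enough to cover the count/min/max queries.
CREATE NONCLUSTERED INDEX IX_StockItemTransactions_TransactionOccurredWhen
    ON Warehouse.StockItemTransactions (TransactionOccurredWhen);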

C. The Better Way

Again, the script has an ode to Aaron’s query on sys.partitions/tables to get the row-count from the meta-data.  Then the real fun begins.

There’s a system DMV (or more precisely a ‘DMF’ – dynamic management function) called sys.dm_db_stats_histogram, which takes the table’s object_id and the stats_id (which, for index statistics, matches the index_id) as arguments.  It gives you, obviously enough, the statistics histogram of the statistics object corresponding to that index.  We want to store its output in a temp-table (or even a real table — go nuts!) so we can query it some more.

--For example, if our new index's statistics landed at stats_id 7:
SELECT * FROM sys.dm_db_stats_histogram(OBJECT_ID('Warehouse.StockItemTransactions'), 7);

So we create our #StatsHist table (“hist” being an abbreviation for “histogram”, not “history”, though in retrospect that’s probably not worth the possible confusion), and we populate it with the meta-data from Warehouse.StockItemTransactions and its new index that we just created (on TransactionOccurredWhen).  Poof!  We have an easy way of showing min/max values in that column!  Well… almost.  We have to convert the variant datatype to an understandable & aggregate-able (probably a made-up word.. aggregable? aggregatable?) type.  So we add a column range_hk_proper of type datetime2 and populate it with the converted values of range_high_key from the stats-output.
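Here’s the gist of it; the stats_id of 7 matches the earlier example (look yours up in sys.stats), and for brevity I’m converting range_high_key inline rather than adding the column after the fact like the script does:

-- Pull the histogram for the new index's statistics into a temp table.
SELECT  OBJECT_SCHEMA_NAME(h.object_id)      AS schema_name,
        OBJECT_NAME(h.object_id)             AS table_name,
        h.step_number,
        h.range_high_key,
        CONVERT(datetime2, h.range_high_key) AS range_hk_proper,  -- sql_variant -> datetime2
        h.range_rows,
        h.equal_rows
INTO    #StatsHist
FROM    sys.dm_db_stats_histogram(OBJECT_ID('Warehouse.StockItemTransactions'), 7) AS h;

-- Min, max, and approximate total row count, without touching the base table.
SELECT  MIN(range_hk_proper)         AS MinDate,
        MAX(range_hk_proper)         AS MaxDate,
        SUM(range_rows + equal_rows) AS ApproxRowCount
FROM    #StatsHist;

-- Approximate "count where date-range", again from the histogram alone.
SELECT  SUM(range_rows + equal_rows) AS ApproxRowsInRange
FROM    #StatsHist
WHERE   range_hk_proper >= '2015-01-01'
  AND   range_hk_proper <  '2016-01-01';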

There!  Now we’re cookin’ with gas.  Our min/max/count query, and our “count where date-range” query, run in mere milliseconds, without ever touching the actual source table.  So we don’t lock it up or block anybody else from writing to it, even in the most pessimistic isolation levels.

Except when you created that index we needed on the date column.

Yes, I know.  What we’re hoping is that the tables we deal with in the “real world” already have such an index that we can take advantage of.  If not, well, that’s what maintenance windows are for.  And you better believe you’re gonna need that index sooner or later.

[Image: cooking with mustard gas] Been a while since I used a Family Guy meme…

Where To Next?

Conceivably, this whole thing could be turned into a stored-proc so you could run it “on demand” for any table that has a date or datetime column you want such information about.  It’d have to do a lot of error-checking, of course — it wouldn’t work if you don’t have such a column, or if there’s no index on it, and there are probably a myriad of other ‘gotchas’ that I’m not thinking of at the moment.  But I did try to lay the groundwork for improvement: #StatsHist stores schema & table name too, so if you felt like turning it into a mini-data-warehouse holding a BUNCH of stat-histograms for a whole mess of tables, you could definitely do that.  And then you could run some basic analytics on it — min/max/avg, counts by year/month/day, etc.

Sounds like fun, no?  ;o)