Dirty Laundry

It’s time for a more thought-y, less tech-y post.  Which is mostly my excuse for not wanting to write a bunch of code at the moment.  But that’s how I started this blog, with mostly opinion pieces, trying to offer some critical thinking on how DBAs and Developers work together.  So y’all better like it!

Today’s title is brought to you by Don Henley’s tune of the same name, which is now stuck in my head, thankyouverymuch.

dirty laundry goes in a basket not in a database
Paint.net is my friend… =D

This is about data quality.  “Dirty data”, just like dirty laundry, starts to smell when you let it sit unattended.  In software, that means the “badness” seeps into other areas of the environment, affecting systems and business processes that should otherwise function smoothly.

code smell is a surface indication that usually corresponds to a deeper problem in the system.

-Martin Fowler

And, more aptly:

Data quality is corporate America’s dirty little secret.

-Paul Gillin

But what is dirty data?  Generally, it’s anything that doesn’t quite fit the ideal data model — that perfect vision of how all the bits of information in the system fit together, the shape of each data entity and how they relate to each other.  Mostly, dirty data is what happens when you allow users to type things into text-boxes, and you write those text-box contents straight into the database without any layers of validation or cleansing.  (Coincidentally, that’s also how SQL injection happens, but most of us have been scared-straight by enough years of security bloggers hammering at our thick skulls — and our favorite XKCD — that we at least sanitize our inputs before dumping them to an INSERT statement.)

Let me take a recent example from my experience.  We have an ERP system that doubles as our CRM system (which is already a pair of bad idea jeans).  How do you think customer information gets into the database?  Customer Service Reps, typing stuff.  Usually by copying from a paper form.  Or the customers themselves, using an online form.  But guess what doesn’t happen in either case?  If you said “USPS address validation”, give yourself a hand!

joker give yourself a clap
Oh goooood for youuuuuu…. </Christian Bale>

Now, since this system is our “source of truth” for customer info, it stands to reason that lots of other business functions & processes depend on it.  For example, let’s say we send a promotional calendar to our customers of a certain “subscription level” on a yearly basis.  We’re not in the publishing business, so we contract this out to another company.  But guess what they need from us in order to complete the job and mail out those calendars?  Addresses!  So what happens when there’s a bad address in our database?  A calendar gets returned: wasted cost and materials.  Multiply that by a couple thousand and you start to turn a few heads in the C-suite.

Later, around the Marketing table, someone has a brilliant idea that they need to run a mail-merge to send out a gift-package to the top 100 customers.  So they ask the DBA for a list of said customers.  “Sure!  Here ya go, here’s a report.”  And then the complaints start coming in.

“These customers aren’t active anymore.”

Then tell your CS reps to mark them as inactive in the system.  But no, we don’t do that, we just write “inactive” in the FirstName field.

“These ones are employees.”

Fine, figure out a special indicator to add for that, so I can exclude them from the report.  But no, of course, we can’t do that either; we just put “deactivated” in the FirstName field.

“This guy is dead.”

Yeah, not even kidding.  Apparently the powers-that-be decided to keep his info in the system, but typed “deceased” into the “Address 2” line (in the US, this is customarily the apartment/suite/unit number).

he's dead jim
Let’s beam him back up but write “deceased” on his badge, that’ll be sufficient.
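The sad part is how easy this stuff is to find once you go looking.  Here’s a minimal sketch of the kind of smell-test query I mean (table and column names are hypothetical; adjust for your own schema):

-- Hypothetical schema: sniff out status words stuffed into name/address fields.
SELECT CustomerID, FirstName, LastName, Address2
FROM dbo.Customer
WHERE LOWER(FirstName) IN ('inactive', 'deactivated', 'deceased')
   OR LOWER(Address2) = 'deceased';

Every row that comes back is a business rule somebody implemented in a text-box.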

But mostly, the biggest complaint is that we get undeliverable/return-to-sender mail when we try shipping out to some of these addresses.  And why?  Because they’re not subject to any external validation or quality control.

So what’s the data professional’s responsibility in this?  In my opinion, it’s to advocate for data quality.  There are obviously big vendors out there like Melissa Data who will sell you a service to help get you there.  APIs abound, from USPS and other official sources, so building it isn’t out of the question.

One potential roadblock is, as usual, conservatism.  The business’s ERP system is its life-blood, highly sensitive to change and closely guarded by over-protective management and finicky executives.  But the smelly dirty data-laundry continues to cause problems and has real-money impacts on corporate efficiency and profit.  Unfortunately, many people tend to take the ostrich approach.

if you bury your head in the sand your ass will get burnt
No idea who this Bennett person is, but they sound smart.

So, my good people, start “doing your laundry”.  Have those conversations with your teams and managers about the current state of your data quality, and what it’s going to look like moving forward.  Make some plans, have a road-map, and understand that it’s going to involve a lot of collaboration between key players.  And good luck!


Stupid Defaults


This week’s topic was triggered by an amazing quote from Adam Machanic (b|t) in the sqlcommunity Slack group, where he was trying to help someone troubleshoot a slow linked-server query:

That default, I consider to have been checked into the SQL Server codebase directly by Satan.

satan-costume-guy-laughing
I imagine he looked something like this… Satan, not Adam.  =D

He’s referring, in this case, to the default option of remote proc transaction promotion, or “Enable Promotion of Distributed Transactions for RPC” in the GUI, which is set to True by default.  Admittedly, linked-servers are a dicey area of the technology and not everybody needs them, but when you do need to deal with them, there are some caveats to consider and a few potholes to avoid.

I won’t go into gory details, but the mile-high gist of it is that you should probably change a couple of the default linked-server settings when you’re going from MSSQL to MSSQL (which, most of the time, you are): rpc and rpc out both to true, and the above dude to false.  The reasons behind that are subtle, but it boils down to usage patterns:  If you’re purely running basic SELECT statements against the remote server, you can leave this stuff alone; but if you want to do cool stuff like exec MyLinkedServer.RemoteDB.sys.sp_executesql '--some awesome dynamic-sql', you’ll want to change these.  (That last bit comes in especially handy if you’re building a dynamic query string, then running it against the remote server to bring in the results to a #temptable to further massage/mangle/munge said data.)

Even though you probably really shouldn’t be doing that in the database (that’s what web server farms are for!).

stack of colorful rings with arrow pointing up
push it up the stack!
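If you’d rather script those changes than click through the GUI, they boil down to three sp_serveroption calls.  A minimal sketch, with a hypothetical linked-server name:

EXEC sp_serveroption @server = N'MyLinkedServer', @optname = 'rpc', @optvalue = 'true';
EXEC sp_serveroption @server = N'MyLinkedServer', @optname = 'rpc out', @optvalue = 'true';
-- and the Satanic default itself, off:
EXEC sp_serveroption @server = N'MyLinkedServer', @optname = 'remote proc transaction promotion', @optvalue = 'false';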

So, what are some other “stupid defaults” in SQL Server?

Every big enterprise product has them.  I just happen to pick on SQL because it’s my area of expertise.  And it’s not even just “defaults”; there are some options which shouldn’t even be a thing — they should be completely and irrevocably removed from the product.  Yet, because the RDBMS tech space is infamously conservative and slow-to-change (the accepted euphemism is “mature”), these options and defaults have remained in the product despite our best attempts to convince MS that they’re heinous.

1. Parallelism settings (server/instance level)

Your servers have many-core CPUs, right?  And you want SQL to utilize those cores to the best of its ability, distributing the many users’ workloads fairly amongst them, yes?  Damn right, you paid $3k or more per core in freaking licensing costs!  “OK”, says SQL Server, “I’ll use all available CPUs for any query with a ‘cost’ over ‘5’”.  (To give context here, in case you’re not aware, ‘5’ is a LOW number; most OLTP workload queries are in the double to triple digits.)  “But wait!”, you protest, “I have more than 1 user, obviously, and I don’t want their horrible queries bringing all CPUs to their knees and forcing the 50 other user queries to wait their turn!”

well... tough shit
-SQL Server

Hardly seems fair, no?  But those are the defaults, have been for over a decade, despite how much hardware has evolved since.  Just for completeness, I am talking about the following 2 options, which you can view by right-clicking on the instance in SSMS, or by running sys.sp_configure.  They are max degree of parallelism and cost threshold for parallelism, respectively (# cores to use, and how expensive the query should be to “go parallel” — as opposed to serial/single-threaded, which is perfectly fine for those itty-bitty-teeny-tiny queries that are so quick that 1 CPU core is more than enough to handle them).  We commonly abbreviate the former as maxDOP; the latter is less commonly abbreviated, but because I’m a sucker for acronyms, I like to call it CTFP, or maybe costFP if I’m feeling verbose.

Now obviously you can, and should, change these settings.  “To what?” you ask.  Well, the answer, as always, my friend, is “It Depends ©®™”… a perennial favorite of DBAs and consultants alike.  But don’t ask me — there are plenty of people much smarter than I with blog posts on the topic.  If you put a gun to my head, for CTFP, I’d say “pick your favorite number between 50 and 100, start there, and test to see if it makes your server CPUs happy or sad”.  And for maxDOP I’d say “divide your # of CPU cores by 2 (if you have 8 or fewer) or 4 (if you have 16 or more)”.

And if you have somewhere between 9 and 15 CPU cores, don’t ask me, because you’re running some.. interesting hardware.  Figure it out yourself, and get that gun out of my face!

OK, I know 12 cores is probably not an unreasonable config, so.. extrapolating my logic above, divide by.. 3?  Yeah let’s go with that.
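For the scripting-inclined, both knobs live in sp_configure.  A sketch using the starting values suggested above, on a hypothetical 16-core box:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;  -- pick 50-100, then test
EXEC sys.sp_configure 'max degree of parallelism', 4;        -- 16 cores / 4
RECONFIGURE;

Both are dynamic options, so they take effect on the next query without a service restart.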

squirrel with "ship it" flag
ship it good

2. Auto Close (database option)

It’s at the top of the list in the GUI under Database Properties -> Options.  Yet nobody knows why you would ever enable it.  I once heard a community member theorize that it might have been put into the product back when it was considered a viable “local persistence option for Windows CE/Mobile apps” and it would help w/ resource management on said mobile device by freeing up resources when the DB wasn’t in-use.  Well, we all know how well that product line did in the market (hint: poorly).  There are so many better options for localized data stores in mobile dev, MS isn’t even a blip in the conversation.  (Again, talking local data persistence.)

If we’re talking cloud, MS is a big part of that conversation — Azure is amazing, and a solid competitor to AWS & GCP.

Anyway, if you ever find a SQL DB with the auto_close option enabled, find the person responsible, and slap them with a trout.
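Finding (and fixing) the offenders is a one-liner apiece.  A sketch, with a hypothetical database name:

SELECT name FROM sys.databases WHERE is_auto_close_on = 1;  -- find the trout-slappable
ALTER DATABASE [NaughtyDB] SET AUTO_CLOSE OFF;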

stick figure chasing another with purple trout
I will find you, and I will trout-slap you…

3. Server Max Memory

Last one for today.  This is something that’s not completely heinous, but could use a LOT more built-in intelligence during the installation process so that DBAs & SysAdmins didn’t need to think about it so much.  SQL Server will, by default, set its max-memory to some-odd-billion MBs (technically the max value of a 32-bit int, 2,147,483,647, which the more geeky among you have probably memorized), which is of course some-odd-million GBs, which is more than even the most bleeding-edge servers have to date.  Which is fine in theory — you paid a crap-ton of money for this system, it might as well use up all the RAM that it can to perform to its potential, right?

all your ram are belong to us
in AD 2101…

Right.  Until you realize that “Oh wait, it’s running inside an OS” (whether that’s Windows or Linux, thanks to the 2016 & 2017 product versions) — and that OS needs some RAM too, to keep itself running!  (Cue the Linux zealots with their “it’s so much more lightweight than Windoze, haha!!1” — yeah, well you still need some memory space, don’a ya?)

Here’s what I’d like to see, in my ideal world.  During SQL Server installation, it would detect how much RAM is on the system, subtract about 10% or 4-8 GB, leave that for the OS, and use the resulting number as its limit.  Boom, done, nobody has to think about configuring it and checking off another checkbox on their setup checklist.
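Until the installer wises up, we do it ourselves.  A sketch for a hypothetical box with 64 GB of RAM, leaving 8 GB for the OS:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 57344;  -- (64 - 8) GB x 1024 MB
RECONFIGURE;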

But noooo… The vaunted MSSQL engineers can build all sorts of amazing things into the product like QueryStore, Adaptive Query Processing, and The Artist Formerly Known as Hekaton, but heaven forbid we get a little more intelligence in the installer.  It got a lot better with 2016, when it let you configure tempDB reasonably correctly (multiple files, different locations, etc.), but there’s still a LOT that could use some lurv.

two kitties cuddling
because KITTEHS!!

Do you have a favorite “stupid default” or “horrible setting” related to SQL Server or any other technology that you work with?  Share in the comments!

DBA Holy Wars Part 2

Battle 4: GUIDs vs Identities

This is an oldie but goody.  A) Developers want their apps to manage the record identifiers, but DBAs want the database to do it.  B) Developers prefer abstracting the identity values out of sight/mind, DBAs know that occasionally (despite your best efforts to avoid it) your eyeballs will have to look at those values and visually connect them with their foreign key relationships while troubleshooting some obscure bug.

but-wait-theres-more-billy-mays
there’s ALWAYS more…

But there’s more to it than that.  See, none of those arguments really matter, because there are easy answers to those problems.  The real core issue lies with the lazy acceptance of GUI/designer defaults, instead of using a bit of brainpower to make a purposeful decision about your Primary Key and your Clustered Index.

Now wait a minute Mr. DBA, aren’t those the same thing?

NO!  That’s where this problem comes from!

A good Clustered Index is: narrow (fewer bytes), unique (or at least, highly selective), static (not subject to updates), and ever-increasing (or decreasing, if you really want).  NUSE, as some writers have acronym’d it.  A GUID fails criteria ‘N’ and ‘E’.  However, that’s not to say a GUID isn’t a fine Primary Key!  See, your PK really only needs to be ‘U’; and to a lesser extent, ‘S’.  See how those don’t overlap each other?  So sure, use those GUIDs, make them your PK.  Just don’t let your tool automagically also make that your CX (Clustered indeX).  Spend a few minutes making a conscious effort to pick a different column (or couple columns) that meet more of these requirements.

For example, a datetime column that indicates the age of each record.  Chances are, you’re using this column in most of your queries on this table anyway, so clustering on it will speed those up.
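In T-SQL terms, that conscious effort looks something like this (a sketch; table and column names hypothetical):

CREATE TABLE dbo.Orders (
    OrderID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED  -- the GUID is the PK, but NOT the clustered index
        CONSTRAINT DF_Orders_OrderID DEFAULT NEWID(),
    CreatedAt DATETIME2(3) NOT NULL
        CONSTRAINT DF_Orders_CreatedAt DEFAULT SYSDATETIME()
);
-- cluster on the narrow, static, ever-increasing column instead:
CREATE CLUSTERED INDEX CX_Orders_CreatedAt ON dbo.Orders (CreatedAt);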

Most of the time, though, if your data model is reasonably normalized and you’re indexing your foreign keys (because you should!), your PKs & CX’s will be the same.  There’s nothing wrong with that.  Just be mindful of the trade-offs.

Battle 5: CSV vs TAB

bluray-vs-hddvd-fight
Who doesn’t love a good format-war?

Often, we have to deal with data from outside sources that gets exchanged via “flat files”, i.e. text files that represent a single monolithic table of data.  Each line is a row, and within each line, each string between each delimiting character is a column value.  So the question is, which is easier to deal with as that delimiter: comma, or tab?

String data values often have commas in them, so usually, the file also needs a “quoting character”, i.e. something that surrounds the string values so that the reader/interpreter of the file knows that anything found inside those quotes is all one value, regardless of any commas found within it.
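For example, in a hypothetical two-line .csv, the quotes are what keep the embedded commas from splitting one value into several:

Name,Address,City
"Smith, John","123 Main St, Apt 4",Springfield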

But tabs are bigger.. aren’t they?  No, they’re still just 1 byte (or 2, in Unicode).  So that’s a non-argument.  Compatibility?  Every program that can read and automatically parse a .csv can just as easily do so with a .tab, even if Windows Explorer’s file icon & default-program handler would lead you to believe otherwise.

I recently encountered an issue with BCP (a SQL command-line utility for bulk copying data into/out of SQL Server) where the csv was just being a pain in the arse.  I tried a tab and all was well!  I’m sure it was partially my fault, but regardless, it was the path of least resistance.
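For what it’s worth, T-SQL’s BULK INSERT is equally happy with a tab-delimited file.  A sketch, with hypothetical table and path names:

BULK INSERT dbo.CustomerStaging
FROM 'C:\data\incoming\customers.tab'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2);  -- skip the header row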

Battle 6: Designers vs Scripting

no-wizard-allowed
Wizards are usually good, but in this case, they’re lazy and bad for you…

This should be a no-brainer.  There is absolutely no excuse for using the table designer or any other wizardy GUIs for database design and maintenance, unless you’re just learning the ropes.  And even then, instead of pressing ‘OK’, use the ‘Script’ option to let SSMS generate a `tsql` script to perform whatever actions you just clicked-thru.  Now yes, admittedly those generated scripts are rarely a shining example of clean code, but they get the job done, even with some unnecessary filler and fluff.  Learn the critical bits and try to write the script yourself next time – and sure, use the GUI-to-script to double-check your work, if you still need to.

Confession: I still use the GUI to create new SQL Agent Jobs. It’s not that I don’t know how to script it, it’s just that there are so many non-intuitive parameters to those msdb system-sp’s that I usually have to look them up, thereby spending the time I would have otherwise saved.
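For the record, here’s roughly the shape of the scripted version (every name below is hypothetical, and yes, I still have to look up half of these parameters):

USE msdb;
EXEC dbo.sp_add_job @job_name = N'Nightly ETL';
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly ETL',
    @step_name = N'Run the ETL proc',
    @subsystem = N'TSQL',
    @database_name = N'MyAppDB',
    @command = N'EXEC dbo.usp_NightlyEtl;';
EXEC dbo.sp_add_jobserver @job_name = N'Nightly ETL', @server_name = N'(local)';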

Bonus round: the pronunciation of “Data”

its-data-not-data
Call me “big Data” one more time…

Dah-tuh, or Day-tuh?  Or, for the 3 people in the world who can actually read those ridiculous pronunciation glyphs, /ˈdeɪtə/ or /ˈdætə/ ?  It’s a question as old as the industry itself… or maybe not.  Anecdotally, it seems like most data professionals, and people in related industries, tend to say “day-tuh”; while those in the media and generally less technical communities tend to say “dah-tuh”.  (Where the first syllable is the same vowel-sound as in “dad” or “cat”.)  This likely means that the latter is more popular, but the former is more industrially accepted.

In either case, it doesn’t really matter, because at the end of the day, we’re talking about the same thing.  So if some dogmatic DBA or pedantic PHB tries to correct your pronunciation, tell ’em to stop being so persnickety and get on with the task at hand!

Until next time…

DBA Holy Wars

On a lighter note than usual, I thought it was time I weighed in on some of the long-standing “programmer holy wars”, but with a little DBA-twist (like a twist of lime, only less delicious).  Like any good holy war, this will be full of posturing, pontificating, and political correctness.  And I probably won’t even commit to a particular side on some issues.  But hey, isn’t that the point?

Battle 1: Tabs vs. Spaces

tabs-vs-spaces-run-away
OH SHI…
Text editors and IDEs have long been mature enough to handle “smart tabs” and preference-based tab size.  However, you will occasionally have to copy-paste code into a non-code-oriented environment, such as an email or a document, where of course the tab size is based on inches rather than spaces in a monospace font.  I will admit in those rare instances, tabs are annoying.  But what is more annoying is the inconsistency you can get when spaces are used incorrectly, especially in the midst of lines in a sad attempt to do some kind of vertical alignment.  Plus, if you happen to have a different spacing-size preference than the original code author, you’re now battling that visual discrepancy as you read & maintain said code.

So I prefer tabs.  But I won’t fight my team on it if everybody else prefers spaces — that’s what those settings in the editor/IDE are there for!  I will happily conform with the best of them.  A quick Google says I’m in the minority anyway — which I’m OK with.

Battle 2: The Case for Casing

capslock-cruise-control-for-cool
The original, if somewhat dated.
Certain languages (COBOL, SQL) have a historical bent toward ALLCAPS for their keywords and language constructs.  Some argue that this is archaic, outmoded, etc.  I don’t mind it, working primarily with SQL, but in almost all other languages (C#, Python, JavaScript), I think it makes sense to follow the established conventions, and modern conventions never favor caps.  As I transitioned from C# to SQL, I actually wrote my scripts and stored-procs primarily in lower case for the longest time.  And then I came into an environment where RedGate’s SQL Prompt was in heavy use, and since its default “auto-format” settings are in line with the SQL language “standard” (however old and dated it may be), it started YELLING all the keywords at me.. and like most people, I just accepted it, eventually letting it become my own “default” style.  (SQL Prompt is a fantastic tool, don’t get me wrong.  I absolutely love it, but its default formatting settings never agreed with me — then again, nor do anybody else’s, as we already discussed!)

But that’s not really what this battle is usually about.  Most often, it’s about your names, i.e. the identifiers for objects/methods/variables/procedures/APIs/etc. that your team and your developers have to come up with on a constant basis.  And usually it comes down to camelCase, TitleCase (the two are often incorrectly used interchangeably!  TitleCase is apparently better known as PascalCase, which I just learned today, or possibly re-learned after several years), or lower_case_with_underscores (which, in another learning moment, I discovered is named snake_case!  How cool is that?).  Rarely, if ever, do people argue for ALLCAPS in these areas — it just feels.. obnoxious.

caps-lock-not-always-necessary
Yelling doesn’t always get you what you want…
As with any programmer-y topic, you can dive down the rabbit-hole and dissect layer upon layer of nuance in this battle until you’ve lost all semblance of productivity.  Because casing is, in some languages, important; while in others it’s simply convention-based, dependent on the abstraction level or family of things you’re talking about.  For example, C# Class names are TitleCase, and so typically are Methods, while object instances are usually camelCase; public members can be TitleCase or camelCase, and private members can be _underscore_led, or whatever flavors for each that your boiler-plate/template system prefers.  Scoped variableNames are most often camel’d as well, while global constants are typically CAPS_WITH_UNDERSCORES.  And god help you if you ask a team of more than 3 people what their dependency packages’ names should look like.

omg-object-and-casing
Shamelessly borrowed from Adam Prescott’s blog, which you should definitely go read.
So in this battle, I have to play Switzerland.  I’m not vehemently opposed to any particular flavor of casing, finding it best to work within the conventions of the language and tool-set at hand.

Side-battle: Spacing in Names

That said, I can’t stand names/identifiers with actual white space in them, but that’s a somewhat different battle.  Most languages don’t even allow that, but most RDBMSs will happily accept your ridiculous My Cool Database and its resident Silly Tables and Happy Column 1/2/etc. as long as you properly “quote” them (surround them with [square-brackets] or `backticks`, depending on the SQL flavor).  If you submit that kind of nonsense to me, I will find you, and I will slap you with a large trout.
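And yes, it really works, which is the problem.  A sketch in T-SQL’s bracket flavor:

CREATE TABLE [Silly Table] (
    [Happy Column 1] INT,
    [Happy Column 2] INT
);  -- legal, alas; MySQL pulls the same trick with `backticks`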

trout-slap
Particularly offensive names may warrant a double trout slap.

Battle 3: ORM vs Stored-Procs (vs Linq?)

This is that little twist-of-DBA as promised.  I recently read an interesting post related to this topic, and essentially the point was this: Developers have “won” (won what?  I thought we were all on the same side!), the ORM is here to stay, and as DBAs/DBDevs, we (you/I) need to build up our understanding of them so that we A) know them even better than our devs, and B) can troubleshoot performance issues with them.

I think there’s some truth to that, and some necessary context as well.  Ideally, yes, I would be an ORM expert on whatever 1 or 2 specific frameworks my colleagues are using (Entity Framework, most likely), and any time there was a potential performance challenge with an app-to-database call, I’d be able to parachute in and sprinkle some magic dust and make it all better.  But I’m also the one DBA (out of approx. 1.3 total), serving 4 teams of 3-6 devs each, so in the immortal words of meme-dom:

Ain’t nobody got time for that!

nobody-got-time-old-fashioned
because sometimes old-fashioned things are funny too…
Now I’m not making excuses.  All I’m saying is, the burden of understanding is on more than just one team member or job-role.  If your dev team is adopting an ORM, said devs need to learn how it works too — at least enough to help with basic performance troubleshooting.  Even if it’s just the ability to extract, from a debug session, the actual T-SQL code that’s being sent to the server, and give me a sample query to analyze for performance bottlenecks.
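And when the debug-session route isn’t handy, the DBA side can grab the same thing off the wire.  A minimal Extended Events sketch (session and app names hypothetical):

CREATE EVENT SESSION [CaptureOrmSql] ON SERVER
ADD EVENT sqlserver.rpc_completed (  -- ORM queries mostly arrive as sp_executesql RPC calls
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    WHERE (sqlserver.client_app_name = N'MyApp'))
ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION [CaptureOrmSql] ON SERVER STATE = START;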

Let’s step back a bit.  It’s all about using the right tool for the job, yes?  ORMs are meant for basic CRUD operations and simple data access patterns, right?  So why try to build complex business logic into them?  Because, like it or not, teams do build complex business logic into the data layer — despite our protests and soapbox sermons to not do it.  And because the vast majority of applications we’re dealing with are not greenfield.  Furthermore, ORMs tend to work best when the data model is well-defined, or the database is modeled well (well-modeled?).  And again, we don’t all get to work with unicorns in utopia.

Put it this way: If you want an efficient, performant module of data-layer business-logic against your SQL database, it’s likely going to be a stored procedure carefully crafted by a DBA/DBDev.  Could you achieve the same results from the app layer, using Linq and/or some mix of ORM and code?  Probably.  Do you have the time and patience to do so?  Maybe not.

i-have-no-strong-feelings-neutral
If I don’t survive this… tell my wife, “hello”.
So once again, I’m Switzerland.  Well, preferably a more pragmatic version — what country would that be?  Norway?  Anyway.  Use the methodology that’s the best compromise between “right tool for the job”, “optimized developer productivity”, and “easiest to troubleshoot”.  It’s a tough call, but that’s why we get paid.

Until next time!

SQL Server for the Developer

And now for a brief interlude while I work on my next “real” post… continuing with the same theme… SQL Server for the developer/programmer!  Warning: potentially excessive snark ahead.

caution-sarcasm-ahead

SQL Server is a mystical fairy-tale land where you can store all your application data, and never have to worry about it again!  Sure, it “should” be relational, but you also “should” drive the speed limit.. PSHH!  Let’s throw blobs in there too — HTML, XML, hell, even JSON now that 2016’s got nice happy JSON-centric system functions.  And files?  Heck yes, what else is FILESTREAM for?  Sure, performance may be abysmal, but that’s somebody else’s problem, so why should we care?

It’s the only database platform your company invested in, so it must be good for everything.  Document storage?  Sure, not like there’s anything else better suited for the job, especially for ‘Free’!  Ooh, how about key-value data, and volatile temporal data like .NET Session-State?  Obviously!  Nothing else exists, and Microsoft makes it so simple to set it up in SQL.

sarcasm-sign-everybody-needs-one
Here’s your sign…
It’s that wonderful database system where we send all our Entity Framework queries and data just magically appears!  We don’t need to worry about how nasty and convoluted those queries turn out when they finally get parsed down to actual TSQL code.  It was only 3 nice simple lines of LINQ, why should it be any more complex than that?  Oh, and we can write all sorts of abhorrent ad-hoc queries against it too, nobody will even notice.  Let that code build all sorts of IN SELECTs & sub-queries, chains of unreadable JOINs, and Inception-level nested VIEWs, it’s perfectly fine!  Performance, shmerformance!  That’s what we have DBAs for.  Not that they’ll be able to help you when you refuse to refactor or rewrite your code that’s responsible for these abominable queries.

what has been seen cannot be unseen
Your terrible queries are scaring the dog…
SQL Server is so easy!  Microsoft gives me a free Developer Edition, I can code for all these awesome features that I know we’ll have in production, right?  Oh, we only run Standard Edition?  Woops.  Ooh, I know!  Let’s put everything in Azure.  It can do everything and make us breakfast!  Then we won’t need to worry about feature differences, backups, RTO/RPOs, index maintenance, performance tuning, scaling, or even performance in general!  Wait, no?  Oh well, that’s what we have DBAs for!  Besides, it’s so easy to move everything over, we just take our SQL backups and upload them to blob storage and then restore them on Azure SQL Database, right?  No?

this is why we can't have nice things
Exactly.
Hmm.  Maybe we should hire a DBA.  Or four.

That’s all, see ya next time!  XD

DEV, the SQL

See what I did there?

I have a confession.  I geek out over building & shopping for computers.  There’s barely any money in it, otherwise I’d do it for a living.  But those jobs belong to robots and overseas underpaid factory minions, and that’s probably for the best (at least, the first part).

But yeah, I’m a geek.  I’ll pore over articles and reviews, pick out the components, fill and empty my Amazon or Newegg cart, go back and read more, pick some other components… I love it.  It’s ridiculous.

This happens to be in direct conflict with one of my other passions, frugality.  Yes, I’m cheap.  Thus, usually I end up geeking-out helping other people with computer builds or replacements or whatnot.

So when my boss decided one day, “Hey, let’s go down to Micro Center and see if we can put together a replacement for the SQL Server Development box”, aka ‘SQLDEV‘, I was more than happy to oblige.  Because, you see, he loves computer shopping almost as much as me.  And he also happens to have a fancy corporate credit card.  *cha-ching!*

The current server “feels slow” — it’s a Dell PowerEdge R710 with 24GB RAM, 2×2 (cores x hyperthreading) 3.6 GHz CPU, and an attached storage array of 7.2k SAS/SATA II (3 Gbps) drives.  Shipped mid-2011 — it’s not that bad in the grand scheme of things, but it’s definitely outlived its life as a 3-instance SQL server with terabytes of data.

Here’s what we picked up at Micro Center:

Base system:

  • PowerSpec G403 Desktop – $950
    • Core i7-6700, 4×2 (cores x HT) @ 4.0GHz
    • 16GB DDR4-3200 RAM
    • 512GB SATA 6Gb/s SSD (Sandisk SD8SB8U512G1122)
    • GB LAN, DVD drive, integrated graphics
  • the one big problem I have with this PC is that it only has 4 SATA ports; more on that in a minute..

Add-ons:

  • 32GB DDR4-3200 RAM (for a total 48GB) – $180
  • 3x 2TB Samsung EVO SATA 6Gb/s SSD – $630 x3 = $1890
  • 500GB Samsung EVO M.2 SSD – $180
    • (for TempDB – supposedly even faster than SATA)
  • 5TB Toshiba 7.2k SATA III HDD – $145
    • (for pulling down backups & shuffling files between Production & Dev boxes)

Sub-total: $3345
Total with tax: $3612.60


For those of you keeping score at home, that’s 6.5TB of dedicated SQL space, plus another half TB for boot/OS, plus another 5TB for “slow” storage.

Now, if we start poking around Amazon for used servers in the same class as the R710, we do find some pretty compelling — and much cheaper! — systems.  So did we just spend that money for no good reason?  Well, cosmetic arguments aside, I still say, emphatically, NO.  Here’s why.

  1. This is dev. We don’t need redundancy, HA or DR.  We need one thing: speed.  Well, really two things: space and speed.  And sure, 2TB SSDs aren’t cheap.  But have you ever looked at 3 or 4 TB SSDs?  Holy bejeezus.  What about “more is less” — why not six 1TB SSDs?  Okay; can you find a desktop class motherboard with enough SATA ports?  Sure most of the good ones have 6, but that’s not all we need — we still need space for the OS/boot drive and the backup mechanical drive.  Sure, we can drop in a PCIe-SATA card and get 2-4 more free ports that way.  In fact, I already have to do that because this mobo skimped on ports!  But either way, as it turns out, most of our DB filegroups are near or over 1TB.  And again, without RAID, I’m looking at possibly sharding out data files across 2 drives per SQL instance, which A) doesn’t mimic production (although that’s a pretty weak argument at this point), and B) sounds like more of a headache than I care to deal with over saving a couple hundred bucks.
  2. Peace & quiet.  Servers are loud, power-hogging, heat-generating beasts.  A desktop PC is none of those things.  It’s sitting under my desk right now and I don’t even notice it.  Plus, it’s really easy to set up, tear down, move somewhere else, and set up again.
  3. Did I mention speed?  This thing is blazing fast.  CrystalDiskMark pics to follow.  But again, there’s no redundancy here, no warranties or service agreements to cover these parts or this system in the event of a bad component or data-loss incident.  That’s why you don’t run production (or even QA/UAT) on these types of machines — because parts do break, and when they do, you need redundancy and HA/DR built-in to your important systems.  On Dev, though, we can rebuild it in a day or less.  No problem!

CrystalDiskMark benchmark screenshots: Boot, Temp, and Data drives.

So that’s where my geek flag has been flying this week.  My sales pitch to the boss was based on a post from Glenn Berry (http://www.sqlskills.com/blogs/glenn/building-a-desktop-workstation-for-sql-server-development-and-testing/), so if anybody’s to blame for feeding into the crazy, it’s probably him.  I’m sure he won’t mind!

Like any other business, our IT infrastructure has been slowly getting consolidated, converged, virtualized, and even moved to the cloud in some cases.  So granted, this is a short-term solution.  And because of said consolidations, we definitely do have decommissioned servers that may make a good home for “real” Dev/Test/QA environments very soon.  They’ll be expertly planned, well-managed, viable branches of the infrastructure… when the IT team has time to dedicate to those efforts.  But sometimes, there’s nothing wrong with a little good-old-fashioned geekdom, and having a little side-project that makes you gush like a schoolgirl & keeps you honest.

PS: Yes, the elephant in the room: the cloud option. I get it. Once again, the boss wanted to do it a certain way, and in this case, the cloud was not on the short-list. Next time, buddies!

Header image: our husky Keira at 2 months. No, that’s not a turd next to her mouth, it’s a treat.