Finding Trigger Dependencies

Preamble

In SQL, TRIGGERs are often a subject of contention.  Some data professionals have strong opinions about them, especially on the “they’re bad!” side.  But like anything else, they’re a specific kind of tool that has its place in specific problems.  One of the common scenarios for their usage is what we call an “audit trail” — a running log of any & all changes made to a table or data-set (or even a database schema, as RedGate DLM Dashboard employs).

Yes, there are newer product features built around this very requirement, such as Change Data Capture (CDC) and Temporal Tables, but they require particular versions and editions of the product, and not every environment is going to be there.  So we need something native and compatible with the lowest common (supportable) denominator.

Like most areas of the database environment, triggers are often poorly documented.  So what happens when we need to review all the triggers in a database and show which tables they’re on and which columns they’re “tracking” (sticking with the audit-trail example)?  Or, more generally, we want to find all table & column dependencies of our triggers.  Well sure, we could use the SSMS Object Explorer GUI — find the table, find the triggers, right-click, “View Dependencies”… but that’s not good enough, for two reasons: A) it doesn’t get down to the column level, and B) it’s not script-able.

System catalogs and DMVs to the rescue!

[Image: dmv-line-of-people]
No, not THAT DMV…

Disclaimer: this is heavy on MS Docs links.  There was a StackOverflow Q/A that helped point me in that direction, but sadly I lost the tab and didn’t save the link.

The Goals

In this script, I mostly focus on the UPDATE triggers, because they’re the most heavy-handed in terms of auditing.  With a typical INSERT or DELETE, the triggered audit action is basically “Hey, a row was inserted/deleted, here’s its primary key” — and sure, you can get fancier and log what all the values were in the entire row — but those are fairly straightforward; it’s basically “all or nothing”.  In contrast, with an UPDATE audit, we want to know just what was updated, and from what (old value) to what (new value).
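To make that concrete, here is a bare-bones sketch of the kind of UPDATE audit trigger I mean.  The table and column names (dbo.Customer, dbo.Customer_Audit, Email) are made up for illustration, not from any actual environment:

create trigger TR_Customer_Audit_U
on dbo.Customer
after update
as
begin
    set nocount on;
    -- log only the columns whose values actually changed: old value and new value
    insert into dbo.Customer_Audit (CustomerID, ColumnName, OldValue, NewValue, AuditDate)
    select d.CustomerID, 'Email', d.Email, i.Email, sysdatetime()
    from deleted as d
    join inserted as i
        on i.CustomerID = d.CustomerID
    where isnull(d.Email, '') <> isnull(i.Email, '');
    -- ...repeat a block like the above for each "audited" column
end;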

So the goal of documenting the dependencies is to see which columns we care about — because, inevitably, there are some columns that we either “don’t care about” for auditing, or some that just plain don’t change (by their nature — they’re an identity or a primary key, or controlled via other means).  In turn, this helps us understand how to reproduce these audit mechanisms when a better technology comes along — such as CDC or temporal tables, as alluded to earlier.

[Image: cdc-centers-for-disease-control]
No, not THAT CDC… yeesh.

The Walkthru

We start with the system catalog views sys.tables and sys.triggers.  This establishes our primary relationship — a trigger ‘lives’ on (is attached to) a single table.  Then we immediately step into murky water.  A trigger, like other programmable objects, has a definition script, which you can view in a myriad of ways — Object Explorer, sys.sp_helptext, sys.sql_modules, sys.syscomments (deprecated), or OBJECT_DEFINITION(OBJECT_ID).  So if we really wanted to go spelunking, we could just read the trigger’s definition and see what columns it deals with.  But after about 2 or 3 of those, our eyes are going to glaze over and our fingers will start to twitch.  We need something better, something that can handle hundreds of triggers and output a nice unified result!
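For example, if you did want to go spelunking in bulk, something like this dumps every trigger’s definition alongside its parent table (a throwaway illustration, not the final script):

select
    schema_name(t.schema_id)        as SchemaName,
    t.name                          as TableName,
    tr.name                         as TriggerName,
    object_definition(tr.object_id) as TriggerDefinition
from sys.triggers as tr
join sys.tables   as t
    on t.object_id = tr.parent_id
order by SchemaName, TableName, TriggerName;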

Oh wait, what’s this?  Object Explorer has a “View Dependencies” function… aw, bummer, it doesn’t show columns.  Just tables.  As we know, most of the GUI functionality is driven by existing system meta-data/catalogs/DMVs, so there’s definitely something that keeps track of these dependencies.  Let’s hazard a guess… sys.sp_depends or sys.sql_expression_dependencies?  The former is deprecated, and the latter doesn’t give us a way to get to columns either.  But thankfully, on the docs, the deprecation note points us toward the correct path: sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities.

Now we’re cooking with gas!  So how do we use these suckers?  Well, there are a couple of intermediary joins to go through, and in fact these are functions, not views, so we need to know the proper arguments.  But fear not, I have done the hard work for you!  Check out the code below and let me know what you think.
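The embedded script itself isn’t reproduced here, but the core of it looks something like this (a sketch of the approach, not the exact final query):

select
    schema_name(t.schema_id)  as SchemaName,
    t.name                    as TableName,
    tr.name                   as TriggerName,
    re.referenced_entity_name as ReferencedTable,
    re.referenced_minor_name  as ReferencedColumn
from sys.triggers as tr
join sys.tables   as t
    on t.object_id = tr.parent_id
cross apply sys.dm_sql_referenced_entities(
    schema_name(t.schema_id) + '.' + tr.name, 'OBJECT') as re
where re.referenced_minor_name is not null  -- keep only column-level rows
order by SchemaName, TableName, TriggerName, ReferencedColumn;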

But wait, there’s more!  There’s a slight monkey-wrench here.  You see, in the column dependencies, we’re getting both the “audited” columns and the “key” columns, but we’d like those to be distinguished from each other.  When we document and/or replicate the auditing using newer technology, the primary keys aren’t what we need to track (they’re, in a sense, “given” — known); we need to know which columns are “audit-able” and which aren’t.  Hence the two queries with a UNION ALL.
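The script keeps the two UNION ALL’d queries; an equivalent, more condensed way to sketch the same idea is to flag each referenced column of the trigger’s own table by whether it participates in the primary key:

select
    t.name                   as TableName,
    tr.name                  as TriggerName,
    re.referenced_minor_name as ColumnName,
    case when ic.column_id is not null then 'Key' else 'Audited' end as ColumnRole
from sys.triggers as tr
join sys.tables   as t
    on t.object_id = tr.parent_id
cross apply sys.dm_sql_referenced_entities(
    schema_name(t.schema_id) + '.' + tr.name, 'OBJECT') as re
join sys.columns as c
    on  c.object_id = t.object_id
    and c.name      = re.referenced_minor_name
left join sys.indexes as ix
    on  ix.object_id      = t.object_id
    and ix.is_primary_key = 1
left join sys.index_columns as ic
    on  ic.object_id = ix.object_id
    and ic.index_id  = ix.index_id
    and ic.column_id = c.column_id
where re.referenced_id = t.object_id          -- only the trigger's own table's columns
  and re.referenced_minor_name is not null;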

[Image: union-vs-confederate]
That is ALSO inappropriate. Good lord, who’s choosing these pictures today?

Conclusion

So there you have it, documenting some audit-trail triggers.  Or triggers in general.  Complete with the tables & columns on which they depend.  Enjoy!  ‘Til next time folks.


VS SSRS Project Gotchas

I now present to you a recent adventure in building & configuring an SSRS 2016 / VS2015 project to replace our aging SSRS 2008R2 / VS2008 setup.  In trying to make things ‘better’, I wanted to centralize the storage of the Data Sources.  I found a tip on StackOverflow with a neat (albeit hacky) idea on how to do this.  So I started diving into it.  Here’s how that went.

The Setup

I have a bunch of published reports on a ‘real’ report server (SSRS instance), which were created off-the-cuff using Report Builder 3.0 (no source-control connection or anything).  So the first thing I had to do was go and download each RDL (thanks, Microsoft… ugh!), to my local VS2015 project folder, so I could bring them into my SSRS project.  I didn’t bother trying to download/copy the Data Sources (they’re basically just connection-strings, and anyway you can’t download them from Report Manager), so I settled for re-creating those in VS.

Then it was time to set up the solution & projects.  I prefer to organize my reports in context-relevant folders, like “Operations”, “Marketing”, etc.  In order to do this sensibly, within the VS solution, you need to create a Project for each major folder you’ll have.  Makes sense; this is how the old solution was constructed too.  Here’s where I tried to improve things:  I created a “Datasources” project, which would house just the shared Data Sources.  Here’s where I created my new data-sources, setting them up with saved SQL auth (login & pwd).  Sure, maybe not the “best practice”, but it keeps things simple — my SSRS user only ever has read-only db access, and the pwd is managed well enough to keep auditors happy.

Thus, my plan (in following the SO tip) was to have all the other projects’ Data Sources be pointers to these shared Data Source files (RDS’s).  Sounds good in theory, right?

[Image: what could possibly go wrong]
ooh, sparkly!

Starting to Implement

Well.  I started small with just one project, one report, and one data source.  I went into my Nate test project, did “add existing item” into the Reports folder, and browsed to pick the rdl that I had just downloaded from my live SSRS server.  I then went to my Datasources project, copied the rds that I needed, pasted it into Nate test project’s Shared Data Sources folder.

Now at this point there are 2 copies of this file in my solution folder.  We don’t want that.  So I opened up trusty ol’ Notepad++ on the Nate test.rptproj file and edited the XML under <DataSources><ProjectItem><Name>My-Source.rds</Name><FullPath>My-Source.rds</FullPath></ProjectItem></DataSources>, changing the FullPath node value to <FullPath>..\Datasources\My-Source.rds</FullPath>.  I then deleted the physical copy of the file at \MySolution\Nate test\My-Source.rds, so that only the shared one physically remains in the filesystem (\MySolution\Datasources\My-Source.rds).
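Spelled out as a block (same names as above, from the Nate test example), the edited node ends up looking like this:

<DataSources>
  <ProjectItem>
    <Name>My-Source.rds</Name>
    <FullPath>..\Datasources\My-Source.rds</FullPath>
  </ProjectItem>
</DataSources>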

Another way to accomplish this, you may read, is to right-click Shared Data Sources within your project, e.g. Nate test, say “Add existing item..”, and go select the global shared RDS from your master Datasources project-folder.  However, this still results in a physical copy of the file within the Nate test folder!  So in either case you’ll end up doing some editing & deletion.

With me so far?  Okay, here’s where it gets interesting.  I could successfully build and deploy this report to my SSRS server, verifying that it showed my “new thing” (I added a text-box that simply said “This is the new hotness!”).  But I could not preview the report in VS!  Sad panda.  And of course, the most vague generic error message in the world:

An error occurred during local report processing
An error occurred during report processing

[Image: sarcasm-smiley]
How specific and informative!

More Googling led to a somewhat random tip in the vein of “Go to the report properties, Open the report’s Data Source, and re-select the Shared Data Source from the dropdown”.  Combine with “Edit the Shared Data Source, ensure your saved login/pwd is filled in, and Save it”.  Presto!  Previewing now worked.

But why is this?  Well, as it turns out, it’s not all that surprising.  The RDS files are stored without the saved login/pwd, because of course they’re just plain-text XML files.  So to “protect you”, MSFT doesn’t store your SQL auth credentials, no matter how many times you click “Save my password” on the connection-properties window (connection-string builder).

Great.  Well, I did that once, for the Nate test project.  Let’s see what happens with another project (aka report folder), say Test again.  So I add an existing RDL, I copy-paste the Data Source from the global shared Datasources project, manually edit the rptproj in Notepad++, delete the copied rds, etc.  I then try to preview the report… and behold, the same maddeningly vague error message!

“Well duh!” you may think to yourself, “the credentials still aren’t stored anywhere un-encrypted!”.  Yes, dear reader, you are correct.  But do I want to right-click on this project’s Data Source, edit it, fill in the credentials, save, rinse, repeat, for every single project?!  A resounding NO!

[Image: nobody-got-time-old-fashioned]
ANGTFT

So what should I do?  I could go type in the credentials to the actual connection-strings contained within the ‘master’ rds files.  Of course then they’re in plain-text for all to see… but wait, do I care?  Another no.  Let’s go edit some connection-strings!

As a reminder, here’s what they will look like:

<ConnectString>Data Source=MyServer;Initial Catalog=MyReportDB;User ID=MyReportUser;Password=ThisIs@nAw3s0meP@ssw0rd!;</ConnectString>

Why did MSFT choose to name the XML node ConnectString in their rds schema instead of ConnectionString?  Don’t ask me!

The Gotchas

Here’s the first gotcha:  my reports (RDLs) need to be re-pointed at the correct Shared Data Source.  As downloaded from my ‘real’ SSRS server — which had my desired top-level folder layout of “Datasources”, “Nate test”, “Marketing”, etc. — their Data Sources XML node points at \Datasources\MyDatasource.  That was the correct path on the SSRS server and still will be the correct path when all is said & done.  BUT, to support local debugging/previewing, the build output has to make a copy of the rds in the project’s bin\Debug folder, which it will happily do, and then just as happily proclaim Could not find a part of the path 'C:\Users\Nate\Documents\Visual Studio 2015\Projects\MyReportSolution\Nate test\bin\Debug\Datasources\MyDatasource.rdl'. — because DUH, the build system didn’t make you a ‘Datasources’ sub-folder under the Debug folder, why would it!?  So you can fix this either by manually editing the rdl file to remove the \Datasources\ path from the node, or by clicking on the report’s Data Sources node in the Report Data pane and re-pointing it at the local Shared Data Source (which, again, is itself a pointer to the globally shared Datasources project!).

In the rdl XML, that looks like this (the \Datasources\ prefix in the DataSourceReference path is the part you remove):

<DataSources>
  <DataSource Name="MyReport_DataSource">
    <!-- remove the "\Datasources\" prefix from the reference below -->
    <DataSourceReference>\Datasources\MyDatasource</DataSourceReference>
    <rd:SecurityType>None</rd:SecurityType>
    <rd:DataSourceID>gobbledegook-guid</rd:DataSourceID>
  </DataSource>
</DataSources>

And there’s another gotcha:  after we do all this, if we open the rds from within VS Solution Explorer, it still won’t “think” it saved the authentication!  But the connection-string will be clearly visible with the plain password.  Here are a couple of pictures of what that looks like.

[Image: shared-datasource-properties-ssrs-vs2015]
Data Source properties
[Image: datasource-connection-properties-ssrs-vs2015]
Connection properties of the Data Source

Oh, by the way, did I mention how many times VS crashed on me while doing these gymnastics?  At least a few.

Lessons Learned

So, what have we learned?  Well, for one, this is a crappy situation born of poor in-product support.  I should be able to configure Solution-level shared Data Sources, use them in as many Projects (within said Solution) as I want, and have VS configuration management support them; bonus points for doing so with saved & encrypted credentials.  Ideally, when we check this into source-control, we’d check in the “DEV” environment flavor connection-configs.  Then, when the reports get deployed to the “PROD” SSRS server, the same globally shared Data Sources are already present (and they don’t get over-written, thankfully by default!), configured by the DBA with prod credentials, and nobody in the development pipeline needs to know said credentials.  Yay?

But alas.  We live in an imperfect world.  I also live in a world where I’m the Report Developer, the Report Manager, and the Production DBA, all in one.  So guess what I get to do?  That’s right: not care about the connection-string security!

Okay, yes, I care a little.  Depending on my mood.  But not enough to turn my whole dang world upside-down trying to design and implement a robust multi-tiered-environment solution for one measly SSRS repository.  Kudos to you if you do — you’re a better man than I.

Or “better woman”.  But I’m not a woman.  So I guess the correct phrasing would be “You’re a better woman than I am a man.”  No, that’s not right.  “You’re a better person than I am?”  Sure, I guess that works.

In Review

To recap:

  1. Create the solution & projects
  2. Create the global shared Datasources project, and create your Data Sources (RDS‘s)
  3. Import your Reports (RDL‘s) to each requisite report-project
  4. Manually edit your rptproj to point the data-sources at the central shared path
  5. Manually edit your RDL‘s to remove previously established paths to the data-sources
  6. Configure your rptproj‘s to output to the correct folders on the report server — this should be done by default, i.e. VS has the “intelligence” to guess these for you, but just double-check.
  7. Build, Preview, and Deploy!
  8. Commit it all to source-control and let the collaboration begin.

And now to go automate all this with PowerShell… right after this beer.

Anyway.  Until next time, folks!  If I’ve made any mistakes or missteps, or otherwise offended your technical sensibilities, and/or if you know of a better way to accomplish what I’m after, I’d love to hear from you in the comments!  =)

TSQL Tuesday 95: Big Data

This month’s party brought to you by Mr. Hammer (b|t).

[Image: mc-hammer]
No, not THAT one…

I apologize in advance for all the hammertime memes.  It was just too good to pass up.  Surely he must be used to this.  Or at least not surprised by it.  =D

So, Big Data.  What is it?  Well, in simple terms, it’s the realization and acceptance of the fact that data is multi-model, multi-faceted, multi-sourced, and constantly growing.  It’s the fact that the traditional RDBMS is no longer the be-all end-all source of truth and valuable information.  It’s part of a larger ecosystem involving JSON document stores, CSV files, streaming volatile bits of data coming from random devices and user activity that loses its meaning and potential impact almost as quickly as it can be gathered and sifted and stored.

But what do we actually get out of it?  As a small-medium enterprise NOT in the software business, I have to say, not as much as the hype would have us believe.  And look, I’m not so jaded and crusty that I refuse to adopt new tech.  I just haven’t seen a meaningful, transformative business use-case for it.  Sure, we have Google Analytics telling us how our websites are doing, and someone in marketing knows something about trending our social media traffic.  Does it really help us make more money?  Heck if I know.

[Image: cease thy actions, my timepiece has indicated the necessity of mallets]
Old-timey colonials can even dig it…

Here’s what I’d like to see from the thought leaders.  Give me something I can chew on — a real-world, non-hypothetical, non-frivolous, impactful use-case for adopting and implementing something like Hadoop/Spark or Azure Data Lake.  Show me how my business can realistically journey down the path of predictive analytics and what it’s going to take from our Devs, IT staff, and management to actually get there.

Because they don’t get it yet.  I have managers still worrying about how much we’re spending on a dinky little flash storage array to support the growing needs of our on-prem converged infrastructure stack.  Meanwhile the AWS bill continues to baffle, and Devs want to play with Docker and Lambda.  But we can’t seem to convince the higher-ups that they’re short-staffed on the internal-apps team, even after a minor version upgrade takes 4 hours of Ops time and half a dozen end-users doing post-mortem testing just to be sure we didn’t break anything unexpected.

I’m not here to complain.  Really.  I do want to see something amazing, something inspiring, something that shows me what Big Data truly brings to the table.  And sure, I’ve seen the vendor demos; they’re all just a bit outlandish, no?  I mean, they look really cool, sure — who doesn’t want to see a chord diagram of who’s killed whom in GoT? — but does that really help my business improve sales and productivity?

My point is, there’s a gap.  A chasm of misunderstanding and mis-matched expectations between what management thinks Big Data is/means, and what it takes to actually implement.  They see the pretty pictures and the fancy demos, but they don’t see the toil and sweat (or at least, in the cloud, gobs of cash) that go into building & operating the underpinnings and pipelines that drive those nice graphics.  Not to mention the fundamental issues of data quality and governance.

[Image: continue not, time for hammer it is]
OK OK, last one, I swear…

So do us a favor, Big Data pundits.  Show us something real, something that “the little guy” can use to up his/her game in the market.  Something that makes a positive impact on small non-startup, non-software businesses with understaffed IT & Dev teams.  But more importantly, stop glossing over the effort and resources that it takes to “do Big Data right”.  Managers and executives need to understand that it’s not magic.  And IT practitioners need to understand that it’s actually worthwhile.  Because I believe you — really — that the payoff in the end is there, and is good.  But you need to convince the whole stack.


PS: I know this is a full day late for T-SQL Tuesday, and as such, I wasn’t going to post a ping-back in the comments of the invite, but then I saw there were only 8 others, so I felt it would benefit the event if I did add my late contribution.  I’ll tweet with a modified hash-tag instead of the standard #tsql2sday, to reflect my lateness.  Hopefully that’s a fair compromise to the community & the event’s intentions.  =)

SQL Server Performance Troubleshooting Free Scripts and Tools List

Originally posted on David Peter Hansen:

I don’t normally reblog.  But when I do, it’s something awesome. =D

David Peter Hansen

Back in the days, I used to collect a lot of different scripts, tools, and other goodies for troubleshooting SQL Server performance issues. These days, however, I tend to use what is publicly and freely available (as well as some internal stuff), and keep a list of those in my head.

I’ve meant to write that list down for a while, and today Chrissy asked:

So here it is…

Disclaimer: While I do work as a Premier Field Engineer for Microsoft, this is my list – this is not an official list from my employer. Others likely have a list that differs from mine.

Free scripts and tools from Microsoft
These are scripts and tools provided…


Quickie: SSRS Multi-Value Parameter Defaults

A quick tip for passing default values to an SSRS multi-value parameter during report drill-thru.

Inspired by some StackOverflow-ing and this particular answer.

Aka: “How do I pass/assign a default set of values to a multi-value parameter in SSRS?”

We often have an SSRS report that needs to “drill thru” to another report, usually to go from a “high level view” to a “lower level” or more detailed view, or sometimes just to lead the user down the path we’ve chosen as the head analyst / BI architect.  And part of that report navigation involves initializing (setting to defaults) the value(s) of the sub-report’s parameters, including multi-value parameters (which I’ll now refer to as mvp, even though, I know, it’s a ridiculously overused acronym).  These are basically arrays, but they’re often represented as simple comma-delimited strings, so it’s easy to forget their true nature.

[Image: beware the array in string's clothing]
I see you there!

Let’s fabricate an example.  In our Sales Summary report, we have an element (a textbox, image, placeholder, or whatnot) that, when clicked, should drill-thru to Sales by Person.  Said next report requires a multi-select parameter (another term for the mvp) to have 1 or more values selected, otherwise it doesn’t render.  We’ll call this parameter SelectedNames, with value-label pairings 1=Bob, 2=Alice, 3=Mary.  When we drill-thru to this by-Person report, we want it to initially show (have selected by default) all available people.

So how do we do this?  In the properties of the “clickable” element on Sales Summary, say it’s a text-box named GoToDetails, we go to the Action tab.  We choose the “Go to report” action, select/specify the target report, Sales by Person, and then add the parameters we want to pass to it.  For comparison, I’m going to pass a “regular” (single value) parameter called @ReportDate, as well as the mvp SelectedNames.  Here’s what that all looks like, in picture form.

[Image: text box properties > Actions > Go to report > specify report > add parameters > enter expression]
step by step

The single parameter pass-thru is, as you’d expect, very simple.  But for our mvp, we need to use the expression-builder, that little fx button stylized to look like your old high school math class days.  Here’s what that function looks like:

=Split("1,2,3", ",")

And presto!, we have converted a comma-delimited list into an array to pass into our Sales by Person report’s SelectedNames multi-value parameter.  Now it will initially render with all 3 selected people as desired.

So there you have it, a quick tip for passing default values to an SSRS multi-value parameter during report drill-thru.  But what if you wanted to pass the selected values of one mvp down to another?  Or do some clever on-the-fly mapping (conversion) from one to the next?  Well, stay tuned!  I’ll write about that next time.  =)

Thanks for reading!  For a lot more on SSRS and multi-value parameters, check out these articles: @sqlchick, @mssqltips, and @msdn.

[Image: you the real mvp meme]
Well, not really, see, because MVP is now completely overused and diluted to the point that it’s nearly lost all meaning, so… but yeah, you’re cool. And stuff.

PowerShell and BITS

This is about using PowerShell and the BITS framework to copy very large files across servers…

Welcome back!  This month’s topic is PowerShell — thanks to one of our prominently bearded community members.  PowerShell is a fantastic tool in the IT professional’s toolbelt.  I was first introduced to it somewhere in 2014 or 2015 by a colleague, and started making much heavier use of it when my career took me to a new & bigger environment.

Actually, funny side-story.  I remember seeing one of the very early incarnations of PowerShell, or what would eventually evolve into it, in college.  A graphics programming course, of all things, had a MS partner come in to show us this “cool new” Windows command-shell thing (different and separate from the DOS-style CMD, obviously), where he demonstrated fetching some data from the filesystem, feeding it into a CSV, and doing some kind of super-basic analysis on it to show in a “report” (which was really just another text file).  This was 2005-2006, so I couldn’t say what it was specifically, though I seem to remember something about the word “Longhorn”.  Although, reading up on some of the Wiki-history, it seems more likely that it was a Monad beta.

[Image: that is so four score and seven years ago]
Don’t ask me why he’s wearing horn-rims. I don’t know.

Preamble

Anyway, back on topic.  Today’s post is pretty simplistic in comparison to what most people may be writing about.  But I’ve already blogged about doing hands-off SQL installation with PowerShell & CLI, and this was another thing kicking-around the back of my mind.  So this is about using PowerShell and the BITS framework (*-BitsTransfer cmdlets) to copy very large files across servers.  Specifically, database backups.  Because let’s face it, they can be really large.  And if you’re faced with fetching them off a PROD box, you want to minimize the impact on that box’s resources.

Now sure, there are other ways – xcopy or robocopy with the /J flag (un-buffered IO), or fancy GUI tools.  And in an ideal world your backups would be written to a network share that’s not a local drive on the PROD SQL server, right?  Right…

Oh, and one more thing.  You need to enable BITS via the Windows Features console — search “features” in your Start menu and it should come up as Turn Windows features on or off (Control Panel).  On a server, it’s under the Server Role “Web Server (IIS)”, feature “Background Intelligent Transfer Service (BITS)”.  Underneath there are 2 sub-feature choices, “IIS Server Extension” and “Compact Server”.  Honestly I don’t know which is preferable, but I left it with the default selection, the first (former).  It should go without saying, but don’t do this in production (unless you have the blessing of your SysAdmins).
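On a server, you can also do this from an elevated PowerShell prompt instead of clicking through Server Manager.  This is just a sketch; the exact feature names can vary by OS version, so check the Get-WindowsFeature output first:

# list the BITS feature and its sub-features, with their install state
Get-WindowsFeature -Name BITS*

# install BITS with the IIS Server Extension sub-feature (the default I left selected)
Install-WindowsFeature -Name BITS, BITS-IIS-Ext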

But Why?

Why BITS?  Well, as per the Docs, it has the following 3 key features (emphasis mine):

  • Asynchronously transfer files in the foreground or background.
  • Preserve the responsiveness of other network applications.
  • Automatically resume file transfers after network disconnects and computer restarts.

Wow, nifty!  So it doesn’t hog the network, and it’s resumable (resume-able?) in case of connectivity hiccups.  Pretty sweet, no?  Also, it can run asynchronously in the background, which means it won’t hog your storage bandwidth or compute resources.

[Image: async all the things!]
Because we can.

Let’s See an Example

Most of the guts and inspiration for this came from this article over on “Windows OS Hub” (woshub, a somewhat unfortunate sounding acronym, but certainly not as bad as some!).  The datePattern nonsense is just to make it “dynamic” in the sense that, if you have a backup scheme like me, with Sunday FULLs, daily DIFFs, and obviously TLogs in some every-X-minutes fashion, you’ll usually want the latest set of FULLs and DIFFs.  But you could easily tweak this, make it more point-in-time aware or whatever, as needed.

So, here’s a bit of a talk-thru outline, and then I’ll just link the Gist.

  1. Get the list of files we want to copy, from “source”
  2. For each file:
    1. Make sure it doesn’t exist in the “destination”
    2. If not, start a BITS transfer job (saving said job to a variable for checking/finishing later)
    3. While said BITS job is pending, print a progress message and sleep for some seconds
    4. Finish (“complete”) said job and move on to the next file
  3. Conclude with a message about how much work we just did!
  4. Repeat steps 1-3 for another “set of files” (list) if desired

And without further ado, the code.
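The original Gist isn’t embedded here, so below is a minimal sketch of the outline above.  The paths, share name, and the simplistic date filter are placeholders standing in for the datePattern logic; adjust to taste:

# source & destination are illustrative placeholders
$source      = '\\ProdSqlBox\Backups'
$destination = 'D:\Restores'
$datePattern = (Get-Date).ToString('yyyyMMdd')   # crude stand-in for the FULL/DIFF date logic

# 1. get the list of files we want to copy from the source
$files  = Get-ChildItem -Path $source -Filter "*$datePattern*.bak"
$copied = 0

foreach ($file in $files) {
    $target = Join-Path $destination $file.Name

    # 2.1 skip it if it already exists in the destination
    if (Test-Path $target) { continue }

    # 2.2 start an async BITS transfer job and hold on to it
    $job = Start-BitsTransfer -Source $file.FullName -Destination $target `
                              -Asynchronous -DisplayName "Copy $($file.Name)"

    # 2.3 while the job is pending, print progress and sleep for some seconds
    while ($job.JobState -in 'Queued','Connecting','Transferring') {
        if ($job.BytesTotal -gt 0) {
            $pct = [math]::Round(100 * $job.BytesTransferred / $job.BytesTotal)
            Write-Host "$($file.Name): $pct% transferred..."
        }
        Start-Sleep -Seconds 10
    }

    # 2.4 finish ("complete") the job so the file is committed to disk
    if ($job.JobState -eq 'Transferred') {
        Complete-BitsTransfer -BitsJob $job
        $copied++
    }
    else {
        Write-Warning "$($file.Name) ended in state $($job.JobState)"
    }
}

# 3. conclude with a message about how much work we just did
Write-Host "Copied $copied file(s) from $source to $destination."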

The Catch

There are some downsides here.  First, you cannot use BITS in a non-interactive mode, i.e. inside a Scheduled Task as a User that’s not logged-in.  This is because it’s a “desktop”-oriented feature, not a “server” one.  Second, I’ve never been able to get multiple transfers going at once — or at least, multiple PoSh scripts which use BITS transfers.  This could very well be my fault, but it does seem like the BITS jobs are “serial” in nature, i.e. one must finish before the next one can start.  Again, not the expert, just observing what I found during my experiments.

[Image: parallel vs serial ports on old computer]
Obviously, serial won out in the end (specifically, his superstar protégé, USB), but you gotta hand it to parallel, he had a good run.

Conclusion

BITS transfer is an interesting method for copying extra-large files around your environment with low overhead.  PowerShell makes it easily accessible and lets you wrap it up in loops and checks so you can effectively build a progress-indicative, predictable and reproducible method for copying a whole SQL server’s set of backups from one place to another.

What cool little things have you discovered using PowerShell?  Let me know!  Thanks for reading.

Quickie: TempDB on local SSD

What could possibly go wrong? As it turns out, plenty.

Faithful reader(s), it’s been a while!  I’ve been busy preparing for some big transitions.  I’m also getting better at MDX queries, tweaking SSAS-based reports to more accurately reflect the business rules.  But enough about that, on with the post!

In which we doubt the SAN

A storage area network (SAN) is a management & administration solution, not a performance solution.

-someone wiser than me

SANs are wonderful technology.  They inspire all kinds of geekery and are purported to solve all your storage woes.  But there’s a catch: they’re expensive.  Not just as a capital expense, but in maintenance and licensing costs.  And if you ever want to upgrade it, like add some more drives to a particular tier/pool — fuhgeddaboudit.

So what do we do with SQL on a SAN?  Well, it has tiers, right?  Slower storage with huge capacity, faster storage with less, etc.  We put the data files (heavy random read workload, typically) on the pool optimized for that kind of I/O pattern.  We put the TLog files (heavy sequential write workload) on the pool best suited for that.  And what about good ol’ TempDB?  Its access pattern is fairly unique — random writes and reads, and frequent overwrites, which means it could potentially wear out your typical prosumer SSD relatively quickly.  But we’re not complete cheapskates, we’ll buy enterprise class SSDs, no?

So we go read some stuff and figure, hey, sounds like a great idea, right?  Put TempDB on a local SSD, or better yet, a pair of SSDs in RAID-0 for pure performance (because this is a cluster, we’ve got HA already).  We’ll reduce the load on the SAN I/O channels and make our overworked TempDB happier with lower latency and better throughput.  Right?

[Image: what could possibly go wrong]
ooh, sparkly!

 

In which we discover what could possibly go wrong.

Once the new drive(s) is(are) installed and “presented” to Windows (that’s my SysAdmin’s term), it’s fairly trivial to do the SQL configuration change — it does of course require a SQL service restart (or cluster failover).  Code example, assuming your new drive is ‘T’:

use master;

alter database tempdb
    modify file (name = tempdev, filename = 'T:\tempdb.mdf');

alter database tempdb
    modify file (name = tempdb2, filename = 'T:\tempdb2.ndf');

--etc. for the rest of the data files (and templog, if you're moving it too);
--the files actually land in the new location on the next service restart / failover.

You do of course have multiple TempDB data files, yes?  Good.

Side-debate

Should we put templog (TempDB’s transaction log) on the same drive as the TempDB data files, or put it on the same storage pool as the regular DBs’ TLogs?  As usual, “it depends” — ask your favorite SQL gurus and do some testing.

Back on topic

We’ve made the change, we’ve done the cluster failover.  TempDB is now running on our spankin’ new SSD.  So we start monitoring performance metrics.  Things like file I/O stats (from SQL DMV sys.dm_io_virtual_file_stats), latency and waits (from our monitoring tools), and good ol’ PerfMon.
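For the curious, the kind of quick-and-dirty check we were running against that DMV looks something like this (a sketch using the usual stall-per-IO math, not our monitoring vendor’s exact queries):

select
    db_name(vfs.database_id) as DatabaseName,
    mf.name                  as FileName,
    mf.physical_name         as PhysicalName,
    vfs.num_of_reads,
    vfs.num_of_writes,
    case when vfs.num_of_reads  = 0 then 0
         else vfs.io_stall_read_ms  / vfs.num_of_reads  end as AvgReadLatencyMs,
    case when vfs.num_of_writes = 0 then 0
         else vfs.io_stall_write_ms / vfs.num_of_writes end as AvgWriteLatencyMs
from sys.dm_io_virtual_file_stats(db_id('tempdb'), null) as vfs
join sys.master_files as mf
    on  mf.database_id = vfs.database_id
    and mf.file_id     = vfs.file_id;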

But wait, what’s this?  I/O stalls are higher?  Write latency is higher?!?  Perfmon agrees?

[Image: nooooooooooooo-vader]
why god why?!?

Write latency on the TempDB files was over 10x higher than it was when they were on the SAN (the performance tier, to be clear).  The file_stats DMV showed large increases in I/O stalls.  Sad-trombone.

 

In which we have several theories

Then ensued various conversations and brainstorms among my colleagues.

Someone check the firmware/drivers!

It’s up-to-date.

Maybe it’s got the wrong block-size.

Nope, 64k.

Well, it’s only 6Gbps SAS… maybe we should’ve sprung for the 12Gbps.

The write latencies went up by a factor of 10.  I don’t think an improvement by a factor of 2 is going to win you any trophies.

Why didn’t we get an NVMe or M.2 one?

Because the damn blades don’t have those slots, goober.

Another interesting observation, and potentially the silver lining.  Overall instance waits (wait stats), according to our monitoring tool, went down.  That’s good news, right?  Maybe.  Does application performance & user experience corroborate it?  Possibly!  We’ll be observing the patient for another week or so.

Let’s turn to the community again to see what others have experienced.

In which we eat some crow

And by “we” I mean “me”.  Being the DBA and the primary proponent of the SSD addition, because I knew our workloads were very TempDB-heavy, I had to hang-tail and admit that the SAN gods won this round.

Maybe.

But wait, what about the fact that our wait stats are down?  What about app/user experience?  Valid arguments, I agree.  That’s why we’re still observing.  But I’m not optimistic, given the follow-up links above.  We may utilize local SSDs for something else (index filegroups?) — but if those write latencies don’t improve, I’m concerned that it won’t help anybody.

[Image: keep calm because only time will tell]
And beer. Lots of beer.

In which I ask for your help

Yes, you!  If you have ideas on what we did wrong, what we’re missing, or any other advice about getting the most “bang for the buck” out of a direct attached SSD on a converged-infrastructure Cisco UCS blade server platform with a VNX SAN, by all means, drop me a line.  I’m all ears.