Because I’m already ridiculously late, I have a short one. This is about orphaned users — you know, when you restore a database and its users aren’t mapped to the server logins that they should be or used to be.
The typical solution is sp_change_users_login with the auto_fix or update_one option. But guess what? Yep, that’s deprecated. By the way, did you know that it also has a report option? Apparently that’s got some bugs…ish? Try it sometime and see — compare the output of sys.sp_helpuser where the ‘LoginName’ column is null, with sp_change_users_login 'report'. Preferably on a DB you’ve restored from another server. 😉
So what’s the correct solution? ALTER USER [theUser] WITH LOGIN = [theLogin]. Simple, no? Let’s get more general. Could we come up with a half-decent way to apply this kind of fix dynamically? Well sure, the nice folks at DBATools have already solved that problem. And that’s great, really. But just in case that doesn’t work… ^_^
One of the many things I love about SQL Prompt is the right-click option to “Script as INSERT” (from the results grid). This is a quick & easy way to build a temp-table for the results of an exec statement so you can do the ol’ insert #tmp exec sys.sp_blah ! Then we can query the list of DB users for the null LoginNames and write a little set of queries to fix them! Sound good?
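Here’s a rough T-SQL sketch of that approach. Caveat: the sp_helpuser column list below matches SQL 2005+ but may vary by version, and this only generates fixes for orphans that happen to have a same-named login on the server — adjust to taste.

```sql
-- Capture sys.sp_helpuser output into a temp table (insert-exec pattern).
CREATE TABLE #DbUsers (
    UserName      sysname,
    RoleName      sysname NULL,
    LoginName     sysname NULL,
    DefDBName     sysname NULL,
    DefSchemaName sysname NULL,
    UserID        int     NULL,
    [SID]         varbinary(85) NULL
);
INSERT #DbUsers
EXEC sys.sp_helpuser;

-- Generate an ALTER USER fix for each orphan with a matching server login.
SELECT 'ALTER USER ' + QUOTENAME(u.UserName)
     + ' WITH LOGIN = ' + QUOTENAME(u.UserName) + ';' AS FixCmd
FROM #DbUsers AS u
JOIN sys.server_principals AS sp
  ON sp.name = u.UserName
WHERE u.LoginName IS NULL
  AND u.UserName NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys');

DROP TABLE #DbUsers;
```

Eyeball the generated commands before running them — that’s the whole point of generating script instead of executing blindly.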
PS: Coincidentally, today’s (Thursday) SQL Server Central newsletter featured a very similar post by a gentleman over at Madeira Data. Go check it out, it’s another great solution to this problem! And while you’re at it, get the SQL Server Radio podcast (created by a couple guys from the same company) – it’s a terrific addition to your iTunes library.
Renaming a Database looks easy, but it’s really not. Unless nobody’s touching it. Let’s find out why!
Databases sometimes need to get a new name. Or you need to swap names. It can be a royal PITA if the DB is a mission-critical always-being-used everybody-and-their-mother-touches-it-all-the-time thing.
Errors like “The database must be in a state in which a CHECKPOINT can happen” or “Cannot obtain exclusive access to the database” or “This transaction was chosen as the deadlock victim” abound. Sometimes it’s better to fall back on our old friends, detach & attach.
Let’s paint a picture!
Our ERP database has been chosen by the IT gods to get moved to the shiny new flash storage array, off the old spinning-rust SAN. This is fantastic news for the business users. But lo, the executives warn us, “You must do this with no downtime!” (said in my best Brent Ozar PHB-imitation voice). Of course when we tell them that’s impossible, they say, “OK, you must do this with minimal downtime.” That’s mo’ betta’.
So what are our typical options for doing a database migration? Or, more specifically, a data file migration. See, we’re not moving to a new server, and we’re not moving a bunch of databases together; we’re just moving this one ERP database. And we’re keeping it on the same SQL instance, we’re just swapping the storage underneath. Now yes, in a perfect world, both storage arrays (both SANs) would inter-operate and they’d be able to take a snapshot, pass it over, spin it up, swap it out, and our SQL instance would be none-the-wiser. But alas.
Other options? Log-shipping, sure; it’s been around forever, it’s very mature, simple, and relatively easy to operate, even if it lacks a bit in terms of automation capability. But let’s face it, the DBA is going to be sitting in front of his/her screen babysitting this operation no matter what, so that’s no biggie. Mirroring? Meh. Expensive Edition’s still got all the hot action there, and we’re not that fancy. Availability Groups? Sure, if we already had them set up & running. But we don’t. Sad-trombone.
Back to Basics
So we circle back to the classics, “backup & restore”, “detach-move-attach” routines. But there’s a hidden gem of a hybrid here: We can backup, restore as a new name, then switch the names! The catch is, we need to put the original DB in read_only mode while we do this, to prevent new/updated data from coming in. Now, when we ask the Dev-managers if their overlaying apps/websites can handle the DB being in read-only mode for an hour, they’re much more inclined to say OK to that, as opposed to the DB being actually ‘down down’.
Now, there’s a way to do this in T-SQL, and there’s a way to do this in PowerShell. Both are fine! I’m not arguing for one or the other — pick a flavor and go with it. This happened to be a day where I was feeling SQL-ish, not PoSh, hence the flavor I chose.
In my example we have our “slow” storage drives ‘D’ and ‘L’ (for Data and Logs, respectively). We’re moving our blessed database to “fast” flash storage drives ‘E’ and ‘M’ (you know, because they come after ‘D’ and ‘L’). The team managing the dependent apps have given their blessing to have a ‘read-only mode’ maintenance window for the database, as long as the actual ‘downtime’ is mere seconds.
Once the DB is in READ_ONLY, we do a backup, preferably to the flash storage so that it’s faster (and the restore, from flash to flash, will be super-fast!). We then restore that backup to a new DB on the flash storage drives, and, as expected, it’s speedy-gonzales.
Once our two copies of the database are up, we’re ready to take that minuscule down-time. We kick the users out of it (with SINGLE_USER mode), detach both DBs, and re-attach them (create database .. on (file), (file) .. for attach) with different names! Presto-change-oh, the ‘original’ DB is now resident on flash storage, and the old slow files are now a DB called “whatever the heck you want it to be, because it doesn’t matter anymore!”.
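The swap itself looks something like the sketch below. All the names and file paths here are assumptions for illustration (ERP on the slow D:/L: drives, a restored copy called ERP_Fast already living on the flash E:/M: drives) — substitute your own, and remember the restored copy inherits READ_ONLY from the backup.

```sql
USE [master];

-- 1) Kick everyone out and detach both copies.
ALTER DATABASE [ERP] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sys.sp_detach_db @dbname = N'ERP';

ALTER DATABASE [ERP_Fast] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sys.sp_detach_db @dbname = N'ERP_Fast';

-- 2) Re-attach the flash copy under the original name...
CREATE DATABASE [ERP] ON
    (FILENAME = N'E:\Data\ERP.mdf'),
    (FILENAME = N'M:\Logs\ERP_log.ldf')
FOR ATTACH;

-- 3) ...and the old slow files under a throwaway name.
CREATE DATABASE [ERP_OldSlow] ON
    (FILENAME = N'D:\Data\ERP.mdf'),
    (FILENAME = N'L:\Logs\ERP_log.ldf')
FOR ATTACH;

-- 4) Flip the 'new' ERP back to normal operation.
ALTER DATABASE [ERP] SET READ_WRITE;
ALTER DATABASE [ERP] SET MULTI_USER;
```

The actual downtime is just steps 1–3, which should be seconds.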
Disclaimer: I in no way support nor condone the naming of databases with spaces or special characters. If you actually name a database something like that, no matter how temporarily, I will find you, and throw sporks at your keyboard.
Assuming you didn’t run into any crazy locking problems or sticky-sessions that denied your attempt to force SINGLE_USER mode or detach the DBs, you should be good to go! I even considered building all this into an Agent Job, but… I’m worried it would get tripped up and leave the DBs in a half-arsed state that wouldn’t make our Dev-managers very happy. But hey, that’s what we have test environments for, right?!?
In SQL, TRIGGERs are often a subject of contention. Some data professionals have strong opinions about them, especially on the “they’re bad!” side. But like anything, they’re a specific kind of tool that has a place in a specific problem. One of the common scenarios for their usage is what we call an “Audit trail” — a running log of any & all changes made to a table or data-set (or even a database schema, as RedGate DLM Dashboard employs).
Yes, there are newer product features built around this very requirement, such as Change Data Capture (CDC) and Temporal Tables, but they require particular versions and editions of the product, and not every environment is going to be there. So we need something native and compatible to the lowest common/supportable denominator.
Like most areas of the database environment, triggers are often poorly documented. So what happens when we need to review all the triggers in a database and show what tables they’re on & what columns they’re “tracking” (sticking with the audit-trail example)? Or, more generally, you could say, we want to find all table & column dependencies of our triggers. Well sure, we could use the SSMS Object Explorer GUI — find the table, the triggers, right-click and “Show Dependencies”.. but that’s not good enough, for two reasons: A) it doesn’t get down to the column level, and B) it’s not script-able.
System catalogs and DMVs to the rescue!
Disclaimer: this is heavy on MS Docs links. There was a StackOverflow Q/A that helped point me in that direction, but sadly I lost the tab and didn’t save the link.
In this script, I mostly focus on the UPDATE triggers, because they’re the most heavy-handed in terms of auditing. Whereas with a typical INSERT or DELETE, the triggered audit action is basically “Hey, a row was inserted/deleted, here’s its primary key” — and sure you can get fancier and log what all the values were in the entire row — those are fairly straightforward. It’s basically “all or nothing”. In contrast, with an UPDATE audit, we want to know just what was updated, and from what (old value) to what (new value).
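To make that concrete, here’s a hypothetical UPDATE audit trigger — the table and column names are entirely made up, and a real one would repeat the pattern for every audited column — but it shows the inserted/deleted old-value/new-value dance:

```sql
-- Hypothetical: audit changes to dbo.Customer into dbo.Customer_Audit,
-- logging old & new values only for the columns we care about.
CREATE TRIGGER dbo.trg_Customer_Audit_U
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT dbo.Customer_Audit (CustomerID, ColumnName, OldValue, NewValue, AuditDate)
    SELECT d.CustomerID, 'Email', d.Email, i.Email, SYSDATETIME()
    FROM deleted  AS d
    JOIN inserted AS i
      ON i.CustomerID = d.CustomerID           -- match rows on the primary key
    WHERE UPDATE(Email)                        -- column was in the SET list...
      AND ISNULL(d.Email, '') <> ISNULL(i.Email, '');  -- ...and actually changed
END;
```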
So the goal of documenting the dependencies is to see which columns we care about — because, inevitably, there are some columns that we either “don’t care about” for auditing, or some that just plain don’t change (by their nature — they’re an identity or a primary key, or controlled via other means). In turn, this helps us understand how to reproduce these audit mechanisms when a better technology comes along — such as CDC or temporal tables, as alluded to earlier.
We start with the system catalog views sys.tables and sys.triggers. This establishes our primary relationship — a trigger ‘lives’ on (is attached to) a single table. Then we immediately step into murky water. A trigger, like other programmable objects, has a definition script, which you can view in a myriad of ways — Object Explorer, sys.sp_helptext, sys.sql_modules, sys.syscomments (deprecated), or OBJECT_DEFINITION(OBJECT_ID). So if we really wanted to go spelunking, we could just read the trigger’s definition and see what columns it deals with. But after about 2 or 3 of those, our eyes are going to glaze over and our fingers will start to twitch. We need something better, something that can handle hundreds of triggers and output a nice unified result!
Oh wait, what’s this? Object Explorer has a “View Dependencies” function.. aw, bummer, it doesn’t show columns. Just tables. As we know, most of the GUI functionality is driven by existing system meta-data/catalogs/dmv’s, so there’s definitely something that keeps track of these dependencies. Let’s hazard a guess… sys.sp_depends or sys.sql_expression_dependencies? The former is deprecated, the latter doesn’t give us a way to get to columns either. But thankfully, on the docs, the deprecated note points us toward the correct path: sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities.
Now we’re cooking with gas! So how do I use these suckers? Well, there’s a couple intermediary joins I need to go thru, and in fact these are functions, not views, so we need to know the proper arguments. But fear not, I have done the hard work for you! Check out the code and let me know what you think.
But wait, there’s more! There’s a slight monkey-wrench here. You see, in the column dependencies, we’re getting both the “audited” columns and the “key” columns. But we’d like those to be distinguished from each other. When we document and/or replicate the auditing using newer technology, the primary keys aren’t what we need to track (they’re in a sense “given”, known) — we need to know which columns are “audit-able” and which aren’t. Hence the two queries with a UNION ALL.
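The core of the query can be sketched like this (one half of the UNION ALL; filtering keys vs. “audit-able” columns is left as described above — I’m hedging on exact output since your schema will differ):

```sql
-- For every user trigger, list the tables & columns it references.
SELECT
    SCHEMA_NAME(t.schema_id)  AS TableSchema,
    t.name                    AS TableName,
    tr.name                   AS TriggerName,
    re.referenced_entity_name AS ReferencedTable,
    re.referenced_minor_name  AS ReferencedColumn
FROM sys.triggers AS tr
JOIN sys.tables   AS t
  ON t.object_id = tr.parent_id
CROSS APPLY sys.dm_sql_referenced_entities(
        QUOTENAME(OBJECT_SCHEMA_NAME(tr.object_id))
        + N'.' + QUOTENAME(tr.name),
        N'OBJECT') AS re
WHERE tr.is_ms_shipped = 0
  AND re.referenced_minor_name IS NOT NULL   -- column-level references only
ORDER BY TableSchema, TableName, TriggerName, ReferencedColumn;
```

Note the CROSS APPLY — because sys.dm_sql_referenced_entities is a function, not a view, we have to feed it each trigger’s two-part name, and APPLY lets us do that row-by-row.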
So there you have it, documenting some audit-trail triggers. Or triggers in general. Complete with the tables & columns on which they depend. Enjoy! ‘Til next time folks.
I now present to you, a recent adventure in building & configuring an SSRS 2016 / VS2015 project to replace our aging SSRS 2008R2 / VS2008 setup. In trying to make things ‘better’, I wanted to centralize the storage of the Data Sources. I found a tip on StackOverflow with a neat (albeit hacky) idea on how to do this. So I started diving into it. Here’s how that went.
I have a bunch of published reports on a ‘real’ report server (SSRS instance), which were created off-the-cuff using Report Builder 3.0 (no source-control connection or anything). So the first thing I had to do was go and download each RDL (thanks, Microsoft… ugh!), to my local VS2015 project folder, so I could bring them into my SSRS project. I didn’t bother trying to download/copy the Data Sources (they’re basically just connection-strings, and anyway you can’t download them from Report Manager), so I settled for re-creating those in VS.
Then it was time to set up the solution & projects. I prefer to organize my reports in context-relevant folders, like “Operations”, “Marketing”, etc. In order to do this sensibly, within the VS solution, you need to create a Project for each major folder you’ll have. Makes sense; this is how the old solution was constructed too. Here’s where I tried to improve things: I created a “Datasources” project, which would house just the shared Data Sources. Here’s where I created my new data-sources, setting them up with saved SQL auth (login & pwd). Sure, maybe not the “best practice” but it keeps things simple — my SSRS user only ever has read-only db access, and the pwd is managed well enough to keep auditors happy.
Thus, my plan (in following the SO tip) was to have all the other projects’ Data Sources be pointers to these shared Data Source files (RDS’s). Sounds good in theory, right?
Starting to Implement
Well. I started small with just one project, one report, and one data source. I went into my Nate test project, did “add existing item” into the Reports folder, and browsed to pick the rdl that I had just downloaded from my live SSRS server. I then went to my Datasources project, copied the rds that I needed, pasted it into Nate test project’s Shared Data Sources folder.
Now at this point there are 2 copies of this file in my solution folder. We don’t want that. So I opened up trusty ol’ Notepad++ on the Nate test.rptproj file, and edited the XML node under <DataSources><ProjectItem><Name>My-Source.rds</Name><FullPath>My-Source.rds</FullPath></ProjectItem></DataSources> , changing the FullPath node value to <FullPath>..\Datasources\My-Source.rds</FullPath>. I then deleted the physical copy of the file at \MySolution\Nate test\My-Source.rds , so that only the shared one physically remains in the filesystem (\MySolution\Datasources\My-Source.rds).
Another way to accomplish this, you may read, is to right-click Shared Data Sources within your project, e.g. Nate test, and say “Add existing item..”, and go select the global shared RDS from your master Datasources project-folder. However, this still results in a physical copy of the file within Nate test folder! So in either case you’ll end up doing some editing & deletion.
With me so far? Okay, here’s where it gets interesting. I could successfully build and deploy this report to my SSRS server, verifying that it showed my “new thing” (I added a text-box that simply said “This is the new hotness!”). But I could not preview the report in VS! Sad panda. And of course, the most vague generic error message in the world:
An error occurred during local report processing
An error occurred during report processing
More Googling led to a somewhat random tip in the vein of “Go to the report properties, Open the report’s Data Source, and re-select the Shared Data Source from the dropdown”. Combine with “Edit the Shared Data Source, ensure your saved login/pwd is filled in, and Save it”. Presto! Previewing now worked.
But why is this? Well, as it turns out, it’s not all that surprising. The RDLs are stored without the saved login/pwd, because of course they’re just plain-text XML files. So to “protect you”, MSFT doesn’t store your SQL auth credentials, even if (and no matter how many times) you click “Save my password” on the connection-properties window (connection-string builder).
Great. Well, I did that once, for Nate test project. Let’s see what happens with another project (aka report folder), say Test again. So I add an existing RDL, I copy-paste the Data Source from the global shared Datasources project, manually edit the rptproj in Notepad++, delete the copied rds, etc. I then try to preview the report… and behold, the same maddeningly vague error message!
“Well duh!” you may think to yourself, “the credentials still aren’t stored anywhere un-encrypted!”. Yes, dear reader, you are correct. But do I want to right-click on this project’s Data Sources, edit it, fill in the credentials, save, rinse, repeat, for every single project?! A resounding NO!
So what should I do? I could go type in the credentials to the actual connection-strings contained within the ‘master’ rds files. Of course then they’re in plain-text for all to see… but wait, do I care? Another no. Let’s go edit some connection-strings!
Why did MSFT choose to name the XML node ConnectString in their rds schema instead of ConnectionString? Don’t ask me!
Here’s the first gotcha: My reports (RDLs) need to be re-pointed at the correct Shared Datasource, because currently, as they were downloaded from my ‘real’ SSRS server — which had my desired top-level folder layout of “Datasources”, “Nate test”, “Marketing”, etc. — their Data Sources xml node points at \Datasources\MyDatasource.rdl. This was the correct path on the SSRS server and still will be the correct path when all is said & done. BUT, to support local debugging/previewing, the build output has to make a copy of the rds in the Project’s bin\Debug folder, which it will happily do, and then just as happily proclaim Could not find a part of the path 'C:\Users\Nate\Documents\Visual Studio 2015\Projects\MyReportSolution\Nate test\bin\Debug\Datasources\MyDatasource.rdl'. — because DUH, the build system didn’t make you a ‘Datasources’ sub-folder under the Debug folder, why would it!? So by either manually editing the rdl file to remove the \Datasources\ path from the node, or by clicking on the Report’s Data Sources node in the Report Data pane and re-pointing it at the local Shared Data Source (which again, is itself a pointer to the globally shared Datasources!), you can fix this issue.
In the rdl XML, that looks like this (using strikethru to show what you remove):
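Since I can’t do strikethru in a code fence, here’s a before/after sketch instead — element names follow the RDL schema, but the data source name and path are assumptions from my example:

```xml
<!-- Before: the reference carries the server-side folder path -->
<DataSources>
  <DataSource Name="My-Source">
    <DataSourceReference>\Datasources\My-Source</DataSourceReference>
  </DataSource>
</DataSources>

<!-- After: folder path removed, so local preview can find the rds
     that the build copies into bin\Debug -->
<DataSources>
  <DataSource Name="My-Source">
    <DataSourceReference>My-Source</DataSourceReference>
  </DataSource>
</DataSources>
```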
And there’s another gotcha: After we do all this, if we open the rds from within VS solution-explorer, it still won’t “think” it saved the authentication! But the connection-string will be clearly visible with the plain password. Here’s a couple pictures of what that looks like.
Oh, by the way, did I mention how many times VS crashed on me while doing these gymnastics? At least a few.
So, what have we learned? Well, for one, this is a crappy situation born of poor in-product support. I should be able to configure Solution-level shared Data Sources, use them in as many Projects (within said Solution) as I want, and have VS configuration management support them; bonus points for doing so with saved & encrypted credentials. Ideally, when we check this into source-control, we’d check in the “DEV” environment flavor connection-configs. Then, when the reports get deployed to the “PROD” SSRS server, the same globally shared Data Sources are already present (and they don’t get over-written, thankfully by default!), configured by the DBA with prod credentials, and nobody in the development pipeline needs to know said credentials. Yay?
But alas. We live in an imperfect world. I also live in a world where I’m the Report Developer, the Report Manager, and the Production DBA, all rolled into one. So guess what I get to do? That’s right, not care about the connection-string security!
Okay, yes, I care a little. Depending on my mood. But not enough to turn my whole dang world upside-down trying to design and implement a robust multi-tiered-environment solution for one measly SSRS repository. Kudos to you if you do — you’re a better man than I.
Or “better woman”. But I’m not a woman. So I guess the correct phrasing would be “You’re a better woman than I am a man.” No, that’s not right. “You’re a better person than I am?” Sure, I guess that works.
Create the solution & projects
Create the global shared Datasources project, and create your Data Sources (RDS‘s)
Import your Reports (RDL‘s) to each requisite report-project
Manually edit your rptproj to point the data-sources at the central shared path
Manually edit your RDL‘s to remove previously established paths to the data-sources
Configure your rptproj‘s to output to the correct folders on the report server — this should be done by default, i.e. VS has the “intelligence” to guess these for you, but just double-check.
And now to go automate all this with PowerShell… right after this beer.
Anyway. Until next time, folks! If I’ve made any mistakes, missteps, or otherwise offended your technical sensibilities, and/or if you know of a better way to accomplish what I’m after, I’d love to hear from you in the comments! =)
This month’s party brought to you by Mr. Hammer (b|t).
I apologize in advance for all the hammertime memes. It was just too good to pass up. Surely he must be used to this. Or at least not surprised by it. =D
So, Big Data. What is it? Well, in simple terms, it’s the realization and acceptance of the fact that data is multi-model, multi-faceted, multi-sourced, and constantly growing. It’s the fact that the traditional RDBMS is no longer the be-all end-all source of truth and valuable information. It’s part of a larger ecosystem involving JSON document stores, CSV files, streaming volatile bits of data coming from random devices and user activity that loses its meaning and potential impact almost as quickly as it can be gathered and sifted and stored.
But what do we actually get out of it? As a small-medium enterprise NOT in the software business, I have to say, not as much as the hype would have us believe. And look, I’m not so jaded and crusty that I refuse to adopt new tech. I just haven’t seen a meaningful transformative business use-case for it. Sure, we have Google Analytics telling us how our websites are doing, and someone in marketing knows something about trending our social media traffic. Does it really help us make more money? Heck if I know.
Here’s what I’d like to see from the thought leaders. Give me something I can chew on — a real-world, non-hypothetical, non-frivolous, impactful use-case for adopting and implementing something like Hadoop/Spark or Azure Data Lake. Show me how my business can realistically journey down the path of predictive analytics and what it’s going to take from our Devs, IT staff, and management to actually get there.
Because they don’t get it yet. I have managers still worrying about how much we’re spending on a dinky little flash storage array to support the growing needs of our on-prem converged infrastructure stack. Meanwhile the AWS bill continues to baffle, and Devs want to play with Docker and Lambda. But we can’t seem to convince the higher-ups that they’re short-staffed on the internal-apps team, even after a minor version upgrade takes 4 hours of Ops time and half a dozen end-users doing post-mortem testing just to be sure we didn’t break anything unexpected.
I’m not here to complain. Really. I do want to see something amazing, something inspiring, something that shows me what Big Data truly brings to the table. And sure, I’ve seen the vendor demos; they’re all just a bit outlandish, no? I mean, they look really cool, sure — who doesn’t want to see a chord diagram of who’s killed whom in GoT? — but does that really help my business improve sales and productivity?
My point is, there’s a gap. A chasm of misunderstanding and mis-matched expectations between what management thinks Big Data is/means, and what it takes to actually implement. They see the pretty pictures and the fancy demos, but they don’t see the toil and sweat (or at least, in the cloud, gobs of cash) that go into building & operating the underpinnings and pipelines that drive those nice graphics. Not to mention the fundamental issues of data quality and governance.
So do us a favor, Big Data pundits. Show us something real, something that “the little guy” can use to up his/her game in the market. Something that makes a positive impact on small non-startup non-software businesses with understaffed IT & Dev teams. But more importantly, stop glossing over the effort and resources that it takes to “do Big Data right”. Managers and executives need to understand that it’s not magic. And IT practitioners need to understand that it’s actually worth-while. Because I believe you — really — that the payoff in the end is there, and is good. But you need to convince the whole stack.
PS: I know this is a full day late for T-SQL Tuesday, and as such, I wasn’t going to post a ping-back in the comments of the invite, but then I saw there were only 8 others, so I felt it would benefit the event if I did add my late contribution. I’ll tweet with a modified hash-tag instead of the standard #tsql2sday, to reflect my late-ness. Hopefully that’s a fair compromise to the community & the event’s intentions. =)
Originally posted on David Peter Hansen:
I don’t normally reblog. But when I do, it’s something awesome. =D
Back in the days, I used to collect a lot of different scripts, tools, and other goodies for troubleshooting SQL Server performance issues. These days, however, I tend to use what is publicly and freely available (as well as some internal stuff), and keep a list of those in my head.
I’ve meant to write that list down for a while, and today Chrissy asked:
Anyone have a list of apps, modules, tools, etc that Microsoft PFE's use? If not, can a PFE write a blog post? 😁 (Partic. interestd in SQL)
A quick tip for passing default values to an SSRS multi-value parameter during report drill-thru.
Inspired by some StackOverflow-ing and this particular answer.
Aka: “How do I pass/assign a default set of values to a multi-value parameter in SSRS?”
We often have an SSRS report that needs to “drill thru” to another report, usually to go from a “high level view” to a “lower level” or more detailed view, or sometimes just to lead the user down the path we’ve chosen as the head analyst / BI architect. And part of that report navigation involves initializing (set to defaults) the value(s) of the sub-report’s parameters, including multi-value parameters (which I’ll now refer to as mvp, even though, I know, it’s a ridiculously overused acronym). These are basically arrays, but are often represented as simply comma-delimited strings, so it’s easy to forget their true nature.
Let’s fabricate an example. In our Sales Summary report, we have an element (a textbox, image, placeholder, or whatnot) that, when clicked, should drill-thru to Sales by Person. Said next report requires a multi-select parameter (another term for the mvp) to have 1 or more values selected, otherwise it doesn’t render. We’ll call this parameter SelectedNames, with value-label pairings 1=Bob, 2=Alice, 3=Mary. When we drill-thru to this by-Person report, we want it to initially show (have selected by default) all available people.
So how do we do this? In the properties of the “clickable” element on Sales Summary, say it’s a text-box named GoToDetails, we go to the Action tab. We choose the “Go to report” action, select/specify the target report, Sales by Person, and then add the parameters we want to pass to it. For comparison, I’m going to pass a “regular” (single value) parameter called @ReportDate, as well as the mvp SelectedNames. Here’s what that all looks like, in picture form.
The single parameter pass-thru is, as you’d expect, very simple. But for our mvp, we need to use the expression-builder, that little fx button stylized to look like your old high school math class days. Here’s what that function looks like:
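For anyone who can’t see the screenshot, the expression amounts to this (the literal "1,2,3" comes from my example’s value-label pairings; substitute your own values or build the string dynamically):

```vb
' Entered as the Value for the SelectedNames parameter in the Action's
' parameter list. Split turns the comma-delimited string into the array
' that a multi-value parameter expects.
=Split("1,2,3", ",")
```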
And presto!, we have converted a comma-delimited list into an array to pass into our Sales by Person report’s SelectedNames multi-value parameter. Now it will initially render with all 3 selected people as desired.
So there you have it, a quick tip for passing default values to an SSRS multi-value parameter during report drill-thru. But what if you wanted to pass the selected values of one mvp down to another? Or do some clever on-the-fly mapping (conversion) from one to the next? Well, stay tuned! I’ll write about that next time. =)
Thanks for reading! For a lot more on SSRS and multi-value parameters, check out these articles: @sqlchick, @mssqltips, and @msdn.