Because I’m already ridiculously late, I have a short one. This is about orphaned users — you know, when you restore a database and its users aren’t mapped to the server logins that they should be or used to be.
The typical solution is sp_change_users_login with the auto_fix or update_one option. But guess what? Yep, that’s deprecated. By the way, did you know that it also has a report option? Apparently that’s got some bugs…ish? Try it sometime and see — compare the output of sys.sp_helpuser where the ‘LoginName’ column is null, with sp_change_users_login 'report'. Preferably on a DB you’ve restored from another server. 😉
So what’s the correct solution? ALTER USER [theUser] WITH LOGIN = [theLogin]. Simple, no? Let’s get more general. Could we come up with a half-decent way to apply this kind of fix dynamically? Well sure, the nice folks at DBATools have already solved that problem. And that’s great, really. But just in case that doesn’t work… ^_^
One of the many things I love about SQL Prompt is the right-click option to “Script as INSERT” (from the results grid). This is a quick & easy way to build a temp-table for the results of an exec statement so you can do the ol’ insert #tmp exec sys.sp_blah ! Then we can query the list of DB users for the null LoginNames and write a little set of queries to fix them! Sound good?
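Here’s a hedged sketch of that whole routine. The temp-table columns mirror what sys.sp_helpuser returns on recent versions (the result set has varied over the years, so double-check yours), and the generated fix assumes a same-named login already exists on the server:

```sql
IF OBJECT_ID('tempdb..#DbUsers') IS NOT NULL
    DROP TABLE #DbUsers;

-- Shape matches sys.sp_helpuser's output (verify on your version!)
CREATE TABLE #DbUsers (
    UserName      sysname,
    RoleName      sysname       NULL,
    LoginName     sysname       NULL,
    DefDBName     sysname       NULL,
    DefSchemaName sysname       NULL,
    UserID        int           NULL,
    [SID]         varbinary(85) NULL
);

-- The ol' insert-exec trick:
INSERT INTO #DbUsers
EXEC sys.sp_helpuser;

-- Generate an ALTER USER fix for each orphan; assumes the login
-- on this server has the same name as the DB user.
SELECT N'ALTER USER ' + QUOTENAME(UserName)
     + N' WITH LOGIN = ' + QUOTENAME(UserName) + N';' AS FixStatement
FROM #DbUsers
WHERE LoginName IS NULL
  AND UserName NOT IN (N'dbo', N'guest', N'INFORMATION_SCHEMA', N'sys');
```

Copy the generated statements out, eyeball them, and run the ones you trust.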
PS: Coincidentally, today’s (Thursday) SQL Server Central newsletter featured a very similar post by a gentleman over at Madeira Data. Go check it out, it’s another great solution to this problem! And while you’re at it, get the SQL Server Radio podcast (created by a couple guys from the same company) – it’s a terrific addition to your iTunes library.
If you have a core database using a different collation than the rest of the DBs around it, BAD THINGS HAPPEN.
And we’re back! Hi folks, thanks for being patient with my December hiatus. The holiday season is always a little hectic but this year it felt especially sudden. And hey, you all have better things to do than read a blog in between the home cooked meals and the family gatherings.. like sleep, shop, and go see all the new movies!
Thanks to both Pitch Perfect 3 and the latest New Year’s Rockin’ Eve, Britney’s “Toxic” is now stuck in my head, so that’s fun.
Some of you may not know this, but I’m a big movie nerd. Not like the weird “knows a bunch of obscure factoids about all the Tarantino movies” or whatever. But I do quite enjoy the behind-the-scenes / making-of stuff — what used to be called “bonus features” on DVDs (remember those things??) — whenever the wife will tolerate sitting thru them with me.
Our genre of choice is generally horror. Now, I’m gonna get nerdy on you for a bit, because there are several sub-types of horror, and I enjoy almost all of them. Campy, creepy, fun, found-footage, gory, spooky, slasher, supernatural, tense, psychological, revenge, deconstruction, possession. For the uninitiated, “deconstruction” is like 2012’s Cabin in the Woods — it pokes fun at the tropes while building on them in unique ways. Deconstructions are among my favorite kinds; that one in particular is definitely in my top 10 all-time.
So to kick off this year, before diving back into the technical stuff, I’d like to give you a couple lightning reviews of some horror movies we’ve watched that are perhaps underrated, or that you may have missed.
The Babysitter (2017) – comedy/deconstruction. A young preteen boy, whose parents are gone a lot, has a great friendship with his older teen babysitter, but one night decides to spy on what she and her friends do after he goes to bed. And well, crap hits the fan. Lots of fun, eye candy, and slapstick violence. 👍👍
Patchwork (2015) – campy/revenge. 3 girls are Frankenstein’d together and have to overcome their mental differences and physical struggles to piece together the perpetrator and hopefully exact some revenge. Superbly acted by the lead lady, plenty of violence and just enough funny bits to keep it going. 👍
Happy Death Day (2017) – slasher/deconstruction. Think Groundhog Day but with a college chick being killed by a masked marauder repeatedly. She must try to find out who it is before it’s too late! Somewhat predictable but still entertaining and engaging. 👍
Incarnate (2016) – possession/supernatural. A somewhat unique twist on the genre, a brain doc frees people from possession by mind-sharing & getting the person back in control of their own consciousness. Think Inception meets Exorcist. Very well-acted, convincingly scary demon, and nicely twisted ending. 👍👍
Demonic (2015) – creepy/found-footage. Bit of a misnomer, as it has nothing to do with demons; it’s about a ghost-summoning gone horribly wrong resulting in the deaths of all but 1 (ish?) member of the group that originally attempted said ritual. Frank Grillo is always on-point. Very engaging. 👍
Last Shift (2014) – gory/creepy/demon-y. Rookie cop gets stuck with the last watch in a soon-to-be-shut-down police station, chaos ensues. Literally, this is some crazy crap; scary and bloody. Original & vastly under-hyped, has an indie vibe but looks & feels professional-grade. 👍👍
Most of these should be stream-able. So check ’em out!
Now on to the SQL stuff.
Collations are Hard
If you ever have to integrate a vendor database into your existing environment, and the vendor ‘mandates’ their DB use a certain collation (which differs from the rest of your SQL instances / databases), run away screaming. Srsly.
Or convince your managers that you know better, and force it into the same collation as everything else you have to integrate with. Good luck & godspeed.
Let me give you an example. The ERP system is being upgraded, which of course means a new (upgraded) DB as well. Part of this upgrade seems to involve supporting case-sensitive searching/matching against name fields. To this end, the vendor insists that the DB should use a case-sensitive collation, namely Latin1_General_100_CS_AS. Problem is, the rest of your DB environment, in which a lot of stuff touches the ERP database (via joins, linked-server queries, etc.), uses the SQL default collation of SQL_Latin1_General_CP1_CI_AS.
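To make that concrete, here’s a hypothetical example (the database, table, and column names are made up):

```sql
-- ERP_DB is Latin1_General_100_CS_AS; OtherDB is SQL_Latin1_General_CP1_CI_AS.
SELECT c.CustomerName
FROM ERP_DB.dbo.Customers AS c
JOIN OtherDB.dbo.Accounts AS a
    ON c.CustomerName = a.AccountName;
-- Msg 468: Cannot resolve the collation conflict between
-- "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_100_CS_AS"
-- in the equal to operation.

-- The 'fix' is an explicit COLLATE clause... which makes the predicate
-- non-sargable on that side, so say goodbye to index seeks there:
SELECT c.CustomerName
FROM ERP_DB.dbo.Customers AS c
JOIN OtherDB.dbo.Accounts AS a
    ON c.CustomerName = a.AccountName COLLATE Latin1_General_100_CS_AS;
```

Every join, WHERE clause, and linked-server query that crosses the collation boundary needs that treatment, and pays that price.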
If you follow the vendor’s ‘mandate’ recommendation, guess what’s going to happen to your queries/views/stored-procedures that touch this & other DBs? Horrible things. Terrible performance degradation. Wailing and gnashing of teeth from the developers, business users, and customers.
Okay, I exaggerate. Slightly.
But it really does hurt performance, and I don’t feel like it’s talked about enough in the data professional community. In the next post, I’ll take this problem apart a little more and try to impart some of what I’ve learned from going through the pain of dealing with the aforementioned example.
Renaming a Database looks easy, but it’s really not. Unless nobody’s touching it. Let’s find out why!
Databases sometimes need to get a new name. Or you need to swap names. It can be a royal PITA if the DB is a mission-critical always-being-used everybody-and-their-mother-touches-it-all-the-time thing.
Errors like “The database must be in a state in which a CHECKPOINT can happen” or “Cannot obtain exclusive access to the database” or “This transaction was chosen as the deadlock victim” abound. Sometimes it’s better to fall back on our old friends, detach & attach.
Let’s paint a picture!
Our ERP database has been chosen by the IT gods to get moved to the shiny new flash storage array, off the old spinning-rust SAN. This is fantastic news for the business users. But lo, the executives warn us, “You must do this with no downtime!” (said in my best Brent Ozar PHB-imitation voice). Of course when we tell them that’s impossible, they say, “OK, you must do this with minimal downtime.” That’s mo’ betta’.
So what are our typical options for doing a database migration? Or, more specifically, a data file migration. See, we’re not moving to a new server, and we’re not moving a bunch of databases together; we’re just moving this one ERP database. And we’re keeping it on the same SQL instance, we’re just swapping the storage underneath. Now yes, in a perfect world, both storage arrays (both SANs) would inter-operate and they’d be able to take a snapshot, pass it over, spin it up, swap it out, and our SQL instance would be none-the-wiser. But alas.
Other options? Log-shipping, sure; it’s been around forever, it’s very mature, simple, and relatively easy to operate, even if it lacks a bit in terms of automation capability. But let’s face it, the DBA is going to be sitting in front of his/her screen babysitting this operation no matter what, so that’s no biggie. Mirroring? Meh. Expensive Edition’s still got all the hot action there, and we’re not that fancy. Availability Groups? Sure, if we already had them set up & running. But we don’t. Sad-trombone.
Back to Basics
So we circle back to the classics, “backup & restore”, “detach-move-attach” routines. But there’s a hidden gem of a hybrid here: We can backup, restore as a new name, then switch the names! The catch is, we need to put the original DB in read_only mode while we do this, to prevent new/updated data from coming in. Now, when we ask the Dev-managers if their overlaying apps/websites can handle the DB being in read-only mode for an hour, they’re much more inclined to say OK to that, as opposed to the DB being actually ‘down down’.
Now, there’s a way to do this in T-SQL, and there’s a way to do this in PowerShell. Both are fine! I’m not arguing for one or the other — pick a flavor and go with it. This happened to be a day where I was feeling SQL-ish, not PoSh, hence the flavor I chose.
In my example we have our “slow” storage drives ‘D’ and ‘L’ (for Data and Logs, respectively). We’re moving our blessed database to “fast” flash storage drives ‘E’ and ‘M’ (you know, because they come after ‘D’ and ‘L’). The team managing the dependent apps have given their blessing to have a ‘read-only mode’ maintenance window for the database, as long as the actual ‘downtime’ is mere seconds.
Once the DB is in READ_ONLY, we do a backup, preferably to the flash storage so that it’s faster (and the restore, from flash to flash, will be super-fast!). We then restore that backup to a new DB on the flash storage drives, and, as expected, it’s speedy-gonzales.
Once our two copies of the database are up, we’re ready to take that minuscule down-time. We kick the users out of it (with SINGLE_USER mode), detach both DBs, and re-attach them (create database .. on (file), (file) .. for attach) with different names! Presto-change-oh, the ‘original’ DB is now resident on flash storage, and the old slow files are now a DB called “whatever the heck you want it to be, because it doesn’t matter anymore!”.
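Sketched out in T-SQL, the whole dance looks something like this. Every database name, logical file name, and path here is hypothetical — check your actual logical names with sp_helpfile first, and rehearse this (including the back-out plan) somewhere safe before production:

```sql
USE [master];

-- 1. Freeze writes on the original; apps can still read.
ALTER DATABASE [ERP] SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- 2. Back up to the fast storage, restore as a new DB on the new drives.
BACKUP DATABASE [ERP] TO DISK = N'E:\Backup\ERP.bak' WITH INIT, COPY_ONLY;

RESTORE DATABASE [ERP_new] FROM DISK = N'E:\Backup\ERP.bak'
WITH MOVE N'ERP_Data' TO N'E:\SQLData\ERP.mdf',
     MOVE N'ERP_Log'  TO N'M:\SQLLogs\ERP_log.ldf';

-- 3. The tiny downtime window: kick everyone out, detach both copies.
ALTER DATABASE [ERP]     SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [ERP_new] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sys.sp_detach_db @dbname = N'ERP';
EXEC sys.sp_detach_db @dbname = N'ERP_new';

-- 4. Re-attach with the names swapped: the flash copy becomes 'ERP'.
CREATE DATABASE [ERP] ON
    (FILENAME = N'E:\SQLData\ERP.mdf'),
    (FILENAME = N'M:\SQLLogs\ERP_log.ldf')
FOR ATTACH;

CREATE DATABASE [ERP_old_spinning_rust] ON
    (FILENAME = N'D:\SQLData\ERP.mdf'),
    (FILENAME = N'L:\SQLLogs\ERP_log.ldf')
FOR ATTACH;

-- 5. The restored copy inherits READ_ONLY from the backup, so flip it back.
ALTER DATABASE [ERP] SET READ_WRITE;
ALTER DATABASE [ERP] SET MULTI_USER;
```

Step 5 is the one people forget: a backup taken while the DB was READ_ONLY restores as READ_ONLY, so the shiny new ‘ERP’ won’t take writes until you flip it.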
Disclaimer: I in no way support nor condone the naming of databases with spaces or special characters. If you actually name a database something like that, no matter how temporarily, I will find you, and throw sporks at your keyboard.
Assuming you didn’t run into any crazy locking problems or sticky-sessions that denied your attempt to force SINGLE_USER mode or detach the DBs, you should be good to go! I even considered building all this into an Agent Job, but… I’m worried it would get tripped up and leave the DBs in a half-arsed state that wouldn’t make our Dev-managers very happy. But hey, that’s what we have test environments for, right?!?
In SQL, TRIGGERs are often a subject of contention. Some data professionals have strong opinions about them, especially on the “they’re bad!” side. But like anything, they’re a specific kind of tool that has a place in a specific problem. One of the common scenarios for their usage is what we call an “Audit trail” — a running log of any & all changes made to a table or data-set (or even a database schema, as RedGate DLM Dashboard employs).
Yes, there are newer product features built around this very requirement, such as Change Data Capture (CDC) and Temporal Tables, but they require particular versions and editions of the product, and not every environment is going to be there. So we need something native and compatible to the lowest common/supportable denominator.
Like most areas of the database environment, triggers are often poorly documented. So what happens when we need to review all the triggers in a database and show what tables they’re on & what columns they’re “tracking” (sticking with the audit-trail example)? Or, more generally, you could say, we want to find all table & column dependencies of our triggers. Well sure, we could use the SSMS Object Explorer GUI — find the table, the triggers, right-click and “Show Dependencies”.. but that’s not good enough, for two reasons: A) it doesn’t get down to the column level, and B) it’s not script-able.
System catalogs and DMVs to the rescue!
Disclaimer: this is heavy on MS Docs links. There was a StackOverflow Q/A that helped point me in that direction, but sadly I lost the tab and didn’t save the link.
In this script, I mostly focus on the UPDATE triggers, because they’re the most heavy-handed in terms of auditing. Whereas with a typical INSERT or DELETE, the triggered audit action is basically “Hey, a row was inserted/deleted, here’s its primary key” — and sure you can get fancier and log what all the values were in the entire row — those are fairly straightforward. It’s basically “all or nothing”. In contrast, with an UPDATE audit, we want to know just what was updated, and from what (old value) to what (new value).
So the goal of documenting the dependencies is to see which columns we care about — because, inevitably, there are some columns that we either “don’t care about” for auditing, or some that just plain don’t change (by their nature — they’re an identity or a primary key, or controlled via other means). In turn, this helps us understand how to reproduce these audit mechanisms when a better technology comes along — such as CDC or temporal tables, as alluded to earlier.
We start with the system catalog views sys.tables and sys.triggers. This establishes our primary relationship — a trigger ‘lives’ on (is attached to) a single table. Then we immediately step into murky water. A trigger, like other programmable objects, has a definition script, which you can view in a myriad of ways — Object Explorer, sys.sp_helptext, sys.sql_modules, sys.syscomments (deprecated), or OBJECT_DEFINITION(OBJECT_ID). So if we really wanted to go spelunking, we could just read the trigger’s definition and see what columns it deals with. But after about 2 or 3 of those, our eyes are going to glaze over and our fingers will start to twitch. We need something better, something that can handle hundreds of triggers and output a nice unified result!
Oh wait, what’s this? Object Explorer has a “View Dependencies” function.. aw, bummer, it doesn’t show columns. Just tables. As we know, most of the GUI functionality is driven by existing system meta-data/catalogs/dmv’s, so there’s definitely something that keeps track of these dependencies. Let’s hazard a guess… sys.sp_depends or sys.sql_expression_dependencies? The former is deprecated, the latter doesn’t give us a way to get to columns either. But thankfully, on the docs, the deprecated note points us toward the correct path: sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities.
Now we’re cooking with gas! So how do I use these suckers? Well, there’s a couple intermediary joins I need to go thru, and in fact these are functions, not views, so we need to know the proper arguments. But fear not, I have done the hard work for you! Check out the code and let me know what you think.
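A minimal sketch of the core query looks like this — sys.dm_sql_referenced_entities is a function that wants a schema-qualified name and a referencing class, and triggers always share their parent table’s schema, so we can build that name on the fly:

```sql
SELECT
    SCHEMA_NAME(tb.schema_id) AS SchemaName,
    tb.name                   AS TableName,
    tr.name                   AS TriggerName,
    re.referenced_entity_name AS ReferencedTable,
    re.referenced_minor_name  AS ReferencedColumn
FROM sys.triggers AS tr
JOIN sys.tables AS tb
    ON tr.parent_id = tb.object_id
-- The DMF needs the trigger's two-part name plus the class 'OBJECT':
CROSS APPLY sys.dm_sql_referenced_entities(
    SCHEMA_NAME(tb.schema_id) + N'.' + tr.name, N'OBJECT') AS re
-- referenced_minor_name is non-null only for column-level dependencies:
WHERE re.referenced_minor_name IS NOT NULL
ORDER BY SchemaName, TableName, TriggerName, ReferencedColumn;
```

Drop the WHERE clause if you also want the table-level dependency rows.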
But wait, there’s more! There’s a slight monkey-wrench here. You see, in the column dependencies, we’re getting both the “audited” columns and the “key” columns. But we’d like those to be distinguished from each other. When we document and/or replicate the auditing using newer technology, the primary keys aren’t what we need to track (they’re in a sense “given”, known) — we need to know which columns are “audit-able” and which aren’t. Hence the two queries with a UNION ALL.
So there you have it, documenting some audit-trail triggers. Or triggers in general. Complete with the tables & columns on which they depend. Enjoy! ‘Til next time folks.
I now present to you, a recent adventure in building & configuring a SSRS 2016 / VS2015 project to replace our aging SSRS 2008R2 / VS2008 setup. In trying to make things ‘better’, I wanted to centralize the storage of the Data Sources. I found a tip on StackOverflow with a neat (albeit hacky) idea on how to do this. So I started diving into it. Here’s how that went.
I have a bunch of published reports on a ‘real’ report server (SSRS instance), which were created off-the-cuff using Report Builder 3.0 (no source-control connection or anything). So the first thing I had to do was go and download each RDL (thanks, Microsoft… ugh!), to my local VS2015 project folder, so I could bring them into my SSRS project. I didn’t bother trying to download/copy the Data Sources (they’re basically just connection-strings, and anyway you can’t download them from Report Manager), so I settled for re-creating those in VS.
Then it was time to set up the solution & projects. I prefer to organize my reports in context-relevant folders, like “Operations”, “Marketing”, etc. In order to do this sensibly, within the VS solution, you need to create a Project for each major folder you’ll have. Makes sense; this is how the old solution was constructed too. Here’s where I tried to improve things: I created a “Datasources” project, which would house just the shared Data Sources. Here’s where I created my new data-sources, setting them up with saved SQL auth (login & pwd). Sure, maybe not the “best practice” but it keeps things simple — my SSRS user only ever has read-only db access, and the pwd is managed well enough to keep auditors happy.
Thus, my plan (in following the SO tip) was to have all the other projects’ Data Sources be pointers to these shared Data Source files (RDS’s). Sounds good in theory, right?
Starting to Implement
Well. I started small with just one project, one report, and one data source. I went into my Nate test project, did “add existing item” into the Reports folder, and browsed to pick the rdl that I had just downloaded from my live SSRS server. I then went to my Datasources project, copied the rds that I needed, pasted it into Nate test project’s Shared Data Sources folder.
Now at this point there are 2 copies of this file in my solution folder. We don’t want that. So I opened up trusty ol’ Notepad++ on the Nate test.rptproj file, and edited the XML node under <DataSources><ProjectItem><Name>My-Source.rds</Name><FullPath>My-Source.rds</FullPath></ProjectItem></DataSources> , changing the FullPath node value to <FullPath>..\Datasources\My-Source.rds</FullPath>. I then deleted the physical copy of the file at \MySolution\Nate test\My-Source.rds , so that only the shared one physically remains in the filesystem (\MySolution\Datasources\My-Source.rds).
Another way to accomplish this, you may read, is to right-click Shared Data Sources within your project, e.g. Nate test, and say “Add existing item..”, and go select the global shared RDS from your master Datasources project-folder. However, this still results in a physical copy of the file within the Nate test folder! So in either case you’ll end up doing some editing & deletion.
With me so far? Okay, here’s where it gets interesting. I could successfully build and deploy this report to my SSRS server, verifying that it showed my “new thing” (I added a text-box that simply said “This is the new hotness!”). But I could not preview the report in VS! Sad panda. And of course, the most vague generic error message in the world:
An error occurred during local report processing
An error occurred during report processing
More Googling led to a somewhat random tip in the vein of “Go to the report properties, Open the report’s Data Source, and re-select the Shared Data Source from the dropdown”. Combine with “Edit the Shared Data Source, ensure your saved login/pwd is filled in, and Save it”. Presto! Previewing now worked.
But why is this? Well, as it turns out, it’s not all that surprising. The RDLs are stored without the saved login/pwd, because of course they’re just plain-text XML files. So to “protect you”, MSFT doesn’t store your SQL auth credentials, even if (and no matter how many times) you click “Save my password” on the connection-properties window (connection-string builder).
Great. Well, I did that once, for Nate test project. Let’s see what happens with another project (aka report folder), say Test again. So I add an existing RDL, I copy-paste the Data Source from the global shared Datasources project, manually edit the rptproj in Notepad++, delete the copied rdl, etc. I then try to preview the report… and behold, the same maddeningly vague error message!
“Well duh!” you may think to yourself, “the credentials still aren’t stored anywhere un-encrypted!”. Yes, dear reader, you are correct. But do I want to right-click on this project’s Data Sources, edit it, fill in the credentials, save, rinse, repeat, for every single project?! A resounding NO!
So what should I do? I could go type in the credentials to the actual connection-strings contained within the ‘master’ rds files. Of course then they’re in plain-text for all to see… but wait, do I care? Another no. Let’s go edit some connection-strings!
Why did MSFT choose to name the XML node ConnectString in their rds schema instead of ConnectionString? Don’t ask me!
Here’s the first gotcha: My reports (RDLs) need to be re-pointed at the correct Shared Data Source. Currently, as they were downloaded from my ‘real’ SSRS server — which had my desired top-level folder layout of “Datasources”, “Nate test”, “Marketing”, etc. — their Data Sources xml node points at \Datasources\MyDatasource.rdl. This was the correct path on the SSRS server and still will be the correct path when all is said & done. BUT, to support local debugging/previewing, the build output has to make a copy of the rds in the Project’s bin\Debug folder, which it will happily do, and then just as happily proclaim Could not find a part of the path 'C:\Users\Nate\Documents\Visual Studio 2015\Projects\MyReportSolution\Nate test\bin\Debug\Datasources\MyDatasource.rdl'. — because DUH, the build system didn’t make you a ‘Datasources’ sub-folder under the Debug folder; why would it!? You can fix this either by manually editing the rdl file to remove the \Datasources\ path from the node, or by clicking on the Report’s Data Sources node in the Report Data pane and re-pointing it at the local Shared Data Source (which again, is itself a pointer to the globally shared Datasources!).
In the rdl XML, the change looks like this:
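A hedged before/after sketch — the data source name is hypothetical, and your RDL may have slightly different surrounding elements:

```xml
<!-- Before (as downloaded from the SSRS server): -->
<DataSource Name="MyDatasource">
  <DataSourceReference>/Datasources/MyDatasource</DataSourceReference>
</DataSource>

<!-- After (so local preview can find the copy in bin\Debug): -->
<DataSource Name="MyDatasource">
  <DataSourceReference>MyDatasource</DataSourceReference>
</DataSource>
```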
And there’s another gotcha: After we do all this, if we open the rds from within VS solution-explorer, it still won’t “think” it saved the authentication! But the connection-string will be clearly visible with the plain password. Here’s a couple pictures of what that looks like.
Oh, by the way, did I mention how many times VS crashed on me while doing these gymnastics? At least a few.
So, what have we learned? Well, for one, this is a crappy situation born of poor in-product support. I should be able to configure Solution-level shared Data Sources, use them in as many Projects (within said Solution) as I want, and have VS configuration management support them; bonus points for doing so with saved & encrypted credentials. Ideally, when we check this into source-control, we’d check in the “DEV” environment flavor connection-configs. Then, when the reports get deployed to the “PROD” SSRS server, the same globally shared Data Sources are already present (and they don’t get over-written, thankfully by default!), configured by the DBA with prod credentials, and nobody in the development pipeline needs to know said credentials. Yay?
But alas. We live in an imperfect world. I also live in a world where I’m both the Report Developer, the Report Manager, and the Production DBA. So guess what I get to do? That’s right, not care about the connection-string security!
Okay, yes, I care a little. Depending on my mood. But not enough to turn my whole dang world upside-down trying to design and implement a robust multi-tiered-environment solution for one measly SSRS repository. Kudos to you if you do — you’re a better man than I.
Or “better woman”. But I’m not a woman. So I guess the correct phrasing would be “You’re a better woman than I am a man.” No, that’s not right. “You’re a better person than I am?” Sure, I guess that works.
Create the solution & projects
Create the global shared Datasources project, and create your Data Sources (RDS‘s)
Import your Reports (RDL‘s) to each requisite report-project
Manually edit your rptproj to point the data-sources at the central shared path
Manually edit your RDL‘s to remove previously established paths to the data-sources
Configure your rptproj‘s to output to the correct folders on the report server — this should be done by default, i.e. VS has the “intelligence” to guess these for you, but just double-check.
And now to go automate all this with PowerShell… right after this beer.
Anyway. Until next time, folks! If I’ve made any mistakes, missteps, or otherwise offending your technical sensibilities, and/or if you know of a better way to accomplish what I’m after, I’d love to hear from you in the comments! =)
This month’s invitation is brought to you by Ewald Cress (blog, twitter), who I already like based on his tagline —
finds joy in minutiae..
Yes, my friend, don’t we all.
The topic at hand is fairly non-technical, but still important: folks who have made a positive contribution to your career or professional development. So it’s time for a shout-out! About a year ago, I wrote about my first major career move. There were several great influences in my first job, from the developers that taught me how to code, to the DBA who taught me how to keep cool & calm in the face of outages, to the boss who taught me the importance of time management and breadth of knowledge.
Since I am way too late in posting this, and I don’t feel like waxing poetic, I’ll just say a general “thank you” to all those who’ve helped me along in my career so far, with special acknowledgement to my former boss, my current boss, the SQL family, and my own family. Happy belated Thanksgiving and have a safe & pleasant holiday season! I’ll have a real post again quite soon, diving back into the tech stuff.
Wait for the insurance co. to send your copy of REG 481, “Salvage Vehicle Notice of Retention by Owner”. They submit this to the DMV for you as well — but it helps to have a copy on-hand when you go in.
Get form REG 343, “Application for Title or Registration”. Fill out sections 1, 2, 4, and 9 (at least; others if applicable).
Get form REG 488c, “Application for Salvage Certificate or Nonrepairable Vehicle Certificate”. Fill out section 1 with your info (applicant) & your insurance co’s info.
Make the DMV appointment. Bring all of the above. The receptionist will be impressed that you’ve made it this far. =)
Technically, the only things you actually need are the title & inspection certs. The DMV receptionist can give you all the rest, assuming they’ve gotten the insurance notice (481) on file. As I said, it doesn’t hurt to bring a copy. The receptionist can also help you if you’re unsure of what sections to fill on the forms.
The receptionist will give you REG 156 for your license plate exchange. You can just fill this out while you wait for the vehicle inspection, or to be seen by the next agent.
They’ll do the vehicle inspection, and the inspector will fill out REG 31.
With all these papers in hand, you’re finally ready to perform the transaction! You’ll pay the salvage title fee and the inspection fee, exchange your plates for new ones, and get a new registration card & stickers.
Congratulations, you now own your P.O.S. / clunker / beater / whatever term of endearment you choose to call your beat-up-yet-still-running car!
Here are some fun sample pictures of the paperwork.
As it turned out, some of the forms that I’d filled out ahead of time were completely unnecessary, while others were redundant or replaced. The thing that took the longest was waiting for the DMV to be notified that the vehicle was a salvage; apparently they’re a bit backlogged.
Here’s another little bit-o’-fun. The front license plate on the Honda (remember, I said part of the process is giving the plates over to the DMV in exchange for new ones?) is a biatch to remove without proper tools. I borrowed a standard pair of pliers from the nice young man behind the desk and struggled out there with the hex-nuts for nearly 15 minutes before he came out and said “Dude, don’t worry about it, we’ll call it destroyed”. FYI, the proper tool is a socket set with both SAE & metric, somewhere between 3/8 inch and 11mm. Apparently whoever installed this plate couldn’t decide between the two measurement systems, so he/she used some of each.
Keeping your salvage vehicle does cost a bit, and is a small hassle. But in the end, it can be worth the trouble, IF:
You are able to get it repaired for a small portion of the total-loss offer (what your insurance pays you)
You don’t care about how it looks (because that’s usually what makes the repair job much cheaper — not caring about the body work!)
You don’t ever plan on selling it again (because that’s what the DMV makes sure of when they register it as a salvage)