Because I’m already ridiculously late, I have a short one. This is about orphaned users — you know, when you restore a database and its users aren’t mapped to the server logins that they should be or used to be.
The typical solution is sp_change_users_login with the auto_fix or update_one option. But guess what? Yep, that’s deprecated. By the way, did you know that it also has a report option? Apparently that’s got some bugs…ish? Try it sometime and see — compare the output of sys.sp_helpuser where the ‘LoginName’ column is null, with sp_change_users_login 'report'. Preferably on a DB you’ve restored from another server. 😉
So what’s the correct solution? ALTER USER [theUser] WITH LOGIN = [theLogin]. Simple, no? Let’s get more general. Could we come up with a half-decent way to apply this kind of fix dynamically? Well sure, the nice folks at DBATools have already solved that problem. And that’s great, really. But just in case that doesn’t work… ^_^
One of the many things I love about SQL Prompt is the right-click option to “Script as INSERT” (from the results grid). This is a quick & easy way to build a temp-table for the results of an exec statement so you can do the ol’ insert #tmp exec sys.sp_blah! Then we can query the list of DB users for the null LoginNames and write a little set of queries to fix them! Sound good?
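Putting those pieces together, here’s a minimal sketch of the #tmp-plus-EXEC trick applied to orphaned users. The column list is what sys.sp_helpuser returns on recent versions (verify on your instance!), and it assumes a same-named login already exists on the server:

```sql
-- Sketch only: verify the sp_helpuser result-set columns on your version first.
CREATE TABLE #DbUsers (
    UserName      sysname,
    RoleName      sysname NULL,
    LoginName     sysname NULL,
    DefDBName     sysname NULL,
    DefSchemaName sysname NULL,
    UserID        int NULL,
    SID           varbinary(85) NULL
);

INSERT INTO #DbUsers
EXEC sys.sp_helpuser;

-- Generate the fix for each orphan, assuming a matching login exists.
SELECT 'ALTER USER ' + QUOTENAME(UserName)
     + ' WITH LOGIN = ' + QUOTENAME(UserName) + ';' AS FixStatement
FROM #DbUsers
WHERE LoginName IS NULL
  AND UserName NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys');

DROP TABLE #DbUsers;
```

Copy the generated statements out of the grid, eyeball them, and run the ones you trust.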
PS: Coincidentally, today’s (Thursday) SQL Server Central newsletter featured a very similar post by a gentleman over at Madeira Data. Go check it out, it’s another great solution to this problem! And while you’re at it, get the SQL Server Radio podcast (created by a couple guys from the same company) – it’s a terrific addition to your iTunes library.
Renaming a Database looks easy, but it’s really not. Unless nobody’s touching it. Let’s find out why!
Databases sometimes need to get a new name. Or you need to swap names. It can be a royal PITA if the DB is a mission-critical always-being-used everybody-and-their-mother-touches-it-all-the-time thing.
Errors like “The database must be in a state in which a CHECKPOINT can happen” or “Cannot obtain exclusive access to the database” or “This transaction was chosen as the deadlock victim” abound. Sometimes it’s better to fall back on our old friends, detach & attach.
Let’s paint a picture!
Our ERP database has been chosen by the IT gods to get moved to the shiny new flash storage array, off the old spinning-rust SAN. This is fantastic news for the business users. But lo, the executives warn us, “You must do this with no downtime!” (said in my best Brent Ozar PHB-imitation voice). Of course when we tell them that’s impossible, they say, “OK, you must do this with minimal downtime.” That’s mo’ betta’.
So what are our typical options for doing a database migration? Or, more specifically, a data file migration. See, we’re not moving to a new server, and we’re not moving a bunch of databases together; we’re just moving this one ERP database. And we’re keeping it on the same SQL instance, we’re just swapping the storage underneath. Now yes, in a perfect world, both storage arrays (both SANs) would inter-operate and they’d be able to take a snapshot, pass it over, spin it up, swap it out, and our SQL instance would be none-the-wiser. But alas.
Other options? Log-shipping, sure; it’s been around forever, it’s very mature, simple, and relatively easy to operate, even if it lacks a bit in terms of automation capability. But let’s face it, the DBA is going to be sitting in front of his/her screen babysitting this operation no matter what, so that’s no biggie. Mirroring? Meh. Expensive Edition’s still got all the hot action there, and we’re not that fancy. Availability Groups? Sure, if we already had them set up & running. But we don’t. Sad-trombone.
Back to Basics
So we circle back to the classics, “backup & restore”, “detach-move-attach” routines. But there’s a hidden gem of a hybrid here: We can backup, restore as a new name, then switch the names! The catch is, we need to put the original DB in read_only mode while we do this, to prevent new/updated data from coming in. Now, when we ask the Dev-managers if their overlaying apps/websites can handle the DB being in read-only mode for an hour, they’re much more inclined to say OK to that, as opposed to the DB being actually ‘down down’.
Now, there’s a way to do this in T-SQL, and there’s a way to do this in PowerShell. Both are fine! I’m not arguing for one or the other — pick a flavor and go with it. This happened to be a day where I was feeling SQL-ish, not PoSh, hence the flavor I chose.
In my example we have our “slow” storage drives ‘D’ and ‘L’ (for Data and Logs, respectively). We’re moving our blessed database to “fast” flash storage drives ‘E’ and ‘M’ (you know, because they come after ‘D’ and ‘L’). The team managing the dependent apps have given their blessing to have a ‘read-only mode’ maintenance window for the database, as long as the actual ‘downtime’ is mere seconds.
Once the DB is in READ_ONLY, we do a backup, preferably to the flash storage so that it’s faster (and the restore, from flash to flash, will be super-fast!). We then restore that backup to a new DB on the flash storage drives, and, as expected, it’s speedy-gonzales.
Once our two copies of the database are up, we’re ready to take that minuscule down-time. We kick the users out of it (with SINGLE_USER mode), detach both DBs, and re-attach them (create database .. on (file), (file) .. for attach) with different names! Presto-change-oh, the ‘original’ DB is now resident on flash storage, and the old slow files are now a DB called “whatever the heck you want it to be, because it doesn’t matter anymore!”.
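The whole flow, sketched in T-SQL. The database name, logical file names, and paths here are all illustrative (your DB will have its own), and a real run would wrap this in checks at every step:

```sql
-- Sketch of the backup / restore-as-new-name / detach / attach-swap flow.
-- 1. Freeze the source.
ALTER DATABASE ERP SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- 2. Backup to the fast storage, restore as a new name on the new drives.
BACKUP DATABASE ERP TO DISK = 'E:\Backup\ERP_migrate.bak' WITH COPY_ONLY, INIT;
RESTORE DATABASE ERP_new FROM DISK = 'E:\Backup\ERP_migrate.bak'
    WITH MOVE 'ERP_data' TO 'E:\Data\ERP.mdf',
         MOVE 'ERP_log'  TO 'M:\Logs\ERP.ldf';

-- 3. The actual downtime: kick everyone out and detach both copies.
ALTER DATABASE ERP     SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE ERP_new SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db 'ERP';
EXEC sp_detach_db 'ERP_new';

-- 4. Re-attach with swapped names: the new files become "ERP",
--    the old slow files become the throwaway name.
CREATE DATABASE ERP
    ON (FILENAME = 'E:\Data\ERP.mdf'), (FILENAME = 'M:\Logs\ERP.ldf')
    FOR ATTACH;
CREATE DATABASE ERP_old
    ON (FILENAME = 'D:\Data\ERP.mdf'), (FILENAME = 'L:\Logs\ERP.ldf')
    FOR ATTACH;

-- 5. Back to business (the restored copy inherits READ_ONLY from the backup).
ALTER DATABASE ERP SET READ_WRITE;
ALTER DATABASE ERP SET MULTI_USER;
```

Only steps 3 and 4 are actual downtime; everything before that happens while the apps are happily reading away.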
Disclaimer: I in no way support nor condone the naming of databases with spaces or special characters. If you actually name a database something like that, no matter how temporarily, I will find you, and throw sporks at your keyboard.
Assuming you didn’t run into any crazy locking problems or sticky-sessions that denied your attempt to force SINGLE_USER mode or detach the DBs, you should be good to go! I even considered building all this into an Agent Job, but… I’m worried it would get tripped up and leave the DBs in a half-arsed state that wouldn’t make our Dev-managers very happy. But hey, that’s what we have test environments for, right?!?
A quick tip for passing default values to an SSRS multi-value parameter during report drill-thru.
Inspired by some StackOverflow-ing and this particular answer.
Aka: “How do I pass/assign a default set of values to a multi-value parameter in SSRS?”
We often have an SSRS report that needs to “drill thru” to another report, usually to go from a “high level view” to a “lower level” or more detailed view, or sometimes just to lead the user down the path we’ve chosen as the head analyst / BI architect. And part of that report navigation involves initializing (set to defaults) the value(s) of the sub-report’s parameters, including multi-value parameters (which I’ll now refer to as mvp, even though, I know, it’s a ridiculously overused acronym). These are basically arrays, but are often represented as simply comma-delimited strings, so it’s easy to forget their true nature.
Let’s fabricate an example. In our Sales Summary report, we have an element (a textbox, image, placeholder, or whatnot) that, when clicked, should drill-thru to Sales by Person. Said next report requires a multi-select parameter (another term for the mvp) to have 1 or more values selected, otherwise it doesn’t render. We’ll call this parameter SelectedNames, with value-label pairings 1=Bob, 2=Alice, 3=Mary. When we drill-thru to this by-Person report, we want it to initially show (have selected by default) all available people.
So how do we do this? In the properties of the “clickable” element on Sales Summary, say it’s a text-box named GoToDetails, we go to the Action tab. We choose the “Go to report” action, select/specify the target report, Sales by Person, and then add the parameters we want to pass to it. For comparison, I’m going to pass a “regular” (single value) parameter called @ReportDate, as well as the mvp SelectedNames. Here’s what that all looks like, in picture form.
The single parameter pass-thru is, as you’d expect, very simple. But for our mvp, we need to use the expression-builder, that little fx button stylized to look like your old high school math class days. Here’s what that function looks like:
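For those reading without the screenshot: the trick (straight out of that SO answer) is the VB Split function, which turns the delimited string back into the array the mvp expects. Using the three value IDs from our example:

```
=Split("1,2,3", ",")
```

Hand that expression to the SelectedNames parameter in the Action’s parameter list, and SSRS treats it as a multi-value selection rather than one long string.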
And presto! We have converted a comma-delimited list into an array to pass into our Sales by Person report’s SelectedNames multi-value parameter. Now it will initially render with all 3 selected people as desired.
So there you have it, a quick tip for passing default values to an SSRS multi-value parameter during report drill-thru. But what if you wanted to pass the selected values of one mvp down to another? Or do some clever on-the-fly mapping (conversion) from one to the next? Well, stay tuned! I’ll write about that next time. =)
Thanks for reading! For a lot more on SSRS and multi-value parameters, check out these articles: @sqlchick, @mssqltips, and @msdn.
What could possibly go wrong? As it turns out, plenty.
Faithful reader(s), it’s been a while! I’ve been busy preparing for some big transitions. I’m also getting better at MDX queries, tweaking SSAS-based reports to more accurately reflect the business rules. But enough about that, on with the post!
In which we doubt the SAN
A storage area network (SAN) is a management & administration solution, not a performance solution.
-someone wiser than me
SANs are wonderful technology. They inspire all kinds of geekery and are purported to solve all your storage woes. But there’s a catch: they’re expensive. Not just as a capital expense, but in maintenance and licensing costs. And if you ever want to upgrade it, like add some more drives to a particular tier/pool — fuhgeddaboudit.
So what do we do with SQL on a SAN? Well, it has tiers, right? Slower storage with huge capacity, faster storage with less, etc. We put the data files (heavy random read workload, typically) on the pool optimized for that kind of I/O pattern. We put the TLog files (heavy sequential write workload) on the pool best suited for that. And what about good ol’ TempDB? Its access pattern is fairly unique — random writes and reads, and frequent overwrites, which means it could potentially wear out your typical prosumer SSD relatively quickly. But we’re not complete cheapskates, we’ll buy enterprise class SSDs, no?
So we go read some stuff and figure, hey, sounds like a great idea, right? Put TempDB on a local SSD, or better yet, a pair of SSDs in RAID-0 for pure performance (because this is a cluster, we’ve got HA already). We’ll reduce the load on the SAN I/O channels and make our overworked TempDB happier with lower latency and better throughput. Right?
In which we discover what could possibly go wrong.
Once the new drive(s) is(are) installed and “presented” to Windows (that’s my SysAdmin’s term), it’s fairly trivial to do the SQL configuration change — it does of course require a SQL service restart (or cluster failover). Code example, assuming your new drive is ‘T’:
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdb2, FILENAME = 'T:\tempdb2.ndf');
You do of course have multiple TempDB data files, yes? Good.
Should we put templog (TempDB’s transaction log) on the same drive as the TempDB data files, or put it on the same storage pool as the regular DBs’ TLogs? As usual, “it depends” — ask your favorite SQL gurus and do some testing.
Back on topic
We’ve made the change, we’ve done the cluster failover. TempDB is now running on our spankin’ new SSD. So we start monitoring performance metrics. Things like file I/O stats (from SQL DMV sys.dm_io_virtual_file_stats), latency and waits (from our monitoring tools), and good ol’ PerfMon.
But wait, what’s this? I/O stalls are higher? Write latency is higher?!? Perfmon agrees?
Write latency on the TempDB files was over 10x higher than it was when they were on the SAN (the performance tier, to be clear). The file_stats DMV showed large increases in I/O stalls. Sad-trombone.
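If you want to reproduce that measurement, here’s roughly the query we used against the file-stats DMV. Keep in mind the numbers are cumulative since instance startup, so you want to compare snapshots taken before and after the change, not raw totals:

```sql
-- Per-file write latency for TempDB, cumulative since startup.
SELECT DB_NAME(vfs.database_id)          AS database_name,
       mf.name                           AS file_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END                               AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_latency_ms DESC;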
In which we have several theories
Then ensued various conversations and brainstorms among my colleagues.
Someone check the firmware/drivers!
Maybe it’s got the wrong block-size.
Well, it’s only 6Gbps SAS… maybe we should’ve sprung for the 12Gbps.
The write latencies went up by a factor of 10. I don’t think an improvement by a factor of 2 is going to win you any trophies.
Why didn’t we get an NVMe or M.2 one?
Because the damn blades don’t have those slots, goober.
Another interesting observation, and potentially the silver lining. Overall instance waits (wait stats), according to our monitoring tool, went down. That’s good news, right? Maybe. Does application performance & user experience corroborate it? Possibly! We’ll be observing the patient for another week or so.
Let’s turn to the community again to see what others have experienced.
Oh wait, it might not be such a fantastic idea after all.
And by “we” I mean “me”. Being the DBA and the primary proponent of the SSD addition, because I knew our workloads were very TempDB-heavy, I had to tuck tail and admit that the SAN gods won this round.
But wait, what about the fact that our wait stats are down? What about app/user experience? Valid arguments, I agree. That’s why we’re still observing. But I’m not optimistic, given the follow-up links above. We may utilize local SSDs for something else (index filegroups?) — but if those write latencies don’t improve, I’m concerned that it won’t help anybody.
In which I ask for your help
Yes, you! If you have ideas on what we did wrong, what we’re missing, or any other advice about getting the most “bang for the buck” out of a direct attached SSD on a converged-infrastructure Cisco UCS blade server platform with a VNX SAN, by all means, drop me a line. I’m all ears.
We think the total operation will take several hours, possibly days. So we want to be able to check up on it.
While I try to compose my resources and samples for the previous two teasers, I found this to be a particularly interesting problem and thought I’d share my solution.
We have a very large table, half a billion rows. It needs to be copied (well actually, indexed, but that’s another part of the discussion). We know that standard T-SQL would suck for this, even if we wrote a batch-loop proc for it (sure, it might be less blocking, but it’ll take forever). Plus, we might be going between two different servers (well, not in my case, but you very well could be, and in fact that’s probably the more likely case, given the implications of such a large operation). SSIS to the rescue!
Now, again, we’re lazy, so we use the Import/Export Data wizard (a component of SSIS) to build the .dtsx package. Then we can schedule it via a SQL Agent Job.
Sure, we could build it with Visual Studio and/or BIML and be super-awesome, but where’s the fun lazy in that?
Based on some preliminary estimates of the throughput using SSIS for a small sub-set of the data, we think the total operation will take several hours, possibly days. So we want to be able to check up on it — to see its status & elapsed time, and get an estimate of time remaining. Sound good?
Two assumptions going into this, to keep the example simple.
We know the Agent Job’s scheduled start time, and it does start on-schedule.
We’re only dealing with 2 tables — 1 source, 1 destination. Furthermore, they’re both on the same server instance; or if not, we can query a linked-server connection once to get the “source” table size-stats, because they won’t change.
(Okay that was slightly more than 2, but again, simple. One can imagine expanding this to multiple tables & multiple instances, but then at some point you’re going to need a “witness” that can talk to all the disparate sources of data and conglomerate those bits & pieces together, and then you have to ask yourself “is it really worth it, or can I just give a SWAG & move on with my day?”)
Before I move on: a helpful SO answer that reminded me how surprisingly-not-that-difficult (and-still-pretty-efficient) it is to convert a time interval (seconds, in this case) into a “human friendly” format like Days.hh:mm:ss. Yay.
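In the spirit of that answer, a quick sketch: split off the whole days with integer division, then let CONVERT style 108 format the leftover hh:mm:ss.

```sql
-- Convert a seconds interval to 'Days.hh:mm:ss'.
DECLARE @seconds int = 93784;  -- 1 day, 2 hours, 3 minutes, 4 seconds

SELECT CAST(@seconds / 86400 AS varchar(10)) + '.'
     + CONVERT(varchar(8), DATEADD(SECOND, @seconds % 86400, 0), 108)
       AS human_friendly;
-- → 1.02:03:04
```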
One key component of any kind of “check-up” or general monitoring solution is that it needs to be lightweight, i.e. have low overhead. We don’t want our method of monitoring the process to add some non-trivial extra load to that process (or the servers doing said processing). So let’s avoid COUNT(*), shall we? Unfortunately, the top Google results don’t readily point to this, but MS actually gave us a handy-dandy built-in way of measuring this: sys.sp_spaceused. Yay again.
Granted, it doesn’t get you “up to the millisecond” accuracy, but it’s truly close enough for what we’re doing; and frankly, it’s your only sane option when asking the question “how many rows?” of a TOUS.
So we’re going to use the output of that system proc, specifically the rows column, to measure how much data has been bulk-copied into our destination table. We’ll also measure the source table, once, as stated, because (we’re assuming!) that won’t change.
Finally, we’ll use those row-counts, with some simple tricks & nonsense — basic math, dateadd/datediff — to calculate a percent-complete, elapsed time, estimated total time, and ETA (estimated time of arrival completion).
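Putting it all together, here’s a sketch of the check-up query. The start time, source row count, and destination table name are assumptions from our setup (sp_spaceused actually returns its size columns as strings, hence the temp-table shape below; the source count was measured once up front):

```sql
-- Assumed inputs for this example.
DECLARE @JobStart   datetime = '2017-06-01 02:00:00';  -- Agent job's scheduled start
DECLARE @SourceRows bigint   = 500000000;              -- measured once from the source

CREATE TABLE #SpaceUsed (
    name       sysname,
    rows       bigint,        -- implicit convert from sp_spaceused's char output
    reserved   varchar(50),
    data       varchar(50),
    index_size varchar(50),
    unused     varchar(50)
);

INSERT INTO #SpaceUsed
EXEC sys.sp_spaceused 'dbo.BigTableCopy';   -- hypothetical destination table

DECLARE @DestRows   bigint = (SELECT rows FROM #SpaceUsed);
DECLARE @ElapsedSec int    = DATEDIFF(SECOND, @JobStart, GETDATE());

SELECT @DestRows                                         AS rows_copied,
       100.0 * @DestRows / @SourceRows                   AS pct_complete,
       @ElapsedSec                                       AS elapsed_sec,
       @ElapsedSec * @SourceRows / NULLIF(@DestRows, 0)  AS est_total_sec,
       DATEADD(SECOND,
               @ElapsedSec * @SourceRows / NULLIF(@DestRows, 0),
               @JobStart)                                AS eta;

DROP TABLE #SpaceUsed;
```

Run it whenever curiosity strikes; sp_spaceused reads the catalog metadata, so it costs next to nothing against the table being loaded.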