You ever have a paginated report, i.e. a standard old-fashioned report with a big table (tablix, matrix), where you want the users to be able to sort by any column they want? And let’s say one such column is ‘Revenue’ — some money amount where the “biggest is the most important”. Like your top customer, for example.
Now, you know about ‘Interactive Sorting’ right? No?!? Oh it’s awesome! Let me remind you. It’s super easy to configure: just get to the Text Box Properties of the header box of whatever column you want to use, & go to the ‘Interactive Sorting’ tab. Like so:
Make sure you pick the appropriate Field in the Dataset to “Sort by”. Hint: it’s usually the one you’re displaying in that column!
But you know what’s kinda annoying? You can’t dictate a “first sort direction” — it just assumes that the first time you click the sort-arrows, you want ‘Ascending’ (lowest first). Then you can switch to ‘DEscending’ (highest first). This makes perfect sense for alpha values (strings), but not always for numeric values — at least not when you’re dealing with money, when generally the highest dollar amount is the most important!
So let’s make it real simple for the end-user to get the ‘best’ behavior by default. Let’s try to make it sort by our ‘Revenue’ column in DEscending order first. Ready?
It’s so small in that picture you might miss it. I’m doing this, but I’ll add whitespace below for easier reading:
= - Fields!Revenue.Value
..^look, there's the minus sign!
(assuming your viewing device hasn't mangled the fixed-width text)
You could also get more pedantic/readable and spell-it-out like so:
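For example, multiplying by negative one accomplishes the same thing:

```
= Fields!Revenue.Value * -1
```

Either way, the sort expression's value is negated, so sorting 'Ascending' on the expression is the same as sorting 'DEscending' on Revenue.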
Right? Beautiful. So what does it look like on the report?
Notice my silly hyphenated column names, like ‘Custom-er No’ and ‘Rev-enue’. That’s because those sort-icons take up space within the textbox, and you can’t control it, so the text wraps in a non-obvious manner. Thus, I purposefully added a hyphen and a line-break to the text, so I could control how they looked, instead of leaving it to the wiles of the SSRS render-er.
While there’s no native control like it, with a few simple tricks and visual sleight-of-hand, we can fake it.
Today I present to you, a hopefully less confusing and more intuitive explanation of how to make a “multi-column list” control on your SSRS report. And if the term is confusing by itself, I’ll illustrate in a second.
Three links that helped me get going in the right direction:
They all roughly boil down to the same concept: placing redundant or repetitive controls (like a set of 2 or 3 tablixes), and using a RowNumber() expression with some modulo arithmetic to set “visibility” properties of the dataset rows shown in each.
We don’t quite get to “dynamic # of columns and rows based on the total # of records” nirvana (e.g. make it a 3×5 grid if you have somewhere around 15 records, but only make it 2×4 if you have 8 or fewer records, etc.), but I’d estimate that this approach would suffice in 90% of use-cases. And whoever at Microsoft decided not to build a control like this into the toolset, curse you!!
Yes, we’re not even past the intro section and I’ve already cursed someone at MSFT. Doesn’t change my passion for the platform in general; it’s just a minor curse. =)
SSRS uses a dataset to fill one or more controls with data; controls such as the tablix. In my regular implementations, a dataset is usually a stored-proc. The tablix shows the results of the proc in tabular form. The problem with the tablix control is that you can’t turn it into a multi-column layout, as you might see with some more advanced UI controls. You might think that the matrix control would be suitable for this, but no; that is just a table-like structure that can support hierarchical column-groups and row-groups.
For example, let’s say you have a table of Users, with columns ID, Name, and Role. Like so:
ID | Name  | Role
---|-------|------
1  | Bob   | Admin
2  | Alice | User
3  | Jack  | User
But with many more records. We want to display this on a report, which is typically viewed on a PC screen, i.e. landscape orientation (wider than tall). So wouldn’t it be nice if we could display it like so?
ID | Name | Role  || ID | Name  | Role || ID | Name | Role
---|------|-------||----|-------|------||----|------|-----
1  | Bob  | Admin || 2  | Alice | User || 3  | Jack | User
etc. Right? Right.
The actual work is fairly simple. We add 3 tablixes to the report and place them side-by-side. We connect them all to the same dataset, i.e. stored-proc — which will only be executed once, conveniently! On the data row (as opposed to the header row) of each tablix, right-click to Row Visibility. Then use an expression like so:
=IIf(RowNumber(Nothing) Mod 3 = X, False, True) — where the modulus is the number of tablixes (3, in our case), and X is going to vary from 1 to (# of columns) - 1, and finally to 0 for the last (right-most) tablix.
Remember, the expression defines the row’s Hidden property, not “Visible”. I know, it seems backwards, but that’s how it is.
In plain English, we mean, “if the row is the 1st row, show it, else hide it”. Likewise, for the second tablix’s row, we’d say “if the row is the 2nd row, show it”. And so on.
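Spelled out for our three-tablix example, the row Hidden expressions end up looking like this (note the modulus matches the number of tablixes, and True means hide):

```
Tablix 1 (left):   =IIf(RowNumber(Nothing) Mod 3 = 1, False, True)
Tablix 2 (middle): =IIf(RowNumber(Nothing) Mod 3 = 2, False, True)
Tablix 3 (right):  =IIf(RowNumber(Nothing) Mod 3 = 0, False, True)
```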
Not bad, right?
It’s certainly odd that there’s no native way of building such a report. But with a few simple tricks and visual sleight-of-hand, we can fake it.
Go check out your AWS RDS SQL Error Logs! See what tickles your curiosity.
Today’s post is brought to you by an unplanned AWS RDS outage, the desire to search its SQL Error Logs for events during the target time-frame, and the horrendously inefficient interface which AWS gives you in the GUI console for reading said logs.
Even the dedicated ‘admin’ user that you create for your instance doesn’t have permission to read the error logs via the SSMS GUI or with xp_readerrorlog. You can, however, use sp_readerrorlog. That’s with an ‘S’.
The parameters here are quite arcane, namely @1, @2, @3, @4. Unfortunately, none of them allow you to filter on a time-span (those are the 5th and 6th parameters of the underlying xp, which we don’t have access to, as per #1).
My solution involves a #temptable, a loop of insert / exec commands, and then whatever queries you desire to search/filter/analyze the results. I also throw in a conversion to local time (from UTC, which is what the AWS servers use).
Details and The Why
You can check out the script; it’s short enough to embed, IMHO, so I’ll just leave it right here.
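In case the embed doesn’t render for you, here’s a rough sketch of the approach (the timezone, dates, and log-file count are examples; swap in whatever suits your environment):

```sql
-- Sketch: read the last @NumLogFiles error logs into a temp table, then filter.
-- Requires SQL 2016+ for AT TIME ZONE. RDS stores log times in UTC.
DECLARE @NumLogFiles int = 5
      , @StartDate   datetime = '2019-06-01 02:00'
      , @EndDate     datetime = '2019-06-01 02:30'
      , @i int = 0;

CREATE TABLE #ErrorLogs (
      LogDateUTC  datetime
    , ProcessInfo nvarchar(100)
    , LogText     nvarchar(4000)
);

-- Loop the insert/exec over the desired number of log files (0 = current).
WHILE @i < @NumLogFiles
BEGIN
    INSERT INTO #ErrorLogs (LogDateUTC, ProcessInfo, LogText)
        EXEC sp_readerrorlog @i;
    SET @i += 1;
END;

-- Chain two AT TIME ZONE calls to go from UTC to local, CONVERT back to
-- plain datetime for readability, then filter on the target time-frame.
SELECT ca.LogDateLocal, el.ProcessInfo, el.LogText
FROM #ErrorLogs el
CROSS APPLY (
    SELECT LogDateLocal = CONVERT(datetime,
        el.LogDateUTC AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time')
) ca
WHERE ca.LogDateLocal BETWEEN @StartDate AND @EndDate
ORDER BY ca.LogDateLocal;

DROP TABLE #ErrorLogs;
```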
Lines 25-26 are particularly interesting to me, and only work with SQL 2016 and up. I got the idea from this StackOverflow answer. You can chain two AT TIME ZONE commands together to convert a given datetime value from one zone to another. The reason you still need the CONVERT is because the output of the AT TIME ZONE command is always a datetimeoffset type, which, while quite useful in its own right, has its quirks, and doesn’t serve our purposes for ease-of-readability.
If you’re not running 2016, at least in RDS, you’ve got nearly no excuse. The upgrade process is vastly simpler with RDS than with traditional on-prem servers. Although, I did run into my share of snags with it recently, which I’ll blog about later.
You should plug in whatever values suit your need & environment — the @NumLogFiles and @StartDate & @EndDate. I used 2-2:30am, because… well, that’s always when those damn outages seem to happen, ain’t it?
As I mentioned, “the Why” is basically because AWS RDS limits your permissions (even as an admin) in some key ways, and one of those limitations prevents you from reading the error logs in the more “normal” ways — SSMS GUI, xp_readerrorlog, etc. And the interface given to read the logs in the AWS console GUI is quite a sad-panda. They offer a wrapper proc rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1 , which really only serves the purpose of naming the parameters more nicely; under the hood it’s doing the exact same thing.
Of course we can’t prove that, because it’s encrypted, but the results are identical.
So there you have it. Go check out your AWS RDS SQL Error Logs! See what tickles your curiosity. =)
There are many great use-cases for the T-SQL APPLY operator, and if you haven’t tried it yet, I encourage you to check out Kevin Feasel’s excellent presentation on it here. Today, I want to demonstrate a particularly interesting use-case (at least, to me!) that I found for it while tackling a real business problem — for reporting, specifically, which is what I spend a lot of my time dealing with lately.
We have a “history of estimated market values” table, which stores, for some arbitrary dates based largely on user input, values (prices) of our proverbial Widgets. In other words, we can’t guarantee we have every date in there for a given time period. Over on side B, we have a “sales” table, which tells us when a given Widget was sold and for how much. The business question, simply enough, is “When a given Widget was sold, how accurate was its ‘estimated market value’ at the time?” Or, put another way, “How closely did the sale price of that Widget match our estimated market value?”
I used two interesting bits of TSQL to accomplish this, the main one being our lovely APPLY operator. OUTER APPLY, in this case, because I still wanted the rows from “sales” even if we couldn’t find a closely-matching “market value” record.
If you haven’t used this operator yet, think of it conceptually like a JOIN to a sub-query, but with the added value that you can reference the outer tables’ columns within it. CROSS APPLY is like INNER JOIN, and OUTER APPLY is like LEFT JOIN.
Sorry, RIGHT JOIN fans, no love for you here. You’re just too weird.
My other little trick involved using TOP(1) in the apply’d sub-query, with a non-standard ORDER BY clause. As I mentioned, within the APPLY‘s body, you can reference the outer table’s columns. So I’m easily able to compare the SaleDate (outer table) with the EstimateDate (inner query). I want “the closest EstimateDate to the SaleDate”, which means I want the row where the difference between those two dates is the smallest. Which means making use of our friend DATEDIFF. So let’s try:
ORDER BY DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate)
Do you see the problem yet? If I get a negative value, i.e. my SaleDate is way before my EstimateDate, say -100 days, that’s the top 1 row. I don’t want that! Let’s try our old friend from many many math classes ago, Mr. Absolute Value.
ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, MktValueHist.EstimateDate))
That’s better! Now I have the top 1 “closest to my sale date” row from MktValueHist.
All Together Now
I’ll put up a Gist with repro/demo code soon. Here’s the meat of it, as a sample select, for convenience:
SELECT Sales.WidgetID, Sales.WidgetName, Sales.Price, Sales.SaleDate
, mvh.MarketValue, mvh.EstimateDate
, Accuracy = some_made_up_mathy_thing_here
FROM Sales
OUTER APPLY (
SELECT TOP(1) mValHist.MarketValue, mValHist.EstimateDate
FROM MarketValueHistory mValHist
WHERE mValHist.WidgetID = Sales.WidgetID
ORDER BY ABS(DATEDIFF(day, Sales.SaleDate, mValHist.EstimateDate))
) AS mvh
WHERE Sales.SaleDate >= '20180101'
There’s my completely fabricated yet totally-based-in-reality example of how to get 2018’s Widget Sale Prices with the corresponding “closest to the sale-date” Market Value Estimate from our history table. You could even throw in some fancy math expression for “accuracy”, if you felt like it. Like maybe “relative difference”. =)
Note: For simplicity, we’re dealing with “whole dates”, i.e. just date. Not datetime or datetime2. If your date values do include times, you’d want to change the DATEDIFF interval to something more appropriate, like second or millisecond.
And that’s how APPLY and ABS() saved my bacon today. Hooray!
Have an interesting use-case for APPLY? Or perhaps an alternative approach to a similar problem? I’d love to hear from you! Drop me a comment, a tweet, or a LinkedIn msg. Happy summer!
PS: I’d also recommend checking out Steve Stedman’s useful “Join types poster”, if for no other reason than to have something concrete to use when explaining those concepts to your fellow cube-dwellers. It has cross/outer apply on the 2nd page in their more commonly used pattern, the TVF (table valued function) relationship.
I use a Central Management Server too, and although I don’t often use it to run a query against multiple instances, it’s definitely a handy built-in feature to take advantage of. A minor downside is that it only supports Windows Authentication (not SQL logins), so I can’t use it for my AWS RDS instances, nor for the CMS server itself — those I have to keep stored in my local “Registered Servers” section. Another tool for running queries against multiple instances, with a good deal more flexibility, is Red Gate’s MultiScript, though it’s not free. ;o)
Ethervane Echo, a clipboard manager and history-remember-er, is similar to something I use called Clipboard Fusion — in fact, it might even be better. And who doesn’t love dbatools and dbachecks ? If you’re not using them yet, don’t wait; start getting into PowerShell today by at least trying out some of the ‘get’ cmdlets from dbatools.
Telegraf looks absolutely stunning, as a monitoring system. It does take some setup work and some maintenance, but it’d be a great branch-out learning opportunity to touch a few different technologies that a traditional SQL DBA might not normally think of. Hats off to the people behind that, and may it continue to grow.
Leave it to Bert to go “outside the box” a bit, with these tools that help you be a better presenter and collaborator. I use BeyondCompare, which is similar to WinMerge (tho, again, not free); I’ve fallen in love with its features that go beyond file diff/merge, but it’s nice to have a free option whenever I’m not on my main machine.
This is a broad sweeping post but it captures a LOT of what the community is and should be. We’re inclusive, we want people to participate, grow & learn from each other, and ultimately advance their careers. Tons of useful gems in here, from the Slack workspace to the event links to the networking advice. Excellent stuff; go read it.
The SQL DB Modeler beta looks really interesting as an alternative to traditional big-$$$ tools like ER/Studio & Erwin. If only I wasn’t stuck in brown-field legacy data models 95% of the time… =D
And finally, although they’ve probably been mentioned a few times already, pastetheplan and statisticsparser are two amazingly simple tools from the Brent Ozar folks that make sharing and comparing query performance so much easier. My M.O. is to use PasteThePlan links in a dba.stackexchange post so that others can easily see the graphical execution-plan to offer feedback; while I use StatisticsParser to compare between A/B-testing runs of a stored-proc I’m trying to refactor & improve.
Because I’m already ridiculously late, I have a short one. This is about orphaned users — you know, when you restore a database and its users aren’t mapped to the server logins that they should be or used to be.
The typical solution is sp_change_users_login with the auto_fix or update_one option. But guess what? Yep, that’s deprecated. By the way, did you know that it also has a report option? Apparently that’s got some bugs…ish? Try it sometime and see — compare the output of sys.sp_helpuser where the ‘LoginName’ column is null, with sp_change_users_login 'report'. Preferably on a DB you’ve restored from another server. 😉
So what’s the correct solution? ALTER USER [theUser] WITH LOGIN = [theLogin]. Simple, no? Let’s get more general. Could we come up with a half-decent way do apply this kind of fix dynamically? Well sure, the nice folks at DBATools have already solved that problem. And that’s great, really. But just in case that doesn’t work… ^_^
One of the many things I love about SQL Prompt is the right-click option to “Script as INSERT” (from the results grid). This is a quick & easy way to build a temp-table for the results of an exec statement so you can do the ol’ insert #tmp exec sys.sp_blah! Then we can query the list of DB users for the null LoginNames and write a little set of queries to fix them! Sound good?
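Here’s a sketch of that idea (the #DbUsers columns mirror sys.sp_helpuser’s output shape as best I recall, and this only covers the common case where the orphaned user shares its name with a server login):

```sql
-- Capture sp_helpuser's output so we can query it.
CREATE TABLE #DbUsers (
      UserName      sysname
    , RoleName      sysname NULL
    , LoginName     sysname NULL
    , DefDBName     sysname NULL
    , DefSchemaName sysname NULL
    , UserID        int NULL
    , [SID]         varbinary(85) NULL
);

INSERT INTO #DbUsers
    EXEC sys.sp_helpuser;

-- Orphans have a null LoginName; generate the fix for any that have a
-- same-named login on the server. (Copy/paste & run the output, or wrap
-- it in dynamic SQL if you're feeling brave.)
SELECT DISTINCT
    FixIt = 'ALTER USER ' + QUOTENAME(u.UserName)
          + ' WITH LOGIN = ' + QUOTENAME(sp.name) + ';'
FROM #DbUsers u
JOIN sys.server_principals sp
    ON sp.name = u.UserName
WHERE u.LoginName IS NULL;

DROP TABLE #DbUsers;
```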
PS: Coincidentally, today’s (Thursday) SQL Server Central newsletter featured a very similar post by a gentleman over at Madeira Data. Go check it out, it’s another great solution to this problem! And while you’re at it, get the SQL Server Radio podcast (created by a couple guys from the same company) – it’s a terrific addition to your iTunes library.
Renaming a Database looks easy, but it’s really not. Unless nobody’s touching it. Let’s find out why!
Databases sometimes need to get a new name. Or you need to swap names. It can be a royal PITA if the DB is a mission-critical always-being-used everybody-and-their-mother-touches-it-all-the-time thing.
Errors like “The database must be in a state in which a CHECKPOINT can happen” or “Cannot obtain exclusive access to the database” or “This transaction was chosen as the deadlock victim” abound. Sometimes it’s better to fall back on our old friends, detach & attach.
Let’s paint a picture!
Our ERP database has been chosen by the IT gods to get moved to the shiny new flash storage array, off the old spinning-rust SAN. This is fantastic news for the business users. But lo, the executives warn us, “You must do this with no downtime!” (said in my best Brent Ozar PHB-imitation voice). Of course when we tell them that’s impossible, they say, “OK, you must do this with minimal downtime.” That’s mo’ betta’.
So what are our typical options for doing a database migration? Or, more specifically, a data file migration. See, we’re not moving to a new server, and we’re not moving a bunch of databases together; we’re just moving this one ERP database. And we’re keeping it on the same SQL instance, we’re just swapping the storage underneath. Now yes, in a perfect world, both storage arrays (both SANs) would inter-operate and they’d be able to take a snapshot, pass it over, spin it up, swap it out, and our SQL instance would be none-the-wiser. But alas.
Other options? Log-shipping, sure; it’s been around forever, it’s very mature, simple, and relatively easy to operate, even if it lacks a bit in terms of automation capability. But let’s face it, the DBA is going to be sitting in front of his/her screen babysitting this operation no matter what, so that’s no biggie. Mirroring? Meh. Expensive Edition’s still got all the hot action there, and we’re not that fancy. Availability Groups? Sure, if we already had them set up & running. But we don’t. Sad-trombone.
Back to Basics
So we circle back to the classics, “backup & restore”, “detach-move-attach” routines. But there’s a hidden gem of a hybrid here: We can backup, restore as a new name, then switch the names! The catch is, we need to put the original DB in read_only mode while we do this, to prevent new/updated data from coming in. Now, when we ask the Dev-managers if their overlaying apps/websites can handle the DB being in read-only mode for an hour, they’re much more inclined to say OK to that, as opposed to the DB being actually ‘down down’.
Now, there’s a way to do this in T-SQL, and there’s a way to do this in PowerShell. Both are fine! I’m not arguing for one or the other — pick a flavor and go with it. This happened to be a day where I was feeling SQL-ish, not PoSh, hence the flavor I chose.
In my example we have our “slow” storage drives ‘D’ and ‘L’ (for Data and Logs, respectively). We’re moving our blessed database to “fast” flash storage drives ‘E’ and ‘M’ (you know, because they come after ‘D’ and ‘L’). The team managing the dependent apps have given their blessing to have a ‘read-only mode’ maintenance window for the database, as long as the actual ‘downtime’ is mere seconds.
Once the DB is in READ_ONLY, we do a backup, preferably to the flash storage so that it’s faster (and the restore, from flash to flash, will be super-fast!). We then restore that backup to a new DB on the flash storage drives, and, as expected, it’s speedy-gonzales.
Once our two copies of the database are up, we’re ready to take that minuscule down-time. We kick the users out of it (with SINGLE_USER mode), detach both DBs, and re-attach them (create database .. on (file), (file) .. for attach) with different names! Presto-change-oh, the ‘original’ DB is now resident on flash storage, and the old slow files are now a DB called “whatever the heck you want it to be, because it doesn’t matter anymore!”.
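Sketched out in T-SQL, the whole flow looks something like this (all database names, logical file names, and paths here are made up for the example):

```sql
-- 1. Freeze the source DB and back it up to the fast array,
--    so the restore is flash-to-flash.
ALTER DATABASE ERP SET READ_ONLY WITH ROLLBACK IMMEDIATE;
BACKUP DATABASE ERP TO DISK = 'E:\Backup\ERP.bak' WITH INIT;

-- 2. Restore a copy onto the fast drives, under a temporary name.
RESTORE DATABASE ERP_Fast FROM DISK = 'E:\Backup\ERP.bak'
    WITH MOVE 'ERP_Data' TO 'E:\Data\ERP.mdf'
       , MOVE 'ERP_Log'  TO 'M:\Logs\ERP.ldf';

-- 3. The seconds-long "down down" window: kick everyone out, detach both.
ALTER DATABASE ERP SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db 'ERP';
EXEC sp_detach_db 'ERP_Fast';

-- 4. Re-attach with swapped names: the fast copy becomes the 'real' ERP.
CREATE DATABASE ERP
    ON (FILENAME = 'E:\Data\ERP.mdf'), (FILENAME = 'M:\Logs\ERP.ldf')
    FOR ATTACH;
CREATE DATABASE ERP_OldSlow
    ON (FILENAME = 'D:\Data\ERP.mdf'), (FILENAME = 'L:\Logs\ERP.ldf')
    FOR ATTACH;

-- 5. The restored copy inherited READ_ONLY from the source; flip it back.
ALTER DATABASE ERP SET READ_WRITE;
ALTER DATABASE ERP SET MULTI_USER;
```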
Disclaimer: I in no way support nor condone the naming of databases with spaces or special characters. If you actually name a database something like that, no matter how temporarily, I will find you, and throw sporks at your keyboard.
Assuming you didn’t run into any crazy locking problems or sticky-sessions that denied your attempt to force SINGLE_USER mode or detach the DBs, you should be good to go! I even considered building all this into an Agent Job, but… I’m worried it would get tripped up and leave the DBs in a half-arsed state that wouldn’t make our Dev-managers very happy. But hey, that’s what we have test environments for, right?!?