SQL Server for the IT Helpdesk

Know which servers are running SQL Server, monitor the services, and have an escalation procedure for failures.


Today’s post will be much less philosophical and (hopefully) much more practical. This is a topic that I’ll actually be presenting at work to my fellow IT team members, who are mostly Helpdesk and Infrastructure people with no Dev or database background. My goal is to explain the basics of SQL Server from an IT infrastructure perspective, and I hope that readers outside my team find it useful as well! So let’s get to it. Fair warning: lots of abbreviations ahead! But fear not, I will define them – I don’t want to assume any prior knowledge, other than basic IT skills & Windows familiarity.

This might be a bad sign…

I also decided to break this up into 2 posts, because it got pretty lengthy.  So, here goes part 1!

SQL Server is, obviously, Microsoft’s relational database management system, or RDBMS. “Relational” means it’s based on the concept of relationships between entities (or objects, if you will; though mostly we mean tables). In layman’s terms, the relations can be expressed as “is a..”, “has a..”, or “has many..”; for example, a Customer object “has a” Contact-Info object, which probably “has many” Phone#s, Emails, etc. SQL Server is a transactional, fully ACID compliant database system – Atomicity, Consistency, Isolation, Durability. I won’t bore you with the computer science theory stuff; you can read all about it on Wikipedia or your favorite reference site, so let’s move on.
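To make that “has many” idea concrete, here’s a tiny sketch of how it looks in T-SQL. The table and column names are purely hypothetical – the point is just that the relationship lives in a foreign key:

```sql
-- Hypothetical example: a Customer "has many" phone numbers,
-- expressed as a foreign-key relationship between two tables.
CREATE TABLE dbo.Customer (
    CustomerId  INT IDENTITY(1,1) PRIMARY KEY,
    Name        NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.CustomerPhone (
    PhoneId     INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId  INT NOT NULL
        REFERENCES dbo.Customer (CustomerId),  -- the "has many" link
    PhoneNumber VARCHAR(20) NOT NULL
);
```

The engine enforces that link for you – you can’t insert a phone row for a customer that doesn’t exist. That’s the “C” (Consistency) in ACID doing its job.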

One of the family!

There are a handful of components that make up SQL Server, and I’ll list them in what I think is the “most to least important” order, or at least most to least commonly used. Infrastructure-wise, at its core, SQL Server is basically an “application” and persistent data storage mechanism. It’s a set of processes (services) that run on your server and operate on specific sets/types of files, which we’ll cover in a bit.

First and foremost of said core components is the database engine. This is the DBA’s bread & butter – it’s the service process, sqlservr.exe, which handles all the queries and scripts, manipulates the data files & transaction logs, ensures referential integrity and all that other ACID-y goodness. Like any good service, it is best run under a dedicated service account, especially in a Windows domain environment. Otherwise, it will run as a built-in system account, like NT Service\MSSQLSERVER. As a service, it’s set to “Automatic” startup, and it accepts a number of startup parameters which can help govern and tweak its behavior. Most of those details are outside the scope of this post, so go ask your friendly (or surly, depending on the day) DBA if you’re curious.
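If you can already connect to the instance (say, via SSMS), you can ask the engine about its own services – account, startup type, and status – without ever touching the Services console. This works on SQL Server 2008 R2 SP1 and later:

```sql
-- What services is this instance running, under which accounts,
-- and are they actually up?
SELECT servicename,
       service_account,
       startup_type_desc,
       status_desc,
       filename          -- path to the service binary
FROM sys.dm_server_services;
```

Handy for spot-checking that the engine and agent are running under the dedicated service accounts they’re supposed to be.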

It’s an older infographic, sir, but it checks out.

A bit of key terminology: each running database engine is what we call a SQL Server instance. Typically you dedicate one physical (or virtual) server to each instance; however, in rare cases (usually in the lower tier environments like dev/test), you will see multiple instances on one server – this is called instance stacking, and it’s not usually recommended by DBAs due to resource governance complications. You typically reference a SQL Server instance simply by the server name that it’s running on, because the “default instance”, while technically named MSSQLSERVER, does not require any special referencing. So if my SQL server box was named FooBar, I would connect to the SQL instance with the name FooBar. If you get into stacked instances, you have to start naming the other instances uniquely, and your connections now have to include them with a backslash, e.g. FooBar\SQL2, FooBar\SQL3, etc.
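Once connected, you can confirm exactly which instance you landed on. Note that for a default instance, `SERVERPROPERTY('InstanceName')` comes back NULL – the MSSQLSERVER name really is invisible in practice:

```sql
-- Which instance am I connected to?
SELECT @@SERVERNAME                   AS full_instance_name,  -- e.g. FooBar or FooBar\SQL2
       SERVERPROPERTY('MachineName')  AS machine_name,
       SERVERPROPERTY('InstanceName') AS instance_name;       -- NULL on a default instance
```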

As I said, the engine manages the data files. We’re talking about a permanent data storage system, so that data has to “live” somewhere – namely, mdf and ldf files (and sometimes ndf too). The mdf (and ndf) files hold the “actual data”, i.e. the tables, rows, indexes, & other objects in each database. The ldf files represent the transaction logs, or “T-logs” for short, which, again, are a fundamental part of the RDBMS. They allow for and enforce those fancy ACID properties. All of these files are exclusively locked by the sqlservr.exe process, meaning, nothing else is allowed to touch or modify those files but SQL Server itself, while SQL Server is running – not your antivirus, not your backup software (SQL has its own backup methodology), not even Windows itself. The only way to interact with and manipulate those files, or rather, the data within those files, is through the engine.

You can look, but you can’t touch!

Key takeaway: know which servers are running SQL Server, monitor the sqlservr.exe service(s), and work with your DBA to ensure that your AV software excludes the locations of those data files, and that your backup strategy includes & works in conjunction with SQL-based backups.
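When you sit down with your DBA to build that AV exclusion list, the instance can tell you exactly where every one of its files lives:

```sql
-- List every data (mdf/ndf) and log (ldf) file the instance owns --
-- useful for building antivirus exclusion lists with your DBA.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,           -- ROWS = mdf/ndf, LOG = ldf
       physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;
```

Exclude those paths (or at minimum the folders containing them) from real-time AV scanning, per your DBA’s guidance.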

In a close second, we have the SQL Server Agent, sometimes just “agent” for short. Think of this like Windows Task Scheduler on steroids, but specifically focused within the realm of SQL Server. Like the engine, it runs as a service process – sqlagent.exe – again, typically set to automatic startup and running under a dedicated service account. (There’s a small debate on whether it’s best to use the same service account as the engine, or a separate one; in the end it doesn’t really matter much, so it’s mostly at the discretion of your DBA team and/or the Active Directory admin.)

Shamelessly borrowed from the Database Whisperer

SQL Server Agent is basically free automation – we can use it to schedule all kinds of tedious operations and database maintenance tasks, anything from “run this query every day at 12pm because Mary in accounting runs her big report after lunch and it needs to have data from X, Y, & Z”, to running the hallowed “good care & feeding of indexes & stats” that any DBA will tell you is not only essential to decent database performance, but essential to life itself (or at least, your SQL server’s life). These sets of work are called Agent Jobs (or just “jobs”), which consist of one or more Job Steps. A job has one or more Schedules, which dictate when it runs – such as, again, daily 12pm, or “every Saturday & Sunday at 5pm”, or “every 15 minutes between 8am and 9pm”. Ideally, jobs should be configured to alert the DBA (or, in some cases, the broader IT team) when they fail, so that appropriate action can be taken.

Key takeaway: monitor the service sqlagent.exe and have an alert & escalation procedure in place for handling failed Agent Jobs.
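Agent job history lives in the msdb system database, so you can pull recent failures straight from there. A quick sketch (your DBA probably has a fancier version):

```sql
-- Recent failed Agent jobs, from msdb's job history.
-- step_id 0 = the overall job outcome; run_status 0 = Failed.
SELECT j.name AS job_name,
       h.run_date,        -- yyyymmdd as an integer
       h.run_time,        -- hhmmss as an integer
       h.message
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
  ON h.job_id = j.job_id
WHERE h.step_id = 0
  AND h.run_status = 0
ORDER BY h.run_date DESC, h.run_time DESC;
```

That said, don’t rely on someone remembering to run this – set up Agent alerts/notifications so failures come to you.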

Automation is good, mmkay?

A quick sidebar about transaction logs.  SQL Server uses what’s called a “write-ahead log” methodology, which basically means that operations are literally written to the T-logs before they’re written to the actual data files (MDFs/NDFs).  As stated so eloquently in this blog:

[When] neglected, it can easily become a bottleneck to our SQL Server environment.

It’s the DBA’s job to ensure the T-logs are being properly backed up, maintained, and getting good throughput.  And I’ll have another blurb about performance at the end of part 2.  However, as an infrastructure person, it’s worth knowing two things: A) these aren’t your “traditional” logs, like error or event logs; these are a fundamental, core part of SQL Server.  And B) —
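If you’re ever curious how big those T-logs are and how full they’re running, there’s an old standby command that reports log size and percent-used for every database on the instance:

```sql
-- T-log size and percent used, per database, instance-wide.
DBCC SQLPERF (LOGSPACE);
```

A log that’s constantly near 100% used (or growing without bound) is something to flag to your DBA.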

Key takeaway: The disks which house the transaction logs should be FAST.  Like, stupid-fast.  Srsly.  At least at sequential writes.

Yes please, I’ll take several… just charge it to the CIO’s platinum card.

Author: natethedba

I'm a SQL Server DBA, family man, and all-around computer geek.
