Essential Skills for SQL Server DBAs: Session 3 Resources

For those of you that attended my 3 part eLearning Series on “Essential Skills for SQL Server DBAs” with WindowsIT/SQLMag, here’s a list of additional resources and links for things that I mentioned, alluded to, or wanted to provide additional information on from within session 3.


A key component of some of the things we talked about in this session in terms of performance (as well as in session 1 where we looked at a rough overview of how SQL Server works) was the way in which SQL Server has its own internal scheduler and resource management capabilities – commonly referred to as the SQLOS.

And, if this was something that you want to learn more about, then I recommend taking a peek at Brent Ozar’s free SQL MCM video that goes into the SQLOS in more depth.

Likewise, in terms of being able to interact with the SQLOS via Dynamic Management Objects, I also wanted to provide a link to Books Online which provides documentation of Execution Related Dynamic Management Views and Functions.
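As a quick taste of what those DMVs expose, here’s a minimal query (for SQL Server 2005 and later) that peeks at the SQLOS schedulers we discussed:

```sql
-- Peek at the SQLOS schedulers (roughly one per CPU core SQL Server can see)
SELECT scheduler_id,
    status,
    current_tasks_count,     -- tasks currently assigned to this scheduler
    runnable_tasks_count     -- tasks waiting for CPU time
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;    -- filter out hidden/internal schedulers
```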

Statistics and Indexes

Otherwise, another key thing we addressed in this session was Indexes and Statistics – as these are both critical concerns to addressing SQL Server performance. To that end I wanted to provide a link to a great overview of how SQL Server 2005 uses statistics (even though this information is a BIT dated it’s still very valuable because it’s a good overview of how statistics work – and how you can interact with them).
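And, by way of a simple sketch of ‘interacting with’ statistics (the table name here is just a placeholder – swap in one of your own):

```sql
-- See which statistics exist on a table and when they were last updated
SELECT s.name,
    STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- Then update them manually if they've gone stale
UPDATE STATISTICS dbo.Orders;
```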

I also wanted to provide a link to a great series of articles on indexes – one that will walk you through the basics and on up into more advanced, how-to topics for dealing with, interacting with, and managing indexes: the Stairway to SQL Server Indexes. (And make sure to check out the other Stairways too – these are FANTASTIC resources.)

Then, along the lines of indexes, I also wanted to provide a link to Books Online: Reorganizing and Rebuilding Indexes – as this provides great info about fragmentation and index management/maintenance.
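To make that concrete, here’s a rough sketch of the kind of check Books Online recommends – look at fragmentation first, then REORGANIZE or REBUILD accordingly:

```sql
-- Check fragmentation levels for indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
    i.name AS index_name,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5;

-- Rough rule of thumb (per Books Online):
--   5-30% fragmented:  ALTER INDEX IX_Name ON dbo.TableName REORGANIZE;
--   over 30%:          ALTER INDEX IX_Name ON dbo.TableName REBUILD;
```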

Similarly, I also wanted to provide a link to Michelle Ufford’s spectacular Index Defrag scripts (which, you’ll remember, I compared to ‘bacon-soaked marshmallows’ – by means of a metaphor comparing index management to fishing/lures).

Statistics Maintenance
In my presentation, I mentioned using sp_updatestats and covered that it does a decent job of updating most (not all) stats with very little effort. (Though just be careful if it’s never been run, as it will take a while). I also included a sample script (using the undocumented special procedure, sp_msForEachDb) that you can use to update stats across ALL of your databases on a single server. And, here’s that script again:

EXEC sp_msForEachDb @command1 = 'EXEC ?..sp_updatestats'


Additional Performance Resources

One thing I didn't cover in this eLearning event was the role of vendors when it comes to SQL Server. And I skipped over that simply due to time constraints. That said, there are GOBS of great SQL Server tools and services available - from major SQL Server Tools vendors. And, as I opined a long time ago, one of the great things about these vendors is that they're all very keen to be integral parts of the SQL Server community - meaning that they all do a great job of trying to help educate SQL Server users and DBAs on various, core, concerns - instead of focusing solely on just selling products.

So, to that end, I wanted to do something a bit different and provide links to three additional resources that you can use to learn more about SQL Server Performance tuning - from three different vendors of 3rd party SQL Server tools/solutions.

Quest: SQL Server Perfmon Counters Poster
This poster is a great resource - one of the best aggregations of information, documentation, and guidance on how to interpret Windows and SQL Server Performance Counters.

Redgate: Mastering SQL Server Profiler (free eBook)
This free eBook is also a great resource - that provides outstanding insights and details on how to use SQL Server Profiler. (Note the link on this page to the free download, or you can purchase a 'hard' version of this book if you're interested.)

Idera: Essential Performance Tools for SQL Server DBAs
Hmmm. The title of this free whitepaper sounds a bit like the title for this session. And that's probably because both resources had the same author. So, to that end, this free whitepaper does rehash a number of the same things that we covered in this session. It does, however, provide a better overview of some of the tools and resources we looked at today as well.

What’s Next?

Finally, I wanted to close out this event (or my notes for it) by pointing to a link for a great blog post that I bumped into the other day:

The Power of T-SQL’s APPLY Operator

The content in this blog post isn’t what I’d consider on par with where most beginning or newer DBAs are at. BUT, this post is a fantastic example of just how fun and fulfilling it can be as a DBA to dig into SQL Server performance. That, and this blog post is also a great example of a seasoned veteran of SQL Server showcasing (indirectly) just how fun and exciting it can be to continue learning, discovering, and finding new challenges and insights when working with SQL Server.

Translation: This post is a fantastic example of how being a DBA can continue to be a fulfilling career choice over the years, because in this case, there’s no doubt that Rob Farley (who is a genuine bad-ass) is thoroughly in love with what he’s doing. And I can say the exact same thing about my career choice in terms of SQL Server – it’s been a blast, it’s still a blast, and I anticipate more challenges and opportunities in the future.
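If you’d like a quick sense of the operator before diving into the post itself, here’s a minimal CROSS APPLY sketch (table and column names are purely hypothetical):

```sql
-- For each customer, grab their 3 most recent orders
SELECT c.CustomerID, o.OrderID, o.OrderDate
FROM dbo.Customers AS c
CROSS APPLY (
    SELECT TOP (3) OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = c.CustomerID
    ORDER BY OrderDate DESC
) AS o;
```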

Essential Skills for SQL Server DBAs: Session 2 Resources

For those of you that attended my 3 part eLearning Series on “Essential Skills for SQL Server DBAs” with WindowsIT/SQLMag, here’s a list of additional resources and links for things that I mentioned, alluded to, or wanted to provide additional information on from within session 2.

Sizing Databases

During the first part of Session 2 we focused on Server Management (checking logs/etc) and on database management – where we focused on the perils both of shrinking databases and of letting them grow fragmented via the lame, default sizing options found within SQL Server.
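As a rough sketch of the alternative (database, file names, and sizes here are purely illustrative – size things for your own workload):

```sql
-- Size files explicitly instead of relying on the tiny default growth increments
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_Data', SIZE = 10GB, FILEGROWTH = 1GB);

ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_Log', SIZE = 2GB, FILEGROWTH = 512MB);
```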

To that end, I wanted to share a link to Books Online that covers more details on: Managing Databases.

For more information on database sizing ideas and guidance, you may also want to check out this SQL Server Magazine Article: A Practical Approach to Managing Database Size.

Likewise, I also mentioned that the proper care and feeding of log files can play a role in performance concerns as well. And, to that end, I wrote about that a while back in my article: Maximize Storage Performance.

Additional Resources

During this session I also mentioned some specific resources that I would make available.

Specifically, I called out that I would provide a link to the 2.5 hours of free SQL Server Videos covering backups.

Likewise, I also mentioned that I’d provide a link to an overview of the benefits of RPOs and RTOs – as they apply to SQL Server.

And, I also mentioned that I’d provide a link to my upcoming, free, webcast: SQL Server Disaster Recovery and Prevention (where I’ll be covering problems and concerns around database corruption).

Essential Skills for SQL Server DBAs: Session 1 Resources

For those of you that attended my 3 part eLearning Series on “Essential Skills for SQL Server DBAs” with WindowsIT/SQLMag, here’s a list of additional resources and links for things that I mentioned, alluded to, or wanted to provide additional information on from within session 1.

It’s Good to Be the DBA

During this session, I mentioned some of the benefits of being the DBA, and quoted job and salary surveys from CNN/Money – showing that being a DBA can be a rewarding career, that the outlook is great (it looks like DBAs are going to be around for a while), and that salaries aren’t bad either.

I also called out that there were different KINDS of DBAs. Gobs of them. And quoted a list defined by Craig S. Mullins and Brad McGehee. (Brad’s list of different types of DBAs can be found in his applicable chapter in SQL Server MVP Deep Dives – a great book.)

Technical Details / Basics / Fundamentals

During the second part of this session, we took a look at core, or basic, SQL Server concepts that all DBAs should know about. And, after providing a VERY high-level overview of how SQL Server works as a proxy to manage access to your databases/files, we then touched upon how SQL Server uses, well, SQL Server databases, to meet its own needs. And, in that regard, we touched upon SQL Server’s system databases. So, to that end, I wanted to make sure that I provided a bit more depth or detail on some of those databases, and therefore have a couple of links to Books Online:

SQL Server’s System Databases

The Resource Database (I could tell you more, but I’d have to kill you… )

Likewise, the following two links can be great resources to check out if my overview of how SQL Server works was new or something you’re not terribly familiar with:

The SQL Server Database Engine

Managing SQL Server

And, finally, I also wanted to drop off a link to the basic documentation within SQL Server’s Books Online for Security as well:

SQL Server Security

And, as always, feel free to ping me if you have any questions that weren’t answered during the Q&A portion of this session.

SQL Server Remote Availability Notes

In my Session on SQL Server Remote Availability (you can still sign up and watch the video or presentation up to a few months after the presentation was given), I mentioned that I’d provide some additional links and resources in the form of a blog post.

Here are those resources and links.

Remote Availability Basics

In my session on Remote Availability (RA), I covered how it was similar to High Availability – but much more akin to Disaster Recovery in terms of scope or purpose (even though RA typically uses or leverages many of the same tools and techniques as HA solutions).

To that end, I wanted to link to two great articles on Remote Availability – as typically discussed or implied within IT in general. Both come from the same site, TechRepublic, and are by the same author (Mike Talon):

Learn the Pros and Cons of Remote-Availability Systems
Don’t Rely Solely on Remote Availability

The Need for Remote Availability

In case anyone was wondering: Remote Availability IS something that every organization needs – to one degree or another. And that’s simply because it’s impossible to depend upon data in any regard and not have some sort of contingency for how to preserve or recover that data in the case of an outage – even in cases when the outage may be more than just the loss of a component or system – such as when the outage is at the data-center level.

Michael Otey (blog) recently blogged about the Price of High Availability – and while he was specifically talking about the cost benefits of implementing expensive redundancy and failover processes, the same thing can be said of the cost required to establish some sort of remote availability. Meaning that the price of setting up Remote Availability Solutions may look expensive at first blush – but these costs are nothing in comparison with the COST you’d incur without some sort of viable contingency plan.

And, to that end, one thing that I mentioned in my presentation is that you can get a quick sense for how much it would cost you to lose your data center by talking with the Financial folks in your organization to get a sense for what monthly revenues are. Then just break those values down by day, hour, and minute as needed to get a VERY ROUGH sense of how much an outage would cost you per hour and so on. And with that overly-simplified approach to estimating the cost of potential down-time, you can then get a feel for what kinds of budgets you may need to start looking at in order to defray the costs of actually LOSING operational capacity for any significant stretch of time.
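To illustrate that back-of-the-envelope math with purely made-up numbers:

```sql
-- Purely hypothetical: $3,000,000/month in revenue, ~30 days/month
SELECT 3000000 / 30 AS rough_cost_per_day,          -- ~$100,000/day of outage
    3000000 / (30 * 24) AS rough_cost_per_hour;     -- ~$4,166/hour of outage
```

Again, that’s intentionally over-simplified – but it gives you a starting number to take into budget conversations.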

And, once you have a sense for the potential business costs for outages and what kinds of potential budget you might have, then you’re ready to start dealing with RTOs and SLAs.

Recovery Time Objectives (RTOs) and Service Level Agreements (SLAs)

In my estimation tons of Small-to-Medium Businesses (SMBs) still run IT departments (or at least SQL Server deployments) without any form of SLAs in place. In my presentation on Remote Availability I covered how one of the BIG ways in which RA differs from HA is in the sense that RA must include much better interaction with management if RA solutions are going to be successful. That’s NOT to imply that HA or DR solutions don’t need communication with management. But, in my experience, even in organizations where there are no SLAs or RTOs, HA and DR are typically addressed (dutifully) by the IT team and ‘implicitly’ understood to be in place, to some degree, by management. Again, I don’t think that’s a recipe for success. I’m just saying that when it comes to addressing off-site contingencies for disaster recovery (i.e., Remote Availability solutions), you simply can NOT succeed in any way, shape, or form unless management is a willing partner right out of the gate.

Paul Randal (blog | twitter) has done a great job of highlighting the benefit of SLAs – and his expression of how the existence of SLAs show that IT and management are working in unison is a perfect example of what I’m trying to communicate in this regard – especially when it comes to RA solutions.

Likewise, when it comes to RTOs, the big thing to remember is that the key term there is ‘Objective’. Meaning that RTOs are based upon stated objectives – and therefore serve as the basis of helping define SLAs (by serving as core components in establishing MTTRs). Consequently, if you’d like a decent (high-level) overview of what RTOs mean (decoupled from all the business-speak), then I’d recommend the following article by Pierre Dorion as a decent place to start:

How to Establish a Recovery Time Objective (RTO)

SQL Server and Remote Availability

I only spent about 10 minutes or so of my presentation specifically covering how to address RA solutions via SQL Server. (The first 10 minutes or so were on the ‘basics’ and importance of RA, the ‘middle’ 10 minutes were on options and techniques to use with SQL Server to achieve RA solutions, and the ‘last’ 10 minutes or so were on best practices, pitfalls, and things to consider (based on practical experience) when it comes to setting up RA solutions.)

To that end though there were a couple of things I wanted to provide links for.

For starters, there’s the licensing question. If there’s one thing that I typically do NOT touch during presentations it’s Licensing – because that can get ugly in a hurry. However, given the special/awesome nature of how SQL Server supports warm failover servers via licensing, I wanted to make sure that was addressed. To that end I’ve also linked to a Microsoft FAQ that covers this in a bit more detail. And actually, since the FAQ doesn’t provide good anchors, I’m just copy/pasting the text from that FAQ in my blog. (So, if you find this post, oh, say, more than 3 days after it was published, you’ll want to verify that this policy hasn’t changed.)

At any rate, here’s the relevant snippet:

Q. How does licensing work for computers that run SQL Server 2005 in failover scenarios?

A. Failover support, where servers are clustered together and set to pick up processing duties if one computer should fail, is now available in Standard and Enterprise editions of SQL Server 2005. Under each of these editions, keeping a passive server for failover purposes does not require a license as long as the passive server has the same or fewer processors than the active server (under the per-processor scenario). For details on which failover methods are available under each edition, visit the SQL Server 2005 Features Comparison page.

More info on licensing can be found here as well.

Otherwise, in my presentation on RA I recommended AGAINST using Replication and Mirroring as options for Remote Availability – given their difficulty and some of the overhead involved with managing these solutions. That said, there are ways in which you might want to address how Replication and Log shipping interact if you’re trying to achieve RA on a Replicated solution by adding Log Shipping into the mix. To that end:

Books Online – Interoperability and Coexistence (of Replication and Log Shipping/Mirroring)

Similarly, even though I don’t think that Mirroring is a good idea (given complexity), if there’s one thing that’s true about SQL Server, it’s that you can find a way to do just about anything. To that end, here’s an article on ‘distance’ mirroring:

Implementing Remote Mirroring and Stretch Clustering

And, with stretch clustering in mind, here are some additional links on that topic as well.

Multisite Clustering
Multi-Site Failover Clusters
Additional Resources here and here (note the links to whitepapers/etc at bottom of post)

Otherwise, Log Shipping is actually pretty tame/easy to set up. (To the point where I really need to do a video series on it – to cover the ins/outs, pros/cons, and the step-by-step how-to details of how to set it up and configure/monitor/etc it.)

Confusing Increased Availability with Full Coverage from Disasters

Sadly, I just didn’t have enough time in my presentation to call out something that I like to address whenever talking about High Availability – which is that it’s essential that DBAs and IT Pros don’t confuse High Availability Solutions with Disaster Recovery. Because RA is really just a form of ‘stretch’ or ‘remote’ Disaster Recovery, this concept doesn’t need as much coverage as it normally would when discussing High Availability. Nonetheless, if you’ve implemented a Remote Availability Solution, don’t make the mistake of thinking that it means you’re covered. Data Corruption and Loss can come from different sources. RA solutions are essentially geared towards dealing with SYSTEM failure or outages – especially at the site level. But even the best RA solution won’t do you any good if the corruption of your data was caused by a software glitch within one of your applications or if all of your Order Details records were deleted by an end-user ‘by accident’ (or even maliciously).

Consequently, defense in depth is the approach you need to take if your data is important. And I’ve outlined some additional thoughts on ways to protect your data in my article for SQL Server Magazine: Confusing High Availability with Disaster Preparedness.


UPDATE: Woops. I meant to include attribution for some of the photos used in my slides. Rather than cramming in attributions next to each photo in the slide (where no one could really read them), I wanted to list them in my follow-up post. Only, I totally forgot to do that yesterday when I pushed the post. So, without further ado (and to make sure I’m not a full-on cretin), here they are (and thanks to the great photogs who were willing to share these photos for use by hosers like me):

Slide / Photo Attributions

Slide 3 – Session Overview
Slide 6 – Do You Really Need RA?
Slide 6 – Do You Really Need RA?
Slide 6 – Do You Really Need RA?
Slide 6 – Do You Really Need RA?
Slide 8 – Primary RA Considerations (Juggling Swords and Chainsaws)
Slide 9 – Constraints: Infrastructure (Bing’s Data center of all places)
Slide 10 – Bandwidth Constraints
Slide 18 – Practical Advice (I’ve actually been in/around Aix-en-Provence)

SQL Server Remote Availability

Ever wonder what your organization would do if you lost an entire data center? Do you already have plans in place to migrate operations to another location?

Later this month I’ll be presenting on SQL Server Remote Availability – how it differs from High Availability, and how to determine whether it’s something your organization needs or not.

Content from this session will primarily be based upon real-world problems and solutions pulled from experiences with my SQL Server consulting clients.

Here’s a copy of the Event’s Abstract:

SQL Server Remote Availability: Problems and Solutions that Work
Are you confident that your data is as protected as it needs to be in order to ensure business continuity even in the face of a disaster? Many organizations today are focused on ensuring that mission-critical data meets stringent service level agreements to ensure high-availability in the case of system or hardware failures. But have you done enough to ensure business vitality in the case of a system-wide failure or the loss of an entire data center?

Join independent SQL Server expert Michael K. Campbell for an informative review and overview of how Remote Availability (RA) differs from Disaster Recovery (DR) and High Availability (HA) solutions. By attending this seminar you'll learn about evaluating the degree of protection that your data needs, and gain valuable insights into how to set up, manage, and test your own RA solutions. You'll also learn about common pitfalls, considerations, and overall best-practices for helping ensure business continuity in the face of potential disasters.

The event is organized by Windows IT Pro / SQL Server Magazine, and is sponsored by Vision Solutions.

It’s FREE and takes place on February 23rd.

Signup / Details

Hope to see you there.

Webcast: SQL Server Myths Debunked... Or Are They?

This post is a follow-up for my webcast today – focusing on myths, debunks, and the need (in some cases) to ‘re-bunk’ or re-examine aspects of common myths and their rebunks.

If you missed the presentation, you can register here and watch it for free.

A Note About the SQL Server Community

One unfortunate wrinkle with this webcast/presentation is that I initially was going to call it “Re-Bunking SQL Server Myths” – only using a made-up word like ‘re-bunk’ in the title of a Webcast can prevent you from reaching potential segments of your audience. So, as I mention in this webcast, I decided to change the title to “SQL Server Myths Debunked… Or Are They?” – which seemed to convey the same general idea.

Only, that led to some confusion where it looked like the purpose of this session was to attack Paul Randal (blog | twitter) which just wasn’t the case. Happily, he knew that a mob with pitchforks was bearing down on me, and was nice enough to warn me in advance.

I explained this a bit more fully in the webcast itself by way of a long-winded introduction, but thought that now might also be a good time to trot out an article that I wrote long ago for SQL Server Magazine, about how great the SQL Server Community is. (And I’m trotting this article out because it summarizes my thoughts about how open the community is – and showcases how I just couldn’t be comfortable attacking someone else to make myself look good – it’s just contrary to how I operate.)

Paul Randal’s Debunking Series

Otherwise, if you watch or watched this webcast, you’ll see that I held up Paul’s debunking series as kind of the ‘hallmark’ for how I think debunking should transpire. As I mentioned in the webcast, what I like about Paul’s approach (above and beyond just the huge number of myths that he’s tackled) is that he not only lays down the facts as part of his debunk, but he also does a great job of calling out the caveats or things you should also watch out for when stumbling into ‘myth land’.

- Myths and misconceptions: 60 page PDF. (Highly rated. Download the PDF and put it on your smart phone/etc.)
- Misconceptions about … Everything (Same as previous link, but you can read each post online (and be sure to check out the comments)).

Shrinking Files

A big part of this presentation was on cases when regularly shrinking files could make sense. If that sounds craaaaazy (and it should), then go check out the webcast.

The key point of this part of the webcast though was: when you understand the costs, liabilities, and potential pitfalls of various SQL Server operations and techniques (such as shrinking files), then you’re actually in a better position to be able to evaluate the effectiveness of those operations as an OPTION in cases where you might not normally consider them.

Or, more specifically, because SQL Server is so flexible, you can actually run into plenty of cases where what is typically a WORST PRACTICE can become a tolerable or acceptable or even a BEST practice under certain scenarios – because knowing what, when, why, and how to use various features and capabilities is all about understanding the trade-offs.

Consequently, there ARE a couple of cases where regularly shrinking your data files can not only make sense – but become a best practice.
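And if you do land in one of those cases, the less-bad way to go about it is to shrink a specific file to an explicit target (the file name and size here are hypothetical) rather than running a blanket DBCC SHRINKDATABASE:

```sql
-- Shrink a single data file to an explicit target size (in MB)
DBCC SHRINKFILE (N'MyDb_Data', 10240);
```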

Table Variables

Another topic I covered was how there’s a myth out there that table variables are MAGICAL in that they only live in memory. That myth has been debunked a number of times – as it should have been.

Only, my ‘rebunk’ in this part of the session is that many debunks of this myth can almost leave people with the idea that there are really no benefits that come with using Table Variables over temp tables.

And that’s just not the case. In fact, as I said in the webcast, “Table Variables are ALWAYS a better option to use – unless they’re not”. Meaning that Table Variables have less locking and transactional overhead, they also lead to fewer recompiles within sprocs, and they provide the added benefit of making it easier to view Estimated/Actual Execution plans (because you’ll get an error when a sproc/etc has a temp table defined within it – but you won’t when using Table Variables).
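For reference, here are the two constructs side by side (the schema is hypothetical):

```sql
-- Table variable: scoped to the batch/sproc, minimal logging/locking overhead
DECLARE @TopOrders TABLE (OrderID int PRIMARY KEY, Total money);

-- Temp table: lives in tempdb, supports statistics (and indexes added later)
CREATE TABLE #TopOrders (OrderID int PRIMARY KEY, Total money);
```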

Multiple Files

Sadly, I blew right past this slide in my presentation due to time constraints/considerations. But, that caused a number of people to ask about this one in the Q&A session.

So, here’s the short story: Sadly, there’s a myth that has cropped up about the use of multiple files as they apply to USER databases. Somehow, this myth appears to be related to the PSS recommendation for SQL Server 2000 that tempdb be configured/provisioned with a file per processor (or core) to help mitigate issues with allocation contention. (Paul Randal does a great job of debunking the core of this myth here.)

Only, the issue I wanted to address is how I’ve seen this myth ‘morph’ into cases where I’ve actually seen people try to do the SAME thing (i.e. one file per processor/core) for user databases. Likewise, I’ve also seen ‘anti-myths’ from the debunk of this myth where I’ve had people argue with me about using MULTIPLE files for performance reasons. (Or, in other words: I’ve had people tell me that using multiple files doesn’t provide any performance benefits UNLESS we’re talking about tempdb – and that’s just not true.)

So, the rebunk I wanted to focus on in this regard was the fact that using multiple files (or multiple FILEGROUPS) CAN be a way to boost performance – in cases where the files are on DIFFERENT physical spindles (i.e., disks, volumes, LUNs) and where the operation in question can already be parallelized.

In fact, SQL Server Books Online calls this out as a way to boost performance. And the idea is that if you create a new FILEGROUP for, say, non-clustered indexes, and put them on a different set of spindles than your clustered indexes (i.e., than your tables), then you can get perf boosts in the form of quicker lookups – in SOME cases. Or, another common adoption of this technique is to put heavily used tables (that are commonly used in heavy or repetitive JOINs) on a different FILEGROUP from your core tables. This, in turn, can allow for better parallelization.
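Here’s a minimal sketch of that first technique (database name, file path, and index names are all hypothetical):

```sql
-- Create a filegroup backed by a file on a separate set of spindles...
ALTER DATABASE MyDb ADD FILEGROUP IndexFG;
ALTER DATABASE MyDb ADD FILE
    (NAME = N'MyDb_Indexes', FILENAME = N'E:\SQLData\MyDb_Indexes.ndf', SIZE = 2GB)
    TO FILEGROUP IndexFG;

-- ...then place a non-clustered index on it
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON IndexFG;
```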

And, now that I think about it… I’ve actually blogged about this – and expect it to show up on Idera’s SQL Server Performance Blog in the next few days (or week) – so keep an eye out for more details there if you’re interested (or just shoot me an email).

The point, though, is that a casualty of the debunking surrounding the ‘tempdb’ myth is that people, somehow, occasionally get confused about the SCOPE of that single best practice, and lose sight of SQL Server internals – and how SQL Server really performs when it comes to SMP/Parallel operations – which is what I wanted to ‘re-bunk’.

AWE / Locking Pages in Memory

I probably should have just drawn up a matrix for this one.

AWE is good. Period. It allows SQL Server to access ‘locked’ memory in fast/efficient ways.

On 32-bit systems, AWE is required to be able to address more than 2GB of Physical RAM. (There are some other techniques that I see people use – such as PAE and the /3GB switch in the Boot.ini file – but I prefer to just use AWE as it’s cleaner/better/faster/easier.) For more details on AWE and 32-bit systems, see my article: Enabling SQL Server AWE Memory.
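For reference, enabling AWE on a 32-bit instance boils down to a couple of sp_configure calls (the service account also needs the ‘Lock Pages in Memory’ privilege):

```sql
-- Enable AWE on a 32-bit SQL Server instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;  -- takes effect after an instance restart
```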

That said, there are tons of myths out there about how AWE works, which platforms it’s required on, and so forth – to the point that you’ll commonly see (in forums/etc) people say that AWE _MUST_ be configured on 64-bit systems as well. Happily, that’s just not true. Instead, 64-bit systems will let SQL Server access as much RAM as the version/edition of SQL Server can handle – or as much RAM as your Operating System (or server) will allow. And it does all of this without the need for any additional configuration.

But remember how AWE ‘locks pages in memory’? Well, that can provide an extra performance boost – even on 64-bit systems. And there are really two reasons why. First, locked memory is less-volatile memory – the OS doesn’t have to ‘hover’ over this memory and keep tabs on who or what is using it – which reduces management overhead. Second, locked memory is accessed via the AWE APIs – which were optimized for 32-bit machines – meaning that you pick up reduced overhead in the form of working with, effectively, smaller registers.

So, if you positively HAVE to squeeze every ounce of performance out of SQL Server, then locking pages in memory on 64-bit systems can make a lot of sense as a way to pick up an additional boost. (That said, I’m still going to recommend that you TEST performance before and after – to make sure you’re not bumping into an edge case where non-AWE might be faster ... because that can happen.)

Adding External References to SQL CLR Projects

A common need when working with SQL CLR assemblies (or projects) is to add a reference to a third-party (or external) assembly. Interestingly enough though (at least as far as I can tell), the steps for doing this are hardly ever documented.

So this blog post will fix that – by documenting just how you go about adding External References in Visual Studio to SQL Server CLR projects or assemblies.


In my recent webcast: Introduction to SQL CLR Extensibility, I was hoping to cover exactly how to do this. I was also going to (and partially did) throw out a few caveats when doing this.

Specifically: don’t go crazy adding external references. Good code IS reusable code – so good developers may occasionally run into the need to drop ‘core’ logic into their CLR assemblies. On the other hand, lots of complex business logic just isn’t going to find a good home in your SQL Server’s memory-space; bundling up tons of business rules and the like into SQL CLR assemblies (or as referenced assemblies) is a great way to bloat your CLR memory usage and, more importantly, will make versioning your code PAINFUL if you start to abuse this power.

Setting the Stage

For this tutorial, I’m going to assume that you’ve already GOT a CLR project/assembly that you’ve already deployed successfully to SQL Server – and that you’ve got it running as needed (minus the external references). If you don’t have that, then watch the webcast from the link provided up above, and it will show you how to get to this ‘assumed’ point.

To that end, if you’re an experienced .NET developer, your first instinct is to right-click on the References folder in Solution Explorer within your solution, and click on the Add Reference option. That in turn brings up the following:


The only problem, of course, is that it doesn’t have any ‘browse’ options or tabs. It does let you reference other projects – but even that has some serious limitations – to the point where, even if you were to add your existing ‘solutions’ as projects and try to build and reference them, you wouldn’t be able to do so.

What’s Going On?

The key to adding external references to a SQL CLR project is understanding that the ‘Add References’ dialog isn’t showing what’s on YOUR development machine. Instead, it’s polling the SQL Server you’ve CONNECTED TO as part of your project. As a developer, you’re used to Visual Studio letting you add references to projects and DLLs that exist on YOUR workstation; then, when you deploy or publish your code, those references are bundled along with your code and copied out to the destination.

SQL CLR projects DO NOT work that way. Instead, they ONLY let you reference assemblies that are already located on the server and which have already been REGISTERED with the server.

Once you know that, the rest is easy.

Adding Reference Assemblies to Your SQL Server

So, since you can only reference ‘registered’ assemblies from your projects, the next question is: how do you register one?

In this tutorial, I’m going to start with a brand NEW assembly. To do that I’m going to do the following:

Open Up a Brand New Instance of Visual Studio (I’ve already got my existing Solution open).

Create a New Project.

I’m going to Specify the Visual C# Type, and specify that I want to create a Class Library:


I’m going to name this Class Library ‘CoreLogic’:


Then, in that Class Library, I’m just going to add in some ‘core’ logic that I want to be able to ‘consume’ from my referencing assembly.

In this case, I’m just going to use some patently lame logic – which flips a string around (i.e. it turns ‘string’ into ‘gnirts’):
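For reference, a minimal sketch of that kind of string-flipping logic (the CoreLogic namespace matches the project name above; the class and method names here are just my placeholders):

```csharp
using System;

namespace CoreLogic
{
    public static class StringUtils
    {
        // Flips a string around – e.g., turns "string" into "gnirts".
        public static string Reverse(string input)
        {
            if (string.IsNullOrEmpty(input))
                return input;

            char[] chars = input.ToCharArray();
            Array.Reverse(chars);
            return new string(chars);
        }
    }
}
```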

Then, at that point, I’m going to BUILD my ‘CoreLogic’ project, and copy the resulting CoreLogic.dll from the \bin\Debug\ folder within my project out to a location on my server.

From here I can then register the ‘CoreLogic.dll’ on my server – and it will then be available as a potential reference from within my other solution.

To register my assembly, I just need to use CREATE ASSEMBLY as follows:
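As a sketch – the database name and file path here are placeholders for your own environment:

```sql
-- Run in the database that the SQL CLR project connects to:
USE MyClrDatabase;
GO

-- Register the external assembly from wherever the .dll was copied:
CREATE ASSEMBLY CoreLogic
FROM 'C:\Assemblies\CoreLogic.dll'
WITH PERMISSION_SET = SAFE;
GO
```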


And note that I’m creating this assembly in the DATABASE that I’ve connected to in my SQL CLR project (i.e., the project where I want to be able to add the external reference).

Full Circle – Adding the Reference to your CLR Project

Now that I’ve created the assembly up on my SQL Server, if I go back to the original SQL CLR project where I want to be able to add a reference to my ‘Core Logic’:


I’ll see the reference to CoreLogic show up if I again right click on the References node and select Add Reference:


From here I’m able to then add a using statement and so on – just as you would expect with a ‘normal’ .NET project.
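As a rough sketch, consuming the external assembly from a CLR user-defined function might look like this (CoreLogic.StringUtils.Reverse is a hypothetical name – substitute whatever your referenced assembly actually exposes):

```csharp
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Wraps logic from the externally referenced 'CoreLogic' assembly.
    [SqlFunction]
    public static SqlString FlipString(SqlString input)
    {
        if (input.IsNull)
            return SqlString.Null;

        // CoreLogic.StringUtils.Reverse is a placeholder for your own API:
        return new SqlString(CoreLogic.StringUtils.Reverse(input.Value));
    }
}
```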


Depending upon how you’re deploying your SQL Server CLR assembly or project, you may run into some ugly issues when you try to push up your new assembly now that it has references to external resources.

The best way to deal with this in my experience is just to clean house (i.e., destroy your CLR assembly) and then re-register it.

To do this:

1) DROP any sprocs, udfs, triggers, types, etc that DEPEND upon YOUR CLR Assembly.

2) Issue a DROP ASSEMBLY MyAssemblyNameHere command – and you’ll now free up all the ‘references’ that this assembly thinks it has.

3) (With your external assembly already registered/loaded) Issue a CREATE ASSEMBLY MyAssemblyNameHere command – or push ‘Deploy’ from Visual Studio again – and it will push your CLR Assembly up to the server.

4) Recreate any of your sprocs, udfs, triggers, etc that you nuked in step 1.
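Sketched out as T-SQL (with placeholder object, assembly, and path names), the whole sequence looks roughly like this:

```sql
-- 1) Drop dependent objects (sprocs, UDFs, triggers, types, etc.):
DROP FUNCTION dbo.FlipString;

-- 2) Drop the CLR assembly to free up its 'references':
DROP ASSEMBLY MyAssemblyNameHere;

-- 3) With the external assembly already registered, re-create
--    (or just push Deploy from Visual Studio again):
CREATE ASSEMBLY MyAssemblyNameHere
FROM 'C:\Assemblies\MyAssemblyNameHere.dll'
WITH PERMISSION_SET = SAFE;

-- 4) Re-create the objects dropped in step 1:
CREATE FUNCTION dbo.FlipString(@input nvarchar(4000))
RETURNS nvarchar(4000)
AS EXTERNAL NAME MyAssemblyNameHere.UserDefinedFunctions.FlipString;
```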

Introduction to SQL CLR Extensibility

This blog post provides additional resources and links for my Introduction to SQL CLR Extensibility webinar.

Background / Overview

There are a number of decent resources out there that introduce SQL CLR.

Greg Low’s SQL Server CLR Integration book is a great, high-level, overview.

Bob Beauchemin’s A Developer’s Guide to SQL Server 2005 provides some fantastic, low-level, info. (And I wouldn’t worry much about it being for SQL Server 2005).

I’ve also found that Derek Comingore’s and Douglas Hinson’s Professional SQL Server 2005 CLR Programming does a nice job of covering details along with providing a ‘soft’ introduction.

Sadly, I haven’t found too many online articles that do a great job of introducing SQL Server CLR, but one from SQL Server Magazine from a while back does serve as a great overview/introduction to SQL Server CLR.


DBAs worry about the performance overhead of the CLR. The thing is, though, the CLR is already LOADED into your SQL Server. It’s sitting there, in memory, just waiting to run your code. So, in that regard, just as you can run ‘dumb’ queries through the T-SQL engine, you can run ‘dumb’ queries through the CLR as well.

Moreover, it’s important to point out that the CLR can hands-down outperform T-SQL in a LARGE number of cases. Knowing whether it’s faster really depends upon you doing a thorough job of testing performance. Using estimated and actual execution plans to profile your performance is a good start – but since those execution plans LIE, you’ll want to make sure that you’re testing ALL implementations through SQL Server Profiler – so that you get an accurate feel for what’s up. (And I cover this in a bit more detail within the presentation linked at the top of this post.)

And, when it comes to performance, one thing you may find is that the CLR pushes you toward lots of ‘SCANNING’ functionality. Scanning, of course, is a LESS performant (sometimes HORRIBLE) option. But, in some cases you may NEED to scan with the increased capabilities that the CLR offers (to parse out business rules, handle special formatting, and so on). In cases like this, don’t abandon the CLR if you find that it’s scanning. Instead, try to use SQL Server intelligently by forcing it to SLICE data with index seeks, and then ‘DICE’ that data with the CLR. That way you can get the best of ‘both worlds’ – in the sense that you can balance SQL Server perf benefits with CLR extensibility/flexibility benefits.

I actually wrote about this a while back on DevProConnections – and the approach outlined there works for both CLR and T-SQL queries/approaches where you need to generate high-performance results:

Generating High-Performance SQL Server Query Results

For additional information about just how well the CLR can handle itself from a performance perspective, as well as in terms of things to watch out for (given the fact that strings in the CLR are immutable), make sure to check out a recent blog post from Aaron Bertrand (where he talks about realizing the benefits of SQL Server CLR performance).

His post, in turn, points to a post (which, in turn, points to another post) by Adam Machanic that goes very heavily into issues with optimizing SQL Server CLR performance as well:

SQLCLR String Splitting Part 2: Even Faster, Even More Scalable

Faster, More Scalable SQLCLR String Splitting


Finally, I’m including a link to the demos I showcased in this presentation.

Download: Introduction to SQL CLR Extensibility Demo (.zip)

The demos are available as a .zip file – which includes the slides for this presentation (as a .pdf), and a ‘Demos’ folder where I’ve got the scripts I used during the demos, and where I’ve also included both of the VS 2008 Projects that I used in this presentation. (Well, actually, I only got around to using the first project – but I’ve outlined the use of the second project in this blog post – where I show how to add external references in SQL CLR Assemblies.)

Hiding Databases from Public and Other Users

In my recent presentation on Common SQL Server Security Mistakes, a question was asked about what to do with the public role – from a security standpoint.

My response was that, in the majority of cases, the public role isn’t something that you’ll need to worry about. But, since the public role does ‘leak’ or ‘disclose’ information about objects and databases on a given server, it can be a problem in a very small minority of environments. I therefore recommended that, in those (typically extreme) edge cases, removing the public role would make sense – with the caveat that if you do undertake such an endeavor, you should watch out for potential side effects.

In this post I’m going to clarify that answer – and provide some examples.


As a point of clarification, you can’t remove public at all. You can’t remove the public role from your server, nor can you remove it from your databases. Likewise, you can’t remove logins or users from membership in the public role. (Every once in a while you’ll see questions on forums from someone who has MANAGED to do this – but it effectively breaks SQL Server and isn’t supported. I’m not sure how people manage to do this either – and don’t care, because I’m NOT advocating it.)

Being a member of the public role is a requirement for working with SQL Server.

So, when I mentioned that you might want to REMOVE the public role in some cases, what I actually SHOULD have said was that: in some cases, you might want to remove the VISIBILITY that the public role grants; but, in doing so, just be forewarned that in some cases that MIGHT cause problems with some logins in some very goofy edge cases.


The best way to get a sense for what I’m talking about is to look at some concrete, step-by-step, examples.

For these examples, let’s assume the following:

  • You’ve got a highly sensitive database called Customers.
  • You have OTHER critical databases on the same Server.
  • You have a web application that connects to the server’s Customers database via a login called Web_App[1].
  • You also have some logins for employees within your organization – who need to access the server for regular reports or something. We’ll call one of these users Wilson[1].

For these examples, let’s say that you’ve already done due diligence from a security standpoint – and both the Wilson and Web_App logins have been mapped as users into the Customers database with minimal permissions – such as membership in db_datareader and db_datawriter and/or the ability to execute a couple of trusted stored procedures.

Under a scenario like this, if Wilson were to log into SQL Server Management Studio against your server, his membership in the public server role (all logins are made members of the public role at the server level) would grant him a view of all databases, like so:


And, since the Web_App login is also a member of the public server role, if hackers were able to compromise your web app, and gain access to the server (i.e. via SQL Injection), they’d be able to do:
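Namely, querying sys.databases:

```sql
SELECT name FROM sys.databases;
```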


And get the following results:


In other words, they’d be able to see the same things that Wilson (or any other login on your server) would see.

In most cases, this is not an issue. If it is, you’ve got two options for dealing with it.

First, you can remove public’s ability to SEE all databases on the server. To do this, just run the following query:
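Run from the master database as a sysadmin:

```sql
USE master;
GO

DENY VIEW ANY DATABASE TO public;
GO
```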


This operation denies all logins (since all logins are members of the public role) the ability to see databases on the server that they don’t OWN. That means users mapped as db_owner will still see ‘their’ databases within the Object Explorer or when they query sys.databases. (And since the SysAdmin, or sa, account is automatically granted db_owner in all databases, the SysAdmin account will still see everything.)

But, for logins like Wilson or Web_App which are NOT members of db_owner, what they can see is drastically limited, as per the following screenshot:


As you can see, in this case, Wilson isn’t able to see anything other than the master and tempdb databases. In fact, even though Wilson has permissions in the Customers database, he’s still not able to see it in the Object Explorer. He is, however, still able to log in, and can execute the same queries against the Customers database as before. He’s just lost the ability to ‘see’ (or VIEW) any database he doesn’t own.

Likewise, if the Web_App user issues the same query against sys.databases as before, they’re only able to see the following:


In this case, there IS an entry for the Customers database – but that just signifies permissions in the database; it doesn’t indicate the ability to peruse that database’s schema. In fact, take a look at the results of the following query in terms of what’s returned (just a bunch of Service Broker objects that apparently didn’t get the memo on ‘lock down’ against public ‘disclosure’):
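A sketch of that kind of query – run in the master database, where these restricted logins land by default:

```sql
SELECT name, type_desc
FROM sys.objects;
```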


A Less Aggressive Approach

Denying VIEW ANY DATABASE to public is a pretty heavy-handed approach. It’s an all-or-nothing proposition.

So, if you try that and it doesn’t work, you can easily ‘undo’ it with the following:
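The ‘undo’ is just the corresponding GRANT:

```sql
USE master;
GO

GRANT VIEW ANY DATABASE TO public;
GO
```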


More importantly, you can implement the same safeguards at a much more granular level – meaning that if you want to limit the information disclosed should hackers manage to compromise your web application (through SQL Injection or something else), you can just use the following:
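In other words, target just the application login (Web_App in this example) instead of public:

```sql
USE master;
GO

DENY VIEW ANY DATABASE TO Web_App;
GO
```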


And this will then let Wilson and all other logins on the system continue as before – but the Web_App login will be restricted in terms of what schema it can see or view.

TIP: If you end up locking down an individual user account (or role) so that it can’t see all databases in the Object Explorer, do your users a favor and make sure that the default database specified in their login is the database that they can use. Otherwise they’ll have to do a USE databaseNameHere query just to get started with things – as they can no longer right-click on the database in question from the Object Explorer and select the New Query option.
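Setting a login’s default database is a one-liner – for example, pointing Wilson at the Customers database:

```sql
ALTER LOGIN Wilson WITH DEFAULT_DATABASE = Customers;
```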


Best Practices

Ultimately, hiding databases from members of the public role is something you really don’t need to worry about in most environments. That said, there’s no harm in restricting application logins or other publicly-facing accounts in this way. In fact, it’s a best practice to make sure that any applications have their logins RESTRICTED with DENY VIEW ANY DATABASE TO <appLoginNameHere> once the app is up and running. It’s a simple, added precaution that helps contribute to defense-in-depth by mitigating any potential issues with information disclosure.

Of course, if you’re working in an environment where SQL Injection is a possibility (and sadly, there are plenty of apps out there where SQL injection IS a possibility, but the app can’t be immediately taken offline and fixed like it should be), then using this approach can be part of a mitigation strategy. It is not, however, a sufficient protection. (All it can do, when combined with granting applications least-privilege access to ONLY the resources they need, is mitigate the scope of damage hackers can/will inflict once they compromise your application.)

And, in highly sensitive environments where employees need to be denied access to even knowing about the existence of certain databases, using the approaches outlined here can be an easy way to accomplish those goals. I’d just recommend going with the least heavy-handed approach possible (by blocking individual logins instead of the public role) when feasible.

Otherwise, just be aware that in some edge cases, if you DENY VIEW ANY DATABASE to some users, they’ll actually encounter full-blown login problems in some goofy scenarios (I swear I’ve seen some with SQL Server Reporting Services databases) where SSMS won’t like this restriction. Problems like this should be ‘once in a blue moon’ events, though – all the more so given that the suggestions outlined here really don’t apply to most environments.

[1] I’m using SQL Server Authentication in these examples (and with STRONG (i.e., LONG) passwords this is secure enough – but in really highly-sensitive environments, you’re better off using trusted, Windows-integrated, security).

Common SQL Server Security Mistakes

In my Presentation: Common SQL Server Security Mistakes (link), I mentioned a number of different resources for additional learning.

Here they are (and I may update these over time).


Core SQL Server Security / Background
43% of companies surveyed in 2009 had lost data to hackers


The Seven Steps to Successful SQL Server Auditing


David Penton’s Not-so Dynamic Sql (SQL Injection)

Least Privilege

SQL Server 2005 Security Best Practices - Operation and Administrative Tasks

Process Monitor (great for troubleshooting when de-privileging service accounts)

SQL Server Credentials and Proxies

I also have a script available which you can use to create an sp_execproc database role which is a great option for forcing applications to only use sprocs for additional security/lockdown. Eventually I’ll provide a blog post on how to use that (and how to address some of the limitations). But if you’d like that script, just ping me at mike at

Protecting SQL Server Data

Here’s a quick collection of additional links and resources for folks who followed along with my Protecting SQL Server Data presentation today.


First and foremost (of course) I’ve got a couple of videos for anyone interested in learning more about the basics of backups, recovery, logging, and so on. Likewise, I’ve also got some videos on best practices for when it comes to implementing disaster recovery plans, and a number of how-to videos that will walk you through the process of implementing actual backups and working through restore and recovery operations:
Backups Demystified
SQL Server Logging Essentials
SQL Server Backup Best Practices
Restoring Databases with SQL Server Management Studio
The Full Series of 15 Free Video Tutorials on SQL Server Backups


Above and beyond the basics, we covered a lot of options and details in our session today – and if you’d like to learn a bit more about options when it comes to making your data more redundant, go ahead and give these links a stab:

Failover Cluster Documentation

SQL Server 2008 Failover Clustering Whitepaper

Denny Cherry's SQL Server 2008 Clustering Articles in SQL Magazine

Log Shipping:
Log Shipping Overview

Step By Step to Configuring Log Shipping

Log Shipping Fail Over

Log Shipping and Additional Dependencies

Database Mirroring:

Database Mirroring Overview

Providing High Availability with Database Mirroring

Implementing Failover with Database Mirroring

Performance Best Practices for Database Mirroring

Using Database Mirroring AND Log Shipping Together

Combining Log Shipping and Replication

Log Shipping for Reporting Purposes
Think house-boat: Log Shipping is good for HA OR Reporting. Don’t try to blend both into the same solution.

Books Online Overview

Management of Replication Agents
This is where the BULK of your more ADVANCED performance and configuration options will be.

Great Resource for Replication Troubleshooting


Finally, if you had some questions that weren’t answered during our brief session today, or if you think of some questions in the next few days, feel free to ping me. I’m always happy to help folks out with questions – as it’s a form of marketing and just a general way for me to stay abreast of the kinds of problems organizations are struggling with. So, if you’d like to ask a couple of questions (or if you’d like to talk to me about a potential engagement), make sure to head over to my SQL Server Consulting site or visit my contact page to drop me a line.