SQL Server Remote Availability Notes

In my session on SQL Server Remote Availability (you can still sign up and watch the video or presentation up to a few months after the presentation was given), I mentioned that I’d provide some additional links and resources in the form of a blog post.

Here those resources and links are.

Remote Availability Basics

In my session on Remote Availability (RA), I covered how it was similar to High Availability – but much more akin to Disaster Recovery in terms of scope or purpose (even though RA typically uses or leverages many of the same tools and techniques as HA solutions).

To that end, I wanted to link to two great articles on Remote Availability – as typically discussed or implied within IT in general. Both come from the same site, TechRepublic, and are by the same author (Mike Talon):

Learn the Pros and Cons of Remote-Availability Systems
Don’t Rely Solely on Remote Availability

The Need for Remote Availability

In case anyone was wondering: Remote Availability IS something that every organization needs – to one degree or another. And that’s simply because it’s impossible to depend upon data in any regard and not have some sort of contingency for how to preserve or recover that data in the case of an outage – even in cases when the outage may be more than just the loss of a component or system – such as when the outage is at the data-center level.

Michael Otey (blog) recently blogged about the Price of High Availability – and while he was specifically talking about the cost benefits of implementing expensive redundancy and failover processes, the same thing can be said of the cost required to establish some sort of remote availability. Meaning that the price of setting up Remote Availability Solutions may look expensive at first blush – but these costs are nothing in comparison with the COST you’d incur without some sort of viable contingency plan.

And, to that end, one thing that I mentioned in my presentation is that you can get a quick sense for how much it would cost you to lose your data center by talking with the financial folks in your organization to find out what monthly revenues are. Then just break those values down by day, hour, and minute as needed to get a VERY ROUGH sense of how much an outage would cost you per hour and so on. With that (admittedly over-simplified) approach to estimating the cost of potential down-time, you can then get a feel for what kinds of budgets you may need in order to defray the costs of actually LOSING operational capacity for any significant stretch of time.
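As a quick (and, again, VERY rough) sketch – using a completely hypothetical monthly revenue figure – the math is as simple as:

```sql
-- Back-of-the-envelope downtime cost estimate.
-- The $2,400,000/month revenue figure here is purely hypothetical;
-- substitute the real numbers you get from your Finance folks.
DECLARE @MonthlyRevenue money = 2400000.00;

SELECT
    @MonthlyRevenue / 30.0               AS RevenuePerDay,
    @MonthlyRevenue / 30.0 / 24.0        AS RevenuePerHour,
    @MonthlyRevenue / 30.0 / 24.0 / 60.0 AS RevenuePerMinute;
```

Again, that’s deliberately over-simplified (revenue isn’t spread evenly across hours) – but it’s enough to frame the budget conversation.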

And, once you have a sense for the potential business costs for outages and what kinds of potential budget you might have, then you’re ready to start dealing with RTOs and SLAs.

Recovery Time Objectives (RTOs) and Service Level Agreements (SLAs)

In my estimation, tons of Small-to-Medium Businesses (SMBs) still run IT departments (or at least SQL Server deployments) without any form of SLAs in place. In my presentation on Remote Availability I covered how one of the BIG ways in which RA differs from HA is that RA must include much better interaction with management if RA solutions are going to be successful. That’s NOT to imply that HA or DR solutions don’t need communication with management. But, in my experience, even in organizations where there are no SLAs or RTOs, HA and DR are typically addressed (dutifully) by the IT team and ‘implicitly’ understood to be in place, to some degree, by management. Again, I don’t think that’s a recipe for success. I’m just saying that when it comes to addressing off-site contingencies for disaster recovery (i.e., Remote Availability solutions), you simply can NOT succeed in any way, shape, or form unless management is a willing partner right out of the gate.

Paul Randal (blog | twitter) has done a great job of highlighting the benefit of SLAs – and his expression of how the existence of SLAs show that IT and management are working in unison is a perfect example of what I’m trying to communicate in this regard – especially when it comes to RA solutions.

Likewise, when it comes to RTOs, the big thing to remember is that the key term there is ‘Objective’. Meaning that RTOs are based upon stated objectives – and therefore serve as the basis of helping define SLAs (by serving as core components in establishing MTTRs). Consequently, if you’d like a decent (high-level) overview of what RTOs mean (decoupled from all the business-speak), then I’d recommend the following article by Pierre Dorion as a decent place to start:

How to Establish a Recovery Time Objective (RTO)

SQL Server and Remote Availability

I only spent about 10 minutes or so of my presentation specifically covering how to address RA solutions via SQL Server. (The first 10 minutes or so were on the ‘basics’ and importance of RA, the ‘middle’ 10 minutes were on options and techniques to use with SQL Server to achieve RA solutions, and the ‘last’ 10 minutes or so were on best practices, pitfalls, and things to consider (based on practical experience) when it comes to setting up RA solutions.)

To that end though there were a couple of things I wanted to provide links for.

For starters, there’s the licensing question. If there’s one thing that I typically do NOT touch during presentations it’s licensing – because that can get ugly in a hurry. However, given the special/awesome nature of how SQL Server supports warm failover servers via licensing, I wanted to make sure that was addressed. To that end I’ve also linked to a Microsoft FAQ that covers this in a bit more detail. And actually, since the FAQ doesn’t provide good anchors, I’m just copy/pasting the text from that FAQ into my blog. (So, if you find this any time after, oh, say, 3 days after this post was published, you’ll want to verify that this policy hasn’t changed.)

At any rate, here’s the relevant snippet:

Q. How does licensing work for computers that run SQL Server 2005 in failover scenarios?

A. Failover support, where servers are clustered together and set to pick up processing duties if one computer should fail, is now available in Standard and Enterprise editions of SQL Server 2005. Under each of these editions, keeping a passive server for failover purposes does not require a license as long as the passive server has the same or fewer processors than the active server (under the per-processor scenario). For details on which failover methods are available under each edition, visit the SQL Server 2005 Features Comparison page.

More info on licensing can be found here as well.

Otherwise, in my presentation on RA I recommended AGAINST using Replication and Mirroring as options for Remote Availability – given their difficulty and some of the overhead involved with managing these solutions. That said, there are ways in which you might want to address how Replication and Log shipping interact if you’re trying to achieve RA on a Replicated solution by adding Log Shipping into the mix. To that end:

Books Online – Interoperability and Coexistence (of Replication and Log Shipping/Mirroring)

Similarly, even though I don’t think that Mirroring is a good idea (given complexity), if there’s one thing that’s true about SQL Server, it’s that you can find a way to do just about anything. To that end, here’s an article on ‘distance’ mirroring:

Implementing Remote Mirroring and Stretch Clustering

And, with stretch clustering in mind, here are some additional links on that topic as well.

Multisite Clustering
Multi-Site Failover Clusters
Additional Resources here and here (note the links to whitepapers/etc at bottom of post)

Otherwise, Log Shipping is actually pretty tame/easy to set up. (To the point where I really need to do a video series on it – to cover the ins/outs, pros/cons, and the step-by-step how-to details of how to set it up and configure/monitor/etc it.)

Confusing Increased Availability with Full Coverage from Disasters

Sadly, I just didn’t have enough time in my presentation to call out something that I like to address whenever talking about High Availability – which is that it’s essential that DBAs and IT Pros don’t confuse High Availability Solutions with Disaster Recovery. Because RA is really just a form of ‘stretch’ or ‘remote’ Disaster Recovery, this concept doesn’t need as much coverage as it normally would when discussing High Availability. Nonetheless, if you’ve implemented a Remote Availability Solution, don’t make the mistake of thinking that it means you’re covered. Data corruption and loss can come from different sources. RA solutions are essentially geared towards dealing with SYSTEM failure or outages – especially at the site level. But even the best RA solution won’t do you any good if the corruption of your data was caused by a software glitch within one of your applications, or if all of your Order Details records were deleted by an end-user ‘by accident’ (or even maliciously).

Consequently, defense in depth is the approach you need to take if your data is important. And I’ve outlined some additional thoughts on ways to protect your data in my article for SQL Server Magazine: Confusing High Availability with Disaster Preparedness.


UPDATE: Woops. I meant to include attribution for some of the photos used in my slides. Rather than cramming in attributions next to each photo in the slide (where no one could really read them), I wanted to list them in my follow-up post. Only, I totally forgot to do that yesterday when I pushed the post. So, without further ado (and to make sure I’m not a full-on cretin), here they are (and thanks to the great photogs who were willing to share these photos for use by hosers like me):

Slide / Photo Attributions

Slide 3 – Session Overview
Slide  6 – Do You Really Need RA?
Slide  6 – Do You Really Need RA?
Slide  6 – Do You Really Need RA?
Slide  6 – Do You Really Need RA?
Slide 8 – Primary RA Considerations (Juggling Swords and Chainsaws)
Slide 9 – Constraints: Infrastructure (Bing’s Data center of all places)
Slide 10 – Bandwidth Constraints
Slide 18 – Practical Advice (I’ve actually been in/around Aix-en-Provence)

Recent Links and Resources

Despite a lull of activity on this blog during the past few months, I was actually fairly busy. (And I’m also formulating some actual plans to create new videos here in the next few months.)

SQL Server Performance Resources

During December and January I provided three ‘guest’ posts for the SQL Server Performance Blog managed by idera. The posts were:

Using Multiple Files
Using multiple data files is one thing that constantly causes some confusion among DBAs – largely because of a couple of bits of bad advice and some myths that have made their way into the collective SQL Server consciousness. That said, leveraging multiple data files can be a great way to pick up performance improvements – and this post outlines some of those use-cases along with potential pitfalls. [link]

Don’t Forget Application Caching
Caching continues to be one of the best ways to improve performance. And while application caching is typically outside the realm of what many SQL Server performance tips focus on, failure to address caching opportunities at the application level is a great way to miss out on huge potential performance improvements. [link]

Removing Unused Indexes
How many indexes are too many? When should you remove an index? Those are all tough questions that require a lot of different considerations. I’ve outlined many of them in this post – but not all (check out, for example, the 2nd comment on this post that links to an article by Rob Farley touching on how DMVs might provide erroneous information). [link]

White Paper: Essential Performance Tools for SQL Server DBAs
During December I also authored a white paper for idera (that was published in January) – designed to help new/reluctant DBAs get started with SQL Server performance tuning by exposing them to some of the key tools available for performance monitoring and tuning used by the pros.

This paper can be found on idera’s Resources page (in the Whitepapers section, entitled: Essential Performance Tools for SQL Server DBAs).

Expert SQL Server Recommendations for the New Year

The new year is already well under way, but if you missed Megan’s blog post on recommendations from SQL Server experts, you’ll want to check out some of the advice offered by the various SQL Server editors and authors listed in her post. There’s some great advice on database administration, BI/DW tips, insights on what to do about the cloud, and lots of similar advice on the importance of making sure that your data is accurately backed up.

And, if you’re not sure about your backups, then you really need to GET sure about them. And if you need a place to start, I would (obviously) recommend a series of great, free, videos that can help you get started with the core concepts, considerations, best practices, and even step-by-step details of how to get going with your own SQL Server Backups.

DBCC TRACEON – Follow-up

In my recent post on using DBCC TRACEON() in conjunction with setting startup parameters, I pointed out that you can frequently avoid the need to restart a SQL Server immediately by specifying the startup flag/parameter necessary and then just executing DBCC TRACEON(####, -1) with the same parameter.

In most cases I’ve found that that works very well.

When DBCC TRACEON() Doesn’t Work

In some cases, though, you’ll see that when you take such an approach, the trace flags that you set will occasionally ‘not take’ or won’t ‘stick’ after you close your connection. (The same goes for DBCC TRACEOFF() – where you try to remove a trace flag and it doesn’t ‘stick’.)

First and foremost, if you see that your changes aren’t sticking, then make sure you executed DBCC TRACEOFF/TRACEON with the -1 second argument – which makes the trace flag global in scope. (I didn’t do as good of a job as I should have of calling that out in my previous post.) And if you’re sure you did that, then you may have run into one of those use-cases where DBCC TRACEON/TRACEOFF isn’t working as you would expect.
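Using trace flag 3226 as an example, the basic pattern looks like the following – and DBCC TRACESTATUS() is a handy way to verify whether the flag actually ‘took’ globally:

```sql
-- Enable trace flag 3226 globally (-1 makes it global rather than session-scoped):
DBCC TRACEON(3226, -1);

-- Verify: the Global column should show 1 if the flag took hold globally.
DBCC TRACESTATUS(3226);

-- And the mirror-image operation – remove it globally:
DBCC TRACEOFF(3226, -1);
```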

Sadly, this behavior is sort of by design. And, honestly, given how complex the code that puts these trace flags into play must be, it’s not too terribly surprising that trace flags can be a bit difficult to make ‘stick’ on a busy production system (which is mostly where I’ve noticed problems with trace flags not sticking past the end of a connection).

More importantly though a key thing to note from Books Online when it comes to DBCC TRACEON() is in the Remarks section, where BOL states that “to avoid unpredictable behavior”, you need to enable trace flags in one of two ways: Either by

a) Using startup parameters with the –T flag as I outlined in my previous post, or

b) Running DBCC TRACEON(####,-1) when nothing else is running queries.

It’s that second part that’s tough – as you’re rarely going to find that on a busy production server.

In my experience, I can usually get a trace flag to ‘stick’ on busy servers. But not always.

So, the take-away is that if you’re going to use the approach that I outlined in my previous post, you’ll want to make sure you keep an eye on things – and if the trace flag you’re putting into place is something that’s mission critical, or could have serious impact were it to ‘Captain Kirk’ in and out of existence, then you’ll want to watch out for “unpredictable behavior” and only start those trace flags off with a service restart.

The Dark Art of Trace Flags

Truth is, Trace Flags are NOT really a dark art. But one thing I’ve noticed about the web’s ‘coverage’ of Trace Flags is that there are very few demonstrations of how to set Trace Flags that explain the process entirely, or in ‘idiot-proof’ steps.

Trace Flags in Review

If you’re not familiar with SQL Server Trace Flags, then the simplest way to define them is to say that they’re optional arguments, switches, or parameters that you can apply to SQL Server to change the way the engine, certain features, or operations act and behave.

Official documentation for SQL Server Trace Flags can be found in Books Online.

What that documentation fails to cover, however, really comes down to two things:

1) The use of Trace Flags is only something you should undertake with great care. You can easily destroy a production system by playing around with them. In other words, they’re a sharp stick – and sharp sticks are great for some things. They can also poke your eye out if you’re not careful.

2) It doesn’t really describe HOW you go about adding a Trace Flag during startup – though it mentions that as an option.

Trace Flag Scope

By default, most trace flags that you set using DBCC TRACEON() will stay operational, or within scope, until the server reboots or you run DBCC TRACEOFF().

Therefore, if you’d always like a Trace Flag to be on, you’ll need to set it up as a startup parameter – meaning that when the SQL Server Service starts up, it will then apply the flag in question as part of the current operating or working environment. (Of course, you can still turn off most Trace Flags that were initialized at startup by just using DBCC TRACEOFF() if you need to temporarily turn them off, or if you’d like to turn them off and don’t want to restart SQL Server. More on that in a second.)
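And if you’re ever unsure which trace flags are currently in play (and at what scope), passing -1 to DBCC TRACESTATUS() will list them all:

```sql
-- List ALL trace flags currently enabled, along with their scope
-- (the Global and Session columns indicate where each flag applies):
DBCC TRACESTATUS(-1);
```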

Setting Trace Flags as a Startup Option

To be fair, Books Online does detail how to set trace flags. (And maybe that’s why I’ve not found this covered too much on other sites/blogs when I’ve looked for instructions to send clients on how to make these changes.)

Specifics for changing/setting startup options are found here.

And there’s even a link to more specific instructions on configuring startup options as well.

Ironically, I didn’t know of either of these two links until I started writing this post. Which makes me a bit lame.

So, I’ll redeem myself a bit by providing some screen shots, and a concrete example.

Suppose, for example, you’ve got a system where you want to suppress successful backup messages. In many environments this wouldn’t make sense. But if you’ve got a multi-tenant server where there are lots of databases (say 5-20) and you’re doing regular log file backups every 15 minutes or so in keeping with best practices, then you’ll be spamming huge numbers of success messages into your Windows Event Logs and SQL Server Logs. As such, rather than letting those messages turn into noise that might distract you from noticing problems/errors, TRACE FLAG 3226 starts to make sense.

If you just want to temporarily suppress those messages, you can fire off DBCC TRACEON(3226) and those messages are suppressed – until the server reboots. And then they’ll be back.

If you want to make that flag/option ‘stick’ across reboots, then you need to add it as a startup parameter.

1. Fire up SQL Server Configuration Manager by going to Start > Programs > Microsoft SQL Server (2005/2008) > Configuration Tools > SQL Server Configuration Manager.


2. Select SQL Server Services, then right click on SQL Server (MSSQLSERVER) and select the Properties menu option.


3. In the Properties window, click on the Advanced tab, and then find the Startup Parameters option. Click the little down-arrow next to it and you’ll have more area to work in.


4. This is where you add any startup parameters you want to add – including trace flags.

In this case, since I want to add Trace Flag 3226, here’s what the startup parameters would look like.
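In text form, the full startup parameters string would look something like the following. (The -d, -e, and -l paths shown here are just illustrative defaults – yours will almost certainly differ.)

```
-dC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf;-T3226
```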



Note that to add the Trace Flag, I had to ‘terminate’ the previous options with the addition of a semi-colon. Then I was able to add the –T flag and the desired trace flag. 

As per Books Online, make sure you use the UPPER-CASE “T” to set trace flags. Lower-case “t” will work – but it implies that you’re setting flags allowed by SQL Server support engineers – which carries lots of ugly potential that you do NOT want.

Likewise, if you want/need to add multiple trace flags, then you just keep appending them like so:
;-T3226;-Txxxx;-Txxxx

Again though, just remember that Trace Flags can be a lot like juggling flaming, loaded, sawed-off, shotguns.

And, of course, in order to make your startup options ‘stick’, you’ll have to restart SQL Server. You can do this immediately by stopping/starting the service as per the screenshot below (and if you’re prompted to stop SQL Server Agent – go ahead and do that; just remember to RESTART it after you bring SQL Server back online):


Best Practices For Turning Trace Flags On/Off

Restarting your SQL Server just because you want/need to put a Trace Flag into play long term can often be a pain. (The same goes for cases where you’ve got a trace flag installed, and want to remove it.)

Therefore, what I’ve found that works really well when I’ve made sure that I want to set a trace-flag long term is to:

a) Go in and Modify the Startup Parameters with the needed trace-flag as specified above.

b) I don’t restart the SQL Server Service.

c) Instead, I use DBCC TRACEON(####, -1) – where #### is the id of the Trace Flag that I set in step a), and -1 is the switch/option that makes this trace flag global.

This process works well as it kicks the Trace Flag into play immediately, and the next time the server is normally rebooted (i.e. service packs, hardware swap-out/swap-up, whatever) the startup parameters will kick in and the trace flag will ‘persist’ in scope across restarts and go into active service.

The same approach works when I want/need to remove a trace flag as well. I just nuke it using DBCC TRACEOFF(####, -1), and then remove the corresponding startup parameters.

UPDATE: Thanks to Reader/Visitor “Developer Seer” who pointed out that I forgot to put the –1 switch into my example in step c above. This is the key/trick to getting your Trace Flags to become GLOBAL in scope – or to make them persistent.

Adding External References to SQL CLR Projects

A common need when working with SQL CLR assemblies (or projects) is to add a reference to a third-party (or external) assembly. Interestingly enough though (at least as far as I can tell), the steps for doing this are hardly ever documented.

So this blog post will fix that – by documenting just how you go about adding External References in Visual Studio to SQL Server CLR projects or assemblies.


In my recent webcast: Introduction to SQL CLR Extensibility, I was hoping to cover exactly how to do this. I was also going to (and partially did) throw out a few caveats when doing this.

Specifically: don’t go crazy adding external references. Good code IS reusable code – so good developers may occasionally run into the need to drop some ‘core’ logic into their CLR assemblies. On the other hand, lots of complex business logic just isn’t going to find a good home in your SQL Server’s memory-space; bundling up tons of business rules and the like into SQL CLR assemblies (or as referenced assemblies) is a great way to bloat your CLR memory usage and, more importantly, will make versioning your code PAINFUL if you start to abuse this power.

Setting the Stage

For this tutorial, I’m going to assume that you’ve already GOT a CLR project/assembly that you’ve already deployed successfully to SQL Server – and that you’ve got it running as needed (minus the external references). If you don’t have that, then watch the webcast from the link provided up above, and it will show you how to get to this ‘assumed’ point.

To that end, if you’re an experienced .NET developer, your first instinct is to right-click on the References folder in Solution Explorer within your solution, and click on the Add Reference option. That in turn brings up the following:


The only problem, of course, is that it doesn’t have any ‘browse’ options or tabs. It does let you reference other projects – but even that has some serious limitations – to the point where even if you were to add your existing ‘solutions’ as projects and try to build and reference them, you won’t be able to do so.

What’s Going On?

The key to adding external references to a SQL CLR project is understanding that the ‘Add References’ dialog isn’t showing what’s on YOUR development machine. Instead, it’s polling the SQL Server you’ve CONNECTED TO as part of your project. As a developer, you’re used to Visual Studio letting you add references to projects and DLLs that exist on YOUR workstation. Then, when you deploy or publish your code, those references are bundled along with your code and copied out to the destination.

SQL CLR projects DO NOT work that way. Instead, they ONLY let you reference assemblies that are already located on the server and which have already been REGISTERED with the server.

Once you know that, the rest is easy.

Adding Reference Assemblies to Your SQL Server

So, since you can only add ‘registered’ assemblies to your projects, the next question is how do you do that?

In this tutorial, I’m going to start with a brand NEW assembly. To do that I’m going to do the following:

Open Up a Brand New Instance of Visual Studio (I’ve already got my existing Solution open).

Create a New Project.

I’m going to Specify the Visual C# Type, and specify that I want to create a Class Library:


I’m going to name this Class Library ‘CoreLogic’:


Then, in that Class Library, I’m just going to add in some ‘core’ logic that I want to be able to ‘consume’ from my referencing assembly.

In this case, I’m just going to use some patently lame logic – which flips a string around (i.e. it turns ‘string’ into ‘gnirts’):

Then, at that point, I’m going to BUILD my ‘CoreLogic’ project, and then copy the CoreLogic.dll file from the \bin\Debug\ folder within my project out to a location on my server.

From here I can then register the ‘CoreLogic.dll’ on my server – and it will then be available as a potential reference from within my other solution.

To register my assembly, I just need to use CREATE ASSEMBLY as follows:
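Since the original screenshot doesn’t reproduce well here, the statement looks something like the following. (The database name and file path are just placeholders for this example – use the database your SQL CLR project connects to and wherever you actually copied the DLL.)

```sql
USE MyClrDatabase;  -- placeholder: the database your SQL CLR project targets
GO

-- Register the external assembly from the DLL copied up to the server:
CREATE ASSEMBLY CoreLogic
FROM 'C:\Assemblies\CoreLogic.dll'   -- placeholder path on the server
WITH PERMISSION_SET = SAFE;
GO
```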


And note that I’m creating this assembly in the DATABASE that I’ve connected to in my SQL CLR project (i.e., the project where I want to be able to add the external reference).

Full Circle – Adding the Reference to your CLR Project

Now that I’ve created the assembly up on my SQL Server, if I go back to the original SQL CLR project where I want to be able to add a reference to my ‘Core Logic’:


I’ll see the reference to CoreLogic show up if I again right click on the References node and select Add Reference:


From here I’m able to then add a using statement and so on – just as you would expect with a ‘normal’ .NET project.


Depending upon how you’re deploying your SQL Server CLR Assembly or Project, you may run into some ugly issues when you now try to push up your new assembly – and it has references to external resources.

The best way to deal with this in my experience is just to clean house (i.e., destroy your CLR assembly) and then re-register it.

To do this:

1) DROP any sprocs, udfs, triggers, types, etc that DEPEND upon YOUR CLR Assembly.

2) Issue a DROP ASSEMBLY MyAssemblyNameHere command – and you’ll now free up all the ‘references’ that this assembly thinks it has.

3) (With your external assembly already registered/loaded) Issue a CREATE ASSEMBLY MyAssemblyNameHere command – or push ‘Deploy’ from Visual Studio again – and it will push your CLR Assembly up to the server.

4) Recreate any of your sprocs, udfs, triggers, etc that you nuked in step 1.
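As a rough T-SQL sketch of those four steps (with hypothetical procedure and assembly names – substitute your own), the clean-up and re-deploy sequence looks something like this:

```sql
-- 1) Drop anything that DEPENDS on your CLR assembly (hypothetical sproc name):
DROP PROCEDURE dbo.MyClrProcedure;
GO

-- 2) Drop the CLR assembly itself to free up the references it thinks it has:
DROP ASSEMBLY MyAssemblyNameHere;
GO

-- 3) With the external assembly (e.g., CoreLogic) already registered on the
--    server, re-register your assembly (or press 'Deploy' in Visual Studio):
CREATE ASSEMBLY MyAssemblyNameHere
FROM 'C:\Assemblies\MyAssemblyNameHere.dll'   -- placeholder path
WITH PERMISSION_SET = SAFE;
GO

-- 4) Recreate the sprocs/udfs/triggers you nuked in step 1.
```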

Hiding Databases from Public and Other Users

In my recent presentation on Common SQL Server Security Mistakes, a question was asked about what to do with the public role – from a security standpoint.

My response was that, in the majority of cases, the public role isn’t something that you’ll need to worry about. But, since the public role does ‘leak’ or ‘disclose’ information about objects and databases on a given server, that can be a problem in a very small minority of environments. I therefore recommended that, in those (typically extreme) edge-cases, removing the public role would make sense – only that if you do undertake such an endeavor, you should watch out for potential side effects.

In this post I’m going to clarify that answer – and provide some examples.


As a point of clarification, you can’t remove public at all. You can’t remove the public role from your server, nor can you remove it from your databases. Likewise, you can’t remove logins or users from membership in the public role. (Every once in a while you’ll see questions on forums from someone who has MANAGED to do this – but it effectively breaks SQL Server and isn’t supported. I’m not sure how people manage to do this either – and don’t care, because I’m NOT advocating it.)

Being a member of the public role is a requirement for working with SQL Server.

So, when I mentioned that you might want to REMOVE the public role in some cases, what I actually SHOULD have said was that: in some cases, you might want to remove the VISIBILITY that the public role grants; but, in doing so, just be forewarned that in some cases that MIGHT cause problems with some logins in some very goofy edge cases.


The best way to get a sense for what I’m talking about is to look at some concrete, step-by-step, examples.

For these examples, let’s assume the following:

  • You’ve got a highly sensitive database called Customers.
  • You have OTHER critical databases on the same Server.
  • You have a web application – that connects to the server’s Customers database via a login called Web_App.
  • You also have some logins for employees within your organization – who need to access the server for regular reports or something. We’ll call one of these users Wilson.

For these examples, let’s say that you’ve already done due-diligence from a security standpoint – and both the Wilson and the Web_App logins have been mapped as users into the Customers database with minimal permissions – such as being granted db_datareader, db_datawriter, and/or the ability to execute a couple of trusted stored procedures.

Under a scenario like this, if Wilson were to log into SQL Server Management Studio against your server, his membership in the public server role (all logins are made members of the public role at the server level) would grant him a view of all databases, like so:


And, since the Web_App login is also a member of the public server role, if hackers were able to compromise your web app, and gain access to the server (i.e. via SQL Injection), they’d be able to do:
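Something as simple as the following catalog query would do it (the exact query from my demo may have differed slightly, but this captures the idea):

```sql
-- Run under the compromised Web_App login:
-- enumerate every database visible on the server.
SELECT name
FROM sys.databases
ORDER BY name;
```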


And get the following results:


In other words, they’d be able to see the same things that Wilson (or any other login on your server) would see.

In most cases, this is not an issue. If it is an issue, you’ve got two options for dealing with this.

First, you can remove public’s ability to SEE all databases on the server. To do this, just run the following query:
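That query is just a server-level DENY against the public role (run it from the master database):

```sql
USE master;
GO

-- Deny database enumeration to EVERY login
-- (since all logins are members of the public server role):
DENY VIEW ANY DATABASE TO public;
GO
```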


This operation denies all logins (since all logins are members of the public role) the ability to see databases on the server that they don’t OWN. Which means that a login that actually owns a database will still see ‘its’ databases within the Object Explorer or when querying sys.databases. (And since sysadmin logins, like sa, bypass permissions checks entirely, they will still see everything.)

But, for logins like Wilson or Web_App which are NOT members of db_owner, what they can see is drastically limited, as per the following screenshot:


As you can see, in this case, Wilson isn’t able to see anything – other than the master and tempdb databases. In fact, even though Wilson has permissions in the Customers database, he’s still not able to see the database in the Object Explorer. He is, however, still able to log in, and can execute the same queries against the Customers database as before. He’s just lost the VIEW ANY DATABASE permission.

Likewise, if the Web_App user issues the same query against sys.databases as before, they’re only able to see the following:


In this case, there IS an entry for the Customers database – but that just signifies permissions in the database – it doesn’t indicate the ability to peruse the schema of that database. In fact, take a look at the results of the following query in terms of what’s returned (just a bunch of Service Broker objects that apparently didn’t get the memo on ‘lock down’ against public ‘disclosure’):
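The exact query from the screenshot isn’t shown, but it would be something along these lines – with the results limited to a handful of built-in Service Broker queues:

```sql
-- Hypothetical probe from the Web_App login's context: even with
-- VIEW ANY DATABASE denied, a few built-in objects still show up
-- in the catalog views.
SELECT name, type_desc
FROM Customers.sys.objects;
```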


A Less Aggressive Approach

Denying VIEW ANY DATABASE to public is a pretty heavy-handed approach. It’s an all-or-nothing proposition.

So, if you try that and it doesn’t work, you can easily ‘undo’ it with the following:
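The ‘undo’ isn’t shown in the original screenshot, but it’s just the corresponding GRANT (again, from master):

```sql
USE master;
GO
-- Restore the default behavior for all logins:
GRANT VIEW ANY DATABASE TO public;
```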


More importantly, you can implement the same safeguards at a much more granular level – meaning that if you want to limit the information disclosed should hackers manage to compromise your web application (through SQL Injection or something else), you can just use the following:
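In other words, rather than denying the permission to public, deny it to the single login in question – something like:

```sql
USE master;
GO
-- Restrict ONLY the application's login; all other logins keep the
-- default VIEW ANY DATABASE behavior.
DENY VIEW ANY DATABASE TO Web_App;
```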


And this will then let Wilson and all other logins on the system continue as before – but the Web_App login will be restricted in terms of what schema it can see or view.

TIP: If you end up locking down an individual user account (or role) so that it can’t see all databases in the Object Explorer, do your users a favor and make sure that the default database specified for their login is a database they can actually use. Otherwise they’ll have to run a USE databaseNameHere statement just to get started – as they can no longer right-click on the database in question in the Object Explorer and select the New Query option.
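Setting the default database is a one-liner (login and database names here match the examples above):

```sql
-- Point the login at a database it can actually use once
-- VIEW ANY DATABASE has been denied:
ALTER LOGIN Wilson WITH DEFAULT_DATABASE = Customers;
```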


Best Practices

Ultimately, hiding databases from members of the public role is something you really don’t need to worry about in most environments. Though there’s no harm in restricting application logins or other publicly-facing accounts in this way. In fact, it’s a best practice to make sure that any applications have their logins RESTRICTED with DENY VIEW ANY DATABASE TO <appLoginNameHere> once the app is up and running. It’s a simple, extra precaution that contributes to defense-in-depth by mitigating potential information disclosure.

Of course, if you’re working in an environment where SQL Injection is a possibility (and sadly, there are plenty of apps out there where SQL injection IS a possibility, but the app can’t be immediately taken offline and fixed like it should be), then using this approach can be part of a mitigation strategy. It is not, however, a sufficient protection. (All it can do, when combined with granting applications least-privilege access to ONLY the resources they need, is mitigate the scope of damage hackers can/will inflict once they compromise your application.)

And, in highly sensitive environments where employees need to be denied access to even knowing about the existence of certain databases, using the approaches outlined here can be an easy way to accomplish those goals. I’d just recommend going with the least heavy-handed approach possible (by blocking individual logins instead of the public role) when feasible.

Otherwise, just be aware that in some edge cases, if you DENY VIEW ANY DATABASE to some users, they’ll actually encounter full-blown login problems in some goofy cases (I swear I’ve seen some with SQL Server Reporting Services databases) where SSMS won’t like this restriction. Problems with this should be ‘once in a blue moon’ occurrences, though – especially since the suggestions outlined here really don’t apply to most environments.

[1] I’m using SQL Server Authentication in these examples (and with STRONG (i.e. LONG) passwords this is secure enough – but in really highly-sensitive environments, you’re better off using trusted connections/Windows Authentication).

Protecting SQL Server Data

Here’s a quick collection of additional links and resources for folks who followed along with my Protecting SQL Server Data presentation today.


First and foremost (of course) I’ve got a couple of videos for anyone interested in learning more about the basics of backups, recovery, logging, and so on. Likewise, I’ve also got some videos on best practices for when it comes to implementing disaster recovery plans, and a number of how-to videos that will walk you through the process of implementing actual backups and working through restore and recovery operations:
Backups Demystified
SQL Server Logging Essentials
SQL Server Backup Best Practices
Restoring Databases with SQL Server Management Studio
The Full Series of 15 Free Video Tutorials on SQL Server Backups


Above and beyond the basics, we covered a lot of options and details in our session today – and if you’d like to learn a bit more about options when it comes to making your data more redundant, go ahead and give these links a stab:

Failover Clustering:
Failover Cluster Documentation

SQL Server 2008 Failover Clustering Whitepaper

Denny Cherry's SQL Server 2008 Clustering Articles in SQL Magazine

Log Shipping:
Log Shipping Overview

Step By Step to Configuring Log Shipping

Log Shipping Fail Over

Log Shipping and Additional Dependencies

Database Mirroring:

Database Mirroring Overview


Providing High Availability with Database Mirroring

Implementing Failover with Database Mirroring

Performance Best Practices for Database Mirroring

Using Database Mirroring AND Log Shipping Together

Combining Log Shipping and Replication

Log Shipping for Reporting Purposes
Think house-boat: Log Shipping is good for HA OR Reporting. Don't try to blend both into same solution.

Replication:
Books Online Overview

Management of Replication Agents
This is where the BULK of your more ADVANCED performance and configuration options will be.

Great Resource for Replication Troubleshooting


Finally, if you had some questions that weren’t answered during our brief session today, or if you think of some questions in the next few days, feel free to ping me. I’m always happy to help folks out with questions – as it’s a form of marketing and just a general way for me to stay abreast of the kinds of problems organizations are struggling with. So, if you’d like to ask a couple of questions (or if you’d like to talk to me about a potential engagement), make sure to head over to my SQL Server Consulting site or visit my contact page to drop me a line.

Power Up with SQL Server EMEA

For visitors to my blog looking for the links/content I promised in my Power Up with SQL Server presentations, you can find the links here:

Power Up with SQL Server: SQL Server High Availability

Power Up with SQL Server: SQL Server Performance

Likewise, if you have any questions about these presentations, please don’t hesitate to contact me.

(And in case you didn’t hear, Niels Berglund was going to present these sessions but was in a serious car accident and wasn’t able to present. I sadly don’t know more than that – but hope he is doing well.)

Microsoft Needs to Start Slip-Streaming Patches for SQL Server

Microsoft REALLY needs to start creating slip-stream downloads or .ISO files of SQL Server with Service Packs Rolled in.

They provide instructions on how to slip-stream Service Packs into SQL Server installation DVDs yourself – but, honestly, who wants to go through all of that hassle? Take a look at the instructions, and you’ll see what I mean.

More importantly, that’s NOT always feasible. Right now I’m installing SQL Server on a machine for a client. Only the machine in question is a hosted/managed server. So we either had to snail-mail the installation DVD, or download the .ISO from MS according to the license agreement. Downloading is OBVIOUSLY the nicest choice, but I’m back to being stuck with a SQL Server 2008 installation DVD that doesn’t have SP1 slip-streamed in. (Likewise, if I wanted to slipstream SP1 into play by myself, I’d then have to snail-mail/FedEx the disk to the hosting company. Lame.)

Without the slipstream in place?

Well, that’s where things get seriously lame, as the process to install SQL Server 2008 on Windows Server 2008 R2 is cheesy:

A) Install Service Pack 1 for SQL Server 2008. The installer will install a bunch of needed files, then will ‘barf’ when it doesn’t find a SQL Server instance to patch.

B) Install SQL Server 2008.

C) Now install SQL Server 2008 SP1.

If you don’t go that route, you won’t be able to install the patch, and your SQL Server won’t run (as J.D. Wade points out in this blog post).

Call me crazy, but it would be MUCH nicer if Microsoft made SQL Server 2008 with SP1 disks/.ISOs available for download. (Especially when Windows Server 2008 R2 won’t even let you install SQL Server 2008 without SP1 – without warning/nagging you that you’re installing a version of SQL Server with known compatibility issues.)

Power Up with SQL Server: SQL Server Performance

Here's a list of additional resources for attendees of the SQL Server Performance session from today's Power Up with SQL Server Event.

Feel free to contact me if you're looking for a specific resource, or had additional questions that weren't answered in our session.

Performance Background

B-Trees (Wikipedia)

SQL Server Indexing Basics

Clustered Indexes vs Heaps

The Benefits of Clustered Indexes

Index Fragmentation

BookMark Lookups

Performance Troubleshooting

Generic KB for Performance Troubleshooting


Troubleshooting SQL Server 2008 Performance Problems

Troubleshooting SQL Server 2005 Performance Problems



Performance Monitor

Setting up Traces (on Win2000,2003)

Trace Templates

Analyzing Performance Counters
Distilled version of above counters:

SQL Server Profiler

Using SQL Server Profiler

Establishing a baseline

Finding Top Offenders



Index Tuning Problems


Index Selectivity

Index Selectivity and Column Order

Power Up with SQL Server: SQL Server High Availability

Here's a list of additional resources for attendees of the SQL Server High Availability session from the Power Up with SQL Server event.

Feel free to contact me if you're looking for a specific resource, or had additional questions that weren't answered in our session.

Decreasing Down Time

SQL Server Enterprise Edition

Partial Database Availability

Moving Logins for Log Shipping and other HA needs

SQL Server Clustering

Stretch Clusters

Multi-Site Clusters

Failover Cluster Documentation

SQL Server 2008 Failover Clustering Whitepaper

Denny Cherry's SQL Server 2008 Clustering Articles in SQL Magazine


SQL Server Log Shipping

Log Shipping Overview

Step By Step to Configuring Log Shipping

Log Shipping Fail Over

Log Shipping and Additional Dependencies


SQL Server Database Mirroring

Database Mirroring Overview

Providing High Availability with Database Mirroring

Implementing Failover with Database Mirroring

Performance Best Practices for Database Mirroring

Using Database Mirroring AND Log Shipping Together

Combining Log Shipping and Replication

Log Shipping for Reporting Purposes
Think house-boat: Log Shipping is good for HA OR Reporting. Don't try to blend both into same solution.


SQL Server Replication

Books Online Overview

Management of Replication Agents
This is where the BULK of your more ADVANCED performance and configuration options will be.

Great Resource for Replication Troubleshooting

Power Up with SQL Server: SQL Server Virtualization

Here's a list of additional resources for attendees of my SQL Server Virtualization Session. Feel free to contact me if anything you were hoping to find here isn't listed.

There were also lots of questions towards the end of this session – if I wasn't able to answer your question, just contact me directly.

Virtualization Background

Demystifying SQL Server Virtualization

SQL Server Virtualization FAQs

HyperVisors (Wikipedia)

Hardware Assisted Virtualization

Hyper-V vs VMware
(though published earlier this year, this article is already out of date)

SQL Server 2008 Virtualization Page


Virtualization Benefits

VMware's vSphere 4 and SQL Server

VMware Marketing about Virtualization/SQL Server Benefits

Microsoft Marketing about Virtualization/SQL Server Benefits

Virtualization Concerns

Microsoft's Official Support statement for Virtualized SQL Server

Detailed Licensing Information from Microsoft

Less Detailed Licensing (but still overly complex)


Performance Monitoring/Tuning

Performance Monitoring on VMware Guests

Performance Monitoring on Hyper-V Guests


Stress Testing

Stupid Name. Fantastic tool (one that I recommended for SQL Mag's recent list of free tools)

Not a bad tool – harder to set up and use than SQLIO though

Memory Testing
As Linchi points out: data is old/dated. But the concepts here are great.

Virtualized IO

SQL Server and SAN Interactions

SQL Server, SANs, and Virtualization

VMware Guidance on SAN Design/Implementation
(a bit dated, but concepts are still great)


SQL Server Best Practices

Best Practices Home

Predeployment IO Best Practices

SQL Server on Hyper-V Best Practices and Performance Recommendations

Performance Best Practices for vSphere 4.0

Microsoft SQL Server and VMware Virtual Infrastructure (Best Practices)

SQL Server Performance in VMware 3

Poor Man's Scripting Tip

I love that you can script Publications for SQL Server Replication. I find that this provides a really good way to 'back up' all of your settings in case you need to re-initialize, make any big changes, or run into any problems – because serialized configuration details are tons easier to manage in the form of a script than trying to re-create a replication topology by virtue of remembering all of the options and configuration details that you specified (sometimes up to a few months) earlier on.

I also find that the UI and wizards used to set up publications also don't provide enough control or detail. But, I don't like generating entire publications from hand either. So generating basic publication details with SQL Server Management Studio and then outputting those details to script is an ideal way to get me 90% of where I need to be. And then I can just modify the generated scripts as needed.


One thing I don't like though, is that these generated scripts suffer from one-line-itis – in the sense that SSMS generates scripts with entire calls to Replication sprocs on a single line. Which can be a total pain to sort through when you're dealing with sprocs that have 10, 20, or more parameters.

Accordingly, a trick I use when playing around with SSMS generated scripts (not just replication scripts either) is to use poor-man's reformatting. And to do that, I just do a Find and Replace, specify that I want to use Regular Expressions, and then tell it to push the start of every parameter on to its own line – by replacing the literal text of @ with an @-sign preceeded by a carriage return and a tab – resulting in instantly formatted text. Or, as you can see from the screenshot below, I just replace \@ with \n\t\@.
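As a (hypothetical, abbreviated) example of what that regex replace does to one of these generated one-liners:

```sql
-- Before: SSMS puts the entire replication sproc call on a single line.
EXEC sp_addarticle @publication = N'MyPublication', @article = N'Orders', @source_object = N'Orders', @type = N'logbased'

-- After replacing \@ with \n\t\@ in Find and Replace:
EXEC sp_addarticle
	@publication = N'MyPublication',
	@article = N'Orders',
	@source_object = N'Orders',
	@type = N'logbased'
```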


Find and Replace:




Tip: Make sure SQL Server AWE Memory is Enabled

If you're running a 32-bit version of SQL Server, you should make sure that AWE Memory is enabled. Amazingly, I find that pretty much half of the systems I audit aren't configured correctly.

Accordingly, I wrote about it in my latest DevProConnections Update/Newsletter. So check it out – as you could be hosing yourself.
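For reference, turning AWE on is just a couple of sp_configure calls (the service account also needs the Lock Pages in Memory right, which is handled outside of SQL Server):

```sql
-- Enable AWE on a 32-bit SQL Server instance ('awe enabled' is an
-- advanced option, so expose those first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- A restart of the SQL Server service is required for this to take effect.
```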

Hyper-V Unleashed: Resources

I just finished up my Hyper-V Unleashed online presentation – and was happy to see that there were over 600 attendees.

At the end of my presentation I provided a slide with a list of additional resources – and a number of people have asked for those links. There were also a couple of great questions that I've touched upon with additional resources as well.


Hyper-V Home on Technet:
- http://technet.microsoft.com/en-us/library/cc753637.aspx

Understanding Microsoft Virtualization Solutions eBook:
- http://csna01.libredigital.com/?urmvs17u33

Error Code Dictionary:
- http://technet.microsoft.com/en-us/library/dd578337.aspx

John Howard / HVRemote:
- http://blogs.technet.com/jhoward/archive/2008/11/14/configure-hyper-v-remote-management-in-seconds.aspx
- http://blogs.technet.com/jhoward/archive/2008/03/28/part-1-hyper-v-remote-management-you-do-not-have-the-requested-permission-to-complete-this-task-contact-the-administrator-of-the-authorization-policy-for-the-computer-computername.aspx

VHD Servicing Tool 2.0
- http://blogs.msdn.com/taylorb/archive/2009/01/06/vhd-servicing-tool-2-0.aspx
- http://technet.microsoft.com/en-us/library/cc501231.aspx

- http://code.msdn.microsoft.com/vhdtool

QUESTION: Transferring OEM licenses to work as guest licenses on Hyper-V hosts:
- http://blogs.technet.com/jhoward/archive/2005/05/27/405432.aspx
The above information is QUITE dated – but still accurate: You typically can NOT transfer OEM licenses to new hardware – so consolidating OEM licenses runs afoul of licensing terms in most cases. However, SOME OEM terms will allow you to transfer licenses to new hardware - you just need to consult your specific terms from your vendor.
Microsoft will let you look up those terms here: (Only works in IE)
- http://www.microsoft.com/about/legal/useterms/
Likewise, you can also call 800-426-9400 to get more information/help.

- http://technet.microsoft.com/en-us/library/dd446676(WS.10).aspx

QUESTION: More Details on Enlightenments (including Win2k3 Device Drivers)
- http://blogs.msdn.com/tvoellm/archive/2008/01/02/hyper-v-integration-components-and-enlightenments.aspx
- http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/thread/1b755fa5-1023-4817-98b3-263368edc022

Babies, Unmerciful Cruelty, and ALTER vs DROP/CREATE

One of the intriguing things about newborns is their propensity to attack themselves. After a few weeks their little hands begin to take on a life of their own: flexing, probing, grabbing. Since their stubby little arms are so short, there isn't much territory to explore for those little hands, and they invariably end up by the head, ears, face, etc.

There's nothing quite like seeing an infant's little hands grab an ear. It's like watching body parts from two unconnected people. The little hand finds the ear, and then GRABS on to it. The child, who has been minding its own business, realizes that something isn't right. The pain that their little hand inflicts upon them causes them to tense up. The sad part is that where their hand was previously acting as a separate entity, it too now tenses up. Within a matter of a second, a happy, cuddly little newborn goes from a world of innocence to one where they are being attacked with unmerciful cruelty. (And trying to get them to let go can be a chore too.)

Sadly, newborns aren't the only people susceptible to such self-inflicted unmerciful cruelty. DBAs occasionally attack themselves in the same way.

Background: Once upon a time in the land of IT there lived a huge, hideous beast of a website. Like Frankenstein, it had been cobbled together from various 'parts,' both living and dead. It had, however, been cursed with a fair amount of success -- to the point where a complete reinvention would have been out of the question from a business standpoint. So, this frankenweb was given yearly sacrifices of developers. Entire teams would be built up around this monster, only to be gobbled up every few years. Only rarely would someone escape -- by running, screaming, out of the building.

As it turns out, a modification needed to be made to the site one day. Modifications were par for the course; they could be sewn directly on to the outer hide of this frankenweb, further increasing technical debt, but giving business an empty, and pointless, feeling of accomplishment.

There was, for the longest time, no suitable dev/testing/integration environment, and as it happened, a sproc needed to be changed out in production. A time window was selected, when the site was only making about $40k/hour, and the change was to go in to place.

Never one to just wing it, I had done my best to create a suitable test environment at the last minute, and had tested the functionality. It all worked fine in my test environment (which wasn't an exact match of the production environment).

As the production DBA, it was my responsibility to push the change to the live site's database. Four or five coworkers sat at the ready (at their consoles) as I made the change -- and they quickly confirmed my worst fear: "The site is broken, I can't add things to the cart," said one, and then another...

My fingers had just found the soft lobe of my right ear...

Damn. Well. No biggie. Stay Calm.
There was no way I was going to let this hinder me -- I was better than this... I had a rollback script. I fired it against the DB/Site. It executed perfectly.

"It's still down. I still can't add to the cart." reported a coworker.
"I've got problems too. No go." said another.

Sure enough. No dice. Functionality was broken.

I tensed up. The pain in my earlobe was excruciating. Someone was attacking me with unmerciful cruelty....

What followed were a few minutes of stark raving horror. About 12 minutes in toto. Eventually I was able to calm down and think (after I got my pacifier). The only thing that made sense was that when I had run my rollback script, I had intentionally done a DROP and a CREATE on the sproc in question. I wanted to make sure that there was no way that older logic 'remained' and tainted the works.

Now the problem with doing a DROP/CREATE is that the old sproc goes away. All of it. Including permissions. I KNEW this. (When you do an ALTER, the sproc's object_id persists, and any permissions on it remain.) I also 'knew' that it wasn't an issue -- because, as frankenweb as the site was, I knew that the whole thing sadly ran as sa.
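To make the difference concrete (sproc and user names hypothetical):

```sql
-- DROP/CREATE destroys the object - along with any permissions on it:
DROP PROCEDURE dbo.AddToCart;
GO
CREATE PROCEDURE dbo.AddToCart AS SELECT 1;  -- body elided
GO
GRANT EXECUTE ON dbo.AddToCart TO WebUser;   -- must be re-applied by hand

-- ALTER keeps the same object_id, so existing permissions survive:
ALTER PROCEDURE dbo.AddToCart AS SELECT 2;   -- body elided
```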

Yet the only thing that made sense was that this was some sort of permissions error. A few quick checks later, and I was able to confirm -- to my continued horror, though it quickly turned to relief -- that the piece of functionality in question actually DID execute as a user other than sa.

Analysis of the problem afterward showed that the initial change would have worked -- the web servers just needed to be reset. Doing the rollback is what HOSED me.

Moral of the story:
1) Babies are cool.
2) Test in a REAL environment. Don't settle for unmerciful cruelty. (My second priority, after getting the site to run as non-sysadmin, was to work on creating a proper test/integration environment -- I never got there completely, but got close enough for comfort before I ran shrieking from the building.)
3) Rollback scripts are a MUST. Just make sure you script the permissions -- or simply use ALTER -- unless you like crying like a baby when you attack yourself unmercifully.


NOTE: This post is (moderately) adapted from one of my posts out on REPEATABLEREAD.

Scripting Anomalies, SSMS 2008, sprocs, and sp_executesql

If you're using SQL Server Management Studio 2008 to manage database objects, you may run into an issue that causes you some grief when you right click on a sproc and either specify the Modify or Script > Create options. Specifically, you may see something that wraps your entire sproc in a call to sp_executesql, like what follows:
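Roughly speaking (with a hypothetical sproc), the generated script ends up looking like this:

```sql
IF NOT EXISTS (SELECT * FROM sys.objects
	WHERE object_id = OBJECT_ID(N'[dbo].[MySproc]') AND type IN (N'P', N'PC'))
BEGIN
EXEC dbo.sp_executesql @statement = N'CREATE PROCEDURE [dbo].[MySproc]
AS
BEGIN
	SELECT ''hello'' AS Greeting;
END'
END
GO
```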


Not only does this behavior needlessly complicate things, but it also makes ad-hoc modifications to your sproc a bit more difficult -- any ticks (single quotes) that you had in your actual sproc body now have to be escaped (i.e. ' becomes ''), and you then need to keep on top of that escaping with any modifications or edits you make.

Personally, I'd rather see the scripts generated by SSMS look more like the following – as that's also what they look like under the covers:
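i.e., a plain CREATE (same hypothetical sproc):

```sql
CREATE PROCEDURE [dbo].[MySproc]
AS
BEGIN
	SELECT 'hello' AS Greeting;
END
GO
```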


As near as I can tell, at least one (and possibly two) factors are involved. First, the setting that causes this problem is the scripting option "Include IF NOT EXISTS clause", as shown below.


I've only encountered the problem when that option is set to True.

Or, more specifically, it appears that when I set that option to True, connect to a SQL Server 2000 server, and generate a script, I run into the problem with sprocs from SQL Server 2000. More interesting, though, is that this seems to break something internally -- because my scripts for SQL Server 2005 and 2008 servers/objects then get broken as well.

Seems like a bug to me, but maybe it's just a 'by design side-effect'. Either way, switching the option to False seems to solve the problem for me.

Another SSMS 2008 Improvement: DROP and CREATE

In a recent video on improvements added by SQL Server Management Studio 2008, I covered a number of bigger, and even smaller, changes to SSMS introduced in SQL Server 2008.

One very cool (and very subtle) feature I didn't explicitly call out though deserves some mention: the new addition to generate DROP and CREATE scripts in a single action.


Granted, all this option really does is combine the ability to independently generate DROP and CREATE scripts so it's not that big of a deal.

But it can help out in a number of scenarios, and I've found it actually quite useful in a number of recent DBA-related activities where I was cleaning up objects and wanted to ensure that I also had rollback scripts and so on…

Getting Ready to do Something you Might Regret?

If you're mucking around in your database, and need to save your changes in a transactionally safe manner, but JUST might want to leave a flag or some kind of marker that would let you know what you need to return to should you later on decide it was a bad idea, then SQL Server has you covered:

BEGIN TRANSACTION MarkerTran
	WITH MARK 'Prior to Doing Stuff I might Regret'

(WITH MARK requires a named transaction -- MarkerTran is just a placeholder name here.)

Then, if you decide you need to undo your changes, you can always throw in the STOPATMARK clause as part of a RESTORE + RECOVERY. (Of course, ANY changes you've made SINCE that mark are toast, but it's still nice to have the option.)
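A rough sketch of both halves (database name, file paths, and the transaction name are all hypothetical):

```sql
-- Drop a named mark into the log just before making risky changes.
-- WITH MARK requires a named transaction; the mark can later be
-- referenced by that name.
BEGIN TRANSACTION MarkerTran
	WITH MARK 'Prior to Doing Stuff I might Regret';
COMMIT TRANSACTION MarkerTran;

-- ... the changes you might regret ...

-- Later, restore and recover up to (and including) the mark:
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_Full.bak'
	WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_Log.trn'
	WITH STOPATMARK = N'MarkerTran', RECOVERY;
```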

Books Online covers this in a bit more detail, and it works with SQL Server 2000, 2005, and 2008.

Aggregating SQL Server 2008 Date and Time Values

As I covered in a recent video outlining the new Date and Time Data Type improvements to T-SQL 2008, T-SQL now has distinct Date and Time data types. And given that the Time Data Type only tracks time, you might be inclined to think that the following would work as a way to get the sum of all times – because, after all… Time is the only value being tracked:

DECLARE @Times TABLE (
	TimeValue time NOT NULL
)

INSERT INTO @Times (TimeValue)
VALUES ('00:12:00'), ('00:22:18'), ('00:03:49')

SELECT SUM(TimeValue) FROM @Times

So, it's sort of natural to just assume that SQL Server will add up all the time values in the table variable and return their SUM as '00:38:07'.

Only, if you try running such a query, you'll get an error:

Msg 8117, Level 16, State 1, Line 8
Operand data type time is invalid for sum operator.

That, of course, is because you're throwing SQL Server for a loop by not being specific about what you want aggregated. Do you want the total number of hours? Minutes? Seconds? Milliseconds? Nanoseconds?

Accordingly, if you want to aggregate time, you need to specify exactly what units you wish to aggregate. So, the following would work:
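For example (reusing the table variable from above and summing minutes -- a sketch, since the original screenshot isn't reproduced):

```sql
DECLARE @Times TABLE (TimeValue time NOT NULL);
INSERT INTO @Times (TimeValue)
VALUES ('00:12:00'), ('00:22:18'), ('00:03:49');

-- Aggregate an explicit DATEPART rather than the raw time value:
SELECT SUM(DATEPART(MINUTE, TimeValue)) AS TotalMinutes
FROM @Times;
-- Returns 37 (12 + 22 + 3) - note that the seconds are simply ignored here.
```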


But it, of course, only tracks minutes. So, if you're interested in other DATEPARTs, you'd have to account for those independently as well -- and the sticky part comes when you need to 'carry' values over from one time segment to its parent and so on (such as when 67 seconds needs to be converted to 1 minute and 7 seconds). As such, this is why you'll typically see these kinds of aggregations done from within applications or within UDFs…

Where's That Been All My Life?

So, I was poking around SQL Server 2008 Management Studio the other day to generate some scripts.

My eyes almost bulged out of my head.

Here's a screen-cap of the 'Choose Script Options' page that caught my attention. The same screen from SSMS 2005 is in the background, with the new option in SSMS 2008 in the foreground:


Best of all, works EXACTLY as you would suspect.

Finally, SQL Server gives us the ability to effectively 'dump' a database.

Expect some new videos (on this and other topics) in a week or so.

SQL Server 2008 Installation - it's the 'little' things...

There are lots of great things to be excited about when it comes to SQL Server 2008. (And shortly after completing my series on SQL Server Backups, I'll create a whole series detailing new features and benefits.)

But in terms of installation and configuration there are some really great, yet very minor, additions/changes that will largely go un-heralded in other blog posts and articles.

One of those changes is that the installer will remind you to punch appropriate holes in your firewall.


It's not like you won't remember to do that anyhow... but if you're like me (and do lots of SQL Server installations on virtual machines) you'll find that reminder up-front to be nice as you'll be able to go and toggle the firewall open while the installation is proceeding - instead of waiting until right-before you get ready to connect from an external machine.

The other, bigger, benefit? That horrible, horrible hack called the SQL Server Surface Area Configuration Manager is now GONE. (I always detested how managing security infrastructure and options for SQL Server required another, external -- and insanely slow/sluggish -- tool.) Now, once you're done with a SQL Server 2008 installation, you're greeted with a much nicer conclusion that doesn't require you to jump through a bunch of hoops:


New Video: How to Use SQLServerVideos.com

Check out this video for a light-hearted look at the best way to take advantage of all of the resources available on SQLServerVideos.com.


Video Length: 05:46
Video Summary: Free videos are nice, but did you know that SQLServerVideos.com (SSV) provides a bunch of other resources geared at helping you learn SQL Server? To find out how to best use the site and take advantage of these additional resources, check out this short and informative video.

Full Text Indexing Overhead

I think that there's a tendency among DBAs and SQL Server wonks to be a bit dismissive of Full Text Indexing. I think that in a lot of cases people tend to think of it in terms of adding significant overhead.

For a long time I actually looked at it as being a more powerful, but more EXPENSIVE, form of searching. Then I actually had the chance to work with it (in a past life) a good deal, and found that it didn't really add that much overhead in a lot of cases. In fact, I found that it was actually pretty cool technology and afforded a lot of functionality with no noticeable performance hits. Though I also found that the indexes crashed fairly frequently and needed an extra bit of 'love' to keep them functioning correctly. (This was all on SQL Server 2000.)

But in some cases it can yield insane performance benefits
Fast-forward to today. I'm currently working with a client that has a decent-sized database (about 20GB). More than 5GB of that DB is made up of a 'Books' table - which has about 8M rows. The table has both an author and a title column, and each of them is pretty wide (nvarchar(300) and above). This table provides the backing for key functionality, and users frequently search for authors or titles using LIKE and wildcards. We all know that's nasty - as it forces a scan.

These searches were actually causing a table-scan, which was throwing off excessive reads, and generating massive amounts of CPU utilization. To the point where you could literally SEE each of these queries show up in task manager - as the CPU spiked to handle the underlying scan. The image below helps highlight the severity of the problem - with each of the scanning queries represented by the spike you can see:


I threw an index against each of the columns, figuring that a search on something like '%JRR%Tolkien%' against such a big table would likely prefer an index scan over a table scan. Both of those indexes required 533MB of disk to create. And, sure enough, SQL Server used those indexes as soon as they appeared - but that only cut the reads to about 1/4th of what they used to be, and CPU utilization stayed about where it had been before, meaning that you could still SEE these queries being executed by their tell-tale CPU spikes.

Enter Full Text Indexing
I figured that since Full Text Indexing actually tokenizes index data, instead of merely storing it in a B-Tree, we'd end up with a much smaller index structure. I also hoped that if we could get a smaller index to work from, we'd see a commensurate decrease in the amount of reads/churn going on on the server.

Creating a Full Text Index on 8M+ rows took a fair while. But I was happy to see that it was only 133MB (compared to the 533MB of the 'traditional' indexes). That made me hopeful that we'd at least see around a 1/4th reduction in the amount of reads.

Turns out I got way more than I had anticipated. The actual execution plan for the query using LIKE and % was consistently coming in at a cost of around 27 or 28. It was also taking 5-7 seconds per query. (Yeah, that puts it almost into 'query from hell' territory - definitely not something you'd want people firing off in an ad-hoc fashion, which is exactly how this query NEEDS to be used.)

With the full text index, the actual execution plan drops down to .27 - making it fully 100 times faster than its traditional counterpart. Execution time dropped down to 0 seconds (in Management Studio) as well. In past cases where I had used Full Text Indexing, I was only dealing with a table with less than 300K rows. Suffice it to say I was pretty tickled at the insane performance boost provided on an 8M row table with semi-wide domains (author and title) being indexed.
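For reference, the two query shapes in question looked roughly like this (table and column names approximated):

```sql
-- The original wildcard search: LIKE with a leading % forces a scan.
SELECT Author, Title
FROM dbo.Books
WHERE Author LIKE '%JRR%Tolkien%';

-- The full-text flavor (assumes a full-text index on the Author column):
SELECT Author, Title
FROM dbo.Books
WHERE CONTAINS(Author, '"JRR" AND "Tolkien"');
```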

So, keep those insane performance benefits in mind the next time you're tempted to be a bit dismissive of the 'overhead' that Full Text Indexing might add to your environment.