Blogging Links

Just a quick heads-up that in addition to occasionally blogging for Idera’s SQL Server Performance Blog (where I recently touched upon Quickly Validating SQL Server Configuration Settings and on Ensuring Proper SQL Server Connection Pooling), I’ve also started blogging with SQL Server Magazine.

Practical SQL Server Blog

My goal with my Practical SQL Server Blog is to focus on hands-on/practical topics that are geared mostly towards reluctant and junior- to mid-level DBAs and developers who want to know more about how to effectively leverage SQL Server.

I’ve already provided a number of posts there, including a post on the importance of SQL Server Recovery Time Objectives and Recovery Point Objectives and an introduction to SQLContinuitySIM – a free tool that I’ve made available on my SQL Server Consulting site to allow DBAs to easily test RPOs and RTOs when testing/evaluating their disaster recovery plans.

(And, no, I haven’t forgotten about SQL Server Videos. I’m currently working on a bit of a revamp of the site to make navigation a bit cleaner and easier, and I’ve already slated a number of new videos that I’ll be working on this year as well.)

Essential Skills for SQL Server DBAs: Session 3 Resources

For those of you that attended my 3 part eLearning Series on “Essential Skills for SQL Server DBAs” with WindowsIT/SQLMag, here’s a list of additional resources and links for things that I mentioned, alluded to, or wanted to provide additional information on from within session 3.


A key component of some of the things we talked about in this session in terms of performance (as well as in session 1 where we looked at a rough overview of how SQL Server works) was the way in which SQL Server has its own internal scheduler and resource management capabilities – commonly referred to as the SQLOS.

And, if this is something that you want to learn more about, then I recommend taking a peek at Brent Ozar’s free SQL MCM video that goes into the SQLOS in more depth.

Likewise, in terms of being able to interact with the SQLOS via Dynamic Management Objects, I also wanted to provide a link to Books Online which provides documentation of Execution Related Dynamic Management Views and Functions.
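To make that a bit more concrete, here’s a minimal sketch of the kind of query you can run against those execution-related DMVs to peek at the SQLOS schedulers (requires the VIEW SERVER STATE permission):

```sql
-- A quick look at the SQLOS schedulers via the execution-related DMVs.
SELECT
    scheduler_id,
    cpu_id,
    status,
    current_tasks_count,
    runnable_tasks_count,   -- tasks waiting on CPU (a rough CPU-pressure signal)
    current_workers_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;   -- exclude hidden/internal schedulers
```

A sustained, non-trivial runnable_tasks_count on the visible schedulers is one of the classic hints that you’re CPU-bound rather than I/O-bound.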

Statistics and Indexes

Otherwise, another key thing we addressed in this session was Indexes and Statistics – as these are both critical when addressing SQL Server performance. To that end I wanted to provide a link to a great overview of how SQL Server 2005 uses statistics (even though this information is a BIT dated, it’s still very valuable because it’s a good overview of how statistics work – and how you can interact with them).
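And, if you want to poke at statistics on one of your own tables, here’s a quick sketch – note that the table and statistic names below are placeholders, not anything from the session:

```sql
-- List a table's statistics and when each was last updated.
-- Replace dbo.Orders with one of your own tables.
SELECT
    s.name                              AS stats_name,
    s.auto_created,
    STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- Show the histogram and density details for one specific statistic
-- (the statistic name here is hypothetical):
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');
```

Stale last_updated dates on heavily modified tables are usually your first clue that the optimizer is working from bad estimates.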

I also wanted to provide a link to a great series of articles on indexes – that will walk you through the basics and on up into more advanced, how-to topics for dealing with, interacting with, and managing indexes: SQLServerCentral’s Stairway to SQL Server Indexes. (And make sure to check out the other Stairways as well – these are FANTASTIC resources.)

Then, along the lines of indexes, I also wanted to provide a link to Books Online: Reorganizing and Rebuilding Indexes – as this provides great info about fragmentation and index management/maintenance.
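As a rough sketch of what that looks like in practice – the thresholds below are just common rules of thumb (roughly: REORGANIZE between ~5–30% fragmentation, REBUILD above that), and the index/table names are placeholders, so test against your own workload:

```sql
-- Check fragmentation for indexes in the current database.
SELECT
    OBJECT_NAME(ips.object_id)  AS table_name,
    i.name                      AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.page_count > 100;  -- ignore tiny indexes where fragmentation is noise

-- Then, for a given index (names here are hypothetical):
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
-- or, for heavier fragmentation:
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
```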

Similarly, I also wanted to provide a link to Michelle Ufford’s spectacular Index Defrag scripts (which, you’ll remember, I compared to ‘bacon-soaked marshmallows’ – by means of a metaphor comparing index management to fishing/lures).

Statistics Maintenance
In my presentation, I mentioned using sp_updatestats and covered that it does a decent job of updating most (not all) stats with very little effort. (Just be careful if it’s never been run, as it will take a while.) I also included a sample script (using the undocumented system procedure sp_MSforeachdb) that you can use to update stats across ALL of your databases on a single server. Here’s that script again:

EXEC sp_MSforeachdb @command1 = 'EXEC ?..sp_updatestats'


Additional Performance Resources

One thing I didn't cover in this eLearning event was the role of vendors when it comes to SQL Server. And I skipped over that simply due to time constraints. That said, there are GOBS of great SQL Server tools and services available - from major SQL Server Tools vendors. And, as I opined a long time ago, one of the great things about these vendors is that they're all very keen to be integral parts of the SQL Server community - meaning that they all do a great job of trying to help educate SQL Server users and DBAs on various, core, concerns - instead of focusing solely on just selling products.

So, to that end, I wanted to do something a bit different and provide links to three additional resources that you can use to learn more about SQL Server Performance tuning - from three different vendors of 3rd party SQL Server tools/solutions.

Quest: SQL Server Perfmon Counters Poster
This poster is a great resource - one of the best aggregations of information, documentation, and guidance on how to interpret Windows and SQL Server Performance Counters.

Redgate: Mastering SQL Server Profiler (free eBook)
This free eBook is also a great resource - that provides outstanding insights and details on how to use SQL Server Profiler. (Note the link on this page to the free download, or you can purchase a 'hard' version of this book if you're interested.)

Idera: Essential Performance Tools for SQL Server DBAs
Hmmm. The title of this free whitepaper sounds a bit like the title for this session. And that's probably because both resources had the same author. So, to that end, this free whitepaper does rehash a number of the same things that we covered in this session. It does, however, provide a better overview of some of the tools and resources we looked at today as well.

What’s Next?

Finally, I wanted to close out this event (or my notes for it) by pointing to a link for a great blog post that I bumped into the other day:

The Power of T-SQL’s APPLY Operator

The content in this blog post isn’t what I’d consider on par with the skill level of most beginning or newer DBAs. BUT, this post is a fantastic example of just how fun and fulfilling it can be as a DBA to dig into SQL Server performance. That, and this blog post is also a great example of a seasoned veteran of SQL Server showcasing (indirectly) just how fun and exciting it can be to continue learning, discovering, and finding new challenges and insights when working with SQL Server.

Translation: This post is a fantastic example of how being a DBA can continue to be a fulfilling career choice over the years, because in this case, there’s no doubt that Rob Farley (who is a genuine bad-ass) is thoroughly in love with what he’s doing. And I can say the exact same thing about my career choice in terms of SQL Server – it’s been a blast, it’s still a blast, and I anticipate more challenges and opportunities in the future.

Essential Skills for SQL Server DBAs: Session 2 Resources

For those of you that attended my 3 part eLearning Series on “Essential Skills for SQL Server DBAs” with WindowsIT/SQLMag, here’s a list of additional resources and links for things that I mentioned, alluded to, or wanted to provide additional information on from within session 2.

Sizing Databases

During the first part of Session 2 we focused on Server Management (checking logs/etc.) and on database management – where we focused on the perils of shrinking databases, and of letting them grow fragmented via the lame, default, sizing options found within SQL Server.

To that end, I wanted to share a link to Books Online that covers more details on: Managing Databases.

For more information on database sizing ideas and guidance, you may also want to check out this SQL Server Magazine Article: A Practical Approach to Managing Database Size.

Likewise, I also mentioned that the proper care and feeding of log files can play a role in performance concerns as well. And, to that end, I wrote about that a while back in my article: Maximize Storage Performance.

Additional Resources

During this session I also mentioned some specific resources that I would make available.

Specifically, I called out that I would provide a link to the 2.5 hours of free SQL Server Videos covering backups.

Likewise, I also mentioned that I’d provide a link to an overview of the benefits of RPOs and RTOs – as they apply to SQL Server.

And, I also mentioned that I’d provide a link to my upcoming, free, webcast: SQL Server Disaster Recovery and Prevention (where I’ll be covering problems and concerns around database corruption).

Essential Skills for SQL Server DBAs: Session 1 Resources

For those of you that attended my 3 part eLearning Series on “Essential Skills for SQL Server DBAs” with WindowsIT/SQLMag, here’s a list of additional resources and links for things that I mentioned, alluded to, or wanted to provide additional information on from within session 1.

It’s Good to Be the DBA

During this session, I mentioned some of the benefits of being the DBA, and quoted job and salary surveys from CNN/Money and others – showing that being a DBA can be a rewarding career with a great outlook (it looks like DBAs are going to be around for a while), and salaries aren’t bad either.

I also called out that there were different KINDS of DBAs. Gobs of them. And quoted a list defined by Craig S. Mullins and Brad McGehee. (Brad’s list of different types of DBAs can be found in his applicable chapter in SQL Server MVP Deep Dives – a great book.)

Technical Details / Basics / Fundamentals

During the second part of this session, we took a look at core, or basic, SQL Server concepts that all DBAs should know about. And, after providing a VERY high-level overview of how SQL Server works as a proxy to manage access to your databases/files, we then touched upon how SQL Server uses, well, SQL Server databases, to meet its own needs. And, in that regard, we touched upon SQL Server’s system databases. So, to that end, I wanted to make sure that I provided a bit more depth or detail on some of those databases, and therefore have a couple of links to Books Online:

SQL Server’s System Databases

The Resource Database (I could tell you more, but I’d have to kill you… )
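And, if you just want to eyeball the (visible) system databases on an instance, a quick query along these lines will do it:

```sql
-- The visible system databases. The Resource database is hidden and
-- won't show up here at all -- it only exists behind the scenes.
SELECT name, database_id, recovery_model_desc
FROM sys.databases
WHERE database_id <= 4;   -- master, tempdb, model, msdb
```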

Likewise, the following two links can be great resources to check out if my overview of how SQL Server works was new or something you’re not terribly familiar with:

The SQL Server Database Engine

Managing SQL Server

And, finally, I also wanted to drop off a link to the basic documentation for Security within SQL Server’s Books Online as well:

SQL Server Security

And, as always, feel free to ping me if you have any questions that weren’t answered during the Q&A portion of this session.

SQL Server Remote Availability Notes

In my Session on SQL Server Remote Availability (you can still sign up and watch the video or presentation up to a few months after the presentation was given), I mentioned that I’d provide some additional links and resources in the form of a blog post.

Here those resources and links are.

Remote Availability Basics

In my session on Remote Availability (RA), I covered how it was similar to High Availability – but much more akin to Disaster Recovery in terms of scope or purpose (even though RA typically uses or leverages many of the same tools and techniques as HA solutions).

To that end, I wanted to link to two great articles on Remote Availability – as typically discussed or implied within IT in general. Both come from the same site, TechRepublic, and are by the same author (Mike Talon):

Learn the Pros and Cons of Remote-Availability Systems
Don’t Rely Solely on Remote Availability

The Need for Remote Availability

In case anyone was wondering: Remote Availability IS something that every organization needs – to one degree or another. And that’s simply because it’s impossible to depend upon data in any regard and not have some sort of contingency for how to preserve or recover that data in the case of an outage – even in cases when the outage may be more than just the loss of a component or system – such as when the outage is at the data-center level.

Michael Otey (blog) recently blogged about the Price of High Availability – and while he was specifically talking about the cost benefits of implementing expensive redundancy and failover processes, the same thing can be said of the cost required to establish some sort of remote availability. Meaning that the price of setting up Remote Availability Solutions may look expensive at first blush – but these costs are nothing in comparison with the COST you’d incur without some sort of viable contingency plan.

And, to that end, one thing that I mentioned in my presentation is that you can get a quick sense for how much it would cost you to lose your data center by talking with the Financial folks in your organization to get a sense for what monthly revenues are. Then just break those values down by day, hour, and minute as needed to get a VERY ROUGH sense of how much an outage would cost you per hour and so on. And with that overly-simplified approach to estimating the cost of potential down-time, you can then get a feel for what kinds of budgets you may need to start looking at in order to defray the costs of actually LOSING operational capacity for any significant stretch of time.
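By way of a purely illustrative sketch of that arithmetic (the revenue figure below is made up – plug in your own numbers):

```sql
-- Back-of-the-napkin outage costing, per the approach described above.
DECLARE @MonthlyRevenue money;
SET @MonthlyRevenue = 3000000;  -- made-up figure: $3M/month in revenue

SELECT
    @MonthlyRevenue / 30           AS rough_cost_per_day,
    @MonthlyRevenue / 30 / 24      AS rough_cost_per_hour,
    @MonthlyRevenue / 30 / 24 / 60 AS rough_cost_per_minute;
```

With $3M/month, that works out to roughly $100K/day, ~$4,167/hour, ~$69/minute – which, even as a VERY rough figure, tends to make budget conversations with management much more concrete.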

And, once you have a sense for the potential business costs for outages and what kinds of potential budget you might have, then you’re ready to start dealing with RTOs and SLAs.

Recovery Time Objectives (RTOs) and Service Level Agreements (SLAs)

In my estimation, tons of Small-to-Medium Businesses (SMBs) still run IT departments (or at least SQL Server deployments) without any form of SLAs in place. In my presentation on Remote Availability I covered how one of the BIG ways in which RA differs from HA is that RA must include much better interaction with management if RA solutions are going to be successful. That’s NOT to imply that HA or DR solutions don’t need communication with management. But, in my experience, even in organizations where there are no SLAs or RTOs, HA and DR are typically addressed (dutifully) by the IT team and ‘implicitly’ understood to be in place, to some degree, by management. Again, I don’t think that’s a recipe for success. I’m just saying that when it comes to addressing off-site contingencies for disaster recovery (i.e., Remote Availability solutions), you simply can NOT succeed in any way, shape, or form unless management is a willing partner right out of the gate.

Paul Randal (blog | twitter) has done a great job of highlighting the benefit of SLAs – and his expression of how the existence of SLAs show that IT and management are working in unison is a perfect example of what I’m trying to communicate in this regard – especially when it comes to RA solutions.

Likewise, when it comes to RTOs, the big thing to remember is that the key term there is ‘Objective’. Meaning that RTOs are based upon stated objectives – and therefore serve as the basis of helping define SLAs (by serving as core components in establishing MTTRs). Consequently, if you’d like a decent (high-level) overview of what RTOs mean (decoupled from all the business-speak), then I’d recommend the following article by Pierre Dorion as a decent place to start:

How to Establish a Recovery Time Objective (RTO)

SQL Server and Remote Availability

I only spent about 10 minutes or so of my presentation specifically covering how to address RA solutions via SQL Server. (The first 10 minutes or so were on the ‘basics’ and importance of RA, the ‘middle’ 10 minutes were on options and techniques to use with SQL Server to achieve RA solutions, and the ‘last’ 10 minutes or so were on best practices, pitfalls, and things to consider (based on practical experience) when it comes to setting up RA solutions.)

To that end though there were a couple of things I wanted to provide links for.

For starters, there’s the licensing question. If there’s one thing that I typically do NOT touch during presentations it’s licensing – because that can get ugly in a hurry. However, given the special/awesome nature of how SQL Server supports warm failover servers via licensing, I wanted to make sure that was addressed. To that end I’ve also linked to a Microsoft FAQ that covers this in a bit more detail. And actually, since the FAQ doesn’t provide good anchors, I’m just copying/pasting the text from that FAQ into my blog. (So, if you’re reading this any later than, oh, say, 3 days after this post was published, you’ll want to verify that this policy hasn’t changed.)

At any rate, here’s the relevant snippet:

Q. How does licensing work for computers that run SQL Server 2005 in failover scenarios?

A. Failover support, where servers are clustered together and set to pick up processing duties if one computer should fail, is now available in Standard and Enterprise editions of SQL Server 2005. Under each of these editions, keeping a passive server for failover purposes does not require a license as long as the passive server has the same or fewer processors than the active server (under the per processor scenario). For details on which failover methods are available under each edition, visit the SQL Server 2005 Features Comparison page.

More info on licensing can be found here as well.

Otherwise, in my presentation on RA I recommended AGAINST using Replication and Mirroring as options for Remote Availability – given their difficulty and some of the overhead involved with managing these solutions. That said, there are ways in which you might want to address how Replication and Log shipping interact if you’re trying to achieve RA on a Replicated solution by adding Log Shipping into the mix. To that end:

Books Online – Interoperability and Coexistence (of Replication and Log Shipping/Mirroring)

Similarly, even though I don’t think that Mirroring is a good idea (given complexity), if there’s one thing that’s true about SQL Server, it’s that you can find a way to do just about anything. To that end, here’s an article on ‘distance’ mirroring:

Implementing Remote Mirroring and Stretch Clustering

And, with stretch clustering in mind, here are some additional links on that topic as well.

Multisite Clustering
Multi-Site Failover Clusters
Additional Resources here and here (note the links to whitepapers/etc at bottom of post)

Otherwise, Log Shipping is actually pretty tame/easy to set up. (To the point where I really need to do a video series on it – to cover the ins/outs, pros/cons, and the step-by-step how-to details of how to set it up and configure/monitor/etc it.)

Confusing Increased Availability with Full Coverage from Disasters

Sadly, I just didn’t have enough time in my presentation to call out something that I like to address whenever talking about High Availability – which is that it’s essential that DBAs and IT Pros don’t confuse High Availability Solutions with Disaster Recovery. Because RA is really just a form of ‘stretch’ or ‘remote’ Disaster Recovery, this concept doesn’t need as much coverage as it normally would when discussing High Availability. Nonetheless, if you’ve implemented a Remote Availability Solution, don’t make the mistake of thinking that it means you’re covered. Data corruption and loss can come from different sources. RA solutions are essentially geared towards dealing with SYSTEM failure or outages – especially at the site level. But even the best RA solution won’t do you any good if the corruption of your data was caused by a software glitch within one of your applications or if all of your Order Details records were deleted by an end-user ‘by accident’ (or even maliciously).

Consequently, defense in depth is the approach you need to take if your data is important. And I’ve outlined some additional thoughts on ways to protect your data in my article for SQL Server Magazine: Confusing High Availability with Disaster Preparedness.


UPDATE: Whoops. I meant to include attribution for some of the photos used in my slides. Rather than cramming in attributions next to each photo in the slide (where no one could really read them), I wanted to list them in my follow-up post. Only, I totally forgot to do that yesterday when I pushed the post. So, without further ado (and to make sure I’m not a full-on cretin), here they are (and thanks to the great photogs who were willing to share these photos for use by hosers like me):

Slide / Photo Attributions

Slide 3 – Session Overview
Slide 6 – Do You Really Need RA?
Slide 6 – Do You Really Need RA?
Slide 6 – Do You Really Need RA?
Slide 6 – Do You Really Need RA?
Slide 8 – Primary RA Considerations (Juggling Swords and Chainsaws)
Slide 9 – Constraints: Infrastructure (Bing’s Data center of all places)
Slide 10 – Bandwidth Constraints
Slide 18 – Practical Advice (I’ve actually been in/around Aix-en-Provence)

SQL Server Remote Availability

Ever wonder what your organization would do if you lost an entire data center? Do you already have plans in place to migrate operations to another location?

Later this month I’ll be presenting on SQL Server Remote Availability – how it differs from High Availability, and how to determine whether it’s something your organization needs or not.

Content from this session will primarily be based upon real-world problems and solutions pulled from experiences with my SQL Server consulting clients.

Here’s a copy of the Event’s Abstract:

SQL Server Remote Availability: Problems and Solutions that Work
Are you confident that your data is as protected as it needs to be in order to ensure business continuity even in the face of a disaster? Many organizations today are focused on ensuring that mission-critical data meets stringent service level agreements to ensure high-availability in the case of system or hardware failures. But have you done enough to ensure business vitality in the case of a system-wide failure or the loss of an entire data center?

Join independent SQL Server expert Michael K. Campbell for an informative review and overview of how Remote Availability (RA) differs from Disaster Recovery (DR) and High Availability (HA) solutions. By attending this seminar you'll learn about evaluating the degree of protection that your data needs, and gain valuable insights into how to set up, manage, and test your own RA solutions. You'll also learn about common pitfalls, considerations, and overall best-practices for helping ensure business continuity in the face of potential disasters.

The event is organized by Windows IT Pro / SQL Server Magazine, and is sponsored by Vision Solutions.

It’s FREE and takes place on February 23rd.

Signup / Details

Hope to see you there.

Recent Links and Resources

Despite a lull in activity on this blog during the past few months, I was actually fairly busy. (And I’m also formulating some actual plans to create new videos here in the next few months.)

SQL Server Performance Resources

During December and January I provided three ‘guest’ posts for the SQL Server Performance Blog managed by idera. The posts were:

Using Multiple Files
The use of multiple database files is one thing that constantly causes confusion among DBAs – largely because of a couple of bits of bad advice and some myths that have made their way into the collective SQL Server consciousness. That said, leveraging multiple data files can be a great way to pick up performance improvements – and this post outlines some of those use cases along with potential pitfalls. [link]

Don’t Forget Application Caching
Caching continues to be one of the best ways to improve performance. And while application caching is typically outside the realm of what many SQL Server Performance tips focus on, failure to address caching opportunities at the application level is a great way to miss out on huge potential performance improvements. [link]

Removing Unused Indexes
How many indexes are too many? When should you remove an index? Those are tough questions that require a lot of different considerations. I’ve outlined many of them in this post – but not all (check out, for example, the 2nd comment on this post that links to an article by Rob Farley touching on how DMVs might provide erroneous information). [link]
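For reference, a starting-point query along these lines (not from the post itself – just a sketch against the usage-stats DMV) can help surface candidates:

```sql
-- Non-clustered indexes with zero reads since the last restart.
-- CAUTION: usage stats reset on restart -- don't drop anything based on
-- a server that's only been up for a day, and remember that some indexes
-- exist to enforce constraints, not to serve reads.
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name                   AS index_name,
    ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) AS total_reads,
    ISNULL(us.user_updates, 0)                                 AS total_writes
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id   = i.object_id
   AND us.index_id    = i.index_id
   AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0;
```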

White Paper: Essential Performance Tools for SQL Server DBAs
During December I also authored a white paper for idera (that was published in January) – designed to help new/reluctant DBAs get started with SQL Server Performance tuning by exposing them to some of the key tools available for performance monitoring and tuning used by the pros.

This paper can be found on idera’s Resources page (in the Whitepapers section, entitled: Essential Performance Tools for SQL Server DBAs).

Expert SQL Server Recommendations for the New Year

The new year is already well under way, but if you missed Megan’s blog post on recommendations from SQL Server Experts, you’ll want to check out some of the advice offered by the various SQL Server editors and authors listed in her post. You’ll find great advice on database administration, BI/DW tips, insights on what to do about the cloud, and lots of similar advice on the importance of making sure that your data is accurately backed up.

And, if you’re not sure about your backups, then you really need to GET sure about them. And if you need a place to start, I would (obviously) recommend a series of great, free, videos that can help you get started with the core concepts, considerations, best practices, and even step-by-step details of how to get going with your own SQL Server Backups.

Webcast: SQL Server Myths Debunked... Or Are They?

This post is a follow-up for my webcast today – focusing on myths, debunks, and the need (in some cases) to ‘re-bunk’ or re-examine aspects of common myths and their rebunks.

If you missed the presentation, you can register here and watch it for free.

A Note About the SQL Server Community

One unfortunate side effect of this webcast/presentation stems from its title: I initially was going to call it “Re-Bunking SQL Server Myths” – only, using a made-up word like ‘re-bunk’ in the title of a webcast can prevent you from reaching potential segments of your audience. So, as I mention in this webcast, I decided to change the title to “SQL Server Myths Debunked… Or Are They?” – which seemed to convey the same general idea.

Only, that led to some confusion where it looked like the purpose of this session was to attack Paul Randal (blog | twitter) which just wasn’t the case. Happily, he knew that a mob with pitchforks was bearing down on me, and was nice enough to warn me in advance.

I explained this a bit more fully in the webcast itself by way of a long-winded introduction, but thought that now might also be a good time to trot out an article that I wrote long ago for SQL Server Magazine, about how great the SQL Server Community is. (And I’m trotting this article out because it summarizes my thoughts about how open the community is – and showcases how I just couldn’t be comfortable attacking someone else to make myself look good – it’s just contrary to how I operate.)

Paul Randal’s Debunking Series

Otherwise, if you watch or watched this webcast, you’ll see that I held up Paul’s debunking series as kind of the ‘hallmark’ for how I think debunking should transpire. As I mentioned in the webcast, what I like about Paul’s approach (above and beyond just the huge number of myths that he’s tackled) is that he not only lays down the facts as part of his debunk, but he also does a great job of calling out the caveats or things you should also watch out for when stumbling into ‘myth land’.

- Myths and misconceptions: 60 page PDF. (Highly rated. Download the PDF and put it on your smart phone/etc.)
- Misconceptions about … Everything (Same as previous link, but you can read each post online (and be sure to check out the comments)).

Shrinking Files

A big part of this presentation was on cases when regularly shrinking files could make sense. If that sounds craaaaazy (and it should), then go check out the webcast.

The key point of this part of the webcast though was: when you understand the costs, liabilities, and potential pitfalls of various SQL Server operations and techniques (such as shrinking files), then you’re actually in a better position to be able to evaluate the effectiveness of those operations as an OPTION in cases where you might not normally consider them.

Or, more specifically, because SQL Server is so flexible, you can actually run into plenty of cases where what is typically a WORST PRACTICE can become a tolerable or acceptable or even a BEST practice under certain scenarios – because knowing what, when, why, and how to use various features and capabilities is all about understanding the trade-offs.

Consequently, there ARE a couple of cases where regularly shrinking your data files can not only make sense – but become a best practice.
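And, if you do end up in one of those scenarios, a targeted shrink looks roughly like this (the database and logical file names are placeholders):

```sql
-- A targeted, deliberate shrink of one specific file to a stated size --
-- NOT AUTO_SHRINK, and not a blanket DBCC SHRINKDATABASE.
USE YourDatabase;   -- hypothetical database name
DBCC SHRINKFILE (YourDataFile, 2048);   -- target size in MB

-- If you shrink, plan on addressing the affected indexes afterwards --
-- shrink operations shuffle pages and badly fragment indexes.
ALTER INDEX ALL ON dbo.YourBigTable REBUILD;   -- placeholder table name
```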

Table Variables

Another topic I covered was how there’s a myth out there that table variables are MAGICAL in that they only live in memory. That myth has been debunked a number of times – as it should have been.

Only, my ‘rebunk’ in this part of the session is that many debunks of this myth can almost leave people with the idea that there are really no benefits that come with using Table Variables over temp tables.

And that’s just not the case. In fact, as I said in the webcast, “Table Variables are ALWAYS a better option to use – unless they’re not”. Which means that Table Variables have less locking and transactional overhead, they also lead to fewer recompiles within sprocs, and they give the added benefit of making it easier to view Estimated/Actual Execution plans (because you’ll get an error when a sproc/etc has a temp table defined within it – but you won’t when using Table Variables).
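For anyone who hasn’t worked with both, here’s what the two constructs look like side by side (the table and column names are illustrative):

```sql
-- Temp table: gets statistics, can be indexed after creation, can trigger
-- recompiles in sprocs, and participates fully in transactions.
CREATE TABLE #RecentOrders (
    OrderID   int PRIMARY KEY,
    OrderDate datetime
);
DROP TABLE #RecentOrders;

-- Table variable: no statistics (the optimizer assumes very few rows),
-- minimal locking/logging, and no recompiles -- which is why it tends to
-- shine with small row counts.
DECLARE @RecentOrders TABLE (
    OrderID   int PRIMARY KEY,
    OrderDate datetime
);
```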

Multiple Files

Sadly, I blew right past this slide in my presentation due to time constraints/considerations. But, that caused a number of people to ask about this one in the Q&A session.

So, here’s the short story: Sadly, there’s a myth that has cropped up about the use of multiple files as they apply to USER databases. Somehow, this myth appears to be related to the PSS recommendation for SQL Server 2000 that tempdb be configured/provisioned with a file per processor (or core) to help mitigate issues with allocation contention. (Paul Randal does a great job of debunking the core of this myth here.)

Only, the issue I wanted to address is how I’ve seen this myth ‘morph’ into cases where I’ve actually seen people try to do the SAME thing (i.e. one file per processor/core) for user databases. Likewise, I’ve also seen ‘anti-myths’ from the debunk of this myth where I’ve had people argue with me about using MULTIPLE files for performance reasons. (Or, in other words: I’ve had people tell me that using multiple files doesn’t provide any performance benefits UNLESS we’re talking about tempdb – and that’s just not true.)

So, the rebunk I wanted to focus on in this regard was the fact that using multiple files (or multiple FILEGROUPS) CAN be a way to boost performance – in cases where the files are on DIFFERENT physical spindles (i.e., disks, volumes, LUNs) and where the operation in question can already be parallelized.

In fact, SQL Server Books Online calls this out as a way to boost performance. And the idea is that if you create a new FILEGROUP for, say, non-clustered indexes, and put them on a different set of spindles than your clustered indexes (i.e., than your tables), then you can get perf boosts in the form of quicker lookups – in SOME cases. Or, another common adoption of this technique is to put heavily used tables (that are commonly used in heavy or repetitive JOINs) on a different FILEGROUP from your core tables. This, in turn, can allow for better parallelization.
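As a sketch of what that looks like (the database, path, size, and index names here are all placeholders):

```sql
-- Create a filegroup on a different set of spindles and move
-- non-clustered indexes onto it.
ALTER DATABASE SalesDB ADD FILEGROUP [Indexes];

ALTER DATABASE SalesDB ADD FILE (
    NAME = 'SalesDB_Indexes',
    FILENAME = 'E:\SQLData\SalesDB_Indexes.ndf',  -- a DIFFERENT physical volume
    SIZE = 512MB,
    FILEGROWTH = 256MB
) TO FILEGROUP [Indexes];

-- This non-clustered index now lands on the new filegroup:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON [Indexes];
```

Remember: the win here only materializes when the filegroups genuinely sit on different physical spindles – splitting files on the same volume just adds management overhead.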

And, now that I think about it… I’ve actually blogged about this – and expect it to show up on Idera’s SQL Server Performance Blog in the next few days (or week) – so keep an eye out for more details there if you’re interested (or just shoot me an email).

The point, though, is that a casualty of the ‘tempdb’ myth is that people occasionally get confused about the SCOPE of that single best practice, and lose sight of SQL Server internals – and of how SQL Server really performs SMP/parallel operations – which is what I wanted to ‘re-bunk’.

AWE / Locking Pages in Memory

I probably should have just drawn up a matrix for this one.

AWE is good. Period. It allows SQL Server to access ‘locked’ memory in fast/efficient ways.

On 32-bit systems, AWE is required to be able to address more than 2GB of physical RAM. (There are some other techniques that I see people use – such as the PAE and /3GB switches in the Boot.ini file – but I prefer to just use AWE as it’s cleaner/better/faster/easier.) For more details on AWE and 32-bit systems, see my article: Enabling SQL Server AWE Memory
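For reference, turning AWE on is just an sp_configure change (followed by a service restart – and note that the service account also needs the ‘Lock pages in memory’ Windows privilege):

```sql
-- 'awe enabled' is an advanced option, so expose those first:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Enable AWE (takes effect after the next service restart):
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```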

That said, there are tons of myths out there about how AWE works, which platforms it’s required on, and so forth – to the point that you’ll commonly see (in forums/etc.) people say that AWE _MUST_ be configured on 64-bit systems as well. Happily, that’s just not true. Instead, 64-bit systems will let SQL Server access as much RAM as either the version/edition of SQL Server can handle or as much as your operating system (or server) will allow. And it does all of this without the need for any additional configuration.

But remember how AWE ‘locks pages in memory’? Well, that can provide an extra performance boost – even on 64-bit systems. And there are really two reasons why. First, locked memory is less-volatile memory – the OS doesn’t have to ‘hover’ over this memory and keep tabs on who or what is using it – which reduces management overhead. Second, locked memory is accessed via the AWE APIs – which were heavily optimized – meaning that you’ll effectively pick up reduced allocation and access overhead as well.

So, if you positively HAVE to squeeze every ounce of performance out of SQL Server, then locking pages in memory on 64-bit systems can make a lot of sense as a way to pick up an additional boost. (That said, I’m still going to recommend that you TEST performance before and after – to make sure you’re not bumping into an edge case where non-AWE might be faster ... because that can happen.)
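And if you want to sanity-check whether locked pages are actually in use on a 64-bit box, take a peek at sys.dm_os_process_memory (this DMV ships with SQL Server 2008 and later):

```sql
-- A non-zero value here means the buffer pool is using
-- locked-page (AWE-style) memory allocations:
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```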

DBCC TRACEON – Follow-up

In my recent post on using DBCC TRACEON() in conjunction with setting startup parameters, I pointed out that you can frequently avoid the need to restart a SQL Server immediately by specifying the startup flag/parameter necessary and then just executing DBCC TRACEON(####, -1) with the same parameter.

In most cases I’ve found that that works very well.

When DBCC TRACEON() Doesn’t Work

In some cases though, you’ll see that when you take such an approach, the trace flags that you set will occasionally not ‘take’ – or won’t ‘stick’ after you close your connection/etc. (The same goes for DBCC TRACEOFF() – where you try to remove a trace flag and it doesn’t ‘stick’.)

First and foremost, if you see that your changes aren’t sticking, then make sure you executed DBCC TRACEON/TRACEOFF with -1 as the second argument – which gives the statement global scope. (I didn’t do as good of a job as I should have calling that out in my previous post.) And if you’re sure you did that, then you may have run into one of those use-cases where DBCC TRACEON/TRACEOFF isn’t working as you would expect.
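A quick way to double-check whether a flag actually ‘took’ globally is DBCC TRACESTATUS (3226 below is just an example flag):

```sql
-- Turn the flag on globally (note the -1):
DBCC TRACEON(3226, -1);

-- Verify: the Global column should report 1 for the flag:
DBCC TRACESTATUS(3226);

-- Or list every trace flag currently enabled globally:
DBCC TRACESTATUS(-1);
```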

Sadly, this behavior is sort of by design. And, honestly, given how complex the code that puts these trace flags into play must be, it’s not too terribly surprising that trace flags can be a bit difficult to make ‘stick’ on a busy production system (which is mostly where I’ve noticed problems with trace flags not sticking past the end of a connection/etc.).

More importantly though, a key thing to note from Books Online when it comes to DBCC TRACEON() is in the Remarks section, where BOL states that “to avoid unpredictable behavior”, you need to enable trace flags in one of two ways – either by:

a) Using startup parameters with the -T flag as I outlined in my previous post, or

b) Running DBCC TRACEON(####,-1) when nothing else is running queries.

It’s that second part that’s tough – as you’re rarely going to find such a window on a busy production server.

In my experience, I can usually get a trace flag to ‘stick’ on busy servers. But not always.

So, the take-away is that if you’re going to use the approach that I outlined in my previous post, you’ll want to make sure you keep an eye on things. If the trace flag you’re putting into place is mission critical, or could have serious impact were it to ‘Captain Kirk’ in and out of existence… then you’ll want to watch out for “unpredictable behavior” and only put those trace flags into play with a service restart.

The Dark Art of Trace Flags

Truth is, Trace Flags are NOT really a dark art. But one thing I’ve noticed about the web’s ‘coverage’ of Trace Flags is that there are very few demonstrations of how to set Trace Flags that explain the process entirely, or in ‘idiot-proof’ steps.

Trace Flags in Review

If you’re not familiar with SQL Server Trace Flags, the simplest way to define them is to say that they’re optional arguments, switches, or parameters that you can apply to SQL Server to change the way the engine, certain features, or certain operations behave.

Official documentation for SQL Server Trace Flags can be found in Books Online.

What that documentation fails to cover, however, really comes down to two things:

1) The use of Trace Flags is only something you should undertake with great care.  You can easily destroy a production system by playing around with them. In other words, they’re a sharp stick – and sharp sticks are great for some things. They can also poke your eye out if you’re not careful.

2) It doesn’t really describe HOW you go about adding a Trace Flag during startup – though it mentions that as an option.

Trace Flag Scope

By default, most trace flags that you set using DBCC TRACEON() will stay in effect, or within scope, until the server reboots or you run DBCC TRACEOFF().

Therefore, if you’d always like a Trace Flag to be on, you’ll need to set it up as a startup parameter – meaning that when the SQL Server Service starts up, it will then apply the flag in question as part of the current operating or working environment. (Of course, you can still turn off most Trace Flags that were initialized at startup by just using DBCC TRACEOFF() if you need to temporarily turn them off, or if you’d like to turn them off and don’t want to restart SQL Server. More on that in a second.)

Setting Trace Flags as a Startup Option

To be fair, Books Online does detail how to set trace flags. (And maybe that’s why I’ve not found this covered too much on other sites/blogs when I’ve looked for instructions to send clients on how to make these changes.)

Specifics for changing/setting startup options are found here.

And there’s even a link to more specific instructions on configuring startup options as well.

Ironically, I didn’t know of either of these two links until I started writing this post. Which makes me a bit lame.

So, I’ll redeem myself a bit by providing some screen shots, and a concrete example.

Suppose, for example, you’ve got a system where you want to suppress successful backup messages. In many environments this wouldn’t make sense. But if you’ve got a multi-tenant server where there are lots of databases (say 5-20) and you’re doing regular log file backups every 15 minutes or so in keeping with best practices, then you’ll be spamming huge numbers of success messages into your Windows Event Logs and SQL Server Logs. As such, rather than letting those messages turn into noise that might distract you from noticing problems/errors, TRACE FLAG 3226 starts to make sense.

If you just want to temporarily suppress those messages, you can fire off DBCC TRACEON(3226, -1) and those messages are suppressed – until the server reboots. And then they’ll be back.

If you want to make that flag/option ‘stick’ across reboots, then you need to add it as a startup parameter.

1. Fire up SQL Server Configuration Manager by going to Start > Programs > Microsoft SQL Server (2005/2008) > Configuration Tools > SQL Server Configuration Manager.


2. Select SQL Server Services, then right click on SQL Server (MSSQLSERVER) and select the Properties menu option.


3. In the Properties window, click on the Advanced tab, and then find the Startup Parameters option. Click the little down-arrow next to it and you’ll have more area to work in.


4. This is where you add any startup parameters you want to add – including trace flags.

In this case, since I want to add Trace Flag 3226, here’s what the startup parameters would look like.



Note that to add the Trace Flag, I had to ‘terminate’ the previous options with the addition of a semi-colon. Then I was able to add the -T flag and the desired trace flag number.

As per Books Online, make sure you use the upper-case “T” to set trace flags. Lower-case “t” will also work – but it enables other internal trace flags intended only for use by SQL Server support engineers – which implies lots of ugly potential that you do NOT want.

Likewise, if you want/need to add multiple trace flags, then you just keep appending them like so:
;-T3226;-Txxxx;-Txxxx

Again though, just remember that Trace Flags can be a lot like juggling flaming, loaded, sawed-off shotguns.

And, of course, in order to make your startup options ‘stick’, you’ll have to restart SQL Server. You can do this immediately by stopping/starting the service as per the screenshot below (and if you’re prompted to stop SQL Server Agent – go ahead and do that; just remember to RESTART it after you bring SQL Server back online):


Best Practices For Turning Trace Flags On/Off

Restarting your SQL Server just because you want/need to put a Trace Flag into play long term can often be a pain. (The same goes for cases where you’ve got a trace flag installed, and want to remove it.)

Therefore, what I’ve found works really well – once I’m sure that I want to set a trace flag long term – is to:

a) Go in and modify the Startup Parameters with the needed trace flag as specified above.

b) Skip restarting the SQL Server Service.

c) Instead, run DBCC TRACEON(####, -1) – where #### is the id of the Trace Flag set in step a, and -1 is the switch/option that makes the trace flag global.

This process works well as it kicks the Trace Flag into play immediately, and the next time the server is normally rebooted (i.e. service packs, hardware swap-out/swap-up, whatever) the startup parameters will kick in and the trace flag will ‘persist’ in scope across restarts and go into active service.

The same approach works when I want/need to remove a trace flag as well. I just nuke it using DBCC TRACEOFF(####, -1), and then remove the corresponding startup parameter.
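In other words (using 3226 as a stand-in for whatever flag you’re actually working with), the whole round trip looks like this:

```sql
-- Step a: add -T3226 to the startup parameters (per the screenshots above).
-- Step b: skip the service restart.
-- Step c: push the flag into play globally right now:
DBCC TRACEON(3226, -1);

-- And later, to retire the flag without a restart: remove -T3226
-- from the startup parameters, then turn it off globally:
DBCC TRACEOFF(3226, -1);
```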

UPDATE: Thanks to Reader/Visitor “Developer Seer” who pointed out that I forgot to put the -1 switch into my example in step c above. This is the key/trick to getting your Trace Flags to become GLOBAL in scope – or to make them persistent.