Webcast: SQL Server Myths Debunked... Or Are They?
This post is a follow-up to my webcast today – focusing on myths, their debunks, and the need (in some cases) to ‘re-bunk’, or re-examine, aspects of common myths and the way they’ve been debunked.
If you missed the presentation, you can register here and watch it for free.
A Note About the SQL Server Community
One unfortunate side effect of this webcast/presentation stems from the title. I initially was going to call it “Re-Bunking SQL Server Myths” – only, using a made-up word like ‘re-bunk’ in the title of a webcast can prevent you from reaching potential segments of your audience. So, as I mention in the webcast, I decided to change the title to “SQL Server Myths Debunked… Or Are They?” – which seemed to convey the same general idea.
Only, that led to some confusion where it looked like the purpose of this session was to attack Paul Randal (blog | twitter) – which just wasn’t the case. Happily, he knew that a mob with pitchforks was bearing down on me, and was nice enough to warn me in advance.
I explained this a bit more fully in the webcast itself by way of a long-winded introduction, but thought that now might also be a good time to trot out an article I wrote long ago for SQL Server Magazine about how great the SQL Server community is. (I’m trotting it out because it summarizes my thoughts about how open the community is – and shows why I just couldn’t be comfortable attacking someone else to make myself look good; it’s contrary to how I operate.)
Paul Randal’s Debunking Series
Otherwise, if you watch or watched this webcast, you’ll see that I held up Paul’s debunking series as the ‘hallmark’ of how I think debunking should transpire. As I mentioned in the webcast, what I like about Paul’s approach (above and beyond just the huge number of myths that he’s tackled) is that he not only lays down the facts as part of his debunk, but he also does a great job of calling out the caveats, or things you should also watch out for, when stumbling into ‘myth land’.
Links:
- Myths and misconceptions: 60-page PDF. (Highly rated. Download the PDF and put it on your smartphone/etc.)
- Misconceptions about … Everything (Same as the previous link, but you can read each post online – and be sure to check out the comments.)
Shrinking Files
A big part of this presentation was on cases when regularly shrinking files could make sense. If that sounds craaaaazy (and it should), then go check out the webcast.
The key point of this part of the webcast, though, was this: when you understand the costs, liabilities, and potential pitfalls of various SQL Server operations and techniques (such as shrinking files), you’re actually in a better position to evaluate those operations as an OPTION in cases where you might not normally consider them.
Or, more specifically, because SQL Server is so flexible, you can run into plenty of cases where what is typically a WORST practice can become tolerable, acceptable, or even a BEST practice under certain scenarios – because knowing what, when, why, and how to use various features and capabilities is all about understanding the trade-offs.
Consequently, there ARE a couple of cases where regularly shrinking your data files can not only make sense – but become a best practice.
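To make that concrete, here’s a minimal sketch of what a deliberate, targeted shrink might look like – the database, file, and table names are made up for illustration only. The rebuild at the end is there because shrinking data files badly fragments indexes, which is exactly the kind of liability you’re trading off:

```sql
-- Hypothetical example: SalesArchive, SalesArchive_Data, and dbo.OrderHistory
-- are placeholder names; sizes are illustrative only.
USE SalesArchive;
GO

-- Shrink the data file to a 10 GB target (DBCC SHRINKFILE takes the target size in MB).
DBCC SHRINKFILE (N'SalesArchive_Data', 10240);
GO

-- Shrinking moves pages around and fragments indexes, so rebuild the ones you care about.
ALTER INDEX ALL ON dbo.OrderHistory REBUILD;
GO
```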
Table Variables
Another topic I covered was how there’s a myth out there that table variables are MAGICAL in that they only live in memory. That myth has been debunked a number of times – as it should have been.
Only, my ‘rebunk’ in this part of the session is that many debunks of this myth can almost leave people with the idea that there are really no benefits that come with using Table Variables over temp tables.
And that’s just not the case. In fact, as I said in the webcast, “Table Variables are ALWAYS a better option to use – unless they’re not.” In other words, Table Variables carry less locking and transactional overhead, they lead to fewer recompiles within sprocs, and they make it easier to view Estimated/Actual Execution Plans (because you’ll get an error when a sproc has a temp table defined within it – but you won’t when using Table Variables).
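As a quick, contrived sketch of the difference (table and column names here are made up purely for illustration):

```sql
-- Temp table version: a real object in tempdb that participates fully in
-- locking/logging and can trigger recompiles inside a sproc.
CREATE TABLE #RecentOrders (OrderID int NOT NULL, OrderTotal money NOT NULL);
INSERT INTO #RecentOrders (OrderID, OrderTotal) VALUES (1, 25.00);
SELECT OrderID, OrderTotal FROM #RecentOrders;
DROP TABLE #RecentOrders;
GO

-- Table variable version: lighter locking/transactional overhead, fewer recompiles,
-- and it won't block you from viewing an Estimated Execution Plan for the sproc.
DECLARE @RecentOrders TABLE (OrderID int NOT NULL, OrderTotal money NOT NULL);
INSERT INTO @RecentOrders (OrderID, OrderTotal) VALUES (1, 25.00);
SELECT OrderID, OrderTotal FROM @RecentOrders;
GO
```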
Multiple Files
Sadly, I blew right past this slide in my presentation due to time constraints. But that caused a number of people to ask about it in the Q&A session.
So, here’s the short story: there’s a myth that has cropped up about the use of multiple files as they apply to USER databases. Somehow, this myth appears to be related to the PSS recommendation for SQL Server 2000 that tempdb be configured/provisioned with one file per processor (or core) to help mitigate issues with allocation contention. (Paul Randal does a great job of debunking the core of this myth here.)
Only, the issue I wanted to address is how I’ve seen this myth ‘morph’ – to the point where people actually try to do the SAME thing (i.e., one file per processor/core) for user databases. Likewise, I’ve also seen ‘anti-myths’ stem from the debunking of this myth, where people have argued with me about using MULTIPLE files for performance reasons. (Or, in other words: I’ve had people tell me that using multiple files doesn’t provide any performance benefits UNLESS we’re talking about tempdb – and that’s just not true.)
So, the rebunk I wanted to focus on in this regard was the fact that using multiple files (or multiple FILEGROUPS) CAN be a way to boost performance – in cases where the files are on DIFFERENT physical spindles (i.e., disks, volumes, LUNs) and where the operation in question can already be parallelized.
In fact, SQL Server Books Online calls this out as a way to boost performance. The idea is that if you create a new FILEGROUP for, say, non-clustered indexes, and put them on a different set of spindles than your clustered indexes (i.e., than your tables), then you can get perf boosts in the form of quicker lookups – in SOME cases. Another common application of this technique is to put heavily used tables (ones that commonly show up in heavy or repetitive JOINs) on a different FILEGROUP from your core tables. This, in turn, can allow for better parallelization.
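Here’s a rough sketch of the non-clustered index flavor of that approach – the database, drive letter, and index names are purely hypothetical, and the assumption is that the new file genuinely lives on different physical spindles than the PRIMARY filegroup:

```sql
-- Hypothetical names throughout; the I:\ path is assumed to sit on separate spindles
-- from the PRIMARY filegroup.
ALTER DATABASE SalesDB ADD FILEGROUP Indexes;
ALTER DATABASE SalesDB ADD FILE
(
    NAME = N'SalesDB_Indexes1',
    FILENAME = N'I:\SQLData\SalesDB_Indexes1.ndf',
    SIZE = 4096MB
)
TO FILEGROUP Indexes;
GO

USE SalesDB;
GO

-- Rebuild (and thereby relocate) an existing non-clustered index onto the new filegroup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (DROP_EXISTING = ON)
    ON Indexes;
GO
```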
And, now that I think about it… I’ve actually blogged about this – and expect it to show up on Idera’s SQL Server Performance Blog in the next few days (or week) – so keep an eye out for more details there if you’re interested (or just shoot me an email).
The point, though, is that a casualty of the ‘tempdb’ myth is that people occasionally get confused about the SCOPE of that single best practice, and lose sight of SQL Server internals – and of how SQL Server really performs when it comes to SMP/parallel operations – which is what I wanted to ‘re-bunk’.
AWE / Locking Pages in Memory
I probably should have just drawn up a matrix for this one.
AWE is good. Period. It allows SQL Server to access ‘locked’ memory in fast/efficient ways.
On 32-bit systems, AWE is required in order to address more than 2GB of physical RAM. (There are some other techniques that I see people use – such as the /PAE and /3GB switches in the Boot.ini file – but I prefer to just use AWE, as it’s cleaner/better/faster/easier.) For more details on AWE and 32-bit systems, see my article: Enabling SQL Server AWE Memory.
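For reference, the server-side switch itself is just an sp_configure option – the max server memory value below is an arbitrary example, you’d still need to grant the SQL Server service account the ‘Lock Pages in Memory’ privilege, and the setting doesn’t kick in until the service restarts:

```sql
-- 'awe enabled' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

-- Turn AWE on (takes effect after a service restart).
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
GO

-- Cap max server memory (in MB) so SQL Server doesn't starve the OS; 6144 is just an example.
EXEC sp_configure 'max server memory', 6144;
RECONFIGURE;
GO
```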
That said, there are tons of myths out there about how AWE works, which platforms it’s required on, and so forth – to the point that you’ll commonly see (in forums/etc.) people say that AWE MUST be configured on 64-bit systems as well. Happily, that’s just not true. Instead, 64-bit systems let SQL Server access as much RAM as the version/edition of SQL Server can handle, or as much RAM as your operating system (or server) will allow – all without the need for any additional configuration.
But remember how AWE ‘locks pages in memory’? Well, that can provide an extra performance boost – even on 64-bit systems. And there are really two reasons why. First, locked memory is less volatile – the OS doesn’t have to ‘hover’ over it and keep tabs on who or what is using it, which reduces management overhead. Second, locked memory is accessed via the AWE APIs – which were optimized for 32-bit machines – meaning you pick up reduced overhead by working with, effectively, smaller registers.
So, if you positively HAVE to squeeze every ounce of performance out of SQL Server, then locking pages in memory on 64-bit systems can make a lot of sense as a way to pick up an additional boost. (That said, I’m still going to recommend that you TEST performance before and after – to make sure you’re not bumping into an edge case where non-AWE might be faster ... because that can happen.)
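And if you do go the lock-pages-in-memory route on a 64-bit box, one quick sanity check (on SQL Server 2008 and later, where sys.dm_os_process_memory exists) is to confirm that locked allocations are actually showing up:

```sql
-- If 'Lock Pages in Memory' is actually in effect, locked_page_allocations_kb
-- should be non-zero once the buffer pool has ramped up.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       large_page_allocations_kb
FROM sys.dm_os_process_memory;
```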