With the SQL Server consolidation whitepaper a bit delayed, I wanted to touch on a hot-button topic I’ve talked about elsewhere in other ways. Since it’s my blog, I can address this topic differently than I would in a formal whitepaper.

Brent Ozar also blogged about some of this recently, and it looks like we’re on the same page! Anyway …

SQL Server Consolidation – The Quick Skinny
Consolidation is the big buzzword in IT right now, and in the SQL Server world, it’s largely about SQL Server sprawl. You know, those 10s, 100s, or (gasp!) 1000s of instances and/or databases you’ve got floating around, each on its own server with poor utilization (and there will be extremes: servers sitting near or at 100%, and others under 10%), all of which are costing the business a lot of money just hanging around. I’m not even going to touch all of the legacy stuff, which probably should be upgraded and is long out of any kind of support.

What I’m seeing drive a lot of consolidation talk is the prevalence of virtualization coming from elsewhere in an organization. To many, virtualization = consolidation, but they couldn’t be more wrong. Virtualization is only one potential way to consolidate. Remember that you can also consolidate into fewer instances, each with multiple databases. Technically you could even consolidate at the database level (i.e., have multiple apps in a single DB, but I don’t know any DBA who would want one DB with many different schemas and apps; talk about a perf tuning nightmare!).

So What’s The Problem? Virtualization is Great!
It is. In fact, I was a pretty early adopter. In the time of dinosaurs, when I was at Microsoft as a full-time employee and before Microsoft purchased Connectix (the company that made Virtual PC), MS used VMware for some demos. That’s where I first started using VMware Workstation on an old Compaq Armada M700. I really love virtualization for what it has allowed me to do: show clustering and other advanced multi-server SQL things without schlepping an army of servers. The technology still has its limitations (for example, I can’t run a 64-bit Windows Server 2008 R2 virtual machine and then use Hyper-V inside it as a host for other guests; I can install 64-bit W2K8 R2 in a VM and do other things, just not that), but it is so useful I couldn’t do without it. The virtualization space, especially for Windows, has grown up a lot in the past 10 years.

Now, if you believe all of the marketing hype from any of the virtualization vendors, virtualization is the cure-all for all of your problems. IT (not the DBAs) has been adopting virtualization more and more as an easy way to deploy a server; no longer do you need new hardware. A few mouse clicks and you’ve got a new deployment! And that is a good thing … provided it meets a bunch of other needs, too (including cost, which, from a manager’s perspective at the top, is the reason to consolidate). DBAs have had a harder and harder time fighting the tide of virtualization. There’s still a place for physical, but there are very good cases for virtualizing SQL Server servers (including the OS) under a hypervisor. There shouldn’t be a one-size-fits-all approach, but IT often tries to make it one.

How does a DBA approach this challenge?

Let’s Talk Performance
I’ll be the first to say that performance was an issue in the past, but that has pretty much been eliminated with the newest generation of hypervisors. With Windows Server 2008 R2, Datacenter Edition supports up to 64 logical processors, so scaling up a large server to host multiple guests is not impossible. Add to that the support for up to 1TB of memory, and, well, the math speaks for itself. Older hypervisors had limited scalability for each virtual machine (aka VM aka guest). Therefore, for any reasonably sized SQL Server implementation that required some horsepower, virtualization didn’t make sense. You were looking at maybe 2 or 4 “processors” (and let’s face it, when you’ve got 20 VMs on a 32-proc hypervisor, none of them gets 4 real processors, but to the guest, it looks like 4). The only thing you could generally guarantee would be a fixed amount of memory and possibly decent disk I/O if you used real (not virtual) disks, but then again, you’re sharing the underlying hypervisor’s disk I/O subsystem/transport (HBA, iSCSI, etc.), so again, if you’ve got 20 VMs, there could be some contention. You can’t escape good disk architectures and SQL Server disk best practices by virtualizing servers. Deploying a virtual machine needs to go through the same performance scrutiny a physical implementation would.
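
As a quick sanity check before and after a move to a VM, I like to look at what SQL Server itself thinks it has to work with. A minimal sketch (column names vary a bit by version; physical_memory_in_bytes is the SQL Server 2008 name):

-- What does this instance think it has for CPU and memory?
SELECT cpu_count,                                      -- logical processors visible to the instance
       hyperthread_ratio,
       physical_memory_in_bytes / 1048576 AS physical_memory_mb
FROM sys.dm_os_sys_info;

-- Are any visible schedulers offline?
SELECT scheduler_id, status, is_online
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;                              -- skip hidden/internal schedulers

Compare those numbers (along with the I/O latency numbers from your usual monitoring) on the physical box and on the proposed guest before you sign off.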

Virtualization still has some hurdles to cross before it’s truly transparent. When I was in the Microsoft Technology Center in Waltham recently, we were using Windows Server 2008 R2 and Hyper-V. If you wanted to change the configuration of a VM, such as the amount of memory or the number of virtual processors, you needed to power down the VM. Someone please correct me if I’m wrong (maybe we did something wrong), but to me this negates great features like hot-add memory and processors, which you can use on a physical, but not virtual, implementation of SQL Server. Things like this should influence a physical vs. virtual implementation decision.
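
For what it’s worth, hot-add isn’t completely hands-off even on physical hardware; with hot-add CPU on SQL Server 2008 Enterprise, you still have to tell the instance to start using the new processors. A minimal sketch:

-- After the OS recognizes the hot-added CPUs, tell SQL Server to start scheduling on them
RECONFIGURE;

-- Confirm the new logical processor count is visible to the instance
SELECT cpu_count FROM sys.dm_os_sys_info;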

What Are You Really Saving By Virtualizing?
Remember that with each virtual machine (aka VM aka guest), you are effectively spinning up a full-fledged server that needs to be managed in the same way a physical machine would be. Taking a current physical server and making it a virtual one only saves the physical space and costs associated with that original server; I cannot be more clear about this point. If you virtualize every server, from an admin perspective, you are no better off than you were before. Management may be happy, but you have all of the same problems. Compounding this, if every new deployment now becomes virtual, you will have virtual sprawl on top of SQL sprawl. Yay!

I mean, wasn’t the goal to reduce overhead and cost in all ways? The bean counters may win, but all of IT (including DBAs) may lose the war in a one-size-fits-all virtualization/consolidation strategy.

Another hidden pothole is licensing. Let’s not even get into whether or not you were 100% legal before consolidation and/or virtualization. What you need to worry about is the after-effects of consolidation and virtualization. If you’re a DBA, you’re probably struggling with your day-to-day job and cost is the last thing on your mind, but realize it’s important to everyone higher up the chain. Keep that in mind as you move up the chain in your company, since you will need to bridge the technical and non-technical sides of the proverbial house. So when you choose a consolidation architecture, remember to think of those above you.

Cost, technical architecture, and licensing meet head on – you can’t avoid it. Microsoft is pretty specific about how to license when you have VMs, and although on the surface it may seem expensive, deploying an OS such as Windows Server 2008 Datacenter Edition allows you to run as many VMs as you want (256, or until you run out of resources) without having to license any of the Windows guest OSes running on that hypervisor. Windows Server 2008 Enterprise Edition allows up to 4 before you start paying per VM. You do the math if you now have 100 virtuals and an EE license. SQL Server’s story is not dissimilar; license the physical box running the hypervisor with SQL Server 2008 Enterprise Edition and you can deploy as much SQL Server as you want on it.
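
To make the Windows side of that math concrete (a rough sketch based purely on the per-license virtualization rights above, ignoring Software Assurance and other contract details):

100 VMs / 4 VMs per Enterprise Edition license = 25 Enterprise Edition licenses
vs. Datacenter Edition licensed per physical processor on the host, covering all 100 VMs.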

High Availability and Disaster Recovery
Once you move from a physical to a virtual world, everything changes. Whereas you may have used failover clustering in the past, maybe it isn’t the right thing to do anymore. Sure, SQL Server 2008 supports failover clustering of VMs, but if both nodes are hosted under the same hypervisor, what scenario are you protecting against? Chances are if you are using virtualization, you’re looking at vendor-specific tools such as VMotion and Live Migration. I’ve tested Live Migration and it’s fantastic. However, know its limitations. Features like Live Migration allow you to move a VM from one host to another with minimal to no interruption to those using the VM, so you can do things like patch the underlying hypervisor. However, if you need to patch the OS inside the VM, you still need traditional HA and DR methods for SQL Server. So now you need to think on two levels.
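
Inside the guest, the same checks you would run on a physical box still apply. A minimal sketch for seeing what SQL Server-level protection an instance actually has (assuming SQL Server 2005 or later for these views):

-- Is this a failover cluster instance, and which nodes does it see?
SELECT SERVERPROPERTY('IsClustered') AS is_clustered;
SELECT NodeName FROM sys.dm_os_cluster_nodes;

-- Is database mirroring configured anywhere on the instance?
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;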

Virtualization also does not excuse you from doing all of the normal maintenance, including database backups, even if you’re backing up the VM itself somehow. If stuff hits the fan and you need to come back from proverbial bare metal, you had better have everything ready to go.
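
In other words, native SQL Server backups (and verifying them) still belong in the plan, VM snapshots or not. A minimal sketch – the database name and UNC path are placeholders:

-- Native full backup with checksums
BACKUP DATABASE SalesDB
TO DISK = N'\\backupserver\sql\SalesDB_full.bak'
WITH CHECKSUM, INIT, STATS = 10;

-- Basic verification of the backup file
RESTORE VERIFYONLY
FROM DISK = N'\\backupserver\sql\SalesDB_full.bak'
WITH CHECKSUM;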

Support of a Virtual Environment
One thing I wanted to point out is that for quite a while, there was some contention between MS and the other virtualization vendors around getting support for the VMs. That’s all changed. At this point, as long as the hypervisor platform you use is listed as part of the Server Virtualization Validation Program, you’re good (also see KB897615). Supportability should be a concern; do not take it lightly.

Everything you currently do for a physical server you would do for a virtual one – especially monitoring. The problem is that for a DBA, it’s not always easy to tell whether you are running on a virtualized server if all you are doing is using Management Studio or RDP to access the server itself. If you’re trying to troubleshoot a performance problem, you need to know if you’ve got four real processors or virtual ones, and if you’re using real disks or virtual disks. I think you see where I’m going. It’s not impossible to troubleshoot a virtual environment, but a DBA needs to adapt his or her skills.
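
One low-tech way to check from Management Studio is to look at what the server reported about itself at startup; recent builds write the system manufacturer and model to the SQL Server error log, and later versions also expose a virtual_machine_type_desc column in sys.dm_os_sys_info. A sketch using the undocumented xp_readerrorlog:

-- Search the current error log for the hardware reported at startup;
-- on a Hyper-V or VMware guest, the manufacturer/model line gives it away
EXEC xp_readerrorlog 0, 1, N'System Manufacturer';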

So Are You Saying Not To Virtualize SQL Server Deployments?
NO! Not at all! There may just be some scenarios where virtualization fits like a glove, and other cases where physical hardware and the use of an instance (or multiple instances) make much more sense. The lines are much more grey at this point; they are not black and white as they used to be.

Some great examples of virtualization:
* Development and test environments
* Older servers which cannot be upgraded and/or have applications which need a specific configuration, but you want to lose the physical box so you P2V it
* Production use as long as it fits the right cost, performance, and availability needs … and has been thoroughly TESTED
* Demo servers
* Prototype/proof of concept environments
* Training labs/education

The Bottom Line
When I work with customers on forming a consolidation strategy, it generally involves both physical and virtual aspects. DBAs need to learn to talk the lingo and meet IT halfway here; you’re most likely not going to get physical servers for everything. Pick and choose your battles, and leverage the company’s investment in virtualization where it makes sense. If you don’t, you’re going to be in for a long fight. I’m not saying to never stick up for yourselves and fight for physical environments; quite the opposite. Know where it’s appropriate (including all of the tradeoffs of choosing one thing over another) and go from there.

Some of that marketing hype is real, believe it or not. Virtualization is good, but those of you out there who have been in IT long enough know there’s no magic technology that will solve all of your problems. Every solution is a hybrid of different things coming together.

From a DBA perspective, what you are striving for post-consolidation is a utility model where, if someone wants to deploy a new application, whether it’s spinning up a virtual machine or adding a database (or three) to an existing instance, you have the structure and guidelines in place for that to happen.
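
For the add-a-database-to-an-existing-instance path, that structure can be as simple as a standard provisioning script every new application goes through instead of whatever the vendor install happens to create. A minimal sketch with hypothetical names, paths, and settings – substitute your own standards:

-- Hypothetical standard provisioning for a new application database
CREATE DATABASE NewAppDB
ON PRIMARY (NAME = NewAppDB_data, FILENAME = N'E:\SQLData\NewAppDB.mdf', SIZE = 512MB, FILEGROWTH = 256MB)
LOG ON     (NAME = NewAppDB_log,  FILENAME = N'F:\SQLLogs\NewAppDB.ldf', SIZE = 256MB, FILEGROWTH = 128MB);

-- Apply the shop's standards rather than whatever the model database happens to have
ALTER DATABASE NewAppDB SET RECOVERY FULL;
ALTER DATABASE NewAppDB SET PAGE_VERIFY CHECKSUM;
ALTER DATABASE NewAppDB SET AUTO_SHRINK OFF;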