It’s been a busy month between client work, travel, playing bi-weekly with a big band, and the holidays … as well as getting ready to go into the studio next week to start my first new album in 10 years! First, the updates:

  • Thanks to the folks at SQL Saturday in Redmond – I had a lot of fun and the audience was great. Even though I was the last session of the day, people still stuck around.
  • The consolidation paper is finally in edit, and we’re on track to have it out for PASS assuming no road bumps.
  • It looks like I’ll be speaking in Singapore in December … more info as it gets solidified.
  • Hard to believe PASS is less than a month away! Hope to see some of you in my session.
  • I’ll get back to the promised post on DTC very soon.
  • Ben DeBow and I will be doing a six-part series on consolidation for Penton Media early next year (similar to the six I did earlier this year for SQL DBAs). Again, stay tuned for more details. It will be a lot of fun to work with Ben on this, as he also has some really great experience with some large customers who have done consolidation. Maybe if we’re crazy enough we’ll attempt to write a book, but don’t hold your breath 🙂

Now for some technical content …

Hyper-V, Virtualization, and Live Migration
I’ve been playing around a lot lately with Hyper-V, and I have to say I’m impressed. I’m well documented as a VMware Workstation guy, but I’ve had some issues with it on Windows Server 2008 R2. On a lark, I decided to dual-boot my laptop with Windows Server 2008 R2 and Windows 7 (I’m still on the RC; I haven’t had time to upgrade to RTM, and with all of my speaking coming up, I’m leaving my configuration alone!). Interestingly enough, I find that W2K8 R2 consumes a bit less in the way of resources than Windows 7. But I digress.

I set up what I usually do in VMware – a demo cluster – and it was a breeze. The Hyper-V Manager tool is very straightforward. I like the VM editing a bit better in VMware’s tools, and the only real negative I have with Hyper-V at the moment is that you can’t (or maybe I’m missing something) drag and drop files from your hard drive into the VM. One nice improvement over VMware is that I can cleanly shut down a Windows guest from the management tool. So I’d say at this point the two are equally good for my purposes, and for production, I would say MS has really caught up in the virtualization race. It’ll be interesting to see what happens in the next few years with the various hypervisors and where they take things.

One of the best features of W2K8 R2 and Hyper-V is Live Migration: the ability to take a virtual machine running on a Windows failover cluster and move it to another node with no downtime and minimal impact on performance. SQL Server fully supports Live Migration. Unfortunately, what I can’t do on my laptop is demo Live Migration, and it’s a killer feature (especially for SQL Server). The reason? You can’t enable virtualization inside a VM (whether Hyper-V or VMware). It just isn’t practical to schlep extra hardware and disks around everywhere, and I can’t count on having an Internet connection everywhere (nor can I necessarily rely on being able to get to my home machines even if I had this configured there). As someone who talks about clustering a lot, that puts me in a difficult spot. So right now I’m doing some work at a location where someone has graciously allowed me to configure this setup using a real SAN, and I’m documenting the heck out of it (caveats included) for Live Migration and SQL Server. Whether that winds up being a whitepaper for MS or something I do on my own, it’ll get out there. It’ll be a nice companion to the paper MS already has on SQL Server and Hyper-V.
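If you ever want to put a number on “no downtime” while demoing Live Migration, a throwaway probe script can help. The sketch below is purely illustrative and not any official tooling: the host name sqlvm01 is a placeholder for the SQL Server guest, 1433 is simply the default SQL Server port, and it only measures raw TCP connectivity (not query behavior) while the migration runs.

```python
# Illustrative sketch: watch for client-visible interruptions during a Live Migration.
# SQL_HOST is a placeholder for the SQL Server guest; 1433 is the default SQL Server port.
import socket
import time

SQL_HOST = "sqlvm01"   # assumed VM name - change for your environment
SQL_PORT = 1433        # SQL Server default TCP port
INTERVAL = 0.5         # seconds between probes
DURATION = 120         # total probe window in seconds

failures = 0
worst = 0.0
end = time.time() + DURATION

while time.time() < end:
    start = time.time()
    try:
        # Open and immediately close a TCP connection to the SQL Server endpoint.
        with socket.create_connection((SQL_HOST, SQL_PORT), timeout=2):
            pass
        worst = max(worst, time.time() - start)
    except OSError:
        failures += 1
        print(f"{time.strftime('%H:%M:%S')}  connection failed")
    time.sleep(INTERVAL)

print(f"failed probes: {failures}, slowest successful connect: {worst:.3f}s")
```

Run it from a client machine, kick off the migration, and see whether any probes fail or slow down noticeably.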

iSCSI
iSCSI is becoming more prevalent, both in my own use of it for demos and at client sites. However, realize that there are a few “gotchas” you really do need to take into account. I still maintain that for a mission-critical production SQL Server implementation, a more traditional disk architecture is arguably the better choice in many cases, but in the end that decision isn’t mine to make. Just be aware of what iSCSI means for SQL Server. I’ve seen iSCSI work really well for SQL Server deployments, too; it all boils down to planning.

1. Remember that with iSCSI, chances are you’re using a NIC rather than a dedicated iSCSI HBA (which would be better). On a cluster, you must use a dedicated NIC in addition to the Public and Private NICs; iSCSI traffic can’t share either of them.

2. You are now dependent on your network for your I/O. Remember that SQL Server needs guaranteed writes. What happens if your network dies? Make sure your network infrastructure is robust and architected properly. That means things like a dedicated network for iSCSI so its traffic isn’t mixed in with the rest of your network traffic, redundant switches, and so on. (See the first sketch after this list for a quick way to sanity-check the iSCSI network path.)

3. Using NICs means some processor overhead. Account for that in your server sizing.

4. Test, test, and test some more. Don’t just run tests to see what the I/O capacity is from a hardware perspective; run your actual workload as well. Know where the system will be stressed. You may even want to try some more basic tests first (like a file copy – I’ve seen that choke some iSCSI systems) before you attempt to test your workloads; the second sketch after this list is another quick check in the same spirit. This rule also applies to standard SANs, but it is even more important with iSCSI.
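For item 2, here is a minimal, purely illustrative sketch of the kind of sanity check I mean for the iSCSI network path. The portal name is a placeholder and 3260 is simply the well-known iSCSI port; all this tells you is whether the target portal answers and how quickly, nothing about the array behind it.

```python
# Illustrative sketch: crude health check of the iSCSI network path.
# PORTAL is a placeholder; 3260 is the well-known iSCSI TCP port.
import socket
import statistics
import time

PORTAL = "iscsi-target01"  # assumed iSCSI target portal address
PORT = 3260                # standard iSCSI port
SAMPLES = 20

times = []
for _ in range(SAMPLES):
    start = time.time()
    try:
        # Measure how long a bare TCP connect to the portal takes.
        with socket.create_connection((PORTAL, PORT), timeout=2):
            pass
        times.append((time.time() - start) * 1000.0)  # milliseconds
    except OSError as exc:
        print(f"portal unreachable: {exc}")
    time.sleep(0.25)

if times:
    print(f"{len(times)}/{SAMPLES} connects OK, "
          f"avg {statistics.mean(times):.1f} ms, max {max(times):.1f} ms")
```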
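And for item 4, here is an equally rough sketch of a basic sequential write/read check against a file on an iSCSI-backed volume (the path is a placeholder). It is no substitute for proper I/O tools or a real workload test – it just gives you a quick number, in the same spirit as the file-copy test mentioned above.

```python
# Illustrative sketch: basic sequential write/read throughput check.
# TEST_FILE is a placeholder path on the iSCSI-backed volume.
import os
import time

TEST_FILE = r"E:\iscsi_test\throughput.dat"  # assumed path on the iSCSI volume
BLOCK = 1024 * 1024                          # 1 MB blocks
TOTAL_MB = 512                               # total data to write and read back

buf = os.urandom(BLOCK)

# Sequential write
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                     # force the data out to the volume
write_secs = time.time() - start
print(f"write: {TOTAL_MB / write_secs:.1f} MB/s")

# Sequential read
start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
read_secs = time.time() - start
print(f"read: {TOTAL_MB / read_secs:.1f} MB/s (may be inflated by OS caching)")

os.remove(TEST_FILE)
```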