
Outages In An Increasingly Connected World

Posted on February 14, 2017 in Administration, Business Continuity, Data Loss, Disaster Recovery, High Availability, Mission Critical

I’ve been in the availability game a long time. It seems like at least once a week there is a story about a fairly major outage somewhere in the world due to software, hardware, public cloud failures, human error or incompetence, DDoS, hacking, or ransomware. In the dark ages of IT, before Twitter, Facebook, or any of the other social media platforms existed, you only sometimes heard about these events. Today, we hear about them in real time. Unfortunately, they can become PR nightmares, too. Below is a sample of some recent events, all of which happened after January 1, 2017:

The costs associated with these problems also seem to be increasing. Computer problems are no longer just small “glitches”. There are real consequences, whether these are man-made outages, systems going down due to some type of hardware or software failure, or something else completely. The elephant in the proverbial room is that this is all caused by “the (public) cloud”. <insert picture of old man shaking his fist> That is not true in many cases, but let’s be clear: public cloud providers have had their missteps, too. Technology is only as good as the humans implementing it and the processes around them. Technology won’t save you from stupidity.

Let’s examine some of these very public failures.

RIP Storage

The ATO has had some high-profile outages over the last year, many of them storage related. Their latest run-in with trouble was just a few days ago. This outage didn’t have data loss associated with it, but if you look at the ATO’s statement of February 8th, it’s pretty clear they are unhappy with their vendor. They’ve even hired an additional firm to come in and get to the bottom of things. In other words: in addition to all of the inconvenience and impact to end users and taxpayers, they’re spending more money to figure out what went wrong and, hopefully, fix the problem.

We’re database folks. Storage is fundamental to us in three ways: capacity (i.e. having enough space), performance (it helps get those results back quicker, among other things), and availability (no storage, no database). You need to know where you stand on all three for your database solutions – especially the mission critical ones. You also need to have a good relationship with whoever is responsible for your storage. I’ve been involved with, and heard, too many horror stories about storage outages and the mess they caused, some of which could have been prevented with good communication.
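
If you want a starting point for the capacity piece, a per-database file size check is one of the simplest things to automate. Below is a minimal sketch that shells out to sqlcmd and queries sys.master_files; the instance name is an assumption, and in practice you’d feed the output into whatever monitoring you already run.

```python
import subprocess

# Hypothetical instance name; -E uses Windows authentication.
SERVER = r".\SQL2016"

# sys.master_files reports size in 8 KB pages, so size * 8 / 1024 gives MB.
TSQL = (
    "SET NOCOUNT ON; "
    "SELECT DB_NAME(database_id) AS database_name, name AS logical_file, "
    "size * 8 / 1024 AS size_mb "
    "FROM sys.master_files ORDER BY database_name, logical_file;"
)

def file_sizes() -> str:
    """Return the size of every database file on the instance as text."""
    result = subprocess.run(
        ["sqlcmd", "-S", SERVER, "-E", "-Q", TSQL],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(file_sizes())
```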

Oops

Ah, GitLab. What started out as something well-intentioned (putting servers up in a staging environment to test reducing load) became the perfect storm. Before I go any further, let me say up front that I applaud their transparency. How many would admit this?

Trying to restore the replication process, an engineer proceeds to wipe the PostgreSQL database directory, errantly thinking they were doing so on the secondary. Unfortunately this process was executed on the primary instead. The engineer terminated the process a second or two after noticing their mistake, but at this point around 300 GB of data had already been removed.

Hoping they could restore the database the engineers involved went to look for the database backups, and asked for help on Slack. Unfortunately the process of both finding and using backups failed completely.

Read the “Broken Recovery Procedures” section of that postmortem. It is very telling. Things went nuclear and recovery was messy, but in my opinion, it was something that could have been avoided. Most failures of this magnitude are rooted in poor processes – it’s something I see time and time again. Kudos to GitLab for owning it and committing to fix those processes, but the assumptions made along the way (and you know what they say about assumptions …) helped seal their fate. This cautionary tale highlights the importance of making backups and, ultimately, of restores.

Wait, There Are Limitations?

A few of these tales of woe are related to not knowing the platforms used.

I can’t say whose fault it was (the developers? the person who purchased said product? there could be lots of culprits …), but look at Instapaper. Check out the Root Cause section of the post mortem: they went down because they hit, and then exceeded, the 2TB file size limit that existed in an older version of their underlying platform. That is something anyone who touched that solution should have known about – or at least should have had specific monitoring in place for, so that when things got close to the limit, it could be mitigated. I call a bit of shenanigans on this statement, but applaud Brian for taking full responsibility at the end:

Without knowledge of the pre-April 2014 file size limit, it was difficult to foresee and prevent this issue.

It’s your job to ask the right questions when you take over (hence the accountability part). Now, to be fair, it’s a legitimate gripe assuming what is said is true and RDS does not alert you. Shame on Amazon if that is the case, but situations like that are the perfect storm of being blind to a problem that is a ticking time bomb.
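
The general lesson is easy to turn into code: know the ceiling, and alert well before you hit it. Here’s a minimal sketch of that kind of headroom check – the 2 TB figure comes from the post mortem, while the file path and the 80% warning threshold are illustrative assumptions.

```python
import os

LIMIT_BYTES = 2 * 1024 ** 4   # the 2 TB ceiling Instapaper hit (2 TiB here for simplicity)
WARN_AT = 0.80                # start shouting at 80% of the limit (pick your own threshold)

def check_headroom(path: str) -> None:
    """Warn when a file is approaching a known hard limit."""
    size = os.path.getsize(path)
    used = size / LIMIT_BYTES
    if used >= WARN_AT:
        print(f"WARNING: {path} is at {used:.0%} of its limit - plan the fix now")
    else:
        print(f"OK: {path} is at {used:.0%} of its limit")

if __name__ == "__main__":
    check_headroom("/var/lib/mysql/ibdata1")  # hypothetical path to the growing file
```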

Similarly, Code.org suffered the same fate. I’m not a developer by nature, but even I know that 4 billion lines of code is not a lot, especially when you have a shared model. Their own webpage alludes to over 20 billion lines of code written on the platform. Their issue is that they were using a 32-bit index, which maxed out at about 4 billion rows of coding activity, and they had no idea they were hitting that limit. I would bet that some dev along the way said, “Hey, 4 billion seems like a lot. We’ll never hit that!” Part of the fix was switching to a 64-bit index, which holds far more (about 18 quintillion rows) – but famous last words …

On the plus side, this new table will be able to store student coding information for millions of years.
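
For anyone who wants to see why “4 billion seems like a lot” turns into famous last words, the arithmetic is short. A 32-bit key tops out around 4.29 billion values and a 64-bit key around 18.4 quintillion; the rows-per-day figure below is purely an illustrative assumption.

```python
# 32-bit vs. 64-bit key space: the ceiling Code.org hit versus the one they moved to.
max_32 = 2 ** 32 - 1   # 4,294,967,295
max_64 = 2 ** 64 - 1   # 18,446,744,073,709,551,615

print(f"32-bit unsigned max: {max_32:,}")
print(f"64-bit unsigned max: {max_64:,}")

# At an assumed 10 million rows of coding activity per day, the 32-bit ceiling
# arrives in a little over a year; the 64-bit one effectively never does.
rows_per_day = 10_000_000
print(f"Days until a 32-bit key runs out: {max_32 / rows_per_day:,.0f}")
print(f"Years until a 64-bit key runs out: {max_64 / (rows_per_day * 365):,.0f}")
```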

A Very Real World Example

In the US right now, there is the chance of a major catastrophe in Oroville, CA because of the Oroville Dam. There is nothing more serious than the possible loss of life and major impact on human lives. I was reading the article “Alarms raised years ago about risks of Oroville Dam’s spillways” on the San Francisco Chronicle site, and like a lot of other things, it appears there is a chance this all could have been avoided. Of course, with things of this nature, there’s a political aspect and a bit of finger pointing (as there can be in businesses, too), but here is a quote I want to highlight:

Bill Croyle, the agency’s acting director, said Monday, “This was a new, never-happened-before event.”

It only takes once. I’ve seen this time and time again at customers for nearly twenty years. Nothing is a problem … until it’s a problem. No source control and you can’t roll back an application? No change management, and updates kill your deployments so you have to rebuild from bare metal? You bet I’ve seen those scenarios – and those customers implemented source control and change management afterward.

Let me be crystal clear: hundreds of thousands of people and animals being displaced is very different than losing a few documents or some data. I have no real evidence they did not do the right repairs at some point (I’m not an Oroville Dam expert, nor have I studied the reports), and yes, there is always something that you do not know that can take you down, but statements like that look bad.

Pay Now or Pay Later – Don’t Be The Headline

Outages are costly, and fixing them takes more time, money, and possibly additional downtime. Here are five tips on how to avoid being put in these situations and giving yourself a potential resume-generating event:

1. Backups are the most important task you can do as an administrator. At the end of the day, they may be all you have when your fancy platform’s features fail (for any number of reasons, including implementing them incorrectly). More important than generating backups is testing them: you do not have a good backup without a successful restore (there’s a quick verification sketch after this list). With very large sets of data (not just SQL Server databases – data can mean much more, such as files associated with metadata that lives in an RDBMS), finding ways to restore is not trivial due to the costs (storage, time, etc.). However, when you are down and possibly closed for good, was it worth the risk only to find out you have nothing? No.

2. Technical debt will kill you. Having systems that are long out of support (and probably on life support) and/or on old hardware is a recipe for disaster. It is definitely a matter of when, not if, they will fail. Old hardware will malfunction. If you’re going to have to search eBay for parts to resuscitate a server, you’re doing it wrong. Similarly, for the most mission critical systems, planned obsolescence every few years (usually 3 – 5 in most companies) needs to be in the roadmap.

3. Have the right processes in place. Do things like test disaster recovery plans. How do you know your plans work if you do not run them? I have a lot to say here but that’s a topic for another day.

4. Assess risk and mitigate as best as possible. Don’t bury your head in the sand and act surprised when something happens. (“Golly gee, we’ve never seen this before!”)

5. Making decisions based on bad information and advice will hurt you down the road. I can’t tell you how many times Max and I have come in after the fact to clean up somebody else’s mess. I don’t relish that or get any glee from it. My goal is just to fix the problem, but if you have the wrong architecture or technology/feature implemented, it will cause you years of pain and a lot of money.
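
On the backup point (number 1 above), the cheapest automation is a scheduled verification pass. The sketch below shells out to sqlcmd and runs RESTORE VERIFYONLY against a backup file; the instance name and backup path are assumptions, and remember that VERIFYONLY only proves the media is readable – a real, periodic test restore is still the gold standard.

```python
import subprocess

# Hypothetical instance and backup file -- adjust for your environment.
SERVER = r".\SQL2016"
BACKUP_FILE = r"D:\Backups\SalesDB.bak"

def verify_backup(server: str, backup_file: str) -> bool:
    """Run RESTORE VERIFYONLY against a backup via sqlcmd.

    -E uses Windows authentication; -b makes sqlcmd return a non-zero
    exit code if the T-SQL batch fails.
    """
    tsql = f"RESTORE VERIFYONLY FROM DISK = N'{backup_file}';"
    result = subprocess.run(
        ["sqlcmd", "-S", server, "-E", "-b", "-Q", tsql],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())
    return result.returncode == 0

if __name__ == "__main__":
    if verify_backup(SERVER, BACKUP_FILE):
        print("Backup verified.")
    else:
        print("Backup FAILED verification -- investigate before you need it.")
```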

Are you struggling with any of the above? Have you had or do you currently have availability issues? Contact us today so SQLHA can help you avoid becoming the next headline.

 

I’m a VMware vExpert and Schedule Updates

Posted on February 10, 2017 in SQLBits, vExpert, VMware

It’s been a crazy (but good) start to 2017. Besides the foot of snow that got dumped on the Boston area yesterday, I’ve been busy with client engagements, writing the book, and other things (those other things you should hopefully see soon), and I took a bit of a break by getting away for a few days (the first time I took more than a day or two off in about two years). My batteries are recharged (something I talk about in this blog post from 2015), and to add to an already exciting year, I was named a VMware vExpert for 2017. I’m extremely humbled and honored. Thanks to VMware for recognizing me. I don’t just speak Hyper-V, you know 🙂 This is a nice bookend to my already dual MVP status with Microsoft.

I also have finally found some time to update our Events page with everything I have coming up – which is a lot! Max already put his stuff up, so I’m the slacker here. Besides SQL Saturdays in both Boston and Chicago, I’ll be back at SQLBits doing another (and all new …) Training Day as well as a session, and heading to Ireland for the first time to speak at the SQL Server Ireland User Group. I’m also eyeing something in London the week before (the last week of March) if we can make the stars align, so stay tuned.

We’ve been working on the training schedule for 2017 USA classes and hope to announce things in the next few weeks – we just need to nail down one or two details first.

Hope to see you at one of these upcoming speaking engagements!

2016 in Review, A Few Updates, and Looking Forward

Posted on December 30, 2016 in Book, Classroom, PASS Summit, SQL Server 2008, SQL Server 2008 R2, SQL Server 2016, SQL Server V.Next, Teaching, Virtualization, VMware, Whitepaper, Windows Server 2008, Windows Server 2008 R2, Windows Server 2016

Hello, everyone. Can you believe 2017 will be here in just a few days? 2016 seemed to fly by. It’s been quite a year. I wanted to take the time in my final blog post of the year to recap some of what has happened, talk about what is coming, and give you a few updates, too.

The Elephant in the Room

There is the matter of the book. As many of you are painfully aware (as am I), I officially announced it in 2013. A lot has happened both personally and professionally between then and now. One thing I haven’t talked about, but which came to a head in late 2015, was my health. Without getting into specific details, from January of 2014 until almost the end of 2015, I was not in good shape. I hurt myself and, over time, it got to the point where the pain was so bad I basically couldn’t sit, stand, or lie down. I am a non-“medicate yourself” person, so I didn’t take anything for it. I tried to soldier through it. That was a huge mistake.

I can tell you that the amount of time I spend on the road speaking and going to customers didn’t help, either. There was just no time to stop and get off the road due to work commitments. Schlepping luggage, airplane seats, and everything else exacerbated what was going on. I was shattered by the time I hit hotel rooms. It didn’t help that my hand was messed up as well, which affected not only my ability to type but also my ability to play bass. Since I couldn’t stop using my dominant hand, it took about seven months to heal.

I did my best to put a brave face on things publicly, but if you saw me at PASS Summit 2015, you would have seen me at my low point in terms of how I was feeling. It was hard to miss the Leaning Tower of Allan. Right after PASS Summit, I went back to the doctor and spent the rest of the year and the first few months of 2016 in physical therapy. Luckily, for the first time in nearly two years, I had contiguous time to deal with what was going on with me. Knock on proverbial wood, physical therapy took care of things and, to date, I feel great. I haven’t felt this healthy in years. If you saw me at PASS Summit this year, it was clearly night and day.

What does that have to do with the book? Needless to say, as time dragged on, it became harder and harder for me to get my normal work done, let alone sit for hours on top of that writing. I’m not looking for a pity party or sympathy, nor am I absolving myself of anything book related, but anyone who has experienced excruciating pain to the point of it being debilitating knows what I am talking about. I’m not active/active, you know. All kidding aside, don’t be a martyr like me if you’re feeling bad: take care of it. I let things go to the point where I had no choice and if physical therapy did not work, surgery may have been something I needed to explore. Quality of life became a very real issue. To put a capper on 2016, I’m just getting over bronchitis which has sidelined me for a good portion of December.

So where does that leave things? I’m back on track despite the bronchitis. My spare time over the past few months (which hasn’t been much – we’ve been slammed with the day job, which isn’t 9 to 5 …) has been spent working on the book, which should really, truly be content complete over the next little while barring any unforeseen problems.

Before anyone asks, the book will still be covering SQL Server 2008 R2, especially because Microsoft recently announced that they are (unfortunately) now offering a super duper paid extended support option (Premium Assurance – see this and this) which gives up to 16 years of paid support on that as well as Windows Server 2008 R2. It makes what I’m doing more vital than ever since I’m crossing all the major versions of SQL Server and Windows Server. I’ve also made some other hard choices as to what will and will not be in the book:

  • SQL Server 2016 and Windows Server 2016 are now in scope
  • Yes, there will be public cloud-related content – not just on premises/physical stuff
  • SQL Server v.Next – including SQL Server on Linux – is not in scope. This will be part of the first major update to the book, timeframe TBD since we have no release date for v.Next. If you haven’t been paying attention, the paint is barely dry on SQL Server 2016 and we already have CTPs of v.Next (as of the writing of this blog, we’re up to 1.1).

I’ll wrap up this section with this thought: I never intended for things to go this way. I not only thought I’d be done, but I’d be on the updates by now. The road of good intention wound up being full of potholes which bent my rims and threw my car out of alignment. As I have mentioned before, my 2005 book which was much smaller in scope and size, took 3 years. I’m not happy about the circumstances, but there is light at the end of the proverbial tunnel. They say what does not kill you makes you stronger, right? Again, I’ll reiterate – don’t let health issues build up. Take care of yourself.

Let’s Talk PASS Summit 2016

Once I took care of my health, 2016 really took a big upswing. Besides all of the customer work we did which hasn’t slowed down, some of the highlights of the year included teaching in Australia, precons at SQL Nexus in Copenhagen and SQLBits in the UK, speaking at VMworld in Las Vegas, and the capper of them all: PASS Summit 2016.

I’m very fortunate that I have presented at PASS Summit most years and have had a preconference session for quite a few of the ones in recent memory. I was the first to introduce live labs three years ago, and I try to push the envelope each time my abstracts are accepted. I am glad PASS took a chance on me then – we had no idea how it would play out – and three years later, it keeps getting bigger and better. By now, we have a lot of the logistics down pat since I keep getting selected. Let me be clear – I don’t assume I’ll get a precon, because there are no guarantees. I know I’m lucky.

As with the previous years, we talked about capping the number who could sign up at around 100. There are a few reasons for this, not the least of which is making sure that I have enough proctors (i.e. things are manageable) and the convention center can handle the bandwidth needed. Much to my surprise, at one point when I checked in they had sold over 100 seats – PASS forgot to cap it. We then decided to cap it around 110. People still wanted to sign up, so we upped it to 120. Finally we said the heck with it, and filled the room (capacity: 136).

An additional 36 people does not sound like a lot, but from a backend and management perspective, it is. I’ve never done anything that big with labs. I had to talk to the folks hosting the VMs since everyone gets their own set (i.e. hundreds of VMs – not a small backend to have to account for), we conferred with the convention center, and on it went. I had so many people come up to me before and during PASS Summit telling me they wished they could have gotten in – as far as I know, I think mine was the only sold-out precon on Monday (can’t speak for Tuesday). No pressure, right? I remember one encounter going down the elevator at the Hyatt heading over the morning of the precon. One of the conference attendees saw my badge and made the association. He mentioned how he wanted to get in but couldn’t. I mean, what can you say? I’m flattered and humbled by that demand. I never take any of this for granted, and would give the same energy whether one person showed up or I had that sold-out room of 136.

Things went off with only minor issues (power, which we took care of in a break; same issue as last year), and it was awesome to see that many people doing labs at once. I snapped this picture during the day.

Figure 1. 136 people doing labs. Glorious!

I also had a half day session on what was new for availability in both SQL Server 2016 and Windows Server 2016. It was an expanded version of a talk I had been doing for over a year, aided by the fact that Windows Server 2016 had just been released, so I could demo things I couldn’t before. I had no idea what room I was in, and thought there might be some interest, but there were many good sessions at the same time. Much to my surprise – on the last day of PASS Summit, no less – I was in a huge room (400+), and nearly every seat was filled for most of it. Below is a picture of one side as the room was starting to fill. I’d have to look at years past, but it may have been one of the biggest rooms I’ve spoken in, and definitely one of the fullest. Again, no pressure – just hundreds of people who can skewer you if you suck. Luckily that didn’t happen. I had my best scores ever for a PASS Summit for both the half day and the precon. No complaints, and I can’t say enough how good an event PASS Summit was – and not just for me.

Figure 2. Room filling up for my half day session

 

VMware Whitepapers

In addition to contributing to and reviewing this whitepaper from VMware, I wrote one entitled “Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere” that was published in November. I didn’t blog about it, and some of you may have missed it. It was my first real return to publicly released writing in a long time. It’s not marketing fluff, and I hope you find it useful.

Dual Microsoft MVP

One of the things I am very proud of is that back in July, I was not only re-awarded as a Data Center & Cloud Management (aka Windows Server nee Cluster) MVP, but I was also awarded as a Data Platform (aka SQL Server) MVP. There are other Dual Microsoft MVPs, but it’s nice to be recognized for the two things I do day in and day out.

Selected 2016 Numbers

0 – The number of laptops purchased by me in 2016. Yes, I’m still using the Vaio Z Canvas I got from Japan in June of 2015. More than 18 months is uncharted territory!

3 – The number of noise-cancelling headphones employed by me this year. I got two new pairs of headphones for travel at the end of the year. We’ll see which one stays. The old pair I used has already not made the cut. One of these days I’ll do a blog post on what to consider for noise-cancelling headphones.

4 – The number of bottles of Goober Grape I polished off this year.

5 – The number of countries I visited besides the USA: Australia, Bulgaria, China, Denmark, and the UK.

45 – The number of flights I was on and how old I turned this fall.

100,000 – The approximate number of miles I spent in the air this year.

Too many – The number of (in)famous and influential people (not just entertainment folks) who died.

Training, Events, and Public Speaking in 2017

2017 is already shaping up to be a busy year. I’ll be back in the UK for two weeks at the end of March and early April, teaching my 4-day Mission Critical SQL Server class in London via Technitrain, and then it’s SQLBits 2017, where I’ll also be delivering a Training Day on April 6th. I will most likely be speaking at the London SQL Server User Group again the week I’m teaching my class. Register early – both the class and the Training Day are likely to sell out!

I’ve submitted for a few SQL Saturdays (I usually do about a half dozen a year, give or take), and as those are confirmed, they will be added to the schedule. A few User Groups have approached me, so I’m trying to slot those in as well. Get your requests in early! I hope to speak again at VMworld (fingers crossed), and will, of course, submit to PASS Summit again.

I’m still working on additional public dates (besides London) for my classes, and should have them nailed down in the next few weeks. Once that’s done, they’ll be posted and we’ll run a special. Stay tuned! In the meantime, if you need some training and would prefer us to come onsite, don’t hesitate to reach out. Get on our schedule early before it’s filled up.

A Note of Thanks

Whether you attended one of my classes or preconference sessions, saw me speak in person or online, read some of my writings, or interacted with me in some other way this year – including reaching out to ask where the book is, even if you’re annoyed at me – thank you. I do not take anyone for granted, and without you, none of what I do is really possible.

I would be remiss if I didn’t give a special thanks to my longtime friend and business partner, Max Myrick. 2016 was SQLHA’s best year yet.

Whew!

I only covered some of what happened. 2016 was quite the year personally and professionally, and 2017 is looking better. I hope to see many of you next year, be it in person or online. Happy New Year!

Gotcha for Installing SQL Server 2016 and SSMS on Windows Server 2012 R2

Posted on September 28, 2016 in SQL Server 2016, Windows Server 2012 R2, Windows Server 2016

If you’ve tried to install SQL Server 2016 on Windows Server 2012 R2, you may have run into an issue – KB2919355 may not be installed. This particular Windows update applies to both Windows 8.1 and Windows Server 2012 R2, so if you are trying to install SQL Server 2016 on a desktop running 8.1, you’d encounter this, too. The error in SQL Server Setup can be seen in Figure 1.

Figure 1. SQL Server 2016 cannot install with KB2919355 missing

The new standalone SQL Server Management Studio installation has the same issue – it cannot be installed without KB2919355, as seen in Figure 2.

Figure 2. SSMS also needs KB2919355

In my case, I created a new VM with a fresh installation of Windows Server 2012 R2. I also ran Windows Update to ensure it had everything Windows Server thought it required. Figure 3 reflects this status.

Figure 3. Windows is up to date

As you can see in Figure 4, KB2919355 is not listed as one of the ones WU installed, so it has to be an optional update.

Figure 4. Installed updates

Looking at the list of optional updates in Windows Update in Figure 5, 2919355 is not shown. This means you need to download and install it manually.

Figure 5. Optional updates available through WU

I went to the KB article page for 2919355 (link is below), clicked on the link for the Windows Server 2012 R2 files, and downloaded all of them. I did not look at the installation instructions (note: don’t ever do this … updates are fussy, which is why I am writing this blog post) and plowed ahead with installing the executable associated with KB2919355. Cue the sad trombone sound, as seen in Figure 6.

Figure 6. Cannot install KB2919355

Going back and looking at the instructions, buried in the last step is what I lovingly call an “oh by the way” – you have to install KB2919442 (also not shown as an optional update in WU) first. Once you do that, things are smooth sailing.

So to install SQL Server 2016 on Windows Server 2012 R2, here is the installation order for these fixes (a quick pre-check sketch follows the list):

  1. Download and install the update in KB2919442. This does not require a reboot.
  2. Download and install the update in KB2919355. Note that there are 7 files you can download. To get SQL Server and SSMS installed, you really only need Windows8.1-KB2919355.exe. You may not need the others; clearcompressionflag.exe is only needed before running the KB2919355 install if you run into an issue.
  3. Reboot the server, as KB2919355 will require one once it is done installing. That means if you are going to do an in-place upgrade or install an instance side by side, it will cause an outage to any existing SQL Server installation.
  4. Install SQL Server 2016 and/or SSMS.
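
As a convenience, here is a minimal pre-check sketch (assuming Python 3.7+ is available on the box) that lists which of the two updates are already installed before you kick off SQL Server Setup. It just wraps `wmic qfe`, which is one of several ways to enumerate installed updates on Windows Server 2012 R2.

```python
import subprocess

# The two updates from the list above, in the order they must be installed.
REQUIRED_KBS = ["KB2919442", "KB2919355"]

def installed_hotfixes() -> set:
    """Return the set of installed hotfix IDs reported by 'wmic qfe'."""
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    present = installed_hotfixes()
    for kb in REQUIRED_KBS:
        if kb in present:
            print(f"{kb}: installed")
        else:
            print(f"{kb}: MISSING -- install it before running SQL Server 2016 Setup")
```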

If you are still having issues, you’ve got other problems going on that you will need to investigate. Hope this saves some of you some time.

Note that if you are using Windows Server 2016, you will not encounter this issue. Everything just works. If you want to take advantage of Windows Server 2016 with SQL Server 2016, contact us – we can help get you up and running with features such as Storage Spaces Direct, which I blogged about a few days ago and for which SQL Server support was just officially announced at Ignite.

New SQL Server Benchmark – New Windows Server Feature

Posted on September 23, 2016 in RDMA, S2D, SOFS, SQL Server 2016, Windows Server 2016

It hasn’t been widely publicized yet in SQL Server circles, but Intel just published a brand new benchmark with physical SQL Server 2016 instances and Windows Server 2016. There are a lot of good numbers in there, but the one that should raise an eyebrow (in a good way) is 28,223 transactions per second.

How did they do this? They used a new feature of Windows Server 2016 Datacenter Edition called Storage Spaces Direct (S2D). S2D is a new way to deploy a WSFC using “shared storage”, and it can be used either with Hyper-V VMs or with SQL Server FCIs running directly on physical hardware. While in some ways it can be compared to VMware’s VSAN or something like Nutanix, the reality is that S2D is a different beast and can be accessed by more than just virtual machines (hence bare metal SQL Server 2016). I’ve demoed S2D in the past with older builds of the Windows Server 2016 Technical Previews, and I can’t wait to get my hands on the RTM bits soon.

S2D allows you to configure very fast local storage, such as NVMe-based flash/SSD, in each of the WSFC nodes and have those nodes then utilize it (no really … local storage for things like FCIs, and not just TempDB). Note in the picture underneath the specs that the hardware is using RDMA NICs. In the immortal words of Jeffrey Snover: “don’t waste your money buying servers that don’t have RDMA NICs”. This is true in the Windows Server world on physical hardware. VMware does not support RDMA or InfiniBand as of now, but they recently added support for 25 or 50 Gb networks in ESXi 6.0 Update 2. It’d be great if VMware supported RDMA since it would really help with vMotion traffic. Time will tell!

UPDATE: It does look like VMware is edging towards RDMA – see here and here for public evidence.

So what is RDMA? RDMA stands for Remote Direct Memory Access, which is a very (VERY) fast way to do networking. You can bingoogle to find more information, such as the fact that there are different flavors (RoCE and iWARP), and that some say InfiniBand and RDMA are one and the same. RDMA connectivity can revolutionize your storage connectivity and is great for things like Live Migration (and in the future, hopefully vMotion) networks. Its massive bandwidth and speed enable things like converged/hyperconverged solutions. Hyperconverged is the latest marketing buzzword bingo term that every company uses a bit differently, so you’ll want to understand how each one is using it. Here’s the bottom line, though: fast networking is going to be the key to most things going forward, including storage access. If you’re still on 1Gb or even just doing 10Gb, you should really consider looking at faster options.
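
If you’re not sure whether the NICs you already own are RDMA-capable, Windows will tell you. Here is a minimal sketch, assuming a Windows Server 2012 or later box where you can run an elevated session – it simply wraps the built-in Get-NetAdapterRdma cmdlet.

```python
import subprocess

def rdma_status() -> str:
    """List network adapters and whether RDMA is enabled on each.

    Get-NetAdapterRdma ships with Windows Server 2012 and later; run this
    from an elevated session on the server you want to check.
    """
    cmd = [
        "powershell", "-NoProfile", "-Command",
        "Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize",
    ]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(rdma_status())
```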

I’ve been talking about RDMA and Scale Out File Server (SOFS) with SQL Server for years. SOFS, when implemented right, uses RDMA. SQL Server natively supports RDMA and SOFS – there’s nothing that needs to be done other than using SMB 3.0 (well, SMB Multichannel and SMB Direct) to store your databases and use something like SOFS to serve it up. In fact, a few years back, I designed and helped to implement a hybrid Hyper-V/physical FCI solution for a customer using RDMA and SOFS.  I remember the meeting where I proposed the RDMA aspect of the architecture – people looked at me like I had two heads because it is a left field concept in the SQL Server world. Six months later when we got into a lab, none of us had seen such speed and most of the concerns and doubts faded away. Having seen and played with S2D for over a year now, I’ve seen the potential for how it can be used with SQL Server, and Intel’s new benchmark confirms it. If you care about pure performance with SQL Server, this is going to be an awesome architecture (SQL Server + S2D).

Ignite is just around the corner with the official Windows Server 2016 launch. S2D is here. If you want to take advantage of the speed and power of Windows Server (including 2016), RDMA, S2D, SOFS, Hyper-V, or vSphere (especially when RDMA support arrives) for SQL Server, contact us. It’s a brave new world, and SQLHA can guide you through it.