It hasn’t been widely publicized yet in SQL Server circles, but Intel just published a brand new benchmark using physical SQL Server 2016 instances on Windows Server 2016. There are a lot of good numbers in there, but the one that should raise an eyebrow (in a good way) is 28,223 transactions per second.

How did they do this? They used a new feature of Windows Server 2016 Datacenter Edition called Storage Spaces Direct (S2D). S2D is a new way to deploy a WSFC using “shared storage”, and it can be used either with Hyper-V VMs or with SQL Server FCIs running directly on physical hardware. While in some ways it can be compared to VMware’s VSAN or something like Nutanix, the reality is that S2D is a different beast and can be accessed by more than just virtual machines (hence bare metal SQL Server 2016). I’ve demoed S2D in the past with older builds of the Windows Server 2016 Technical Previews, and I can’t wait to get my hands on the RTM bits soon.
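To make that concrete, here is a minimal sketch of standing up S2D on a WSFC from PowerShell. The node and cluster names (S2DNode1–S2DNode4, S2DCluster) and the volume size are hypothetical placeholders, not anything from Intel’s benchmark:

```powershell
# Validate the nodes, including the S2D-specific tests (hypothetical node names)
Test-Cluster -Node S2DNode1,S2DNode2,S2DNode3,S2DNode4 `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the WSFC with no shared storage; S2D will pool the local disks instead
New-Cluster -Name S2DCluster -Node S2DNode1,S2DNode2,S2DNode3,S2DNode4 -NoStorage

# Enable S2D - it claims the eligible local NVMe/SSD/HDD devices and builds the storage pool
Enable-ClusterStorageSpacesDirect

# Carve a Cluster Shared Volume out of the pool for SQL Server to use
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "SQLData" `
    -FileSystem CSVFS_ReFS -Size 2TB
```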

S2D allows you to configure very fast local storage such as NVMe-based flash/SSD in each of the WSFC nodes and pool that storage across the cluster (no, really … local storage for things like FCIs, and not just TempDB). Note in the picture beneath the specs that the hardware uses RDMA NICs. In the immortal words of Jeffrey Snover, “don’t waste your money buying servers that don’t have RDMA NICs”. That is true in the Windows Server world on physical hardware. VMware does not support RDMA or InfiniBand as of now, but they recently added support for 25Gb and 50Gb Ethernet in ESXi 6.0 Update 2. It’d be great if VMware supported RDMA, since it would really help with vMotion traffic. Time will tell!

UPDATE: It does look like VMware is edging towards RDMA; see here and here for public evidence.

So what is RDMA? RDMA stands for Remote Direct Memory Access, which is a very (VERY) fast way to do networking: the NICs move data directly between the memory of two servers, bypassing much of the OS networking stack. You can bingoogle for more information; there are different flavors (RoCE and iWARP, which bring RDMA to Ethernet), and some treat InfiniBand and RDMA as one and the same. RDMA can revolutionize your storage connectivity and is great for things like Live Migration (and in the future, hopefully vMotion) networks. Its combination of massive bandwidth and low latency is exactly what makes converged/hyperconverged solutions feasible. Hyperconverged is the latest marketing buzzword bingo that every company uses a bit differently, so you’ll want to understand how each one is using it. Here’s the bottom line, though: fast networking is going to be the key to most things going forward, including storage access. If you’re still on 1Gb, or even just doing 10Gb, you should really consider looking at faster options.
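If you want to check whether your servers are even in the game, a couple of standard cmdlets (available in Windows Server 2012 R2 and later) will tell you; your adapter names will obviously differ:

```powershell
# Is the NIC RDMA-capable, and is RDMA enabled on it?
# RoCE and iWARP adapters both show up here.
Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

# Does the SMB client see those interfaces as RDMA-capable? SMB Direct rides on this.
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, RssCapable, Speed
```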

I’ve been talking about RDMA and Scale-Out File Server (SOFS) with SQL Server for years. SOFS, when implemented right, uses RDMA. SQL Server natively supports SOFS and, through it, RDMA – there’s nothing to do beyond storing your databases on SMB 3.0 (well, SMB Multichannel and SMB Direct, the SMB-over-RDMA piece) and using something like SOFS to serve them up. In fact, a few years back, I designed and helped implement a hybrid Hyper-V/physical FCI solution for a customer using RDMA and SOFS. I remember the meeting where I proposed the RDMA aspect of the architecture – people looked at me like I had two heads, because it was a left-field concept in the SQL Server world. Six months later, when we got into a lab, none of us had ever seen such speed, and most of the concerns and doubts faded away. Having seen and played with S2D for over a year now, I’ve seen the potential for how it can be used with SQL Server, and Intel’s new benchmark confirms it. If you care about pure performance with SQL Server, this is going to be an awesome architecture (SQL Server + S2D).
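To show just how little SQL Server needs to know about any of this, here is a hedged sketch assuming a SOFS cluster exposing a continuously available share at \\SOFS\SQLData and an instance named SQLFCI01 (both hypothetical names). Creating a database on the share is just a UNC path, and you can then confirm that SMB Multichannel/SMB Direct kicked in:

```powershell
# Put a database directly on the SMB 3.0 share - supported since SQL Server 2012.
# Invoke-Sqlcmd ships with the SQL Server PowerShell module.
Invoke-Sqlcmd -ServerInstance "SQLFCI01" -Query @"
CREATE DATABASE SalesDB
ON (NAME = SalesDB_data, FILENAME = '\\SOFS\SQLData\SalesDB.mdf')
LOG ON (NAME = SalesDB_log, FILENAME = '\\SOFS\SQLData\SalesDB.ldf');
"@

# From the SQL Server node, inspect the connections to the share.
# "Client RDMA Capable : True" means SMB Direct is doing the heavy lifting.
Get-SmbMultichannelConnection
```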

Ignite is just around the corner with the official Windows Server 2016 launch. S2D is here. If you want to take advantage of the speed and power of Windows Server (including 2016), RDMA, S2D, SOFS, Hyper-V, or vSphere (especially if and when RDMA support arrives) for SQL Server, contact us. It’s a brave new world, and SQLHA can guide you through it.