
Windows Server Is Still a Thing and SQL Server Still Runs on Top of It

By Allan on October 18, 2018 in Hyper-V, Linux, SQL Server 2017, SQL Server 2019, Windows Server, Windows Server 2019

If you’ve been in hibernation, today you woke up to a world where Microsoft has embraced open source and Linux. What was once unthinkable is now happening. What is going on? Why am I even talking about this?

Since the introduction of SQL Server 2017 and its support for Linux-based deployments, I’ve had a steady stream of questions from C-levels on down to DBAs asking, in essence, this: “Do I need to abandon SQL Server on Windows Server and learn Linux?” I would use something stronger if this were a casual conversation, but the answer is an emphatic “NO!” SQL Server still runs just fine and is supported on Windows Server (including Windows Server 2019, which was just released). Support is not ending any time soon. Linux is just another option, and there may be enhancements specific to each platform because of their differences. It’s not an “either/or” thing. So breathe, OK? If you have a use case for Linux, by all means deploy SQL Server on it.

Just last month at Ignite, while working the SQL Server area, I heard a lot of these same statements, but this time from a largely non-SQL Server-centric crowd. Sure, SQL Server 2019 deepens the Linux investment with features like AGs on containers and the Big Data Clusters story, which right now is based on Linux-based, not Windows-based, containers. I am hoping Windows-based containers will eventually be supported for SQL Server in these advanced configurations, but remember that containers on Windows are much newer than containers on Linux. Let’s see how this shakes out before you start having nervous sweats. The Windows Server container story is much better now than it was.

This isn’t the only thing I’ve been hearing. There’s a rumbling in some corners that Microsoft is abandoning Windows Server and it’s all about Azure. That’s simply not true. Do I think Microsoft’s marketing has been amiss the past few years with regard to Windows Server, which has had the unfortunate effect of de-emphasizing it? Yes. I may be a Microsoft MVP, but that’s my honest opinion. Windows Server, quite frankly, hasn’t seen a lot of love, and I want them to correct that. There are some awesome features such as Storage Spaces Direct (which I’ve blogged about, and which has its challenges with SQL Server, but it’s still a big win for Windows Server). People just don’t know about them.

Let’s be honest: there has been a LOT of marketing/push for Azure, and I think people are hearing Azure in a way that says to them Microsoft does not care about Windows Server. Nothing could be further from the truth. As a Cloud and Datacenter Management MVP (subcategory High Availability, i.e. Cluster), I’m a Windows Server MVP in addition to my SQL Server MVP award, so I see both sides of the coin. I can tell you the Windows Server PMs give a hoot, and there are a LOT of improvements in Windows Server 2019. Here’s a link to a blog post that links to many of the Ignite presentations covering what’s new in Windows Server 2019.

While we’re at it, there’s also a negative impression around Hyper-V. As I am also a VMware vExpert, I will be the first to tell you that I see more vSphere than I do Hyper-V. I am fluent in both hypervisors and have helped customers implement them. I conducted an informal poll yesterday on Hyper-V usage and, by golly, there are people using it in the wild for on-premises virtualization of SQL Server. That said, the predominant vendor I see is VMware; I’d be lying if I said otherwise. Hyper-V is “just another” feature of Windows Server, while vSphere is VMware’s product. I think that makes a fundamental difference in perception.

At the same time, I can see where these perceptions are coming from. I hope the SQL Server team ups its Windows Server game so people stop thinking support for it is going away, and Windows Server needs to be talked about more in general. I love Microsoft, but I’ll praise when deserved and criticize when needed. I’m doing the latter here, but it comes from a place of caring.

Do you feel SQL Server is de-emphasizing Windows Server? Do you see that as a problem if it winds up being the case a few years down the road? Do you think Microsoft is abandoning Windows Server, or that the overemphasis on Azure has made it seem irrelevant? Do you think Hyper-V is still viable? Let me know your thoughts below.

We Need RDMA for Availability Groups and in All Public Clouds

By Allan on September 25, 2018 in Availability Groups, AWS, Azure, FCI, GCP, IaaS, Linux, PaaS, Public Cloud, RDMA, SQL Server, Windows Server

Hello from Microsoft Ignite on day two.

Yesterday was a big day between all of the Windows Server 2019, Azure, and SQL Server 2019 announcements. Others have covered the new features at a glance (here’s the official list from Microsoft). I’ll get into some of those things over the next few weeks when I get some time to play with the SQL Server 2019 bits and discuss things like SQL Server Big Data Clusters. However, now that SQL Server 2019 CTP 2.0 has been officially announced, there’s something I want to address: the network transport for Always On Availability Groups (AGs) and how it must be improved.

Let’s Talk Disks

One of the keys to success for any AG configuration is the speed of the disks on all replicas – primary or secondary. If you want synchronous data movement, any secondary replica has to be as fast as or faster than the primary to keep up (network speed matters, too, and I’ll address that in a minute). If you add read-only workloads to a secondary replica, that increases disk usage, so again, speed matters.
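
If you want to see whether your secondaries are actually keeping up, the AG DMVs tell the story. Here’s a minimal sketch using PowerShell and the SqlServer module; the instance name is a placeholder for your primary replica.

```powershell
# A quick look at whether secondaries are keeping up, run against the primary.
# Requires the SqlServer module (Install-Module SqlServer); the instance name
# is a placeholder for your environment.
$query = @"
SELECT ar.replica_server_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,  -- KB of log waiting to be sent to the secondary
       drs.redo_queue_size       -- KB of log waiting to be redone on the secondary
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
"@
Invoke-Sqlcmd -ServerInstance 'SQLPRIMARY' -Query $query | Format-Table -AutoSize
```

If the log send or redo queues keep growing under load, the secondary (or the pipe to it) can’t keep up.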

We are beyond simple SSDs; the real speed is no longer there. Most people are looking at NVMe drives these days, which are faster flash-based drives than “traditional” SSDs. However, there is a new (yet old) kid in town: persistent memory, aka storage class memory, aka PMEM. “Straight” memory is always going to be faster than going to disk because of the way systems are architected internally. Back in the day we had things like RAM SANs, and the idea of using memory for storage is coming back around in the form of PMEM. SQL Server 2016 initially supported PMEM (NVDIMM only, and I think just NVDIMM-N) specifically for tail-of-the-log caching (see Bob Dorr’s blog post). Capacity for persistent memory was a bit small, so it made sense. SQL Server 2016 (and later) on Windows Server also supports PMEM if it is formatted as block-based storage (i.e. configured like a normal disk to Windows Server).
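
For the curious, here’s roughly what exposing PMEM as block storage looks like on Windows Server 2019 with the built-in PersistentMemory cmdlets. Treat it as a sketch – cmdlet availability depends on your Windows Server version and hardware, and the region and disk numbers below are assumptions.

```powershell
# A sketch of exposing persistent memory as block storage on Windows Server 2019
# using the built-in PersistentMemory cmdlets. The region ID and disk number
# are assumptions; check your own inventory first.
Get-PmemPhysicalDevice            # inventory the NVDIMMs in the server
Get-PmemUnusedRegion              # find regions not yet carved into disks
New-PmemDisk -RegionId 1          # turn a region into a logical PMEM disk

# The new PMEM disk then shows up like any other disk; suppose it's disk 2.
# Leaving DAX off ($false) gives block mode, which is what SQL Server 2016 and
# later support on Windows Server for regular data and log placement.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -IsDAX $false
```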

There are two PMEM enhancements in SQL Server 2019:

  • Support for Intel Optane
  • For Linux, SQL Server data, transaction log, and In-Memory OLTP checkpoint files can now be placed on PMEM in enlightened mode

Many newer physical servers have slots for persistent memory; it’s not a passing fad.

What’s the point here, Allan? Whether done right with newer NVMe drives, PMEM, or both, you can get blazing fast IOPS for SQL Server. This is good news for busy systems that want to use AGs. However, there is a looming problem especially with these speeds: the network.

Why Is Networking Your Next Big Problem?

The stage is set: you’ve got blazing fast storage and a busy database (or databases) in an AG, but your network is as slow as two tin cans connected by a string. It won’t matter if your disks came straight from setting a record at the Nürburgring. A slow network pipe will choke the ability to keep a synchronous AG in a synchronized state, even with compression enabled. It’s that simple. The same could be said of seeding the replicas for a large database.
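
If you suspect the network is the choke point, the AG transport performance counters are a quick sanity check. A minimal sketch, assuming a default instance (a named instance uses MSSQL$&lt;InstanceName&gt; in place of SQLServer in the counter paths):

```powershell
# Watch the AG transport counters on the primary for signs of network pressure.
# Sustained flow control time and transaction delay while bytes sent to
# transport plateaus is a classic sign the pipe is the bottleneck.
$counters = @(
    '\SQLServer:Availability Replica(*)\Bytes Sent to Transport/sec',
    '\SQLServer:Availability Replica(*)\Flow Control Time (ms/sec)',
    '\SQLServer:Database Replica(*)\Transaction Delay'
)
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples } |
    Select-Object Path, CookedValue |
    Format-Table -AutoSize
```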

Enter RDMA

I’ve talked about Remote Direct Memory Access (RDMA) in the past in two different blog posts (New SQL Server Benchmark – New Windows Server Feature and Windows Server 2012 R2 for the DBA: 10 Reasons It Matters), so I’m not going to rehash it in any depth. TL;DR it’s really, really fast networking that, at least on Windows Server, is lit up automatically when you have everything in place. However, not everything can use RDMA. Things such as SQL Server’s tabular data stream (TDS) need to be enabled for use over RDMA, just as Live Migration traffic in Hyper-V and SMB 3.0 (SMB Direct) were. SMB Direct can be used with FCIs and has been supported for some time; it’s part of the benchmark linked above.
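
If you want to verify what “lit up automatically” looks like on your own boxes, these built-in cmdlets show whether RDMA and SMB Direct are actually in play:

```powershell
# Quick checks for RDMA/SMB Direct on Windows Server; all of these cmdlets
# ship in the box, no extra modules required.
Get-NetAdapterRdma | Where-Object Enabled   # NICs with RDMA enabled
Get-SmbClientNetworkInterface |
    Where-Object RdmaCapable                # interfaces SMB Direct can use
Get-SmbMultichannelConnection               # the Client RDMA Capable column
                                            # confirms SMB Direct is in play
```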

Some good news, though:

  • Windows Server and Linux both support RDMA (I’m not sure about containers, though … I’m guessing not, but I’d need to dig more)
  • Both Hyper-V (Build 1709 or later) and ESXi (6.5 or later) now support RDMA inside guest VMs. The bad news: ESXi only supports it for Linux.

My Call(s) to Action

1. The Windows Server and SQL Server development teams need to work together to enable RDMA for AG traffic on Windows Server (which would most likely be Windows Server 2019 in a patch, or later; don’t hold your breath for Windows Server 2016), and SQL Server needs to get RDMA working on Linux.

2. VMware needs to support Windows Server workloads with their PVRDMA adapters. VMware really is missing an opportunity here.

3. We need RDMA for IaaS VMs in the public cloud that can be used with SQL Server. This is for two reasons: a) Storage Spaces Direct (S2D) for FCIs, and b) AGs, if RDMA traffic is enabled. For Azure, this would be enabled by the Azure compute and/or networking teams. Azure has some IaaS SKUs with RDMA networking, so it’s possible, but they are aimed at HPC, not general use like the D- and G-series VMs. There’s no RDMA that I can see in EC2 or GCP, so I think those are pipe dreams, but for those who want FCIs, it sure would be great to be able to deploy S2D right now and have it work well, and then also have it work for AGs down the road. Azure is our best hope here.

4. Assuming #1, Azure needs to enable RDMA so that Azure SQL Database and Azure SQL Database Managed Instance can take advantage of RDMA, and make sure it does things like work across Availability Zones in a region.

That’s it. I’m not asking for the sun, moon, and stars. Most, if not all, of this is doable. There’s already precedent for supporting RDMA for SQL Server via FCIs on Windows Server, and that also needs some cloud love if you want to use S2D up there. RDMA needs to be brought over the finish line for all of the SQL Server availability scenarios, regardless of platform. In a cloud-first world, we should not be saddled with slow inter-server connectivity.

Where to Find Allan at Microsoft Ignite

By Allan on September 21, 2018 in Conference, Ignite

Next week (September 24 – 28) I’ll be attending Microsoft Ignite in Orlando. I’ll be around Monday – Thursday, and will be working the Data & AI solution area in the Microsoft Showcase. Here’s when I’m scheduled to be there:

  • Monday, September 24 4:45 – 7:30 PM
  • Tuesday, September 25 12:00 – 2:00 PM and 5:00 – 6:00 PM
  • Wednesday, September 26 9:30 AM – 2:00 PM and 5:00 to 6:00 PM
  • Thursday, September 27 11:30 AM – 1:15 PM and 3:15 – 5:15 PM

Chances are I will also be hanging around the Data & AI or Windows Server booths even when not scheduled, so you’ll find me. Also, feel free to contact me via the SQLHA website or the Ignite app.

I look forward to seeing some of you next week.

Classes in London – October 2018

By Allan on September 18, 2018 in Availability Groups, Cloud, Teaching

Happy Tuesday, everyone!

If you haven’t seen, I’m going to be coming to London next month and teaching two classes. It’s been a few years since I’ve done full courses in the UK, and I’m happy to be partnering with Neil Hambly’s company DataMovements to bring these classes over the pond. The two classes are:

Until September 28th, there is a 20% discount on the pre-VAT price. There will be a 10% discount for two weeks after that.

Both classes will feature my (in)famous lab exercises, which I’ve incorporated into my classes for years, so you will get hands-on experience as well as real-world, practical instruction.

Hope to see you there.

Don’t Let a Natural Disaster Be One For Your Systems

By Allan on September 13, 2018 in Disaster Recovery, High Availability

Hurricane Florence, heading towards the Carolinas here in the US and dominating a lot of the news, puts many things into the limelight – not the least of which is this question: is your business ready if such an event happens in your area?

Every part of the world has challenges with Mother Nature in some way. Here in the Northeastern US, we generally only see Nor’easters, which can knock out power; the West Coast certainly gets earthquakes and there is a potential tsunami threat in places; and so on. You get the idea. There is only so much you can do in a physical data center to counteract all of this. Man-made events, including hacking, also fall under the disaster recovery category, but those are things you can generally defend against in most (but not all) scenarios. You cannot stop a hurricane coming at 140 miles per hour; you can have redundant links to prevent a network outage if your telco cuts a trunk.

The reality is you can never protect against every single scenario, planned or unplanned, but you can do your best to ensure that once the event is gone, people are able to get into work, and life starts to get back to relative normal, you have a business to come back to. The famous newscast shots of people boarding up houses and businesses show one way to do this; the goal is to minimize physical damage. But do you:

  1. Have a plan to properly shut down your systems and bring them back up?
  2. Have a way to restore or rebuild your physical systems/servers in the event they are destroyed?

If the answer to both of these is not a resounding “Yes!”, that’s a problem. For over 20 years I’ve been in the availability business, helping customers of all sizes, from small shops to large enterprises. FCIs, AGs, log shipping, etc. – all are great features. But when floods take out your data center, what do you have? For the most part, dead servers that most likely need to be replaced (and possibly the data center, too). You need to start with backups and/or the software to rebuild those systems.

The good news is that with the rise of the public cloud (Amazon’s various offerings, Microsoft’s Azure, GCP), disaster recovery is within almost anyone’s reach. Even just storing backups in “cold” storage up in the cloud makes them available in a way that was not an option not too long ago. Most vendors, including Microsoft, can get you the software you’re licensed to use via their websites; no longer are you relying on DVDs. Heck, you can build systems up in the public cloud with IaaS VMs that extend your on-premises solutions, and flip to them in the event your main data center is not online.

In a worst-case scenario, assuming your SQL Server databases are not hundreds of terabytes or petabytes, back that stuff up to an external disk and take it with you. Of course, you should protect it properly (i.e. encrypted backups, password protected, etc.) since you do not want sensitive data falling into the wrong hands, but for heaven’s sake DO SOMETHING!
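
To make that concrete, here’s a minimal sketch of an encrypted, compressed backup using the SqlServer PowerShell module. The server name, database name, path, and certificate name are all placeholders, and it assumes a certificate called BackupCert already exists in master on the instance.

```powershell
# A minimal sketch of an encrypted, compressed backup with the SqlServer
# module. 'SQLPROD01', 'SalesDB', the path, and 'BackupCert' are placeholders;
# the certificate must already exist in master (and be backed up separately,
# or you will never restore this backup anywhere else!).
Import-Module SqlServer
$encryption = New-SqlBackupEncryptionOption -Algorithm Aes256 `
    -EncryptorType ServerCertificate -EncryptorName 'BackupCert'
Backup-SqlDatabase -ServerInstance 'SQLPROD01' -Database 'SalesDB' `
    -BackupFile 'E:\OffsiteBackups\SalesDB_Full.bak' `
    -CompressionOption On -EncryptionOption $encryption
```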

Do all of these things have costs, especially the cloud-based solutions? You bet! In no way am I claiming any of this is free, but what is the cost of bringing your business back online? What is the cost if it cannot come back online? The cost of “cold” storage in the cloud is much less than never coming back.

If you’re in the path of Florence, I truly hope you are safe and that its effects are not devastating. If you want to ensure your business is resilient, it’s not too late to start thinking about how to protect yourself in a worst-case scenario. Contact us today to devise the right disaster recovery strategy for business continuity. We can help you minimize and possibly eliminate your downtime.