Happy Friday, everyone. It’s been a crazy few weeks between heading over to the UK to speak at SQLBits, quite a bit of customer work, and then being in Redmond this week for the MVP Summit. I had a great Training Day at Bits, and if you ever get the chance to go, Bits should be one of your SQL Server destinations. I learned quite a bit at MVP Summit this week. There’s a lot I wish I could talk about, but all in good time.
Speaking of training, I’ll be announcing 2018 classes and dates soon. There are some things I need to button up on my end before I do. That said, there is one class I announced recently that some of you may not know about. For the first time in a few years, I’ll be teaching a one day class on Friday, April 6, near home in the Boston area at the Microsoft office in Burlington, MA. It’ll be nice to skip the plane and just drive about 15 minutes up the road. It came about because the New England SQL Server group approached me, and I am happy to do it with them.
The class I’ll be teaching is “Planning SQL Server Availability Solutions in a Physical and Cloudy World”, and yes, there will be a lab. The price of the class is $250, but through the end of today, it is $50 off, knocking the price down to $200. If you register after today but before the end of Friday, March 16, you’ll get $25 off, which brings the price to $225. For information and how to register, click here.
I’m proud to announce that the Mission Critical Moment is now live on SQLHA. You’re probably asking yourself, “So Allan, what exactly is the Mission Critical Moment?”
Max and I have been wanting to add some form of video content for quite some time, but wanted to think about the best way to put it out there. Throwing up content for the sake of it doesn’t work for us. I’ve done my share of recorded videos for other folks in the past, so I’m definitely no stranger to pre-recorded, non-live stuff.
Max and I came up with some guiding principles:
- The videos must be short (under 15 minutes, ideally 5 – 7), focused, lively, and easily consumable. In other words, they are bite-sized morsels/nuggets where you don’t have to carve out long lengths of time to watch. They also need to be, where appropriate, a bit lighthearted. Not every Mission Critical Moment will be super serious, but each will tackle a specific tip, trick, or bit of information that is the difference between up and down.
- The video content has to be free with no strings attached. You do not need to sign up for our newsletter to see the Mission Critical Moment, nor do you have to create a login to see these behind a gated wall. If you want to sign up for our newsletter, feel free to do so – we’d love that, but you shouldn’t have to be “part of the club” to see the Mission Critical Moment.
- The videos shouldn’t require action (within reason) on your part to do anything other than watch. Something like asking you to download it was out of the question for us.
- The videos should be easy to find. No digging around on YouTube or anything like that, which meant hosting them on our site directly.
- There will be at least one per month.
What are you waiting for? Click here to see #1. The first Mission Critical Moment was a lot of fun to do and its topic is something I am truly passionate about.
The Mission Critical Moment is the first in a line of exciting things SQLHA is rolling out in 2018.
Sometimes when I speak, or in some of my writings, I discuss the cost of downtime and how knowing that number can help you devise a better solution. That number is often company-specific, and sometimes industry-specific. For example, a company processing credit cards may take a financial hit if it cannot process a transaction fast enough or, worst case, at all. That adds up quickly even in a five or ten minute outage. Processing a credit card transaction is not the same as loss of life in a hospital, which is why each system and its solution must be accounted for individually.
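The arithmetic behind a downtime cost estimate is simple enough to sketch. Here is a hypothetical back-of-the-envelope version; all of the figures and the function name are illustrative assumptions, not numbers from any real company:

```python
# Hypothetical back-of-the-envelope downtime cost estimate.
# All figures are made up for illustration; plug in your own numbers.

def downtime_cost(revenue_per_hour, outage_minutes, penalty_per_incident=0.0):
    """Estimate the direct cost of a single outage."""
    lost_revenue = revenue_per_hour * (outage_minutes / 60.0)
    return lost_revenue + penalty_per_incident

# A payment processor doing $120,000/hour that is down for 10 minutes,
# plus a $5,000 contractual penalty for the incident:
cost = downtime_cost(revenue_per_hour=120_000, outage_minutes=10,
                     penalty_per_incident=5_000)
print(f"${cost:,.2f}")  # → $25,000.00
```

Even this toy version makes the point: a “short” outage has a very real price tag, and that number is what justifies (or caps) what you spend on the availability solution.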
However, as of this week, if you have a company or work in the UK, things just got a whole lot more interesting. The UK government officially released a statement on January 28 which affects “critical industries”. Long story short: if you fall under the classification, which seems limited right now to energy, transport, water, and health firms, you could be fined up to £17 million (about $24 million US at today’s exchange rate) in the event of a cyber attack taking you down. It was the WannaCry outages that precipitated the response (as an example, FedEx says WannaCry cost it about $300 million US). Remember this doozie from British Airways? That kind of outage is also covered under the new Network and Information Systems (NIS) Directive; it’s not just about security, but also includes things like power outages, hardware failure, and environmental hazards.
The NIS Directive is effective as of May 10, 2018, and is essentially based on the Consultation on the Security of Network and Information Systems Directive from August 2017; the latest outcome is the just-published document Security of Network and Information Systems: Analysis of responses to public consultation. I wouldn’t be surprised to see other places around the world adopt a similar stance. For some, this may proverbially add insult to injury, since everyone is already dealing with GDPR, which also goes into effect in May 2018.
I’ve always talked about how security is a key component of availability. The UK government is literally putting their money where their mouth is. The NIS Directive isn’t meant to start with fines. The press release states the following:
Fines would be a last resort and will not apply to operators which have assessed the risks adequately, taken appropriate security measures and engaged with regulators but still suffered an attack.
That is actually good news – it’s not shoot first and ask questions later. However, it means you need to take the right steps to be prepared so you can avoid fines if possible. That includes things like patching servers, and having a strategy to do so in a timely manner is going to matter. Things like the recent Spectre/Meltdown chip flaws (I put everything you need to know as it relates to SQL Server in one place here) will not be a “kick the can down the road” exercise. To that point, I’m still seeing people saying they don’t need to worry about patching for Spectre and Meltdown. YOU DO. Yes, it sucks that you may see a performance hit, but would you rather be down instead? I do not think so. I’d rather be slower and up than down and out.
It is always better to be proactive than reactive, and SQLHA can certainly help you assess where you are. We can help address and mitigate issues related to availability and disaster recovery (which would help with things like accounting for power outages and hardware failure), and also devise realistic patching strategies that work. Max and I have done these types of things for some of the largest systems in the world over the course of our careers. It doesn’t matter if you are a small company or one of the biggest in the world – we’re happy to help! Just reach out.
Is anyone else bothered by the word “serverless” when it comes to computing – especially in the cloud? The workload you are running, website you are surfing, or bauble you are buying is being served up somewhere on a backend. That backend is made up of servers even if they are not in your own data center. There’s no magic compute dust at work.
Having said that, infrastructure as a service, or IaaS, is largely based on you accessing servers you configure and control on a backend. If you’re using Azure, AWS, GCP, or any of the other cloud platforms, it’s a virtual machine (VM) running on a hypervisor. So if your company is running ESXi, Hyper-V, Xen, or another hypervisor on premises and you have been running VMs, what you would be using in the cloud is the same … just more abstracted from you.
The problem, as we saw with on premises virtualization, is sizing. When you want to start doing IaaS-y things in the cloud, you need to know your capacity requirements to rightsize. Why? If you don’t, you will either overprovision (costing you money) or undersize and get poor performance, which means you’ll need to spend more money to fix the problem. When you own the servers and the platform on premises, it is usually easier to correct this problem – but not always; virtualization was not a panacea. Over the years, both Max and I, as part of working with customers, have seen virtualized SQL Server environments that were not rightsized, and it caused quite a bit of agita.
The whole premise of virtualization and IaaS in the cloud (I’ll touch on other cloud-y things in a minute) is that you can give things the resources they need. When we went through the waves of consolidation in the mid-2000s, which opened the door to virtualization later, a lot more care was put into those consolidations. Early virtualization efforts were often done via physical to virtual (P2V) conversions, whereby if you had a server with P processors and M amount of memory, that’s what the VM was assigned. That’s not rightsizing; that’s lift and shift. You may have been able to sunset the physical hardware, but that’s about it.
To properly rightsize an environment, you need to baseline and benchmark your servers and applications to accurately know what resources they are using. That also allows you to understand how the environment is growing so you can plan for the future and have the capacity for that, too. Without that information, you might as well lick your finger, stick it in the air, and try to see which way the wind is blowing, because you certainly won’t know what to get as you transition to the public cloud providers. Using Azure, AWS, or GCP is a much more viable option for many folks, but as stated above, if you don’t know what size IaaS VM or storage to select, you will run into many of the same problems that early SQL Server virtualization attempts did at many companies. We help our customers all the time with capacity management; it’s very important for the long term health of your deployments.
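To sketch what “knowing your numbers” can look like in practice: one common approach (an assumption here, not a formula from this post) is to size for a high percentile of your observed baseline plus some growth headroom, rather than for the raw peak or the average. The function name, sample data, and headroom factor below are all hypothetical:

```python
# Hypothetical rightsizing sketch: size for a high percentile of the
# observed baseline plus growth headroom, not the raw peak or the average.

def rightsize(samples, percentile=90, growth_headroom=1.2):
    """Return a capacity target from baseline samples (e.g., IOPS or CPU%)."""
    ordered = sorted(samples)
    # Nearest-rank percentile: small and dependency-free.
    rank = max(0, int(round(percentile / 100.0 * len(ordered))) - 1)
    return ordered[rank] * growth_headroom

# Imagine hourly IOPS samples collected during a baselining window;
# note the single 4,000 IOPS spike that would skew a peak-based estimate:
iops_samples = [800, 950, 1200, 4000, 1100, 900, 1500, 1300, 1250, 1000]
target = rightsize(iops_samples)
print(f"Provision roughly {target:,.0f} IOPS")  # → Provision roughly 1,800 IOPS
```

The design point is that a single outlier spike doesn’t force you to pay for top-end storage all month, while the headroom factor leaves room for the growth you measured. A real engagement would use a much longer baseline window and per-workload judgment, of course.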
One thing the cloud providers do that we often see many on premises customers skip is quality of service, or QoS. QoS is a very important concept. In a nutshell, QoS means you’re guaranteed something. For example, if cloud provider X says you’ll get 10,000 IOPS with said storage, you’ll get 10,000 IOPS. On premises virtualization has the same concepts, and if you’re seeing spike-y performance with your VMs, it’s definitely one place to look.
If you’re using Amazon’s RDS or Azure SQL Database, that’s not IaaS; some may call it software as a service (SaaS), but more accurately, it’s database as a service (DBaaS). Amazon and Microsoft are giving you a database that is based in the cloud. You do not manage the instance, nor do you worry about things like performance. Those immortal words “it just works” apply here. Microsoft will soon offer managed instances of SQL Server in Azure so you can have a whole instance that is yours, but without any of the things that come along with IaaS.
For all of these, you still need to measure performance, and if you’re just starting on your journey to the public cloud, you really need to know your numbers prior to making the leap, or you might wind up like Icarus, flying too high only to come crashing down. Don’t be that person. One of the things we do for our customers is to help them transition to their next generation platforms and architectures, be it new versions of SQL Server or Windows, Linux, on premises (physical or virtual), hybrid solutions of on premises and the cloud, or going whole hog up into Azure, AWS, or GCP. If you want some help figuring all of this out, from things like baselining and benchmarking to designing the whole thing or anything in between, contact us today and we will ensure your transition to the future keeps you soaring high, not falling to the ground.