
Making Lemonade Out of Lemons

February 1, 2021 in Advice

The last time I did any appreciable travel pre-COVID-19 was late January 2020, when I went down to NYC for a customer meeting. My next trip was supposed to be SQLBits in the UK in March, which never happened for obvious reasons. For the better part of the past 20 years, I’ve flown 50,000 – 100,000 miles per year. Some people I know fly more than that. There are two ways I could have handled the sudden and enforced change in my life: look at it as a problem and be miserable, or lean into it and make something of it.

I must be honest and say that at first I was bummed and missed being out there. What I came to realize is that this past year has been a blessing in disguise. Let me explain.

Nearly 20 years on the road does things to your life – both good and bad. There were stretches I was home and sometimes travel came in spurts, but I didn’t earn lifetime status with American Airlines and Marriott because I am a homebody.

What I realized quickly in my forced “staycation” is that I was weary. All those years on the road coming to a sudden halt was like getting off a treadmill that is still running at full speed. I needed a reset both mentally and physically. Just as COVID was kicking in, I was dealing with a hand issue that was not only hampering my ability to type (affecting work) but also made it hard to play bass. I also realized I had a backlog of personal and professional projects that had long been neglected for one reason or another.

For example, I’d never had a proper office at home. I not only set one up, but I figured out a lot of stuff that will influence some exciting things I will announce in the upcoming months. I just never had dedicated time to deal with setting one up when I was home a few days here, maybe a week or two there. Even when I was home, I was working all the time or trying to squeeze in one more rehearsal because I might not be able to play for weeks on end. I didn’t allow myself to slow down. I had to cram something into every moment. That’s normal until it isn’t.

A big silver lining is that 2020 was a busy year for SQLHA. I basically didn’t have downtime all year, and this past week was the first one where I took a breath. I am VERY thankful for the work, as I know not everyone was as fortunate. We have great customers (why aren’t you one? contact us today ;)). I never take it for granted.

Even with being busy, I made myself a priority. I not only feel the best physically I have in years but also rejuvenated. Part of that rejuvenation is being inspired. For example, before these two recent blog posts, I realized I hadn’t written one in nearly six months. I just wasn’t feeling it.

Did 2020 suck in many ways? You bet. It was not all sunshine and roses. I’ll just say this: I certainly have more salt than pepper in my hair after 2020.

Realistically I probably won’t be hitting the road in any meaningful way until later in 2021 or early 2022 since vaccines are just rolling out. Most countries are closed for visitors. Even travel here in the US is iffy to me at the moment. I’m going to continue making lemonade out of lemons by checking items off the aforementioned list of backlogged projects. I know I can’t tackle them all at once, but it feels good to be making forward progress. I am also making sure I give myself some time, too. Self-care matters. You’re no good to anyone if you’re exhausted.

When the time comes, I’ll be grateful to be out and about again and am looking forward to seeing all of you outside of a screen. I know one thing for sure: I will appreciate it that much more. Until then, stay safe and healthy.

Closing in on 30 Years in the Database World

January 28, 2021 in Advice

It was the fall of 1991. I applied for and landed a quality assurance internship at SQL Solutions in Burlington, MA, which was located at 8 New England Executive Park right next to the Burlington Mall. SQL Solutions was purchased by Sybase not long after. I kept that internship through graduation in 1994, and there’s no question it set up the rest of my professional career.

I always find it interesting how one thing, one event, one interaction can sometimes change the course of your life – good or bad. I’ve been on both sides of that.

I always went home in the summertime, but when back in MA for college, I was at Sybase twice a week (Tuesday and Thursday if memory serves me correctly). Two of the people I worked closely with, both of whom were my direct supervisors, were women I still have the utmost respect for today. I’ve never seen gender, the color of someone’s skin, religion, etc., as a problem then or now. No one should. I learned a lot in those two years, and I am forever thankful to everyone I worked with at Sybase. In fact, one of the people I met there I wound up playing with years later in various jazz ensembles (he’s a guitar player). Small world.

Part of me thought I’d land a permanent job at Sybase, but that wasn’t meant to be, and I believe things worked out how they should have. The skills I acquired in those two years gave me the foundation to land my first job post-college doing QA using FoxPro (then Visual FoxPro) for a company that did medical software. In reality, it set me up for life since I’m still tinkering with databases today.

In a harbinger of things to come, I once got in trouble at that job because I automated part of what I was supposed to do using Visual Basic and Excel macros. The results were predictable, so why would I eyeball them? Let a process tell me if it was good or bad. Was I DevOps before DevOps was cool? (kidding here, folks) That little “stunt” landed me on a PIP (performance improvement plan), which I survived, but it taught me a valuable lesson.

I can see why automating rubbed some the wrong way – here’s this young upstart at the tender age of maybe 23 trying to tell us how to do our job. I was just trying to be more efficient and free up my time for the stuff that actually needed my attention. It’s funny how automation and other things are commonplace now but were not then.

The events of the past year, when we’ve all been stuck at home, have given us time to reflect and see how we want or need to move forward and improve. I have always looked forward, but that doesn’t mean you can’t look to the past for inspiration. I look fondly on the early years of my career. In some ways I’ve come so far, but in others, I haven’t gone very far at all. I think I found my calling early if I wasn’t going to be a full-time musician, but I marched to the beat of my own drummer, and I’d like to think I still do to a degree. I’m certainly older. Wiser and smarter? I hope so. Better filter? Those who know me will have varying opinions here.

I’d like to think my success to date has outweighed my failures and shortcomings. I’m a work in progress. We all are. This is why even 30 years in, I try my best to learn and do new things knowing in some cases it will be difficult or I may be ahead of the curve.

A recent interview with Tomo Fujita by Mary Spender struck a huge chord with me. To paraphrase what Tomo said at one point: teachers are not better than their students and can learn from them, even while they are helping the student. This one quote stuck with me: “(A) teacher is just a great student.” Fast forward to about 8:37 if you want to see that interaction. It’s such a good interview that even if you are not a musician, you can apply a lot of what Tomo says to your life beyond that little bit.

I’m no longer that early 20-something, but I learn something from all of my customers and students (including anyone who attends a webinar, etc.). I may not be the smartest person in the room, and I’m totally fine with that, but I’m at the proverbial table for a reason, and it’s not just my boyish good looks and charm (laugh – it’s a joke). I’ll never try to be all things to all people; it’s impossible. I know my limitations. I can only attempt to be the best me I can be at any given moment.

That 20-something also couldn’t have imagined one internship would one day allow him to travel the world and speak to, work with, and teach thousands of people. There are few things as humbling as people telling you how you’ve inspired or helped them and that it made an impact.

Learn from your mistakes. Learn from every experience – good or bad. Learn from those around you. You never know what may shape and spark things to come … and remember to give back.

 

We Want YOU to Help the RDBMS Licensing Session at VMworld 2020

August 3, 2020 in Licensing, SQL Server, VMware, VMworld

Hello everyone. Dave Welch from House of Brick (Blog | Twitter) and I will be co-presenting session HCP1634 “Licensing Oracle and SQL Server on vSphere” at VMworld 2020 this September. I will be focusing on SQL Server while Dave will be covering Oracle. Many of you may only use one of the database engines, while a fair number of you may have both in your data centers (or cloud providers). We know it’s going to be a great session but we want your input to make it even better.

Because VMworld 2020 is going to be virtual, some sessions (including ours) will be pre-recorded. That puts a damper on interaction. We’d like to change that as much as possible and make it feel as if you were in the proverbial room with us. Specifically, we would like to know some of your pain points around licensing SQL Server and/or Oracle in virtualized environments (on premises and/or cloud) as well as glean information about what you deploy so our session can reflect the audience. Normally we would poll the room at various points during the session, but that won’t be possible. We’re also taking questions that we’ll incorporate into the session. We cannot promise we will get to every single one, but historically, the licensing session has left quite a bit of time for Q&A, and we want to still be able to do that. The only way to make that happen is to gather questions ahead of time.

How do you participate? It’s easy – take our survey and fill out the relevant bits that apply to you. Any name, e-mail address, organization name, and role information you provide is completely optional. We will not use them for promotional purposes or store them permanently. If we use your question, your name allows us to follow up should we need to do so.

The survey is live now and will close at midnight on August 11th. What are you waiting for? Go fill it out!

We thank you in advance and we will “see” you virtually at VMworld 2020.

Configure a WSFC in Azure with Windows Server 2019 for AGs and FCIs

July 1, 2020 in Always On, AlwaysOn, Availability Groups, Failover Cluster Manager, FCI, Windows Server 2019, Windows Server Failover Cluster

Over the past month or two, I’ve seen a lot of people run into a problem with Azure-based IaaS configurations that isn’t well documented, and I want to address it in this blog post. For those creating Always On Availability Groups (AGs) or Always On Failover Cluster Instances (FCIs) with a Windows Server Failover Cluster (WSFC) using Windows Server 2019 as the underlying OS, things have changed. There are other “problems”, but I’m specifically going to address one pain point: the distributed network name (DNN).

Introduced in Windows Server 2016, a DNN is another kind of name resource in a WSFC. It differs from a standard network name resource because it does not require an IP address. In Windows Server 2019, the name of the WSFC, which is also the Cluster Name Object (CNO) in Active Directory Domain Services (AD DS), can be created with a DNN. A DNN cannot be used to create the name resource for an FCI or an AG’s listener.

Figure 1 shows what a DNN looks like in Failover Cluster Manager (FCM).

Figure 1. Distributed network name in FCM

The new Windows Server 2019 DNN functionality has a side effect that affects Azure-based configurations. When creating a WSFC, Windows Server 2019 detects that the VM is running in Azure and will use a DNN for the WSFC name. This is the default behavior.

FCI Behavior

During Setup, you’ll get to the step shown in Figure 3 where the network name resource is being configured.

Figure 3. Making progress, right?

Unfortunately, the error message shown in Figure 4 will also pop up. This error is fatal – you can’t create the FCI.

Figure 4. Not so fast …

The fix? Recreate the WSFC without a DNN. Before you do that, make sure you uninstall SQL Server cleanly; otherwise things could get unpleasant.

Listener Behavior

With both SQL Server 2017 and 2019, creating a listener when the WSFC uses a DNN seems to work. I have no reason to believe it will not work with SQL Server 2016. However, there’s no official support statement from Microsoft on using the DNN-based WSFC with AGs, nor do I believe it was tested with them. Just because I have not encountered issues in my limited testing does not mean it is good or bad – I’m just relating my experience.

The Core of the Problem with Windows Server 2019

If you require a static IP address for a WSFC in Azure, the only way to create it is with PowerShell. This is the typical syntax for an AG. For an FCI, you would want storage, so leave out the -NoStorage parameter.
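Something along these lines is the shape of that command (a minimal sketch; the cluster name, node names, and IP address are placeholders):

```powershell
# Sketch only: create a two-node WSFC with a static IP address and no shared
# storage. -NoStorage is the parameter you would leave out for an FCI.
New-Cluster -Name "AGCLUSTER" `
    -Node "NODE1", "NODE2" `
    -StaticAddress "10.0.0.100" `
    -NoStorage
```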

Figure 5 shows the output of the command.

The WSFC creation process creates a WSFC with a DNN despite specifying a name and IP address. The value for the resource type is Distributed Network Name, not Distributed Server Name as shown back in Figure 1. Once again, as with group/role, things are different in the UI vs. PowerShell. Get your act together, Windows!

Figure 5. It didn’t create what you thought it would
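If you’d rather check from PowerShell instead of FCM, a quick look at the cluster resources shows the resource type. A minimal sketch:

```powershell
# Sketch only: list the cluster resources and their types. With a DNN-based
# WSFC, the name resource reports a ResourceType of Distributed Network Name.
Get-ClusterResource | Format-Table Name, ResourceType, State
```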

How Do You Create the WSFC without a DNN?

The answer is in this obscure blog post from Microsoft. That blog post links to a bunch of YouTube videos created by John Marlin, who is on the HAS team of Windows Server. What you need is buried in Part 6. It’s a new switch/parameter for the New-Cluster PowerShell cmdlet called ManagementPointNetworkType that is not even documented there. I created a pull request to update the documentation yesterday.

ManagementPointNetworkType can have one of three values: Automatic (the default; it detects whether you are on premises, using another cloud provider, or doing this in Azure, and behaves accordingly), Singleton (use a traditional WSFC name and IP address), and Distributed (use a distributed network name for the WSFC name).

To create a WSFC with a traditional WSFC name, a static IP address, and no shared storage, here’s the syntax:
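A minimal sketch, using the same placeholder names as above:

```powershell
# Sketch only: Singleton forces a traditional WSFC name and IP address
# instead of a distributed network name.
New-Cluster -Name "AGCLUSTER" `
    -Node "NODE1", "NODE2" `
    -StaticAddress "10.0.0.100" `
    -NoStorage `
    -ManagementPointNetworkType Singleton
```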

Figure 6 shows the output, which is what we want to see – a traditional WSFC with a name and an IP address.

Figure 6. Much better

You also get a much more familiar display in FCM. I really wish Microsoft would not use Server Name (or Distributed Server Name) under Core Cluster Resources since you don’t really access the WSFC directly. I get why it needs a name, but wouldn’t WSFC Name or Distributed WSFC Name be better here?

Figure 7. Traditional WSFC with a name and IP address

Postscript

Before anyone sends nastygrams or leaves comments: I was just showing the WSFC creation. There are more steps. You still need to add a witness resource, which up in Azure should be a cloud witness. You may also need to create an ILB (or use an existing one) depending on your configuration. Both of those are outside the scope of what I’m covering in this post.

It’s my hope that the SQL Server dev team officially tests and certifies DNNs for use with AGs and FCIs. FCIs appear to require a fix to Setup. That means FCIs may not work until a CU or possibly vNext (if at all).

Update February 2021

Check out my blog post WARNING: Distributed Network Names and Distributed Availability Groups for some new and relevant information.

The TL;DR

Using a DNN right now may work for you depending on what you are deploying (AG or FCI). No matter what path you take, at least you now know how to deploy things. Happy clustering!

Configure SQL Server Backup to URL Using Azure Cloud Shell

May 18, 2020 in Azure, Backups, SQL Server

Happy Monday, everyone.

I had to do something this weekend that involved the SQL Server backup to URL feature, which allows you to back up databases from a SQL Server instance directly to Azure blob storage. Setting that up with PowerShell is quite the adventure … and cumbersome. There’s a much easier way to get this done using Cloud Shell up in Azure. I used Bash, but I’m sure there’s an equivalent Azure PowerShell way as well.

Backup to URL/Restore from URL works from Transact-SQL and the SQL Server PowerShell cmdlets. There is no UI option in SSMS. Figure 1 shows what happens when I tried to restore a valid backup stored in a URL using SSMS 18.4. I hope this eventually gets fixed. For all graphics in this post, click on them to enlarge.


Figure 1 – Thanks for playing … don’t try again.

Storage Accounts and Access Keys

This post assumes you already have a storage account in Azure. To do nearly anything, you need one. You can create a special one just for backups if desired, but it’s not necessary. Work with your IT admins to see what the general direction is here if you already have a presence in Azure.

Whatever storage account you use, you’ll need three things: the name of the resource group that contains the storage account (if using Cloud Shell), the storage account name, and the access key for the storage account. You can get the storage account name and the key from the Azure Portal; both can be found on the Access keys blade for the storage account, as shown in Figure 2.

As an aside, the fact that the menu items do not capitalize the first letter of every word completely drives me NUTS. Yes, I get that’s how things are today, so get off my lawn. But if people stuck to it or used it consistently, I’d be fine – annoyed, but none the worse for wear. Look at the screenshot below: Storage Explorer (which is a specific service in Azure) versus everything else (i.e., Access keys). Boy, that shift key is really, really hard for those extra words, no? Anyway …


Figure 2 – Access Keys in Azure.

If you don’t want to use Bash to put the storage account key into a variable, you can just cut and paste it from the Azure Portal. Stick it in a Notepad document (or the equivalent on your OS) that you don’t save; it’s just a file to keep open so you don’t have to keep navigating back to that blade.

To get the access key using Bash and set it to a variable, execute the following command:
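A minimal sketch; the resource group and storage account names are placeholders:

```bash
# Sketch only: grab the first access key for the storage account and put it
# in a variable for later use.
ACCESS_KEY=$(az storage account keys list \
  --resource-group MyResourceGroup \
  --account-name mystorageacct \
  --query "[0].value" \
  --output tsv)
```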

Tip: you could also put the storage account name into a variable since you’re going to use it a few times, but if you have it handy, it’s just as easy to cut and paste. Don’t overthink things.

Containers and the Shared Access Signature

The backup file you generate in SQL Server or restore from is stored in a container as a blob.

If there is already a container you will use, you can skip this step. If a container does not exist yet or you want to use a different one, create a container in the storage account. Below is the syntax for Bash:
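A minimal sketch; the container name is a placeholder:

```bash
# Sketch only: create a container in the storage account, authenticating with
# the access key captured above.
az storage container create \
  --name sqlbackups \
  --account-name mystorageacct \
  --account-key "$ACCESS_KEY"
```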

Now you have to generate the shared access signature (SAS). This is how you set up secure access to the blob up in Azure. -o tsv is the same as --output tsv, which takes care of the double quotes at the beginning and end of the output. Remember to set the value for the expiration date (--expiry) well into the future. The date is in UTC format.
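A minimal sketch; the permissions shown (read, write, delete, list) are an assumption of what backup to URL generally needs, and the expiry date is a placeholder:

```bash
# Sketch only: generate a SAS scoped to the container. The expiry date is in
# UTC; set it well into the future.
SAS=$(az storage container generate-sas \
  --name sqlbackups \
  --account-name mystorageacct \
  --account-key "$ACCESS_KEY" \
  --permissions rwdl \
  --expiry 2030-01-01 \
  -o tsv)
```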

A big long string of text like the one below is generated. After that happens, you use it in the Transact-SQL used to create the credential in SQL Server.

Create the SQL Server Credential

Creating the credential in SQL Server is a simple Transact-SQL statement.
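A minimal sketch; the storage account and container names are placeholders. The credential name must be the full URL of the container, and the secret is the SAS with no leading question mark:

```sql
-- Sketch only: create the credential SQL Server uses to reach the container.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<paste the generated SAS here>';
```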

That’s it. You should be able to back up to and restore from a blob in Azure. Figure 3 shows a successful database backup of WideWorldImporters to Azure using Transact-SQL.
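The backup statement itself looks something like this (a sketch; the URL is a placeholder pointing at the container from earlier):

```sql
-- Sketch only: back up directly to Azure blob storage via the credential
-- created above.
BACKUP DATABASE WideWorldImporters
TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/WideWorldImporters.bak'
WITH COMPRESSION, STATS = 10;
```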


Figure 3 – It really works!

The backup file WideWorldImporters.bak can be seen in Figure 4.


Figure 4 – WideWorldImporters in Azure blob storage

The error message below appears if something is configured wrong.

Chances are the problem was between the brain and the keyboard, such as fat-fingering the name of the container or storage account. Other examples include extra characters at the beginning of the SAS, such as a question mark (?), or double quotes before and after the SAS.

Why Use Cloud Shell and Bash?

In my experience, it is far fewer lines of code. If you do not have to create the container and just cut and paste the storage account name and access key, it is literally one line of code to generate the SAS required for the credential in SQL Server.

Hope this helps some of you out there.