It was the fall of 1991. I applied for and landed a quality assurance internship at SQL Solutions in Burlington, MA, located at 8 New England Executive Park right next to the Burlington Mall. SQL Solutions was purchased by Sybase not long after. I kept that internship through graduation in 1994, and there's no question it set up the rest of my professional career.
I always find it interesting how one thing, one event, one interaction can sometimes change the course of your life – good or bad. I’ve been on both sides of that.
I always went home in the summertime, but when back in MA for college, I was at Sybase twice a week (Tuesday and Thursday if memory serves me correctly). Two of the people I worked closely with, who were also my direct supervisors, were women I still have the utmost respect for today. I've never seen gender, color of someone's skin, religion, etc., as a problem then or now. No one should. I learned a lot in those two years, and I am forever thankful to everyone I worked with at Sybase. In fact, one of the people I met I wound up playing with years later in various jazz ensembles (he's a guitar player). Small world.
Part of me thought I’d land a permanent job at Sybase, but that wasn’t meant to be, and I believe things worked out how they should have. The skills I acquired in those two years gave me the foundation to land my first job post-college doing QA using FoxPro (then Visual FoxPro) for a company that did medical software. In reality, it set me up for life since I’m still tinkering with databases today.
In a harbinger of things to come, I once got in trouble at that job because I automated part of what I was supposed to do using Visual Basic and Excel macros. The results were predictable, so why would I eyeball that? Let a process tell me if it was good or bad. Was I DevOps before DevOps was cool? (Kidding here, folks.) That little "stunt" landed me on a PIP, which I survived, but it taught me a valuable lesson.
I can see why automating rubbed some the wrong way – here's this young upstart at the tender age of maybe 23 trying to tell us how to do our job. I was just trying to free up my time to be more efficient and save it for the stuff that needed my attention. It's funny how automation and other things are commonplace now but were not then.
The events of the past year, when we've all been stuck at home, have given us time to reflect and see how we want or need to move forward and improve. I have always looked forward, but that doesn't mean you can't look to the past for inspiration. I look fondly on the early years of my career. In some ways I've come so far, but in others, I haven't gone very far at all. I think I found my calling early if I wasn't going to be a full-time musician, but I marched to the beat of my own drummer and I'd like to think I still do to a degree. I'm certainly older. Wiser and smarter? I hope so. Better filter? Those who know me will have varying opinions here.
I’d like to think my success to date has outweighed my failures and shortcomings. I’m a work in progress. We all are. This is why even 30 years in, I try my best to learn and do new things knowing in some cases it will be difficult or I may be ahead of the curve.
A recent interview with Tomo Fujita by Mary Spender struck a huge chord with me. Paraphrasing what Tomo said at one point: teachers are not better than students and can learn from them even while still helping the student. This one quote stuck with me: "(A) teacher is just a great student." Fast forward to about 8:37 if you want to see that interaction. It's such a good interview that even if you are not a musician, you can apply a lot of what Tomo says to your life beyond that little bit.
I'm no longer that early 20-something, but I learn something from all of my customers and students (including anyone who attends a webinar, etc.). I may not be the smartest person in the room and I'm totally fine with that, but I'm at the proverbial table for a reason and it's not just my boyish good looks and charm (laugh – it's a joke). I'll never try to be all things to all people; it's impossible. I know my limitations. I can only attempt to be the best me I can be at any given moment.
That 20-something also couldn't have imagined one internship would one day allow him to travel the world and speak to, work with, and teach thousands of people. There are few things as humbling as people telling you how you've inspired or helped them and that it made an impact.
Learn from your mistakes. Learn from every experience – good or bad. Learn from those around you. You never know what may shape and spark things to come … and remember to give back.
Hello everyone. Dave Welch from House of Brick (Blog | Twitter) and I will be co-presenting session HCP1634 “Licensing Oracle and SQL Server on vSphere” at VMworld 2020 this September. I will be focusing on SQL Server while Dave will be covering Oracle. Many of you may only use one of the database engines, while a fair number of you may have both in your data centers (or cloud providers). We know it’s going to be a great session but we want your input to make it even better.
Because VMworld 2020 is going to be virtual, some sessions (including ours) will be pre-recorded. That puts a damper on interaction. We'd like to change that as much as possible and make it feel as if you were there in the proverbial room with us. Specifically, we would like to know some of your pain points around licensing SQL Server and/or Oracle in virtualized environments (on premises and/or cloud) as well as glean information about what you deploy so our session can reflect the audience. Normally we would poll the room at various points during the session, but that won't be possible. We're also taking questions that we'll incorporate into the session. We cannot promise we will get to every single one, but historically, the licensing session has left quite a bit of time for Q&A, and we want to be able to still do that. The only way to make that happen is to gather them ahead of time.
How do you participate? It's easy – take our survey and fill out the relevant bits that apply to you. Any name, e-mail address, organization name, and role information you provide is completely optional. We will not use it for promotional purposes nor store it permanently. If we use your question, the name will allow us to possibly follow up should we need to do so.
The survey is live now and will close at midnight on August 11th. What are you waiting for? Go fill it out!
We thank you in advance and we will “see” you virtually at VMworld 2020.
Over the past month or two, I’ve seen a lot of people run into a problem with Azure-based IaaS configurations that isn’t well documented and want to address it in this blog post. For those creating Always On Availability Groups (AGs) or Always On Failover Cluster Instances (FCIs) with a Windows Server Failover Cluster (WSFC) using Windows Server 2019 as the underlying OS, things have changed. There are other “problems”, but I’m specifically going to address one pain point: the distributed network name (DNN).
Introduced in Windows Server 2016, a DNN is another kind of name resource in a WSFC. It differs from a standard network name resource because it does not require an IP address. In Windows Server 2019, the name of the WSFC, which is also the Cluster Name Object (CNO) in Active Directory Domain Services (AD DS), can be created with a DNN. A DNN cannot be used to create the name resource for an FCI or an AG’s listener.
Figure 1 shows what a DNN looks like in Failover Cluster Manager (FCM).
The new Windows Server 2019 DNN functionality has a side effect that affects Azure-based configurations. When creating a WSFC, Windows Server 2019 detects that the VM is running in Azure and will use a DNN for the WSFC name. This is the default behavior.
During Setup, you'll get to the step shown in Figure 3 where it's configuring the network name resource.
Unfortunately, you will also get the error message shown in Figure 4. This error is fatal – you can't create the FCI.
The fix? Recreate the WSFC without a DNN. Before you do that, make sure you uninstall SQL Server cleanly; otherwise, things could get unpleasant.
With both SQL Server 2017 and 2019, creating a listener when the WSFC uses a DNN seems to work. I have no reason to believe it will not work with SQL Server 2016. However, there’s no official support statement from Microsoft on using the DNN-based WSFC with AGs, nor do I believe it was tested with them. Just because I have not encountered issues in my limited testing does not mean it is good or bad – I’m just relating my experience.
The Core of the Problem with Windows Server 2019
If you require a static IP address for a WSFC in Azure, the only way to create it is with PowerShell. This is the typical syntax for an AG. For an FCI, you would want storage, so leave out the -NoStorage parameter.
[code]New-Cluster -Name WSFCName -Node nodelist -StaticAddress IPAddress -NoStorage -AdministrativeAccessPoint DNS[/code]
Figure 5 shows the output of the command:
[code]Get-ClusterGroup "Cluster Group" | Get-ClusterResource[/code]
The WSFC creation process creates a WSFC with a DNN despite specifying a name and IP address. The value for the resource type is Distributed Network Name, not Distributed Server Name as shown back in Figure 1. Once again, as with group/role, things are different in the UI versus PowerShell. Get your act together, Windows!
How Do You Create the WSFC without a DNN?
The answer is in this obscure blog post from Microsoft. That blog post links to a bunch of YouTube videos created by John Marlin, who is on the HAS team of Windows Server. What you need is buried in Part 6. It's a new switch/parameter for the New-Cluster PowerShell cmdlet called ManagementPointNetworkType that is not even documented there. I created a pull request to update the documentation yesterday.
ManagementPointNetworkType can have one of three values: Automatic (the default, which detects whether you are on premises or with another cloud provider, or doing this in Azure, and chooses accordingly), Singleton (use a traditional WSFC name and IP address), and Distributed (use a distributed network name for the WSFC name).
To create a WSFC using a static IP address and a traditional WSFC name and IP address with no shared storage, here’s the syntax:
[code]New-Cluster -Name WSFCName -Node nodelist -StaticAddress IPAddress -NoStorage -AdministrativeAccessPoint DNS -ManagementPointNetworkType Singleton[/code]
Figure 6 shows the output, which is what we want to see – a traditional WSFC with a name and an IP address.
You also get a much more familiar display in FCM. I really wish Microsoft would not use Server Name (or Distributed Server Name) under Core Cluster Resources since you don't really access the WSFC directly. I get why it needs a name, but wouldn't WSFC Name or Distributed WSFC Name be better here?
Before anyone sends nastygrams or leaves comments: I was just showing the WSFC creation. There are more steps. You still need to add a witness resource, which up in Azure should be a cloud witness. You may also need to create an ILB (or use an existing one) depending on your configuration. Both of those are outside the scope of what I'm covering in this post.
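That said, the cloud witness piece is a one-liner with the Set-ClusterQuorum cmdlet. Here's a sketch; the storage account name and key are hypothetical placeholders, and you'd run this from one of the nodes:

[code]# Configure a cloud witness for the WSFC using an Azure storage account.
# "witnessstorageacct" and the access key are placeholders, not real values.
Set-ClusterQuorum -CloudWitness -AccountName "witnessstorageacct" -AccessKey "storage_account_access_key"[/code]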
It's my hope that the SQL Server dev team officially tests and certifies DNNs for use with AGs and FCIs. FCIs appear to require a fix to Setup. That means FCIs may not work until a CU or possibly vNext (if at all).
Update February 2021
Check out my blog post WARNING: Distributed Network Names and Distributed Availability Groups for some new and relevant information.
Using a DNN right now may work for you depending on what you are deploying (AG or FCI). No matter what path you take, at least you now know how to deploy things. Happy clustering!
Happy Monday, everyone.
I had to do something this weekend that involved the SQL Server feature backup to URL, which allows you to back up databases in a SQL Server instance directly to Azure blob storage. Doing the Azure side with PowerShell is quite the adventure … and cumbersome. There's a much easier way to get this done using Cloud Shell up in Azure. I used Bash, but I'm sure there's an equivalent Azure PowerShell way as well.
Backup to URL/Restore from URL works from Transact-SQL and the SQL Server PowerShell cmdlets. There is no UI option in SSMS. Figure 1 shows what happens when I tried to restore a valid backup stored in a URL in SSMS 18.4. I hope this eventually gets fixed. For all graphics in this post, click on them to enlarge.
Storage Accounts and Access Keys
This post assumes you already have a storage account in Azure. To do nearly anything, you need one. You can create a special one just for backups if desired, but it’s not necessary. Work with your IT admins to see what the general direction is here if you already have a presence in Azure.
Whatever storage account you use, you'll need three things: the name of the resource group that contains the storage account if using Cloud Shell, the storage account name, and the access key for the storage account. You can get the storage account name and the key from the Azure Portal. Both can be found on the Access keys blade for the storage account, as shown in Figure 2.
As an aside, the fact that all of the menu items are not capitalized for the first letter completely drives me NUTS. Yes, I get that's how things are today, so get off my lawn. But if people stuck to it or used it consistently, I'd be fine – annoyed, but none the worse for wear. Look at the screen shot below. Storage Explorer (which is a specific service in Azure) versus everything else (i.e., Access keys). Boy, that shift key is really, really hard for those extra words, no? Anyway …
If you don't want to use Bash to put the storage account key into a variable, you can just cut and paste from the Azure Portal. Stick it in a Notepad document (or equivalent on your OS) that you don't save; it's just a file to keep open so you don't have to keep navigating back to that blade.
To get the access key in Bash and set it to a variable, execute the following command:
[code]storageaccountkey=$(az storage account keys list -n "storage_account_name" -g "resource_group_name" --output tsv --query "[?keyName == 'key1'].value")[/code]
Tip: you could also put the storage account name into a variable since you’re going to use it a few times, but if you have it handy, it’s just as easy to cut and paste. Don’t overthink things.
Containers and the Shared Access Signature
The backup file you generate in SQL Server or restore from is stored in a container as a blob.
If there is already a container you will use, you can skip this step. If a container is not created already or you want to use a different one, create a container in the storage account. Below is the syntax for Bash:
[code]az storage container create --name "container_name" --account-name "storage_account_name" --account-key $storageaccountkey --fail-on-exist[/code]
Now you have to generate the shared access signature (SAS). This is how you will set up secure access to the blob up in Azure. -o tsv is the same as --output tsv, which takes care of the double quotes at the beginning and end of the output. Remember to set the value for the expiration date (--expiry) well into the future. The date is in UTC format.
[code]az storage container generate-sas -n "container_name" --account-name "storage_account_name" --account-key $storageaccountkey --permissions "rwdl" --expiry "2020-05-31T00:00Z" -o tsv[/code]
A big long string of text like the one below is generated. After that happens, you use it in the Transact-SQL used to create the credential in SQL Server.
Create the SQL Server Credential
Creating the credential in SQL Server is a simple Transact-SQL statement.
[code]IF NOT EXISTS
(SELECT * FROM sys.credentials
 WHERE name = 'https://storage_account_name.blob.core.windows.net/container_name')
CREATE CREDENTIAL [https://storage_account_name.blob.core.windows.net/container_name]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'se=2020-05-31T00%3A00Z&sp=rwdl&sv=2018-11-09&sr=c&sig=%2BRUJldVzQHFdh7lZAeynfRuZr5dUeUohfVwKHZZX3jE%3D'[/code]
That’s it. You should be able to back up to and restore from a blob in Azure. Figure 3 shows a successful database backup of WideWorldImporters to Azure using Transact-SQL.
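For reference, a backup statement like the one behind Figure 3 would look something like this sketch, using the same placeholder storage account and container names as the credential above:

[code]-- SQL Server matches the credential to the URL, so no WITH CREDENTIAL
-- clause is used with a SAS-based credential.
BACKUP DATABASE WideWorldImporters
TO URL = 'https://storage_account_name.blob.core.windows.net/container_name/WideWorldImporters.bak'
WITH COMPRESSION, STATS = 10;[/code]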
The backup file WideWorldImporters.bak can be seen in Figure 4.
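Restoring is the mirror image – again Transact-SQL only, since there is no UI option. A sketch with the same placeholder names:

[code]RESTORE DATABASE WideWorldImporters
FROM URL = 'https://storage_account_name.blob.core.windows.net/container_name/WideWorldImporters.bak'
WITH REPLACE, STATS = 10; -- REPLACE only if overwriting an existing database[/code]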
The error message below appears if something is configured wrong.
[code]Msg 3201, Level 16, State 1, Line 33
Cannot open backup device 'https://storage_account_name.blob.core.windows.net/container_name/WideWorldImporters.bak'. Operating system error 50(The request is not supported.).
Msg 3013, Level 16, State 1, Line 33
BACKUP DATABASE is terminating abnormally.[/code]
Chances are the problem was between the brain and the keyboard, such as fat fingering the name of the container or storage account. Some examples include extra characters at the beginning of the SAS such as a question mark (?) or having double quotes before and after the SAS.
Why Use Cloud Shell and Bash?
In my experience, it takes far fewer lines of code. If you do not have to create the container and want to just cut and paste the storage account name and access key, it is literally one line of code to generate the SAS required for the credential in SQL Server.
Hope this helps some of you out there.