By: Allan Hirt on January 27, 2020 in Data
Today is the 75th anniversary of the liberation of Auschwitz. Auschwitz had more than one camp, and this post is not about those details, the camp system itself, or things like that. It is also not a commentary on politics, corporations, religion, war, or any topic orbiting them. I’ve visited both Auschwitz and Dachau and blogged about my visit to Dachau in 2012. Auschwitz affected me differently and was its own distinct, sad experience for other reasons. It is profound and life-changing.
Data drives the modern world – now more than ever. This is true in our personal lives as well as our professional and business lives. Examples include how you invest money or how a company makes decisions. A lot of choices are based on information, which is just another word for data. Some degree of luck and intuition can help, too.
Much of today’s information sits in databases (and not just in SQL Server-based ones). Queries and tools extract and present said data. It can be analyzed, filtered, and otherwise processed in numerous ways. Any data professional who has written queries knows that changing, say, a WHERE clause can make a huge difference in the result.
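As a trivial sketch of that point (the table and column names here are hypothetical), two queries that differ only in their WHERE clauses can tell very different stories about the same data:

```sql
-- Hypothetical Sales.Orders table; the queries differ only in filtering.
-- Query 1: every order for one customer.
SELECT OrderID, OrderDate, TotalDue
FROM Sales.Orders
WHERE CustomerID = 42;

-- Query 2: the same customer, but only large, recent orders --
-- a much smaller result set that could support a very different conclusion.
SELECT OrderID, OrderDate, TotalDue
FROM Sales.Orders
WHERE CustomerID = 42
  AND TotalDue > 10000
  AND OrderDate >= '2019-01-01';
```

Same table, same columns; the filter alone decides what story the data appears to tell.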
We tend to think of computers and data mining as recent innovations, but they existed in more “primitive” forms before what we think of as modern computing. One early tool was the Hollerith Electric Tabulating System, first used for the 1890 US Census. It was later extended with features like automatic card feeding and incorporated the first keypunch. The company that made those machines was combined with others in 1911 and renamed International Business Machines Corporation (IBM) in 1924. You may have heard of them.
Databases, relational or not, did not exist then in the way we think of them today. Those punch cards effectively made up something similar to a modern database. The reason I am specifically referencing the Hollerith is that those machines were used as part of the data-driven mechanism behind the tragedy in Europe, which I first learned about in Edwin Black’s excellent book IBM and the Holocaust. I highly recommend reading it.
I recently learned that the US, via the War Relocation Authority (WRA), also employed Hollerith machines during World War II. The WRA was responsible for the relocation and detainment of Japanese Americans. There’s no nice way to say that. IBM was subcontracted for that work. Accounting Information published an article about all of this in Volume 33 Number 1. The section “Big Brother Is Watching” discusses how the data was used, analyzed, and ultimately, reported. Here is a snippet:
His grand design for 1943 was a “locator file” in which would appear a Hollerith alphabetic punch card for each evacuee. These cards were to include standard demographic information about age, gender, education, occupation, family size, medical history, criminal record, and RC location. However, additional data categories about links to Japan were also maintained, such as years of residence in Japan and the extent of education received there.
Here is what a Hollerith card looked like, courtesy of Wikipedia.
Hollerith punch card
Today many of us are annoyed by targeted ads, most of which stem from data mined about where you were looking online or from data companies already have about you. American Express was recently able to identify fraudulent charges on my account using the data they have from my history with them. Mining data is not inherently bad. How that data is used is where responsibility comes into play. There is a reason laws like HIPAA exist in the US.
As we commemorate this day, which should be both celebrated for its aspect of freedom and respected as the remembrance of a terrible chapter in human history, I ask you to reflect on the use of data, your responsibility for it, and your relationship to it. How do you deal with it in your job? In your personal life? Those are heady questions for a Monday, but it is an appropriate day to contemplate such weighty topics.
By: Allan Hirt on January 17, 2020 in Security, SQL Server
I’ve been hearing more and more from people I know as well as news stories that security is on people’s minds in one way or another. It’s a fundamental IT tenet that security should be a priority. With the rise of ransomware as well as the number of hacks and data leaks going up, things have reached a fever pitch – and I do not think we have hit a peak yet.
The reason I’m writing this post is that every data breach and ransomware incident becomes an availability problem in one way or another. Mission critical encompasses many things – including both security and availability. Until recently, not many saw the two as being in the same boat. Welcome to my world. Come in, have a dip in the pool – the water’s fine!
Let’s take the example of one of the most recent victims of ransomware, Travelex, one of the world’s largest travel-related companies. Travelex was hit with ransomware on December 31, 2019. Happy New Year, right? It hasn’t been for Travelex. They do business with lots of people including direct consumers. One of their businesses is money exchange at airports – I’ve seen their kiosks and storefronts at numerous ones including Heathrow.
They had been silent until today. On their UK customer information page about the incident, a video was posted from Tony D’Souza, the CEO of Travelex. First, taking nearly three weeks to respond to a very public incident is not a good thing in my estimation. I get the need to deal with things, and I’m not perfect in all my communications, but I’m also not Travelex. A big part of these incidents is incident management – including the public side of things. The customer page is one form of outreach, but I’m talking about the overall PR effort.
As of today, January 17, 2020, the screen grab below is their website’s front page in the USA (click to make larger).
The Travelex home page as of 1/17/20
They are partially back up and working. According to a BBC report,
However, while he said the system used by staff is now working, there was no word on when the firm’s main UK website would be returned to service.
That means customers are still unable to order currency online, either from Travelex itself or through the network of banks that use its services, including Barclays, Lloyds, RBS, and the finance websites of Sainsbury’s and Tesco.
Ouch. That is a lot of lost business for their partners AND Travelex themselves with seemingly no end in sight. You get the idea.
Security starts well before someone accesses a database or a system. FedEx had an incident a few years ago that cost them hundreds of millions of dollars because malware got in through a subsidiary in Europe. IT needs to be able to cope with modern threats. Most do a great job, but sadly, there will always be one attack vector you may not have thought of. Covering as many as you can is important. Doing nothing, or saying “No one will care,” should have people polishing their resumes. Just look at the City of Baltimore, which got hit with ransomware; it cost them at least $18.2 million.
Security is about more than PII, credit card numbers, and HIPAA, but as data professionals, those are among our primary concerns. If your company worries about what port to configure for SQL Server and frets that 1433 is insecure, worry about bigger problems, like prioritizing the security of data at rest (including backups) as well as on the wire. A port scanner will find that SQL Server instance pretty quickly; obfuscation may slow a hacker down and annoy them, but it will not stop them. Put some effort into a robust backup strategy that keeps your backups safe, offsite, and tested so you know you can restore them if you get hit with something like ransomware. Even if you secure SQL Server, a ransomware incursion may still leave you needing those backups if you can’t access the systems. Everything matters here.
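As one small piece of that backup strategy, here is a minimal T-SQL sketch (the database name and paths are illustrative, not a prescription) that takes a checksummed backup and then verifies it is readable:

```sql
-- Take a full backup with page checksums validated and recorded.
-- YourDB and the file path are placeholders for your environment.
BACKUP DATABASE YourDB
TO DISK = N'D:\Backups\YourDB_Full.bak'
WITH CHECKSUM, COMPRESSION, INIT;

-- Confirm the backup media is readable and its checksums are intact.
-- This is a sanity check, not a substitute for periodic full test restores.
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\YourDB_Full.bak'
WITH CHECKSUM;
```

Copying that file offsite (and actually restoring it somewhere on a schedule) is what turns this from a command into a strategy.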
We always take security into account when we’re helping our customers design or evaluate solutions. Security is multi-layered. You do not want to let a security problem become an availability issue. Could your business survive being nearly completely down for over two weeks and counting like Travelex? I doubt it. Want to make sure you never find out the answer to that question? Contact us today.
By: Allan Hirt on January 15, 2020 in Advice, T-SQL Tuesday
Back in 2015, I wrote the blog post In the End – Life (and IT) Lessons from Rush after I had seen what wound up being their last live show at the Forum in Los Angeles. It’s still one of my more popular blog posts, and in my opinion, one of the better ones I’ve authored.
I’ve referenced Rush before in blogs (such as Fun With Naming Conventions). If you’ve ever seen me present, I always put music references (among other things) in my demos. I often have a three-node AG with node names of Geddy, Alex, and Neil. You get the picture. I was first turned on to Rush via MTV in 1981 or 82 with the videos from Moving Pictures, so they’ve been a part of my life pretty much from the moment I started playing bass. Exit Stage Left was the first Rush album I learned to play back to front. Rush in one way or another has been part of my life now for nearly 40 years. Very few bands have stuck with me like Rush has (there are a few, and most people know Styx is one; that’s a story for another time).
Rush was a phoenix. After Neil’s tragedies post-Test for Echo, there was a high probability Rush would not play again. Between losing his daughter Selena in a car accident and his wife Jackie to cancer just about a year later, would you blame Neil for walking away forever? When Vapor Trails came out, it spawned a triumphant second act for Rush that in some ways was probably more satisfying. Rush earned their retirement and then some.
On January 7, Neil Peart passed away after battling glioblastoma, a brain cancer, for about three and a half years. The news was announced last Friday the 10th, and needless to say, it hit me like a ton of bricks. The last time I was this affected was the death of my friend Mike, whom I’ve blogged about fairly extensively here (two examples: Life Is Fragile and Some Anniversaries Are Just Not Happy Days). There are two similarities between the two: their passings were unexpected and sudden, and both were WAY too young. Sure, Neil was 67, but these days, that’s really not old. It’s funny how when you are in your teens or twenties, 30- or 40-something seems ancient. When I first saw Rush in December of 1987 at the venerable Spectrum (RIP) in Philadelphia, Neil was just barely 35. 35! I’m almost 50 now. 35 is a “baby”.
After Mike’s death and now Neil’s, it really brings the point home: you have to make the most of your time on this planet. In the past few years, I’ve been doing a bit of a self-reassessment and addressing things as needed. I’ve written before about how self-care is important (one example: A Letter to Myself at 20). It shouldn’t take someone’s passing to put things in focus; sadly, it sometimes does. Most of us get caught up, and then time just flies by. When we stop to take a breath, sometimes we realize we got off track. If we screw up (and heaven knows I have), all we can do is own it and try to fix it. Getting back on course takes time. The results might even be better.
In a way, it’s crazy to feel so sad and mourn for someone you never met. Yet Neil (along with Geddy and Alex) has touched my life in so many ways for so long. I’m not the only one, given all the tributes, which have come from all walks of life. Very few knew Neil was sick; some have admitted after the fact that they did (such as his longtime drum tech Lorne Wheaton). You’ve got a good support system when something that major does not leak in three and a half years. I can’t imagine how Geddy or Alex felt every time they were asked about a possible Rush reunion knowing what they knew. Looking back at their statements – especially Geddy’s – he says it without saying it. Neil stopped publishing on his website around the time it seems he got the diagnosis. The bread crumbs were there; we just didn’t know.
Neil dealt with his cancer and his ultimate passing the way he lived his life: as an intensely private person but, at the same time, a fighter. The prognosis for glioblastoma is grim, and he outlasted the odds. Arguably, his main reason for walking away in 2015 was to be there for his daughter Olivia. He got a second chance at life with Carrie, which makes this all the more cruel. The last song on Rush’s last album Clockwork Angels, “The Garden”, always reminded me in a way of Genesis’ “Fading Lights” from We Can’t Dance – a goodbye. It’s a very poignant song, besides being one of the more beautiful ones they’ve written. You can bingoogle the full lyrics (Neil was the lyricist for Rush besides being the drummer, if you didn’t know), but this one always sticks with me:
“The measure of a life is a measure of love and respect”.
I think Neil would downplay it, but he had that love and respect in spades. Just look at the outpouring of tributes that started on Friday. Rest in peace, Professor. I hope Jackie and Selena welcomed you with open arms. My condolences to Carrie and Olivia, Geddy and Alex, and Neil’s friends and family. I hope all the tributes to him and his enduring work will somehow comfort you, and know that he will never be forgotten.
Neil’s life is a lesson which can be tied to this month’s T-SQL Tuesday topic, “Imposter Syndrome”. Neil was a lifelong student. Even by the mid-80s, many considered him to be one of the greatest rock drummers of all time. What did he do? As time went on, he took drum lessons from Freddie Gruber and Peter Erskine. Your drummer’s favorite drummer took private classes! At the same time, he was a teacher. His instructional videos are some of the best-selling ones for drums of all time. The two are not mutually exclusive. Don’t believe me? Read this page of tributes at Hudson Music, the publisher of Neil’s videos, especially from Mr. Erskine.
What I hate is when someone wants to play “stump the chump” – simply put, when someone asks you a question intentionally to show they are “smarter” than you. Look, I remember a lot of things. I even joke I’ve forgotten more about clusters than some have ever learned, but I’m not a walking encyclopedia or wiki, either. There’s a reason Books Online and documentation exist. If you got me to admit “I don’t know” – words everyone needs in their vocabulary – congrats, jerk. I’m human just like you. I put my pants on like everyone else. I remember sitting at PASS Summit a few years ago and hearing people behind me surprised that I was right in front of them. Um, what? I’m just a normal person, not someone sitting on a throne. I turned around, introduced myself, and we had a great chat. A bit like Neil, I would rather talk about pretty much anything other than SQL Server when I’m not formally having to talk about it. There’s more to life than WSFCs, Pacemaker, AGs, and FCIs.
Neil always wanted to improve. I think the same way. I never think I know all the answers, and the older I get, the less I sometimes feel I know. Believe it or not, I’m even wrong sometimes! Own it. I do. I’m constantly learning new things and am in awe of those who do other things, such as esoteric performance tuning, just as I’ve had people say they don’t know how I do what I do when it comes to HA. It’s humbling when people say you’ve influenced them or come up and tell you one thing you said made a difference. Like Neil, I try to pass on what I know, including what I’ve learned from mistakes and failure. Successful people fail all the time. Failure or not knowing something is not a sign of weakness; far from it.
This doesn’t mean successful people won’t have a bit of an ego or strong opinions. They are just open to feedback, criticism, and improvement. Sometimes you need to reinvent yourself even if no one knows it, like Neil did with his drumming in the ’90s. I’ve done it myself in my career, even if no one noticed. If I were the same person I was 20+ years ago and never absorbed feedback – good and bad – I’d be out of a job. Does that mean I’m perfect? No! To quote the title of one of Neil’s videos, I’m a “work in progress”. If that makes me an imposter, I’m proud to be a founding member of the club.
The moral of the story: be like Neil. Be humble. Be a student and learn from those around you. Ask questions. Listen more than you speak. If you’re not doing any of these things, you are the imposter.
By: Allan Hirt on December 16, 2019 in Conference, Pre-conference, SQLbits
Happy Monday, everyone. I’ve been heads down with both teaching and customer work, but wanted to pop up to make an exciting announcement: I’m honored to have been chosen once again to deliver a Training Day at SQLBits 2020 which will be in London.
This year’s Training Day, SQL Server Cloud Fundamentals, is scheduled for Tuesday, March 31. I’ll be covering not only Azure, but also Amazon AWS (EC2 and RDS), Google Cloud Platform, and hybrid scenarios – yes, even if you want to stay partially on premises, I’ve got you covered.
I’ve been in touch with the organizers, and right now all things are green for the lab I’ve planned, which is a lot of fun (no, really …). You can’t beat hands-on experience! If you’ve ever taken one of my classes or been in a previous pre-con at another conference or a Training Day, you know my labs are designed to reinforce the concepts taught that day. No prior experience with any public cloud is necessary.
So my Training Day will be good not only for those who are new to any cloud, but also for those looking to build on their existing experience to be better at what they do.
If you don’t want to miss out on actual hands-on experience with the cloud in addition to the instruction, reserve your spot early. If you sign up, you’ll get an e-mail closer to the date with instructions on what you’ll need to bring to do the lab.
I look forward to seeing you in London at SQLBits.
By: Allan Hirt on November 7, 2019 in Azure, FCI, High Availability, Licensing, Shared Disk, SQL Server, SQL Server 2019
Hello everyone. I have not had much time to blog, but there are a few things that you should be aware of and that I have an opinion about. Both were recently announced – one before Ignite and PASS Summit and the other this week somewhat quietly in a session at Ignite.
Big Deal #1 – Azure Shared Disks for FCIs (and One More Thing …)
If you have ever heard me present, you will know I am not the biggest fan of deploying Always On Failover Cluster Instances (FCIs) in non-physical environments. That includes any of the public clouds. Why? There are quite a few reasons which I will not go into here, but I will discuss the biggest one: shared storage. Configuring shared storage is awful in any of the public clouds. Sure, if you use something like VMware’s solutions in any of the public clouds you can take advantage of vSAN, but that will not work for most folks. I’ve been VERY vocal about how the Storage Spaces Direct solution is currently a fairly terrible experience, at least in Azure. SIOS DataKeeper is a good solution, but some do not want to purchase anything else; they want a workable, native solution. Sure, you can use iSCSI, but why would you?
You get the idea. It kinda sucks right now (forget the fact you probably should not be doing it to begin with). I’ve had conversations with both the SQL Server and Windows Server dev teams about this. They know my displeasure.
Maybe they finally heard me. At Ignite, during BRK3253 – Windows Server on Azure overview: Lift-and-shift migrations for enterprise workloads (man, I hate how MS does not capitalize titles …), Elden Christensen and Rob Hindman from the Windows Server dev team (both of whom I know very well) announced the private preview of Azure Shared Disks (it’s apparently being renamed, and isn’t Shared Azure Disks) and showed a non-SQL Server-based demo. If you want to skip ahead, they talk about it at 22:34. This will not change my opinion on using FCIs up in Azure, but it will sure as heck make them easier to deploy. As soon as I can test, I’ll report back. There is another session, BRK3283 Optimize price-performance and lower TCO for your workloads with the next-gen Azure Disks (not live as of this posting), which will cover more of the Azure disk stuff.
One more recent change which I will be most likely showing and talking about for the first time in a few weeks at my SQL Server LIVE! precon is the ability to use Azure Files for FCIs.
I would also recommend you have a look at the Ignite sessions from John Marlin to see and hear about anything new coming to WSFCs in Windows Server vNext LTSC. The biggest announcement I’m aware of that could affect SQL Server (and specifically FCIs) is the ability to stretch a WSFC with Storage Spaces Direct (S2D). I can’t currently watch THR2155 (network error on the Ignite site), so I haven’t been able to ascertain what else could impact SQL Server. If there’s something else, I’ll either update this post or write a new one if it’s really major.
Big Deal #2 – Changes to Licensing for Availability Scenarios
On October 30th, Microsoft let the cat out of the bag that licensing rules for the availability scenarios were changing as of November 1, 2019 if you have Software Assurance (SA). We always tell our customers that SA is often worth it. You can see the full new rules in the SQL Server 2019 licensing guide. The rules apply to all supported versions of SQL Server.
Let that sink in: supported versions of SQL Server. What does that mean? For example, if you are using SQL Server 2016 with Service Pack 1, whose support ended on July 9, 2019, you would need to go to Service Pack 2 or later. Microsoft is not going to come in and crash your party, but if they audit you or you go through a licensing true-up, it could bite you. This means you need to stay current to take advantage of these benefits.
I’m going to discuss what I feel are the biggest game changers. I knew licensing was changing as I had conversations with Microsoft around this months ago. I was not sure what the final result was going to be, but I’m fairly pleased. Is it perfect? No, but it’s much better than it was.
It’s interesting to note that for licensing purposes, Microsoft defines HA as a synchronous replica with automatic failover and D/R as an asynchronous replica with manual failover.
Change #1 – Backups and DBCC on a Secondary Replica Are Now Free
I still hate the use of the words active and passive, but that’s a whole other blog post. In talking with Microsoft, there are reasons some of these terms are used, but again, that’s irrelevant at the moment. Taking terminology out of the picture: with SA, you can now take full copy-only backups and transaction log backups, and run DBCC, on a secondary replica of an AG without incurring a license. That of course assumes no other databases are live on that instance. This is a huge change, because through SQL Server 2017, those activities were considered “use” and you had to license that replica. This is one thing that should have been there since day one, but I’m just glad they finally corrected their stance.
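For concreteness, these are the kinds of commands that can now run on such a secondary; a sketch only, with an illustrative database name and paths:

```sql
-- Run against a readable secondary replica of the AG.
-- Full backups on a secondary must be COPY_ONLY; log backups do not.
BACKUP DATABASE YourAGDatabase
TO DISK = N'D:\Backups\YourAGDatabase_Full.bak'
WITH COPY_ONLY, CHECKSUM;

BACKUP LOG YourAGDatabase
TO DISK = N'D:\Backups\YourAGDatabase_Log.trn'
WITH CHECKSUM;

-- Offload integrity checking from the primary as well.
DBCC CHECKDB (YourAGDatabase) WITH NO_INFOMSGS;
```

Offloading this work to a secondary keeps the backup and CHECKDB I/O off the primary, which was always the appeal; the licensing change just means you no longer pay for the privilege.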
Change #2 – Allowance for Disaster Recovery Testing
This one is buried in the licensing document, but very big. Here’s what it says:
Customer may also run primary and the corresponding disaster recovery replicas simultaneously for brief periods of disaster recovery testing every 90 days.
I can’t emphasize enough how huge this is. For example, there was always a question if you were using something like VMware’s Site Recovery Manager (SRM) as your primary D/R method. Now that is covered in pretty clear language. Thank you, Microsoft.
Change #3 – Disaster Recovery Is Now Covered (To a Point …)
Prior to these changes, you only got one “free” replica. If you had an AG with three replicas, two local for HA and one remote for D/R, you paid for two of those. With this licensing change, both the HA and D/R replicas would be covered if you have SA. This is huge.
Change #4 – Azure Disaster Recovery Replicas Are Covered via the Azure Hybrid Benefit
This is related to #3. I’m sure this one will irk some, and if so, please comment below. I know Microsoft is listening 🙂 If Azure is your cloud provider and you want to stick a replica for your AG up in the cloud that is just for disaster recovery, you are now covered with SA. This was not the case before. This new benefit does not apply if you are using Amazon Web Services or Google Cloud.
With this benefit, you can technically have up to three passive replicas (one on premises for HA, one on premises for D/R, and one up in Azure for D/R). Anything else would still seemingly need to be paid for. So if you have two D/R replicas in different data centers (for example, your main data center is in Boston, and you have D/R replicas in London and Chicago), you are going to pay for one of those. That is still better than paying for two.
Things That Could Use Clarification
Due to the “standardization” of terminology, FCIs, AGs, and log shipping are basically treated the same in this document. That’s confusing. Microsoft says this:
Each of these implementations uses different terminology. The examples that follow use ‘Active’ as the read-write database or instance (also referred to as Primary Replica in an Always On Availability Group) and ‘Passive’ for the write only database or instance (marked to not-read, referred to as Secondary Replica in an Always On Availability Group).
But those are not the terms we use for FCIs in technical documentation. For example, FCIs do not have “write-only instances”. You install the binaries on the other nodes of a Pacemaker cluster or WSFC, and then the FCI can fail over to them. It’s not a “live” instance. I know this is probably not changing, because documents like this go through legal reviews and have to match other similar documents and licensing terms for things like Windows, but it irks me to no end.
Also, I am wondering how all of this affects distributed availability groups and how the forwarder is considered. Is it passive or active from a licensing perspective? I’m hoping they clarify that.