
Notre Dame – Disaster Recovery in Action

April 16, 2019 in Business Continuity, Disaster Recovery, High Availability, Notre Dame

Yesterday, a fire broke out at Notre Dame in Paris, causing major damage to the well-known landmark and religious icon. While it is certainly a tragedy, it could have been much worse. Why? There was a protocol for dealing with it. Let me explain and give a little background.

In the world of IT, we talk about the concepts of high availability (HA) and disaster recovery (DR). Both may use similar techniques, but they are very different exercises. High availability is about surviving a smaller, more local failure with minimal to no downtime and impact. Disaster recovery is what you do when a catastrophic event occurs. Both of these fall under the auspices of business continuity. Business continuity relies on people, process, and technology, but should never be driven by a specific feature. It should be requirements-based. Without people, processes do not work – especially in the case of DR, where human intervention is usually required. These concepts are universal and, when implemented properly, they work well.

In Paris, all of that came together. According to the Twitter thread I linked above, there has been a protocol (read: process) in place since the French Revolution for a fire breaking out at Notre Dame: save the people first, then the art, altar, and furniture, and finally the structure itself. People cannot be replaced. Original objects cannot, and to a degree, neither can buildings, even though they can be rebuilt. That protocol is about survival. Just this morning, the Guardian published an article on how the structure was being assessed for stability. Clearly the building needs to be safe for people to not only rebuild it, but to ultimately use it when it is ready. The article also mentions that objects such as the Crown of Thorns have been secured, and others which incurred smoke damage are being looked at by the Louvre. A massive undertaking is underway – and the work has barely even begun!

Now think about where you work and what goes on every day – solutions and their underlying applications, systems, and data power your business. Without them, things would most likely come to a grinding halt, and in a worst case scenario, being down for a significant period of time could force the business to close its doors. We live in a data-driven world. Without data, there literally is nothing. Could you process payroll without the HRIS data needed? How about processing a payment from a customer? What would happen if a record of all of your sales for the last month disappeared? I've seen all of these scenarios (and more!) play out at companies over the years. In our data-driven world, losing data is a bad thing for most companies. Data loss may happen in extreme scenarios, but the goal is to minimize or eliminate it. Losing a month's worth of data can be dealt with, but to devise the right solution you need to understand not only the technical side of the house, but also the non-technical aspects.

I am not forgetting about people. By all means, the human aspect is important here. Make sure people are safe and sound before thinking about recovering systems. Systems also need care and feeding. They are not going to be putting precious objects back into Notre Dame until it is structurally sound and rebuilt, so why would you bring SQL Server up if, for example, you do not have storage or networking?

Things can change in an instant. You need to be prepared. Paris was. Whether it is an old, iconic, world-renowned building and its objects, or your own systems and data, you need to account for business continuity to pick up the pieces after the proverbial dust settles. If Paris can get it right with Notre Dame, you can do it in your own environments. Contact SQLHA today to help ensure the survival of your business in the event of downtime events big or small.

No April Fools Prank – 20% Off SQLHAU’s Upcoming Online Class

April 1, 2019 in SQLHAU, Training

Can you believe it is April already? I know today will be full of people trying to be witty and pull one over on you. This is not that at all. Later this month, on April 22 and 23, I'll be delivering SQLHAU's SQL Server Availability Solutions in a Cloudy, Virtual World for the first time as an online, live class. It's not only instructional content but also features our famous labs.

Through Friday, April 12, there is a 20% discount available so you can grab one of the remaining seats. If you are looking to quickly get up to speed on modern SQL Server availability solutions, this is a class you can’t miss. Use the code APRILFOOLS at checkout.

A Tale Of Two Queries

March 25, 2019 in Availability Groups, Transact-SQL

I teased this blog post a little while back at SQLBits, but I'm finally getting a chance to finish it. No stale, scheduled content here – it's always hot off the presses.

Anyway, at Bits I did a few sessions, one of which was on AG and FCI troubleshooting (I'm also doing one tomorrow night at Boston SQL). As I was putting the talk together, one of the things I came up with was around endpoints for AGs. How an endpoint is fully configured is not exposed in SSMS, but you can query SQL Server's dynamic management views (DMVs), which is easy enough if you know what you are looking for.
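To give a flavor of what that looks like, here is a minimal sketch – not the exact query from the documentation, and the columns shown are just a sampling I picked – pulling endpoint configuration from sys.database_mirroring_endpoints and sys.tcp_endpoints:

-- Minimal sketch: surface how an AG endpoint is configured.
-- Written in my preferred older style, with the join in the WHERE clause.
SELECT dme.name,
       dme.state_desc,
       dme.role_desc,
       dme.connection_auth_desc,
       dme.encryption_algorithm_desc,
       te.port
FROM sys.database_mirroring_endpoints dme, sys.tcp_endpoints te
WHERE dme.endpoint_id = te.endpoint_id;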

I did some of the initial documentation for Microsoft a few years ago for distributed availability groups. Since its publication, Microsoft has opened it up and anyone can suggest edits/comments via GitHub. So far so good – all of this plays into the bigger story.

Flash back to last summer. I saw this on Twitter from Glenn Berry, which spawned a whole thread (to see the whole thread, click on the date).

When I wrote that documentation (and it was before 2018, if you care about such things), I used … gasp … a WHERE clause, not JOIN ON syntax. I've always written queries like that, going back to the 90s. The gist of that thread was that some preferred the "newer" syntax like INNER JOIN, and they subsequently had the query changed to show that. You get the same results, but whatever. I didn't take it personally, and the thread was good-natured.

Before you all start screaming at me that they are right (no, they were not … more on that in a minute): yes, I'm old. Discrete math was a required course for my Computer Science degree. It wound up being fate that things like sets, joins, etc., became the foundation for my career. I'm not ignorant. These days I'm much more of an infrastructure guy, but I do write infrastructure-related queries (hence the AG stuff).

Pedro Lopes, who is now on the SQL Server dev team, wrote the blog post "T-SQL Misconceptions – JOIN ON vs. WHERE", which does a good job of explaining the "it depends" around the newer style of joins. Why link this article? Microsoft deprecated the OUTER JOIN operators in SQL Server 2008. Two other sources of information on this are here and here (the first is Ward Pond's old TechNet blog, and sadly it will probably go away soon). If you're keeping score at home, WHERE clauses are not deprecated (except if you're using *= and =*). The changes people made were wholly unnecessary, and as the author, I find the newer style harder to decipher than what I originally wrote. They were putting their own biases onto things.
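If you've never seen those old outer join operators, here is a quick hypothetical sketch (the table and column names are made up for illustration). The first form is the deprecated one and won't even run under modern compatibility levels; the second is the supported ANSI equivalent:

-- Old-style outer join operator (deprecated; fails on modern compatibility levels).
-- Customers and Orders are hypothetical tables.
SELECT c.CustomerName, o.OrderID
FROM Customers c, Orders o
WHERE c.CustomerID *= o.CustomerID;

-- The supported ANSI syntax for the same left outer join.
SELECT c.CustomerName, o.OrderID
FROM Customers c
LEFT OUTER JOIN Orders o
    ON c.CustomerID = o.CustomerID;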

As an experiment, I wrote the endpoint DMV query in two variants: my preferred way (WHERE clause) and with the JOIN ON syntax. First up is the query execution plan for the WHERE clause. Click to make it bigger.

Figure 1. Query using a WHERE clause

Next up is the INNER JOIN.

Figure 2. Query using an INNER JOIN clause

Spoiler alert: they have the same execution plan. There is no difference between

Figure 3. The actual WHERE clause

and

Figure 4. The actual INNER JOIN clause

From a results standpoint, you get the same output. In other words, zero difference all around.
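For those who cannot make out the screenshots, the JOIN ON rewrite of the earlier endpoint sketch looks like this – same columns, same results, and as the plans above show, the optimizer treats both forms identically:

-- The same sketch as earlier in this post, rewritten with INNER JOIN ... ON.
-- Produces the same plan and the same output as the WHERE-clause version.
SELECT dme.name,
       dme.state_desc,
       dme.role_desc,
       dme.connection_auth_desc,
       dme.encryption_algorithm_desc,
       te.port
FROM sys.database_mirroring_endpoints dme
INNER JOIN sys.tcp_endpoints te
    ON dme.endpoint_id = te.endpoint_id;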

Preferences are just that. Preferences. Don’t foist them on others. Deal?

April Webinar Announcement: The Only Constant Is Change

March 12, 2019 in SQLHA, Webinar

As we first announced in the inaugural issue of our newsletter, Mission Critical Update, we will be holding free 30-minute webinars every other month starting in April. The first one will be on Wednesday, April 24 at 11 AM Eastern/8 AM Pacific/3 PM GMT. We will send out details on how to join as the webinar gets closer.

Here is the information about the webinar:

The Only Constant Is Change

Whether you are at the top of the corporate ladder or an administrator, everyone has to manage change. That change comes in many forms including:

  • Applying patches to resolve issues and improve reliability as well as security updates to keep your systems and business data safe
  • Upgrading/migrating to new major versions of Windows Server, Linux, and SQL Server
  • Taking advantage of evolving hardware-based options such as hyperconverged solutions on premises and zones in the public cloud as well as new features to improve your systems and applications
  • Planning and deploying an updated architecture for the solution to make everything work in harmony

Each of these areas is a moving target. Strong organizations embrace change with well-thought-out plans and make the most of it with the support of management. With all of that in place, disruptions to the business are minimized. Want to be that type of organization? Sign up for this free 30-minute webinar from SQLHA® to hear mission critical experts Allan Hirt and Max Myrick talk about the effect of change and how to stay on top of it. You will also get a sample high-level plan to see how all of this can be achieved.

Want to get notified of upcoming webinars? Don’t want to miss an issue of Mission Critical Update? Looking for the latest on training and need to know when a Mission Critical Moment is published? Subscribe today and choose what you would like to hear from us. Mission Critical Update #2 is going out later this week – don’t miss out!

SQLBits 2019 Recap, Labs, and VMware vExpert 2019

March 8, 2019 in Conference, SQLbits, SQLHAU, Training

Happy Friday. Things are finally calming down over here at SQLHA HQ. Last week I was in the UK at SQLBits 2019 where I delivered a Training Day on Thursday, February 28 and a regular session on Friday, March 1. I’ve always loved hopping across the pond to speak at Bits; it’s one of my favorite conferences. The atmosphere is always great, and the organizers do such a good job. It also doesn’t hurt that I love going to the UK. It was my first time in Manchester. Unfortunately I didn’t get much time to explore. I arrived in London on Monday night and took the train up to Manchester on Tuesday. I spent the next two days working (the inside of my hotel room was lovely), did the Training Day, my session, and headed back down to London before flying home.

The Bits venue was really lovely – it was in the old (and converted) Manchester Central train station. I'm a train guy, so it thrilled me to no end. You can see it in the picture later in this blog post. The Training Day went smoothly – I had nearly a full room. I never take it for granted that after all these years and conferences around the world, people still want to show up and hear me speak. So thank you, one and all, who came out to the Training Day, which was not free. I know you have a lot of choices, and I am honored that you selected me.

SQLBits this year made the decision to have 50-minute sessions. Anything under an hour can be … challenging, but I felt really good about my regular session "Common Troubleshooting Techniques for AGs and FCIs". The Bits team already has the session uploaded, so feel free to view it at the link above. The audience was great, and I had a bunch of questions afterwards. Luckily it was break time, so people could ask me as I was tearing down. I also wanted to get off the stage quickly so the next presenters could get set up. I'm considerate like that!

The sign for the room

I hope to be able to go back in 2020, but if you ever get the chance, I highly recommend attending at least one SQLBits conference in your career.

Getting home was a bit of an adventure. There was snow back home in MA, and for the first time in a long time, I did not fly direct from London. I went via JFK. My flight from JFK to Boston was cancelled, so I wound up staying the night in New York City and taking Amtrak home. It was not that big of a deal, and as someone who is on the road a lot, par for the course. These things happen. ProTip: if this happens to you, be nice to agents at the airport or on the phone. They’re having a crappy/stressful day, too, since they’re dealing with everyone else who is being affected. You get more with honey than vinegar.

Labs

The Training Day featured an all new lab that I put together, and it was the first time in a few years that I've done a Training Day with a lab. You may be wondering why I sometimes have preconference sessions (née Training Days in Bits-speak) without labs. I was one of the first (if not the first) to pioneer the use of labs on this scale in the SQL Server community, and it's nice to see some others doing it now, but it's never a slam dunk. Why?

First, before I even think of putting a lab together, I work with the conference organizers to see if the venue can support up to approximately 100 people doing labs. Since that many people banging on WiFi will consume a lot of bandwidth, it would be a miserable experience for all involved (including yours truly) if connectivity sucked.

Second is the cost involved. There's a cost associated with each student, not only for the backend lab stuff, but also to the conference organizers for the bandwidth. If it's ultimately going to be too expensive, we won't do it. Nobody is in the business of losing their shirt on these things. It's much more economical now than the first time I did this at Bits and Summit five or six years ago.

Assuming the cost and the infrastructure are there, is it worth doing a lab? Let me say this: when others said that labs were a lot of work and not worth it (some of whom have changed their minds since …), I did them. I've always believed in the power of hands-on learning. I did labs for private and public classes back when I had to bring (or send) an external hard drive with VMs and load them onto PCs, with hours and hours of setup. That does not scale for so many reasons, not the least of which is that a laptop capable of running my labs that way isn't a low-end spec. I think back to some of the classes I delivered in Australia, where I arrived days earlier, had to load up PCs, test them, etc. Doing labs was, and still is, a big commitment.

For the past 5 or 6 (maybe 7?) years, my labs have been done through a browser. Everyone still gets their own set of virtual machines, but they no longer need to be loaded onto everyone's PCs or run under a local hypervisor. Soon I'll be offering Azure- and AWS-based labs in addition to VM-based ones for my classes and possibly precons … stay tuned!

Then there's the whole instructional design part of this, which is putting together the VMs (and getting them all to the right point …) as well as writing the lab manual. For one day of training, it's hard to build a lab that works in 60 to 90 minutes knowing people are at different skill levels, but you want to do something that's meaningful and not just "click-click-click, you built something but you didn't learn anything". Needless to say, labs were and still are a big time investment on my end, and I feel they are worth the effort. It's gratifying to see people loving them as they work their way through.

Finally, with a one day class, you have to balance the instructional content against the lab, so you need to design that part of the day even better since you're giving up roughly 25% of your time to provide that hands-on experience.

I will say after all these years, it never gets old watching a ton of people all access VMs (across 100 people it’s easily 400 VMs or more) simultaneously. That’s a lot of horsepower, and also why I need to know well in advance if it’s going to happen because the folks running the backend need to ensure there’s enough horsepower reserved for the day. It’s not just “show up and run labs”.

Attendees doing labs at my SQLBits 2019 Training Day

If you want the best training and labs, sign up for one of the upcoming SQLHAU classes that are being delivered live online, or come see me in person during the Chicago dates in August. Use the code BLOG20 to get 20% off the April online class "SQL Server Availability Solutions in a Cloudy, Virtual World", which does have a lab. The discount is good through March 31. You can also subscribe to not only get our newsletter, but also to get notified when we have new training or other training-related items. Sometimes we offer subscriber-only discounts 🙂

VMware vExpert 2019

I'm pleased to announce that yesterday I was re-awarded VMware vExpert for 2019. Just as with people showing up for things like the SQLBits Training Day, I never assume that being renewed is automatic. Thank you, VMware!