
Always On Availability Groups with No Underlying Cluster in SQL Server v.Next

Posted February 22, 2017 in Always On, Availability Groups, Linux, Pacemaker, SQL Server v.Next, Windows Server Failover Cluster

UPDATED 2/22/17 in the afternoon

With a lot of the focus seemingly going to the Linux version of SQL Server v.Next (including the addition of Always On Availability Groups [AGs] in the recently released CTP 1.3), I don’t think a lot of love is being showered on the Windows side. There are a few enhancements for SQL Server availability on Windows as well, and this is the first of several upcoming blog posts about them.

A quick history lesson: through SQL Server 2016, we have three main variants of AGs:

  • “Regular” AGs (i.e. the ones deployed using an underlying Windows Server failover cluster [WSFC] requiring Active Directory [AD]; SQL Server 2012+)
  •  AGs that can be deployed without AD, but using a WSFC and certificates (SQL Server 2016+ with Windows Server 2016+)
  • Distributed AGs (SQL Server 2016+)

SQL Server v.Next (download the bits here) adds another variant which is, to a degree, a side effect of how things can be deployed in Linux: AGs with no underlying cluster. For a Windows Server-based install, this means there may be no WSFC; for Linux, currently, no Pacemaker. Go ahead – let that sink in. Clusterless AGs, which is the dream for many folks (but as expected, there’s no free lunch, which I will discuss later). I’ve known about this feature since November 2016, but for obvious reasons, couldn’t say anything. Now that it’s officially in CTP 1.3, I can talk about it publicly.

Shall we have a look?

I set up two standalone Windows Server 2016 servers (vNextN1 and vNextN2), neither of which is joined to a domain. Figure 1 shows the info for vNextN1.

Figure 1. Standalone server not domain joined

Using SQL Server Configuration Manager, I enabled the AG feature. Prior to v.Next, you could not continue at this point: with no WSFC, you would be blocked with an error. In v.Next, however, you get what is seen in Figure 2. It still indicates that there is no WSFC, but it happily allows you to enable the AG feature.

Figure 2. Enabling the AG feature in v.Next

After enabling and restarting the instance, you can clearly see in Figure 3 that the AG feature is enabled. We’re not in Kansas anymore, Toto.

Figure 3. AG feature is enabled, but no WSFC

Just to prove that there is no magic: if you look in Windows Server, the underlying feature needed for a WSFC is not installed, which means a WSFC cannot be configured. This is seen in Figure 4.

Figure 4. Failover clustering feature is not installed – no WSFC!

In SQL Server, configuring this is done via T-SQL, and it is similar to how it is done for AD-less AGs with workgroup WSFCs in SQL Server 2016/Windows Server 2016. In other words, you are using certificates. In addition to certificates, there is a new clause/option (CLUSTER_TYPE) in the CREATE and ALTER AVAILABILITY GROUP T-SQL. Rather than follow the Linux example in the documentation that shows the CLUSTER_TYPE syntax, I adapted the syntax I have been using for AD-less AGs with certificates, since it is basically the same. I did not use seeding (you can if you want); I manually restored the database AGDB1. I created an AG called AGNOCLUSTER. This can be seen in Figures 5 and 6.
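
To give a feel for the syntax, below is a minimal sketch of what creating a clusterless AG could look like, based on the CLUSTER_TYPE clause described above. Treat it as illustrative, not definitive: the server names, endpoint port, and database name mirror my lab, the certificate and endpoint creation steps are omitted for brevity, and the exact options may change before RTM.

```sql
-- On vNextN1 (the primary). Assumes the mirroring endpoints already exist
-- and are secured with certificates, and AGDB1 has been backed up.
-- CLUSTER_TYPE = NONE is the new piece; FAILOVER_MODE must be MANUAL
-- since there is no cluster to automate anything.
CREATE AVAILABILITY GROUP [AGNOCLUSTER]
WITH (CLUSTER_TYPE = NONE)
FOR DATABASE [AGDB1]
REPLICA ON
    N'vNextN1' WITH (
        ENDPOINT_URL = N'TCP://vNextN1:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = MANUAL),
    N'vNextN2' WITH (
        ENDPOINT_URL = N'TCP://vNextN2:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = MANUAL);

-- On vNextN2 (the secondary), after restoring AGDB1 WITH NORECOVERY:
ALTER AVAILABILITY GROUP [AGNOCLUSTER] JOIN WITH (CLUSTER_TYPE = NONE);
ALTER DATABASE [AGDB1] SET HADR AVAILABILITY GROUP = [AGNOCLUSTER];
```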

Figure 5. vNextN1 as the primary replica of AGNOCLUSTER

Figure 6. vNextN2 as the secondary replica of AGNOCLUSTER

To support this new functionality, there are new columns in the catalog view sys.availability_groups – cluster_type and cluster_type_desc. Both can be seen in Figure 7. You will also get an entry in sys.availability_groups_cluster showing the new cluster_type (also a new column there).
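
A trivial query sketch to check them – an AG created with no underlying cluster should report a cluster type of NONE:

```sql
-- How was each AG created? With no underlying cluster, expect NONE.
SELECT name, cluster_type, cluster_type_desc
FROM sys.availability_groups;
```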

Figure 7. New columns in sys.availability_groups

So what are the major restrictions and gotchas?

  1. This new AG variant is NOT considered a valid high availability or disaster recovery configuration, because there is no underlying cluster (WSFC for Windows Server, or currently Pacemaker on Linux). It is intended more for read-only scenarios, which means it is more of an Enterprise Edition feature than a Standard Edition one. I cannot stress enough that this is NOT a real HA configuration.
  2. UPDATE – A major reason this is not a real HA configuration is that there is no way to guarantee zero data loss without first pausing the primary and ensuring that the secondary replica (or replicas, as the case may be) is in sync.
  3. Having said #1, you can do a manual failover from a primary to a secondary (see the sketch after this list). Failover is manual even in the case of an underlying server failure, which is exactly why this is not a true availability configuration: there is no cluster working with sp_server_diagnostics to detect and handle the failure for you.
  4. Since there is no underlying cluster, you cannot have a listener. This should be painfully obvious. It also means that you will connect directly to any secondary replica for reading, which makes this possibly less interesting for read-only scenarios. But again, did you expect you would get everything?
  5. Since there is no Standard Edition version of the CTP, it is unknown if this will work with Standard Edition in SQL Server v.Next. I would assume it will, but we’ll see when v.Next is released.
  6. UPDATE – This also can most likely be used for migration scenarios (arguably it will be the #1 use), which I will talk about in a Windows/Linux cross platform blog post soon.
  7. UPDATE – This is not the replacement for database mirroring (DBM). That replacement is/was AGs in Standard Edition in SQL Server 2016, even though they require a WSFC. You get so much more that you really should stop using DBM. It has been deprecated since SQL Server 2012 and could be pulled at any time (and I’m hoping it’s gone in v.Next).
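
To make #2 and #3 concrete, here is a hedged sketch of a manual failover with no cluster. The DMV check is real; the failover statement is the forced form, which appears to be the only kind available without a cluster (the exact sequence could change before RTM). With no cluster, you are the data loss check: verify the target is SYNCHRONIZED before pulling the trigger.

```sql
-- 1. On the primary: confirm the target secondary is synchronized.
SELECT ar.replica_server_name,
       drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON ar.replica_id = drs.replica_id;

-- 2. On the target secondary: take over. No data loss only if step 1
--    showed SYNCHRONIZED for this replica.
ALTER AVAILABILITY GROUP [AGNOCLUSTER] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```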

Keep in mind this is how things are now, but I don’t see much, if anything, changing whenever RTM occurs.

I won’t really be talking about this configuration this weekend at SQL Saturday Boston or at SQL Saturday Chicago in March, but I will talk about it a little bit at both the SQL Server User Group in London on March 29th and the SQL Server User Group in Dublin on April 4th. I will cover this configuration in more detail as part of my Training Day at SQLBits 16 – “Modern SQL Server Availability and Storage Solutions” (sign up for it today – seats are going fast!). It will also be part of my new SQLHAU course, the Always On Availability Groups Boot Camp, coming up in August, and it will be incorporated into the 4-day Mission Critical SQL Server class (next scheduled delivery is in December).

Announcing Two New Courses and the 2017 SQLHAU Schedule

Posted February 17, 2017 in SQLHAU, Training

TGIF! It’s been a busy week here, and I’m finally getting a chance to breathe. I’m proud to announce that SQLHAU has two new instructor-led courses:

There is still the original 4-day Mission Critical SQL Server class which covers everything and has labs as well.

Here are the dates and locations for the classes:

Click on the links above to see pricing and discounts, or just check our Events page. We’re running some great specials (up to 30% off the list price). Don’t miss out on the best SQL Server high availability training. Reserve your seats today.

We’ve got a few more exciting things in the fire so stay tuned!

Outages In An Increasingly Connected World

Posted February 14, 2017 in Administration, Business Continuity, Data Loss, Disaster Recovery, High Availability, Mission Critical

I’ve been in the availability game a long time. It seems like at least once a week there is a story about a fairly major outage somewhere in the world due to software, hardware, public cloud failures, human error/incompetence, DDoS, hacking, or ransomware. In the dark ages of IT, before Twitter, Facebook, and the other social media platforms existed, you only sometimes heard about these events. Today, we hear about them in real time. Unfortunately, they can become PR nightmares, too. Below is a sample of some recent events, all of which happened after January 1, 2017:

The costs associated with these problems also seem to be increasing. Computer problems are no longer just small “glitches”; there are real consequences, whether they are man-made outages, systems going down due to some type of hardware or software failure, or something else completely. The elephant in the proverbial room is the notion that this is all caused by “the (public) cloud”. <insert picture of old man shaking his fist> In many cases that is not true, but let’s be clear: public cloud providers have had their missteps, too. Technology is only as good as the humans implementing it and the processes around it. Technology won’t save you from stupidity.

Let’s examine some of these very public failures.

RIP Storage

The Australian Taxation Office (ATO) has had some high-profile outages over the last year, a lot of them storage related. Their latest run-in with trouble was just a few days ago. This outage didn’t have data loss associated with it, but if you look at the ATO’s statement of February 8th, it’s pretty clear they are unhappy with their vendor. They’ve even hired an additional firm to come in and get to the bottom of things. In other words: on top of all the inconvenience and impact to end users and taxpayers, they’re spending more money to figure out what went wrong and, hopefully, fix the problem.

We’re database folks. Storage is fundamental to us in three ways: capacity (i.e. having enough space), performance (helps get those results back quicker, among other things), and availability (no storage, no database). You need to know where you are in relation to all of those for your database solutions – especially the mission critical ones. You also need to have a good relationship with whoever is responsible for your storage. I’ve been involved with and heard too many horror stories around storage outages and the mess they caused, some of which could have been prevented with good communication.
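
As a small, concrete example on the capacity front, here is a sketch of one way to see where you stand from inside SQL Server using sys.dm_os_volume_stats (available since SQL Server 2008 R2 SP1). Thresholds and alerting are up to you; the point is to not be surprised.

```sql
-- Total and free space on each volume hosting database files.
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1073741824 AS total_gb,
    vs.available_bytes / 1073741824 AS available_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```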

Oops

Ah, GitLab. What started out as something well-intentioned (putting servers up in a staging environment to test reducing load) became the perfect storm. Before I go any further, let me say up front that I applaud their transparency. How many would admit this?

Trying to restore the replication process, an engineer proceeds to wipe the PostgreSQL database directory, errantly thinking they were doing so on the secondary. Unfortunately this process was executed on the primary instead. The engineer terminated the process a second or two after noticing their mistake, but at this point around 300 GB of data had already been removed.

Hoping they could restore the database the engineers involved went to look for the database backups, and asked for help on Slack. Unfortunately the process of both finding and using backups failed completely.

Read the “Broken Recovery Procedures” section of that postmortem. It is very telling. Things went nuclear and recovery was messy – something that, in my opinion, could have been avoided. Failures of this magnitude are almost always rooted in poor processes; it’s something I see time and time again. Kudos to GitLab for owning it and committing to fixing those processes, but the assumptions made along the way (and you know what they say about assumptions …) helped seal their fate. This cautionary tale highlights the importance of making backups and, ultimately, testing restores.

Wait, There Are Limitations?

A few of these tales of woe are related to not knowing the platforms used.

I can’t say whose fault it was (the developers? the person who purchased said product? there could be lots of culprits …), but look at Instapaper. Check out the Root Cause section of the post mortem: they went down because they hit, and then exceeded, a 2TB file size limit that existed in an older version of their underlying platform. That is something anyone who touched that solution should have known about – and should have had specific monitoring in place for, so that when things got close, it could be mitigated. I call a bit of shenanigans on this statement, but applaud Brian for taking full responsibility at the end:

Without knowledge of the pre-April 2014 file size limit, it was difficult to foresee and prevent this issue.

It’s your job to ask the right questions when you take over (hence the accountability part). Now, to be fair, it’s a legitimate gripe assuming what is said is true and RDS does not alert you. Shame on Amazon if that is the case, but situations like that are the perfect storm of being blind to a problem that is a ticking time bomb.

Similarly, Code.org suffered the same fate. I’m not a developer by nature, but even I know that room for 4 billion lines of code is not a lot, especially when you have a shared model; their own webpage alludes to over 20 billion lines of code written on the platform. Their issue was a 32-bit index, which maxes out at about 4 billion rows of coding activity, and they had no idea they were hitting that limit. I would bet that some dev along the way said, “Hey, 4 billion seems like a lot. We’ll never hit that!” Part of the fix was switching to a 64-bit index, which holds far more (about 18 quintillion rows), but famous last words …

On the plus side, this new table will be able to store student coding information for millions of years.
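
The same trap exists for us with INT (32-bit) keys in SQL Server, and checking for it is cheap. A hedged sketch – the 70% threshold is an arbitrary choice of mine:

```sql
-- Find INT identity columns creeping toward the signed INT ceiling of
-- 2,147,483,647. Moving to BIGINT (about 9.2 quintillion signed values)
-- effectively ends the problem.
SELECT OBJECT_NAME(ic.object_id) AS table_name,
       ic.name AS column_name,
       CONVERT(bigint, ic.last_value) AS last_value
FROM sys.identity_columns AS ic
WHERE TYPE_NAME(ic.user_type_id) = 'int'
  AND CONVERT(bigint, ic.last_value) > 1500000000;  -- ~70% of INT max
```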

A Very Real World Example

In the US right now, there is the chance of a major catastrophe in Oroville, CA because of the Oroville Dam. There is nothing more serious than possible loss of life and major impact on human lives. I was reading the article “Alarms raised years ago about risks of Oroville Dam’s spillways” on the San Francisco Chronicle site, and like a lot of other things, it appears there is a chance this all could have been avoided. Of course, with things of this nature there is a political aspect and a bit of finger pointing (as there can be in businesses, too), but here is a quote I want to highlight:

Bill Croyle, the agency’s acting director, said Monday, “This was a new, never-happened-before event.”

It only takes once. I’ve seen this time and time again at customers for nearly twenty years. Nothing is a problem … until it’s a problem. No source control, and you can’t roll back an application? No change management, and updates kill your deployments so you have to rebuild from bare metal? You bet I’ve seen those scenarios – and those customers implemented source control and change management afterward.

Let me be crystal clear: hundreds of thousands of people and animals being displaced is very different from losing a few documents or some data. I have no real evidence that they did not do the right repairs at some point (I’m not an Oroville Dam expert, nor have I studied the reports), and yes, there is always something you do not know that can take you down, but statements like that look bad.

Pay Now or Pay Later – Don’t Be The Headline

Outages are costly, and fixing them takes more time, money, and possibly more downtime. Here are five tips on how to avoid being put in these situations and handing yourself a potential résumé-generating event:

1. Backups are the most important task you can do as an administrator. At the end of the day, they may be all you have when your fancy platform’s features fail (for any number of reasons, including implementing them incorrectly). More important than generating backups is testing them: you do not have a good backup without a successful restore (see the sketch after this list). With very large sets of data (not just SQL Server databases – data can mean much more, such as files associated with metadata in an RDBMS), finding ways to test restores is not trivial due to the costs (storage, time, etc.). But when you are down and possibly closed for good, was the risk worth it, only to find out you have nothing? No.

2. Technical debt will kill you. Having systems that are long out of support (and probably on life support) and/or on old hardware is a recipe for disaster. It is definitely a matter of when, not if, they will fail. Old hardware will malfunction. If you’re going to have to search eBay for parts to resuscitate a server, you’re doing it wrong. Similarly, for the most mission critical systems, planned obsolescence every few years (usually 3 to 5 in most companies) needs to be in the roadmap.

3. Have the right processes in place. Do things like test disaster recovery plans. How do you know your plans work if you do not run them? I have a lot to say here but that’s a topic for another day.

4. Assess risk and mitigate as best as possible. Don’t bury your head in the sand and act surprised when something happens. (“Golly gee, we’ve never seen this before!”)

5. Making decisions based on bad information and advice will hurt you down the road. I can’t tell you how many times Max and I have come in after the fact to clean up somebody else’s mess. I don’t relish that or get any glee from it. My goal is just to fix the problem, but if you have the wrong architecture or technology/feature implemented, it will cause you years of pain and a lot of money.
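
To put #1 in concrete terms, here is a minimal T-SQL sketch of the backup-and-prove-it loop. The paths, database name, and logical file names are illustrative assumptions; restoring to a separate server on a schedule is the real gold standard.

```sql
-- Take the backup with a checksum...
BACKUP DATABASE [YourDB]
TO DISK = N'X:\Backups\YourDB_Full.bak'
WITH CHECKSUM, INIT;

-- ...sanity-check the backup file (necessary, but NOT sufficient)...
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\YourDB_Full.bak'
WITH CHECKSUM;

-- ...and actually restore it somewhere. A successful restore is the only
-- real proof you have a good backup. Logical names below are assumptions;
-- check yours with RESTORE FILELISTONLY.
RESTORE DATABASE [YourDB_RestoreTest]
FROM DISK = N'X:\Backups\YourDB_Full.bak'
WITH MOVE N'YourDB' TO N'Y:\RestoreTest\YourDB.mdf',
     MOVE N'YourDB_log' TO N'Y:\RestoreTest\YourDB_log.ldf',
     RECOVERY;
```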

Are you struggling with any of the above? Have you had or do you currently have availability issues? Contact us today so SQLHA can help you avoid becoming the next headline.


I’m a VMware vExpert and Schedule Updates

Posted February 10, 2017 in SQLBits, vExpert, VMware

It’s been a crazy (but good) start to 2017. Besides the foot of snow that got dumped on the Boston area yesterday, I’ve been busy with client engagements, writing the book, and other things (those other things you should hopefully see soon), and I took a bit of a break by getting away for a few days (the first time I took more than a day or two off in about two years). My batteries are recharged (something I talk about in this blog post from 2015), and to add to an already exciting year, I was named a VMware vExpert for 2017. I’m extremely humbled and honored. Thanks to VMware for recognizing me. I don’t just speak Hyper-V, you know 🙂 This is a nice bookend to my dual MVP status with Microsoft.

I also have finally found some time to update our Events page with everything I have coming up – which is a lot! Max already put his stuff up, so I’m the slacker here. Besides SQL Saturdays in both Boston and Chicago, I’ll be back at SQLBits doing another (and all new …) Training Day as well as a session, and heading to Ireland for the first time to speak at the SQL Server Ireland User Group. I’m also eyeing something in London the week before (the last week of March) if we can make the stars align, so stay tuned.

We’ve been working on the training schedule for the 2017 USA classes and hope to announce things in the next few weeks – we just need to nail down one or two details first.

Hope to see you at one of these upcoming speaking engagements!

2016 in Review, A Few Updates, and Looking Forward

Posted December 30, 2016 in Book, Classroom, PASS Summit, SQL Server 2008, SQL Server 2008 R2, SQL Server 2016, SQL Server v.Next, Teaching, Virtualization, VMware, Whitepaper, Windows Server 2008, Windows Server 2008 R2, Windows Server 2016

Hello, everyone. Can you believe 2017 will be here in just a few days? 2016 seemed to fly by. It’s been quite a year. I wanted to take the time in my final blog post of the year to recap some of what has happened, talk about some stuff that is coming, and update you on some stuff, too.

The Elephant in the Room

There is the matter of the book. As many of you are painfully aware (as am I), I officially announced it in 2013. A lot has happened both personally and professionally between then and now. One thing I haven’t talked about, but which came to a head in late 2015, was my health. Without getting into specific details: from January of 2014 until almost the end of 2015, I was not in good shape. I hurt myself, and over time it got to the point where I basically couldn’t sit, stand, or lie down without pain. I am a non-“medicate yourself” person, so I didn’t take anything for it. I tried to soldier through it. That was a huge mistake.

The amount of time I spend on the road speaking and going to customers didn’t help, either. There was just no time to stop and get off the road due to work commitments. Schlepping luggage, airplane seats, and everything else exacerbated what was going on. I was shattered by the time I hit hotel rooms. It didn’t help that my hand was also messed up, which affected not only my ability to type but also my ability to play bass. Since I couldn’t stop using my dominant hand, it took about seven months to heal.

I did my best to put a brave face on things publicly, but if you saw me at PASS Summit 2015, you would have seen me at my low point in terms of how I was feeling. It was hard to miss the Leaning Tower of Allan. Right after PASS Summit, I went back to the doctor and spent the rest of the year and the first few months of 2016 in physical therapy. Luckily, for the first time in nearly two years, I had contiguous time to deal with what was going on with me. Knock on proverbial wood, physical therapy took care of things, and to date, I feel great. I haven’t felt this healthy in years. If you saw me at PASS Summit this year, it was clearly night and day.

What does that have to do with the book? Needless to say, as time dragged on, it became harder and harder for me to get my normal work done, let alone sit for hours of writing on top of that. I’m not looking for a pity party or sympathy, nor am I absolving myself of anything book related, but anyone who has experienced excruciating pain to the point of it being debilitating knows what I am talking about. I’m not active/active, you know. All kidding aside, don’t be a martyr like me if you’re feeling bad: take care of it. I let things go to the point where I had no choice, and had physical therapy not worked, surgery might have been something I needed to explore. Quality of life became a very real issue. To put a capper on 2016, I’m just getting over bronchitis, which sidelined me for a good portion of December.

So where does that leave things? I’m back on track despite the bronchitis. My spare time over the past few months (which hasn’t been much – we’ve been slammed with the day job, which isn’t 9 to 5 …) has been spent working on the book, which should really, truly be content complete over the next little while, barring any unforeseen problems.

Before anyone asks: the book will still be covering SQL Server 2008 R2, especially because Microsoft recently announced that they are (unfortunately) now offering a super duper paid extended support option (Premium Assurance – see this and this) which gives up to 16 years of paid support for it as well as Windows Server 2008 R2. That makes what I’m doing more vital than ever, since I’m crossing all the major versions of SQL Server and Windows Server. I’ve also made some other hard choices as to what will and will not be in the book:

  • SQL Server 2016 and Windows Server 2016 are now in scope
  • Yes, there will be public cloud-related content – not just on premises/physical stuff
  • SQL Server v.Next – including SQL Server on Linux – is not in scope. This will be part of the first major update to the book, timeframe TBD since we have no release date for v.Next. If you haven’t been paying attention, the paint is barely dry on SQL Server 2016 and we already have CTPs of v.Next (as of the writing of this blog, we’re up to 1.1).

I’ll wrap up this section with this thought: I never intended for things to go this way. Not only did I think I’d be done; I thought I’d be on to the updates by now. The road of good intentions wound up being full of potholes that bent my rims and threw my car out of alignment. As I have mentioned before, my 2005 book, which was much smaller in scope and size, took three years. I’m not happy about the circumstances, but there is light at the end of the proverbial tunnel. They say what does not kill you makes you stronger, right? Again, I’ll reiterate: don’t let health issues build up. Take care of yourself.

Let’s Talk PASS Summit 2016

Once I took care of my health, 2016 really took a big upswing. Besides all of the customer work we did which hasn’t slowed down, some of the highlights of the year included teaching in Australia, precons at SQL Nexus in Copenhagen and SQLBits in the UK, speaking at VMworld in Las Vegas, and the capper of them all: PASS Summit 2016.

I’m very fortunate that I have presented at PASS Summit most years and have had a preconference session for quite a few of the recent ones. I was the first to introduce live labs three years ago, and I try to push the envelope each time my abstracts are accepted. I am glad PASS took a chance on me then – we had no idea how it would play out – and three years later, it keeps getting bigger and better. By now, we have a lot of the logistics down pat since I keep getting selected. Let me be clear – I don’t assume I’ll get a precon, because there are no guarantees. I know I’m lucky.

As with the previous years, we talked about capping the number who could sign up at around 100. There are a few reasons for this, not the least of which is making sure that I have enough proctors (i.e. things are manageable) and the convention center can handle the bandwidth needed. Much to my surprise, at one point when I checked in they had sold over 100 seats – PASS forgot to cap it. We then decided to cap it around 110. People still wanted to sign up, so we upped it to 120. Finally we said the heck with it, and filled the room (capacity: 136).

An additional 36 people does not sound like a lot, but from a backend and management perspective, it is. I’ve never done anything that big with labs. I had to talk to the folks hosting the VMs since everyone gets their own set (i.e. hundreds of VMs – not a small backend to have to account for), we conferred with the convention center, and on it went. I had so many people come up to me before and during PASS Summit telling me they wished they could have gotten in – as far as I know, I think mine was the only sold-out precon on Monday (can’t speak for Tuesday). No pressure, right? I remember one encounter going down the elevator at the Hyatt heading over the morning of the precon. One of the conference attendees saw my badge and made the association. He mentioned how he wanted to get in but couldn’t. I mean, what can you say? I’m flattered and humbled by that demand. I never take any of this for granted, and I would give the same energy whether one person showed up or that sold-out room of 136 did.

Things went off with only minor issues (power, which we took care of in a break; same issue as last year), and it was awesome to see that many people doing labs at once. I snapped this picture during the day.

Figure 1. 136 people doing labs. Glorious!

I also had a half day session on what was new for availability in both SQL Server 2016 and Windows Server 2016. It was an expanded version of a talk I had been doing for over a year, aided by the fact that Windows Server 2016 had just been released, so I could demo things I couldn’t before. I had no idea what room I was in, and thought there might be some interest, but there were many good sessions at the same time. Much to my surprise – on the last day of PASS Summit, no less – I was in a huge room (400+), and nearly every seat was filled for most of it. Below is a picture of one side as the room was starting to fill. I’d have to look at years past, but it may have been one of the biggest rooms I’ve spoken in, and definitely one of the fullest. Again, no pressure – just hundreds of people who can skewer you if you suck. Luckily, that didn’t happen. I had my best scores ever for a PASS Summit for both the half day and the precon. No complaints, and I can’t say enough how good an event PASS Summit was – and not just for me.

Figure 2. Room filling up for my half day session


VMware Whitepapers

In addition to contributing to and reviewing this whitepaper from VMware, I wrote one entitled “Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere” that was published in November. I didn’t blog about it, and some of you may have missed it. It was my first publicly released writing in a long time. It’s not marketing fluff, and I hope you find it useful.

Dual Microsoft MVP

One of the things I am very proud of is that back in July, I was not only re-awarded as a Data Center & Cloud Management (aka Windows Server, née Cluster) MVP, but I was also awarded as a Data Platform (aka SQL Server) MVP. There are other dual Microsoft MVPs, but it’s nice to be recognized for the two things I do day in and day out.

Selected 2016 Numbers

0 – The number of laptops purchased by me in 2016. Yes, I’m still using the Vaio Z Canvas I got from Japan in June of 2015. More than 18 months is uncharted territory!

3 – The number of noise cancelling headphones employed by me this year. I got two new pairs of headphones for travel at the end of the year; we’ll see which one stays. The old pair I used has already been retired. One of these days I’ll do a blog post on what to consider for noise cancelling headphones.

4 – The number of bottles of Goober Grape I polished off this year.

5 – The number of countries I visited. Besides the USA, I was in these countries: Australia, Bulgaria, China, Denmark, and the UK.

45 – The number of flights I was on and how old I turned this fall.

100,000 – The approximate number of miles I spent in the air this year.

Too many – the number of (in)famous and influential people (not just entertainment folks) who died.

Training, Events, and Public Speaking in 2017

2017 is already shaping up to be a busy year. I’ll be back in the UK for two weeks at the end of March and early April, teaching my 4-day Mission Critical SQL Server class in London via Technitrain, and then it’s SQLBits 2017, where I’ll also be delivering a Training Day on April 6th. I will most likely be speaking at the London SQL Server User Group again the week I’m teaching my class. Register early – both the class and the Training Day are likely to sell out!

I’ve submitted to a few SQL Saturdays (I usually do about a half dozen a year, give or take), and as those are confirmed, they will be added to the schedule. A few user groups have approached me, so I’m trying to slot those in as well. Get your requests in early! I hope to speak again at VMworld (fingers crossed), and will, of course, submit to PASS Summit again.

I’m still working on additional public dates (besides London) for my classes, and should have them nailed down in the next few weeks. Once that’s done, they’ll be posted and we’ll run a special. Stay tuned! In the meantime, if you need some training and would prefer us to come onsite, don’t hesitate to reach out. Get on our schedule early before it’s filled up.

A Note of Thanks

Whether you attended one of my classes or preconference sessions, saw me speak in person or online, read some of my writings, or reached out to me sometime this year – even if it was to ask where the book is because you’re annoyed at me – thank you. I do not take anyone for granted, and without you, none of what I do is really possible.

I would be remiss if I didn’t give a special thanks to my longtime friend and business partner, Max Myrick. 2016 was SQLHA’s best year yet.

Whew!

I only covered some of what happened. 2016 was quite the year personally and professionally, and 2017 is looking better. I hope to see many of you next year, be it in person or online. Happy New Year!