Meme Monday, started by Thomas LaRock (Twitter | Blog), is an interesting and fun thing. This month, the topic is dumb SQL Server questions. There are a ton, but this week I’m going to focus on three I hear all the time.

Question 1: My Windows/backup guys make a full backup of the system with SQL Server. Isn’t that the same as making a SQL Server backup of the databases?
In most cases, the answer is no. SQL Server is NOT a flat file-based database. A Windows-level backup of a SQL Server database can be done properly if the tool interfaces with VDI/VSS, but chances are most backup programs won’t handle that, or they may skip the SQL Server files entirely since they are in use. Even when it works, you would most likely have to attach the files afterward – it’s probably not a backup you could use in the traditional RESTORE sense or combine with transaction log backups. Some software does do proper SQL Server backups, but you have to ask. Some hardware-based solutions also properly interface with SQL Server, so while they are outside the DBA’s control, they can make valid backups that are logged in SQL Server. Check with your vendor.
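If you want to see for yourself whether those outside backups are actually being registered with SQL Server, msdb records every backup the instance knows about. A quick sanity check (the query is just a sketch; properly interfaced VDI/VSS backups show up with is_snapshot = 1):

    -- Recent backups SQL Server knows about; is_snapshot = 1 indicates
    -- a VDI/VSS snapshot backup taken by an outside tool
    SELECT database_name,
           backup_start_date,
           CASE type WHEN 'D' THEN 'Full'
                     WHEN 'I' THEN 'Differential'
                     WHEN 'L' THEN 'Log'
                     ELSE type END AS backup_type,
           is_snapshot
    FROM msdb.dbo.backupset
    ORDER BY backup_start_date DESC;

If your third-party tool never shows up in that list, it is not making backups SQL Server can use.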

At the end of the day, as a DBA, you’re responsible for your SLAs, RTOs, and RPOs. A big part of that is ensuring the right backup strategy. Leaving it to your Windows or backup guys could be an excellent career-limiting move.
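Owning the basics does not take much. A minimal native strategy looks something like the sketch below; the database name, paths, and options are placeholders to adapt, not recommendations:

    -- Full backup (path and database name are examples only)
    BACKUP DATABASE YourDB
    TO DISK = N'X:\Backups\YourDB_full.bak'
    WITH CHECKSUM, INIT;

    -- Regular log backups enable point-in-time RESTORE
    -- (the database must be in the FULL recovery model)
    BACKUP LOG YourDB
    TO DISK = N'X:\Backups\YourDB_log.trn'
    WITH CHECKSUM, INIT;

How often you run each one falls straight out of your RPO: if you can only afford to lose 15 minutes of data, you need log backups at least every 15 minutes.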

Question 2: Isn’t SQL Server failover clustering the same as Oracle RAC?
No, no, a thousand times no. While maybe not a dumb question, it’s the one I hear the most in clustering discussions and talks, and it’s clear people don’t do their homework and are making some pretty big assumptions. SQL Server’s failover clustering is built on top of Windows failover clustering. A single server (node) owns a given clustered instance’s resources at any one time. The shared disk is a single point of failure, and scalability comes from scaling up, not out.
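If you’re not sure what you’re dealing with, SQL Server will tell you whether an instance is clustered and which nodes can own it. A quick check using SERVERPROPERTY and the sys.dm_os_cluster_nodes DMV (SQL Server 2005 and later):

    -- 1 means this instance is part of a failover cluster
    SELECT SERVERPROPERTY('IsClustered') AS IsClustered,
           SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentOwnerNode;

    -- The nodes that can own this clustered instance
    SELECT NodeName FROM sys.dm_os_cluster_nodes;

On a standalone instance, the DMV simply returns no rows.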

RAC allows you to scale processing, but you still have one copy of the data on disk – single point of failure, and if you have an underpowered storage solution, I don’t care how many processing servers you have – you’re not going to scale if your I/O sucks. Plus, while I know it works, the lock manager Oracle has coded is a complex piece of technology.

Neither solution is necessarily straightforward to implement, although Microsoft has made clustering a lot easier over the years.

At the end of the day – both are valid approaches and are proven technologies.

Question 3: Can’t SQL Server have zero downtime and no data loss?
To achieve zero downtime, if you never patch SQL Server or any other components (including Windows) and never have anything unexpected happen, sure – you can have no interruption in service. Outside of that, no way. Someone recently tweeted that they need a maximum of 8 seconds of downtime. That is darn near, if not outright, impossible. You have to be realistic about your tolerances and what is actually achievable. With the best processes and technologies, you’re looking at anywhere from 30 seconds to a few minutes in a best-case scenario.

As for data loss, it’s going to be an “it depends”. If you’re using some kind of synchronous method (i.e. high-safety database mirroring), you won’t have data loss for that database since every transaction must be hardened on the mirror as well. But there are complexities: how does your application know exactly where it stands, data-wise, after a failover? What happens if your application uses multiple databases – how will you keep them in sync, and how will you know where each one stands?
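At minimum, you can verify that mirroring is actually running synchronously. A quick check against the sys.database_mirroring catalog view (a safety level of FULL is the synchronous, high-safety mode):

    -- Role, state, and safety level for every mirrored database
    SELECT DB_NAME(database_id) AS database_name,
           mirroring_role_desc,
           mirroring_state_desc,
           mirroring_safety_level_desc  -- FULL = synchronous (high safety)
    FROM sys.database_mirroring
    WHERE mirroring_guid IS NOT NULL;

If the safety level comes back OFF, you are running asynchronously and data loss is on the table.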

Failover clustering will always be consistent to the point of failover: the stop and restart means anything incomplete will be rolled back by the recovery process. But you still need to account for figuring out, in your application, what committed and what didn’t.
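One small thing you can do is watch for databases still running recovery after a failover, since they are not available until recovery finishes. A simple check (the filter here is just illustrative):

    -- Anything other than ONLINE means the database is still
    -- restoring, recovering, or otherwise unavailable
    SELECT name, state_desc
    FROM sys.databases
    WHERE state_desc <> 'ONLINE';

Your application should expect connection failures during that window and retry rather than assume the worst.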