

Choosing the Right Disaster Recovery: Cloud-First Strategy


Executive Summary: Disaster Recovery Solutions by Peer Software

This post is the first in a three-part Business Continuity blog series. In it, we look at whether a cloud-first strategy is sensible for Disaster Recovery (DR), what you should look for in a ‘best-practice’ DR solution, and how a good DR strategy can stop a ransomware attack from turning into a disaster.

Using the storage solutions you already own, the Peer Global File Service (PeerGFS) software monitors multiple vendors’ file systems in real time to create a highly available active-active DR solution, intelligently helping prevent ransomware from spreading while also creating an off-site backup.

Find out more or request a trial below.

Should Disaster Recovery Be Cloud-Based?

Public Cloud vs Data Center Workloads

There are some excellent backup solutions that leverage the scalability and availability of the public cloud. But are they the right choice as part of an enterprise disaster recovery strategy? Let’s look at workload considerations to help determine what works well in the cloud and what works better in a data center.


Workloads ideal for public cloud tend to be more ‘bursty’ in nature, which makes them a great fit for an elastic compute model that can scale up as well as down. Examples include:

  • DevOps
  • Variable workloads, e.g. seasonal retail, where more compute needs to be spun up in the lead-up to the holiday season
  • Compute-intensive workloads, such as analytics and machine learning

Maintaining hardware in a data center all year round that can cope with the busier, more compute-heavy periods doesn’t make economic sense. The cost to power the data center, maintain it, insure it, and so on isn’t worth it when other options are available.

Other workloads, however, often aren’t suitable for public cloud, for example:

  • Primary backups, because of restore times for large data sets over an Internet connection, and possibly the cost of egress back to the data center
  • High-performance applications that constantly demand a lot of disk I/O and network throughput

Some workloads need to run all the time, while others are more sporadic. For the sporadic ones, the public cloud’s rental model can make the work more affordable: because anyone can rent a virtual machine (VM) and its associated resources by the hour, you pay only for the hours consumed and then shut down or tear down the infrastructure when it’s no longer needed. That makes more variable workloads very attractive candidates for public cloud solutions.

Whereas if you rented a VM by the hour and ran it flat out 24 hours a day, 7 days a week, there would come a point at which it becomes cheaper to run that workload on your own hardware in your own data center.
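To make that break-even point concrete, here is a minimal back-of-the-envelope sketch. Every price and the hardware lifespan below are made-up placeholders for illustration, not real cloud or hardware quotes:

```python
# Break-even sketch: hourly cloud VM rental vs. owned hardware.
# Every figure here is a hypothetical placeholder, not a real quote.

CLOUD_PER_HOUR = 1.00         # assumed on-demand VM rate, $/hour
ONPREM_CAPEX = 12_000         # assumed server purchase price, $
ONPREM_OPEX_PER_YEAR = 2_500  # assumed power, cooling, maintenance, $/year
LIFESPAN_YEARS = 4            # assumed hardware refresh cycle

HOURS_PER_YEAR = 24 * 365
onprem_per_hour = (ONPREM_CAPEX / LIFESPAN_YEARS + ONPREM_OPEX_PER_YEAR) / HOURS_PER_YEAR

# The on-prem cost is fixed, so owning wins once the workload runs for
# more than this fraction of the year:
breakeven_utilization = onprem_per_hour / CLOUD_PER_HOUR

print(f"Effective on-prem rate: ${onprem_per_hour:.2f}/hour")
print(f"Renting is cheaper below ~{breakeven_utilization:.0%} utilization")
```

With these invented numbers, owning wins above roughly 63% utilization: a VM that runs flat out all year costs about 60% more to rent than to own, while a workload that only runs a few hours a day stays firmly in the cloud’s favor.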

It’s similar to the company car that I don’t have. Let me explain. My workday often consists of sitting in front of a keyboard, mashing my palms into the keys, and hoping something legible comes out as a result. I have Teams or Zoom meetings with my colleagues, customers, prospective customers, and technology partners, but ordinarily I don’t need to travel.

Except when I do! And that’s when I would hire a nice car to drive to a meeting or a conference. If I were on the road several times a week, then a company car would make economic sense, but as I don’t, it works out cheaper each year to hire one when I need it and then give it back again. And, of course, I don’t need to worry about maintenance, car tax, depreciation, and so on.

It’s worth keeping in mind that, like the car rental companies, nobody is running a cloud business as a charity; of course there’s some margin in there somewhere. I just need to weigh up which option is more cost-effective for me, or for my workloads.

Workload Considerations for Disaster Recovery

I recently watched a YouTube video on public cloud repatriation that discussed this topic and why some companies are bringing their workloads back from the public cloud into their data centers. I was very impressed by Bobby Allen, whose LinkedIn profile describes him as a “Cloud Therapist at Google – living at the intersection of cloud computing & sustainability.”

After wondering for a moment if he was given that title by the Senior Vice Architect of Made-Up Job Names, I realized that he was actually asking some very pertinent questions:

  • “As the expense of running a workload in public cloud increases, does the value of the workload increase? If not, it probably shouldn’t be in public cloud.”
  • “Should an application that doesn’t have unlimited value be put in a place that has unlimited scale and spend?”

Take a moment to consider that second one. That sounds like a pretty wise question to ask when considering a workload for public cloud, if you ask me.

He further stated:


“When you run a virtualisation solution on-prem, you have a finite set of functionality, under a cost ceiling. When you move workloads to the cloud, they can use or consume so many more services, spin up and connect to so many more things, and generally have the ability to scale up quickly, and the unknown and unpredictable cost of that can make many IT Directors fearful of making the switch to public cloud.”

That makes total sense! They would no longer have that security blanket of a cost ceiling.

No one wants to be hit by an unexpected bill because they haven’t accounted for something, or because mission creep turns out to be expensive when the invoice arrives. That’s an uncomfortable conversation to have with business stakeholders.

Workloads Suitable for the Cloud or Data Center

So, given that some workload types are suitable for public cloud hosting and some are definitely not, what about backing up to a cloud-hosted VM?

In my opinion, it can form part of a disaster recovery solution, especially as part of a mitigation strategy for cyber attacks such as ransomware, but perhaps not as your ONLY or primary backup. Here’s why:

When designing a disaster recovery strategy, an important factor is the Recovery Time Objective, or RTO: put another way, how long it will take following a disaster to get everything up and running normally again. Of course, if you need to recover a single file or a few files and folders over an Internet connection, that’s probably realistic and an acceptable RTO.

But what if you had to restore a larger amount of data over that Internet connection?

What if an entire file server hosting multiple terabytes of data needed restoring, or heaven forbid, a complete data center’s worth of data? The RTO would be astronomical and not at all realistic. It could take weeks or even months to restore, and the organization should see that as a business risk that rules it out.
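Some rough arithmetic shows why. The link speed, throughput efficiency, and data sizes in this sketch are illustrative assumptions, not measurements from any particular environment:

```python
# Rough RTO estimate: restoring data over an Internet link.
# Link speed, efficiency, and data sizes are illustrative assumptions.

def restore_time_days(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimated days to pull data_tb terabytes over a link_mbps link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    data_bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = data_bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86_400                             # seconds -> days

for size_tb in (10, 100, 500):                          # file server -> data center scale
    days = restore_time_days(size_tb, link_mbps=1_000)  # a 1 Gbps connection
    print(f"{size_tb:>4} TB over 1 Gbps: ~{days:.1f} days")
```

Even over a full gigabit per second, 10 TB takes on the order of a day and a half, and a 500 TB data-center-scale restore runs to a couple of months, which is exactly the kind of RTO most businesses would rule out.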

There are solutions on the market that can redirect a user trying to access a locally corrupted or infected file to the non-corrupted ‘good’ version in the public cloud, without having to copy it back on-premises. That sounds like a good solution, but the degree of vendor lock-in you would be subjected to would put many people off. If you decided to stop using the vendor’s gateway to the cloud, you would no longer have access to your cloud-hosted files. You have business continuity, but how long is it still going to take to repair your on-site file server?

Find the Best Disaster Recovery Solution for Your Enterprise Needs

Peer Global File Service (PeerGFS) software monitors multiple vendors’ file systems in real time to create a highly available active-active DR solution, while intelligently helping prevent ransomware from spreading and also creating an off-site backup. If you want to find the DR solution for your organization’s unique needs, request a trial of PeerGFS to learn more.

Spencer Allingham
Senior Solution Architect at Peer Software

A thirty-year veteran of the IT industry, Spencer has progressed from technical support and e-commerce development through IT systems management and, for ten years, technical pre-sales engineering. Having focused much of that time on the performance and utilization of enterprise storage, Spencer has spoken on these topics at VMworld, European VMUGs, and TechUG conferences, as well as at Gartner conferences.

At Peer Software, Spencer assists customers with deployment and configuration of PeerGFS, Peer’s Global File Service for multi-site, multi-platform file synchronization.

