
Essential Factors for Choosing a Disaster Recovery Solution


Executive Summary: Backup and Disaster Recovery Best Practices by Peer Software

This is the second post in a three-part Business Continuity blog series. In this article, we look at planning considerations for creating an IT Disaster Recovery Plan (DRP) and what you should look for in a ‘best-practice’ DRP.

By utilizing storage solutions you already own, Peer Global File Service (PeerGFS) monitors file systems from multiple vendors in real time to create a highly available, active-active DR solution, helping to prevent ransomware from spreading while also maintaining an off-site backup.


What to consider in a data disaster recovery solution

What is considered best practice?

How can a public cloud fit into this strategy?

Do you have to, or should you use a public cloud as part of the strategy?

Do you have a Disaster Recovery Plan?

An excellent place to start when designing a good disaster recovery plan is to consider the following:

The 3-2-1 Backup Principle

This states that you should have at least three copies of your data, on at least two different storage media, and at least one off-site copy. 3-2-1.
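As a quick illustration of the rule, here is a minimal Python sketch (not part of PeerGFS; the copy descriptions are invented for the example) that checks whether a set of data copies satisfies 3-2-1:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    """One copy of the data set: where it lives and on what media."""
    location: str   # e.g. "Hamburg DC", "Munich DC", "cloud"
    media: str      # e.g. "SAN", "Nutanix", "object storage"
    offsite: bool   # True if the copy lives outside the primary site

def satisfies_3_2_1(copies: list[Copy]) -> bool:
    """At least 3 copies, on at least 2 media types, with at least 1 off-site."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# Hypothetical plan, matching the scenario later in this post
plan = [
    Copy("Hamburg DC", "SAN", offsite=False),
    Copy("Munich DC", "Nutanix", offsite=True),
    Copy("Co-location / cloud", "object storage", offsite=True),
]
print(satisfies_3_2_1(plan))  # True
```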

The Recovery Time Objective, or RTO

How long will it take from the point of disaster to recovery? Will it be minutes, hours, or days? How long will it take to get back on your feet, and how much downtime can your business afford?

The Recovery Point Objective, or RPO

If disaster strikes, what point in time can you restore to? How much data would be lost since the last backup or snapshot was taken?
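To make the RPO concrete, a quick back-of-the-envelope calculation helps: with scheduled backups or snapshots, the worst case is losing everything written since the last run. A minimal sketch with illustrative figures only (the change rate and intervals are assumptions, not measurements):

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta, changes_per_hour_gb: float) -> float:
    """Rough worst case: up to one full interval's worth of changes is lost."""
    hours = backup_interval.total_seconds() / 3600
    return hours * changes_per_hour_gb

# Illustrative comparison: nightly backup vs. 15-minute snapshots at 2 GB of changes per hour
print(worst_case_data_loss(timedelta(hours=24), changes_per_hour_gb=2))    # 48.0 GB at risk
print(worst_case_data_loss(timedelta(minutes=15), changes_per_hour_gb=2))  # 0.5 GB at risk
```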

How to protect data against ransomware

What happens if ransomware makes it past your defenses and onto the wrong side of your corporate firewall? You’re not expecting your antivirus to be infallible all the time, are you?

Let’s use a hypothetical situation as an example. Dennis is the IT Manager at a company with a data center in Hamburg and another in Munich, connected to each other via a VPN. Let’s apply the 3-2-1 backup principle and see how he can use Peer Global File Service (PeerGFS) software to keep the RTO and RPO to a minimum and, at the same time, help prevent the spread of ransomware, protect his company’s data, and keep production maximized.


Remember, the 3-2-1 backup principle says that you should have three copies of the data, on two different storage media, with at least one copy off-site.

Dennis has two sites, so he could add cloud storage into the mix for a hybrid approach that would provide somewhere to store the third copy of the data. In the data center in Hamburg, there is a SAN storage solution that’s used by his Windows servers. In Munich, let’s say that Dennis deployed Nutanix for their virtualized workloads and storage. That’s two different storage media taken care of, as well as two out of the three copies of the data. The third copy of the data could be housed in Azure Blob or an AWS S3 bucket, which of course would be off-site.
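How that third copy reaches object storage is an implementation detail, but as a rough illustration (the bucket name and file paths are hypothetical, and this is a plain boto3 upload rather than anything PeerGFS-specific), an S3 bucket could receive the off-site copy like this:

```python
import boto3

# Hypothetical bucket and file names for illustration only
s3 = boto3.client("s3")
s3.upload_file(
    Filename="/exports/projects/report.docx",   # local file on the Hamburg file server
    Bucket="example-dr-third-copy",              # off-site bucket holding copy number three
    Key="hamburg/projects/report.docx",
)
```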

As PeerGFS synchronizes across Windows, NetApp, Dell EMC Isilon, VNX, and Unity storage in addition to Nutanix Files, Dennis might have chosen to save on the license cost of a backup NetApp, Dell EMC, or Nutanix Files storage by simply synchronizing files with a cloud-hosted file server.

But what if Dennis was unsure about the future costs of storing data in the cloud? Also, this data is in Germany, and there are some pretty stringent rules about data sovereignty. So, he might not feel comfortable putting a copy of his data in a public cloud and would prefer to use a co-location company in Frankfurt. They can provide an S3-compatible storage solution at a fraction of the public cloud cost, and that’s at a fixed and predictable price per terabyte.
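S3-compatible storage at a co-location provider is typically addressed the same way as AWS S3, just through a different endpoint. A minimal sketch, assuming the provider exposes a standard S3 API (the endpoint URL, credentials, and names below are placeholders):

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible service in Frankfurt
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-colo-frankfurt.de",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
s3.upload_file("/exports/projects/report.docx",
               "example-dr-third-copy",
               "hamburg/projects/report.docx")
```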

Either way, PeerGFS can provide the distributed file fabric to stretch across each location and synchronize files between them, giving a Continuous Data Protection solution that keeps the files at each site up to date as they are created and updated. It also prevents file version conflicts by including real-time distributed file locking as standard. Because PeerGFS can react to file-level changes in real time, the Recovery Point Objective can be kept extremely tight: it’s better to copy a file to the other site as soon as it’s closed than to have a backup or snapshot scheduled to run every x number of minutes or hours.
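To illustrate the general idea of reacting to file-level changes as they happen (this is a concept sketch using the Python watchdog library, not how PeerGFS itself is implemented; the replicate() call and watched path are hypothetical):

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def replicate(path: str) -> None:
    """Hypothetical stand-in for 'copy this file to the other site'."""
    print(f"would replicate {path} to the remote site now")

class ReplicateOnChange(FileSystemEventHandler):
    # React to creations and modifications as they happen,
    # instead of waiting for the next scheduled backup window.
    def on_created(self, event):
        if not event.is_directory:
            replicate(event.src_path)

    def on_modified(self, event):
        if not event.is_directory:
            replicate(event.src_path)

observer = Observer()
observer.schedule(ReplicateOnChange(), path="/exports/projects", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```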

Recovery Time Objective With PeerGFS

By integrating with a DFS namespace, PeerGFS would automatically redirect users to the copy of the data at the Munich data center if a Hamburg file server went offline. During this outage, users can continue working and stay productive. When the Hamburg file server comes back online, PeerGFS would resynchronize it and then redirect users back again so that they’re working locally once more.
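Conceptually, the namespace hides the physical server from the user: the same logical path resolves to whichever replica is currently reachable. A rough sketch of that idea (the UNC paths are made up, and in practice Windows DFS handles this transparently rather than application code doing it):

```python
import os

# Hypothetical replicas behind one logical share
REPLICAS = [
    r"\\hamburg-fs01\projects",   # preferred: local to the user
    r"\\munich-fs01\projects",    # fallback: kept in sync by replication
]

def resolve_share() -> str:
    """Return the first replica that is currently reachable."""
    for path in REPLICAS:
        if os.path.exists(path):
            return path
    raise RuntimeError("no replica of the share is reachable")
```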

The continuous data protection model provided by PeerGFS not only keeps the RTO and RPO to a minimum but also offers enterprise-grade redundancy for real-world high availability to meet business continuity requirements. Contact us to learn more!

Spencer Allingham
Senior Solution Architect at Peer Software

A thirty-year veteran of the IT industry, Spencer has progressed from technical support and e-commerce development through IT systems management and, for the last ten years, technical pre-sales engineering. Having focused much of that time on the performance and utilization of enterprise storage, Spencer has spoken on these topics at VMworld, European VMUGs, and TechUG conferences, as well as at Gartner conferences.

At Peer Software, Spencer assists customers with deployment and configuration of PeerGFS, Peer’s Global File Service for multi-site, multi-platform file synchronization.
