
Executive Summary: What Should You Look for in a Disaster Recovery Solution?

This is the second post in a three-part Business Continuity blog series. In this article we look at planning considerations for creating a Disaster Recovery (DR) strategy, and what you should look for in a ‘best-practice’ DR solution.

By utilising storage solutions you already own, Peer Global File Service (PeerGFS) monitors file systems from multiple vendors in real time to create a highly available active-active DR solution, helping to prevent ransomware from spreading while also creating an off-site backup.

To find out more or request a trial copy, click one of the buttons below.

More About PeerGFS
Download Request

Part 2: So, what should you look for in a Disaster Recovery solution?

What is considered best practice?

How can public cloud fit into this strategy?

Do you have to use public cloud as part of the strategy, and should you?

Do you have a Disaster Recovery Plan?

A good place to start when designing a disaster recovery plan is to consider the following:

  • The 3-2-1 Backup Principle
    This states that you should have at least three copies of your data, on at least two different storage media, and at least one off-site copy. 3-2-1.
  • The Recovery Time Objective, or RTO
    How long it will take from the point of disaster to recovery. Will it be minutes, hours, days? How long will it take to get back on your feet? How much downtime can your business afford?
  • The Recovery Point Objective, or RPO
    If disaster strikes, what point in time can you restore to? How much data would be lost since the last backup or snapshot was taken? (A short sketch after this list shows how the 3-2-1 and RPO checks might look in code.)
  • How to protect against ransomware.
    What happens if ransomware makes it past your defences and gets the wrong side of your corporate firewall? You’re not expecting your antivirus to be 100% infallible, are you?
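To make the first three considerations concrete, here is a minimal sketch, not part of PeerGFS, of how you might sanity-check a plan against the 3-2-1 principle and estimate the worst-case RPO. The DataCopy fields, site names, and timestamps are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataCopy:
    location: str        # e.g. "Hamburg SAN"
    media: str           # e.g. "SAN", "Nutanix Files", "S3 object storage"
    off_site: bool
    last_sync: datetime  # when this copy was last brought up to date

def satisfies_3_2_1(copies: list[DataCopy]) -> bool:
    """At least three copies, on at least two media types, with one off site."""
    return (len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.off_site for c in copies))

def worst_case_rpo(replicas: list[DataCopy], now: datetime) -> timedelta:
    """If the primary is lost, the data-loss window is bounded by the age
    of the freshest surviving replica."""
    return min(now - r.last_sync for r in replicas)

now = datetime(2024, 5, 1, 12, 0)
copies = [
    DataCopy("Hamburg SAN", "SAN", False, now),  # primary
    DataCopy("Munich Nutanix", "Nutanix Files", False, now - timedelta(seconds=30)),
    DataCopy("Frankfurt colo", "S3 object storage", True, now - timedelta(minutes=5)),
]
print(satisfies_3_2_1(copies))          # True
print(worst_case_rpo(copies[1:], now))  # 0:00:30
```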

Let’s use a hypothetical situation as an example. Dennis is the IT Manager at a company that has a data centre in Hamburg and another in Munich, connected via a VPN. Let’s apply the 3-2-1 backup principle and see how he can use Peer Global File Service (PeerGFS) software to keep the RTO and RPO to a minimum and, at the same time, help prevent the spread of ransomware, REALLY protect his company’s data, and keep production maximised.

Remember, the 3-2-1 backup principle says that you should have three copies of the data, on two different storage media, with at least one copy off site.

He has two sites, so he could add cloud storage into the mix for a hybrid approach that would provide somewhere to store the third copy of the data. In the data centre in Hamburg, there is a SAN storage solution that’s used by his Windows servers. In Munich, let’s say that Dennis deployed Nutanix for their virtualised workloads and storage. So, that’s two different storage media taken care of, as well as two out of the three copies of the data. The third copy of the data could be housed in Azure Blob, or an AWS S3 bucket, which of course would be off site.
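As a rough illustration of what pushing that third, off-site copy involves (PeerGFS handles this natively; the bucket name, file paths, and credentials setup here are assumptions), uploading a file to an S3 bucket with the standard boto3 SDK looks like this:

```python
import boto3

# Credentials are picked up from the environment or ~/.aws/credentials,
# as is standard for boto3.
s3 = boto3.client("s3")

def replicate_off_site(local_path: str, bucket: str, key: str) -> None:
    """Push one file to the off-site third copy."""
    s3.upload_file(local_path, bucket, key)

# Hypothetical share path and bucket name:
replicate_off_site(r"D:\shares\projects\report.docx",
                   "dennis-dr-third-copy", "projects/report.docx")
```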

As PeerGFS synchronises across Windows, NetApp, Dell EMC Isilon, VNX and Unity storage, in addition to Nutanix Files, Dennis could save on the licence cost of a backup NetApp, Dell EMC or Nutanix Files system by simply synchronising files with a cloud-hosted file server.

But what if Dennis was unsure about the future costs of storing data in the cloud? Also, this data is in Germany, and there are some pretty stringent rules about data sovereignty. So, he might not feel comfortable putting a copy of his data in public cloud, and would prefer to use a co-location company in Frankfurt, which can provide an S3-compatible storage solution at a fraction of the cost of public cloud, at a fixed and predictable price per terabyte.
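One nice property of S3-compatible storage is that the same tooling still works; you simply point it at the co-location provider’s endpoint instead of AWS. A minimal sketch, with a made-up endpoint URL and placeholder credentials and bucket:

```python
import boto3

# endpoint_url redirects the S3 API calls to the co-location appliance;
# the URL, keys, and bucket below are illustrative placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.colo-frankfurt.example.com",
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)
s3.upload_file(r"D:\shares\projects\report.docx",
               "dennis-dr-third-copy", "projects/report.docx")
```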

Either way, the PeerGFS software can provide the distributed file fabric to stretch across each location and synchronise files between them, for a Continuous Data Protection solution that keeps the files at each location up to date as they are created and updated. It prevents file version conflicts by including real-time distributed file locking as standard. Because PeerGFS can react to file-level changes in real time, the Recovery Point Objective will be very attractive: it’s better to copy a file to the other side as soon as it’s closed than to have a backup or snapshot scheduled to run every x minutes or hours.
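To see why event-driven replication beats scheduled snapshots for RPO, here is a toy sketch using the third-party watchdog package. It only illustrates the idea of reacting to file-level changes as they happen; it has none of PeerGFS’s distributed locking, conflict handling, or multi-vendor support, and the share paths are assumptions.

```python
import shutil
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SOURCE = Path(r"D:\shares")            # hypothetical local share
REPLICA = Path(r"\\munich-fs\shares")  # hypothetical remote replica

class MirrorHandler(FileSystemEventHandler):
    """Copy each changed file across as soon as the change lands,
    rather than waiting for a scheduled backup window."""
    def on_modified(self, event):
        if event.is_directory:
            return
        src = Path(event.src_path)
        dst = REPLICA / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)

observer = Observer()
observer.schedule(MirrorHandler(), str(SOURCE), recursive=True)
observer.start()
observer.join()  # run until interrupted
```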

But what about the Recovery Time Objective?

Because PeerGFS integrates with DFS Namespaces, if a Hamburg file server went offline, users would be automatically redirected to the copy of the data at the Munich data centre. They would just keep on working and stay productive. When the Hamburg file server came back online, PeerGFS would resynchronise it and then have the users redirected back again, so that they’re working locally once more.
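A DFS namespace does this redirection transparently at the SMB layer, but the failover logic it provides is conceptually simple. A sketch of the idea, with hypothetical server names:

```python
import os

# Replica paths in preference order: local site first, DR site second.
# The server names are hypothetical.
REPLICAS = [r"\\hamburg-fs\projects", r"\\munich-fs\projects"]

def active_share() -> str:
    """Return the first reachable replica, mimicking namespace failover."""
    for path in REPLICAS:
        if os.path.exists(path):  # crude reachability probe
            return path
    raise RuntimeError("No replica is reachable")
```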

The continuous data protection model provided by PeerGFS not only keeps the RTO and RPO to a minimum; it also provides enterprise-grade redundancy for real-world high availability, meeting business continuity requirements.

Download Request

About the author

Spencer Allingham
Presales Engineer at Peer Software

A thirty-year veteran of the IT industry, Spencer has progressed from technical support and e-commerce development through IT systems management and, for ten years, technical pre-sales engineering. Focussing much of that time on the performance and utilisation of enterprise storage, Spencer has spoken on these topics at VMworld, European VMUGs and TechUG conferences, as well as at Gartner conferences.

At Peer Software, Spencer assists customers with deployment and configuration of PeerGFS, Peer’s Global File Service for multi-site, multi-platform file synchronisation.