All Blog Entries

General News


Would you let your employees take an extra 30 minutes for lunch each day?

Avoiding small losses in daily productivity can improve your bottom line in a big way

Yes, the subject line of this post seems absurd. What respectable project manager would let their employees and contractors consume valuable time that should be used more productively than on an extra long lunch break every day?

Sure, everyone understands the above example, but why do so many people ignore other productivity wasters that can be found hiding in plain sight in nearly every organization?

A recent whitepaper by CAD industry analyst Robert Green took issue with some common productivity challenges related to managing shared files in multi-site architecture, engineering and construction firms. Robert analyzed a company with four offices, 100 CAD users and four CAD coordinators that annually loses over $100,000 in productivity related to managing shared project files over a WAN connection. It is a very eye-opening analysis.

When comparable organizations are confronted with this scenario, their reactions range from committing to find a solution to attempting to rationalize the problem away. Some favorites include:

  • “Those aren’t actual losses, as the employees were already being paid.”
  • “That has been going on for a long time and it has become accepted.”
  • “We tried to fix it before and failed, and decided it is cheaper to do nothing.”

If that’s the case, why not let your employees take an extra 30 minutes for lunch?

The answer is simple: you can do something about both. You can manage how long your employees take for lunch, and you can improve how efficiently your employees share and collaborate on their project files. The best part is your people will thank you for helping improve the quality of their work life, even without a longer lunch break.

You can read the full analysis and story in The Ultimate Guide to CAD File Collaboration by Robert Green. You can also get a copy of the guide and much more in our CAD File Collaboration solutions page complete with videos, customer success stories and fully functional trial software.

Posted in All Blog Entries, File Collaboration


Cracking the Code on Enabling File Collaboration in a Hybrid Cloud Environment

NetApp and Peer Team to Deliver a World-class Solution for File Collaboration

Hybrid cloud storage systems that integrate on-premises data storage with cloud-based storage are rapidly becoming the strategy of choice for IT organizations around the world.

As these organizations plan and implement a hybrid cloud architecture, they will likely encounter a number of challenges related to supporting file sharing and collaboration requirements for distributed project teams, including:

  • Providing fast access to shared files
  • Maintaining accurate file versioning
  • Synchronizing shared file replicas across multiple locations and data centers
  • Minimizing impacts on WAN resources
  • Meeting security and governance objectives

These challenges are magnified by the addition of cloud-based storage resources that do not share a common file system with existing on-premises storage platforms. This alone has made the implementation of a cost-effective high-performance file collaboration solution a nearly impossible task in a hybrid cloud environment.

Recently, NetApp and Peer Software teamed up to meet these challenges by introducing a hybrid cloud file collaboration solution built on PeerLink® file collaboration software, which can be integrated with NetApp Clustered Data ONTAP and 7-Mode on-premises storage systems; NetApp Private Storage, a near-cloud storage solution; and Cloud ONTAP, a virtual NetApp storage system for Amazon Web Services.

To learn more about this world-class solution please read File Collaboration with NetApp and Peer Software in the Hybrid Cloud, a NetApp Community blog article authored by Clemens Siebler, a Cloud Platform Architect at NetApp.

Posted in All Blog Entries, File Collaboration


What Happened to the Definition of Continuous in Continuous Data Protection?

Continuous Data Protection solutions should feature both continuous protection and availability, especially for branch office environments

In the IT universe, continuous data protection (CDP), sometimes also called real-time backup, refers to the ability to automatically detect file changes made on a source system and automatically replicate (back up) these changes to a centralized target system each time they occur.
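Production CDP engines hook into the operating system for change notifications, but the core detect-and-replicate loop can be sketched with a simple snapshot diff. This toy Python example (an illustration only, not how any commercial product works internally) polls a directory tree and reports what changed between two points in time:

```python
import os

def snapshot(root):
    """Map every file under root to its last-modified timestamp."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

def detect_changes(before, after):
    """Diff two snapshots into created, modified, and deleted paths."""
    created = sorted(p for p in after if p not in before)
    modified = sorted(p for p in after if p in before and after[p] != before[p])
    deleted = sorted(p for p in before if p not in after)
    return created, modified, deleted
```

A real CDP system would feed each detected change straight into a replication queue the moment it occurs, rather than waiting for a polling interval.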

Traditional backup solutions differ from CDP solutions in that they only create a backup copy at the time a backup is made or scheduled. Backup schedules are typically determined by backup windows – time slots that have the least amount of impact on day-to-day operations – which naturally places a certain amount of data at risk between backups. Continuous data protection solves this age-old problem because it has no backup schedule; it is always on.

Most backup solutions are adequate in a data center environment, or in a single office location where backups are made over a high speed local area network (LAN), but when it comes to a branch office environment connected by a WAN, these same solutions will struggle. This is caused by the following challenges:

  • Copying files and file changes creates overhead that saturates WAN connections and impacts operations in branch offices
  • Backup functionality is unreliable over below-average WAN connections with high latency and low throughput
  • Delayed recovery and restore process – users at an affected branch office will have to wait for a restore process to complete before they can resume working on their files

One solution that solves the challenge of continuous data protection and availability for branch office environments is PeerSync® Backup Edition for Servers by Peer Software. Powered by DFSR+® technology, PeerSync is a proven CDP solution for Windows servers and NetApp NAS storage systems located in branch offices.

Key features include:

  • Real-time sync for fast operation that prevents data loss in case of catastrophic failure
  • Byte-level delta replication minimizes bandwidth utilization and works in concert with NetApp dedupe functionality to reduce storage requirements
  • Cross-platform (Windows/NetApp)
  • Centralized configuration and management
  • Continuous availability – PeerSync creates file/directory replicas users can be redirected to if their local server is not available…no waiting for a restore procedure to complete!
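To illustrate the byte-level delta idea in the feature list above, here is a toy Python sketch that compares two versions of a file block by block and transfers only the changed blocks. (Real delta replication engines use rolling checksums and are far more sophisticated; the function names and the tiny block size here are purely illustrative.)

```python
import hashlib

BLOCK = 4  # unrealistically small block size, chosen only for illustration

def block_hashes(data):
    """Hash each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(old, new):
    """Return (offset, bytes) pairs for blocks of `new` that differ from `old`."""
    old_hashes = block_hashes(old)
    changes = []
    for i in range(0, len(new), BLOCK):
        idx = i // BLOCK
        block = new[i:i + BLOCK]
        if idx >= len(old_hashes) or hashlib.sha256(block).hexdigest() != old_hashes[idx]:
            changes.append((i, block))
    return changes

def apply_delta(old, changes, new_len):
    """Rebuild the new version from the old bytes plus only the changed blocks."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for offset, block in changes:
        buf[offset:offset + len(block)] = block
    return bytes(buf)
```

The point is the bandwidth math: when one block changes in a large file, only that block crosses the WAN, not the entire file.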

Ready to learn more? Please click here to access additional information on PeerSync Backup Edition for Servers and fully functional evaluation software to try in your environment.

Posted in All Blog Entries, Data Backup


Will the concept of a headquarters location become extinct?

Reduce the disparity between headquarters and branch offices by enabling fast access to shared project files

Despite the perceived prestige of a headquarters, there is an emerging trend in large organizations to deemphasize headquarters locations vs. their branch office sites. This is easy to validate: just take a look at the websites of large law firms and other professional services organizations such as architecture and engineering firms. They list their many locations in very creative ways, but likely there isn't one that is immediately identified as the "headquarters" location. Why is this so?

When asked about this trend, a variety of responses were provided including:

  • "We are attempting to create a singular culture not only across locations, but also across multiple countries and continents."
  • "After all of our mergers and acquisitions, we decided to stop recognizing one particular location as the headquarters."
  • "We want to demonstrate to customers that the nearest branch office location is well equipped and equally capable of serving their needs."

While it is impressive to see an emphasis on customers, employees and cultures in these responses, there was little mention of how technology contributes to this phenomenon.

The Fading Headquarters Advantage

Not long ago, headquarters had one big advantage over other locations: ease of collaboration. Most of an organization's management and project-focused employees were located at the headquarters facility, and it was easy for them to meet and access the latest information, including current project files. Over time, as these organizations grew and added branch offices, project teams became more dispersed, which created a new set of challenges related to collaboration. Fortunately, technology advanced and improved the social aspects of collaboration for remote project teams via VoIP, instant messaging and web conferencing services like GoToMeeting, but most organizations still struggle to give branch locations the same high-performance access to up-to-date project files that headquarters enjoys.

The most common reason for this is that headquarters is the keeper of a central file repository which translates to remote teams having to access project files via a corporate wide area network (WAN) connection that is significantly slower than accessing files at headquarters. In fact, a recent survey on file sharing and collaboration practices conducted by Cadalyst reported that 85% of survey respondents in branch offices had WAN connection speeds of less than 100 Mbps, not even 1/10th the speed of a typical 1000 Mbps local area network (LAN) connection found inside most office facilities.

Simply put, this means that remote user productivity is throttled by the WAN connection. In many cases this leads to hundreds of thousands of dollars in lost productivity, even for small and medium size organizations, caused by users waiting for files to upload and download along with other file management issues like version conflicts and overwritten files - problems rarely experienced in a headquarters scenario.

Optimize Branch Office File Management and Productivity

Fortunately there are solutions out there to help you get a handle on these and other file management challenges experienced at your branch offices. A great place to start is downloading and reading The Ultimate Guide to CAD File Collaboration by Robert Green. The guide provides valuable information on how to identify file sharing and collaboration related challenges in multisite environments, and evaluate potential solutions along with their benefits and ROI.

Download a copy today and start bringing your branch offices up to speed with headquarters.







Posted in All Blog Entries, File Collaboration


How does your organization compare when it comes to managing and sharing CAD project files?

Read on to discover valuable insights and key benchmarks for AEC and IT professionals

It’s natural to be curious. Nearly everyone would like to know how they compare with others in their respective industry, especially when it comes to managing and protecting the information lifeblood of an organization – key project files and data.

Project files can be complex. In the architecture, engineering and construction (AEC) space they typically include CAD models, spreadsheets, bills of materials, PDFs, Adobe art files, photos, videos and more from a diverse array of applications. How effectively you manage sharing and collaboration with these files can have a major impact on the productivity of your organization’s professional talent, and overall project outcomes.

Cadalyst, a leading AEC industry publication, recently dove into this topic by conducting a first-ever survey to capture the CAD file sharing and collaboration practices and experiences that design and engineering professionals face on a regular basis.

The results were remarkable. Thanks to over 500 respondents, the survey documented and brought to the surface a treasure trove of information and conclusions including:

  • Despite easy to recognize losses in productivity, organizations continue to rely heavily on outdated file sharing technologies that are not designed for the task
  • Users are dissatisfied with the ability of their organization’s wide area network (WAN) to support file collaboration
  • Challenges and risks associated with file collaboration are pervasive and impact organizations of all sizes
  • And much more…

Find out how you compare. Click here to access the complete survey results comprising 25 pages of insights, data, and an executive analysis written by industry analyst Robert Green.

Download the CAD File Collaboration Survey

Posted in All Blog Entries, File Collaboration


Make Concurrent Design Your Secret Weapon for Competitive Advantage

Deploying the right CAD file collaboration infrastructure is the key to enabling Concurrent Design for distributed teams

As a technology concept, Concurrent Computing has been around for decades. The ability to execute several computations during overlapping time periods, instead of sequentially, enables a computation to make progress or even finish without waiting for others to complete. This typically results in a more efficient and faster computational process.
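The payoff of overlapping execution is easy to demonstrate. In this Python sketch, four I/O-bound tasks run back-to-back and then concurrently; the concurrent run finishes in roughly the time of the single longest task rather than the sum of all of them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_part(duration):
    """Stand-in for an independent unit of work, such as an I/O-bound request."""
    time.sleep(duration)
    return duration

durations = [0.1] * 4

# Sequential: each task waits for the previous one; total time ~= sum of durations.
start = time.perf_counter()
sequential_results = [fetch_part(d) for d in durations]
sequential_time = time.perf_counter() - start

# Concurrent: tasks overlap in time; total time ~= the longest single duration.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    concurrent_results = list(pool.map(fetch_part, durations))
concurrent_time = time.perf_counter() - start
```

Both runs produce identical results; only the elapsed wall-clock time differs, which is the essence of the concurrency argument.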

A similar concept known as Concurrent Design is rapidly being adopted as a process methodology by globally distributed project teams, and “follow the sun” development models are gaining popularity in design and engineering disciplines.

By identifying tasks that project teams can deliver in parallel (concurrently) to accelerate product development or project delivery, Concurrent Design has the potential to radically change the design process while reducing project costs and associated completion times.

As organizations race to adopt concurrent design as a practice, they quickly discover some key chal­lenges including:

  • Cultural opposition to opening up the design and review process (perceived loss of control)

  • Dependency on efficient communication between design and production teams

  • Software and systems compatibility to enable design models and project files to be exchanged efficiently

While effective collaboration among project team members can address these challenges, existing IT infrastructure found in many organizations often fails to support the unique collaboration requirements of design and engineering power users, and project managers. These challenges are often magnified by inconsistent and slow WAN connections, and a lack of file version control when sharing and updating project files with team members at branch offices.

This is especially true with architecture, engineering, construction and manufacturing organizations where the ability of project teams to efficiently share and edit computer-aided design (CAD) and other project related files is rapidly becoming imperative across their respective industries.

Whether these organizations are trying to improve productivity, profitability or speed to market, effective CAD “file collaboration” requires technology infrastructure that supports the increasingly distributed nature of project teams while maintaining fidelity of CAD models and associated project files.

Learn more by reading Building Best-of-Breed File Collaboration Infrastructure for Concurrent Design, a white paper which analyzes collaboration requirements and challenges related to the Concurrent Design/Engineering methodology.

Posted in All Blog Entries, File Collaboration


When a Small Data Migration Project Turns Into a Big One

Without the right tools, complex file and folder structures will sidetrack data migration projects

Appearances can be deceiving. This popular idiom applies to many circumstances in IT as well, including data migration projects where overall size of the data set seems to get most of the attention. At a gathering of migration project managers you would likely hear a lot of chatter about high profile projects they recently managed, quickly followed by how many terabytes or even petabytes were transitioned. The bigger the number, the better it is in their eyes. Over time these project managers have developed a number of best practices and grown comfortable with the tools they use to help them deliver projects that continue to expand in size and complexity.

So, what happens when a project estimate for migrating a modest amount of data indicates that it will take months to complete unless the customer is willing to incur a significant amount of system downtime? Even using different backup and data replication tools that were on hand did not change the calculations. In the real world, projects like this stall and get put on hold until more analysis can be completed. Paralysis by analysis sets in, project schedules slip, and nobody is happy.

Let’s take a look at a recent example of a migration project with similar characteristics that was dead in the water. Here is a quick overview of what the migration team was looking at:

  • Source: Windows Server with Dell RAID storage
  • 2.5TB of data
  • 100 Mbps WAN connection
  • 64,000,000+ files
  • 300,000+ folders
  • Target: NetApp FAS3250 utilizing Data ONTAP 8.1.2 (7-Mode)
  • The application is mission critical and the customer requires minimal system downtime

On the surface this didn’t seem like a huge project. After ruling out factors such as the WAN connection, server performance and the performance of the source and target systems, the project team determined that the bottleneck was the data replication tools they were trying to use to move data from the source to the target system. The large number of files and folders was not being processed efficiently by the tools' scanning engines, which slowed the data transfer rate and inflated the project time estimates. It turns out that there are numerous solutions available for backing up or replicating large data sets, but most are performance-challenged when confronted with a vast number of files, folders and objects.

The solution for this scenario was simply picking the right data management tool for the job – in this case, PeerSync Migration Edition for NetApp. The difference maker was PeerSync’s high performance scanning and real-time replication technology, which enabled the migration team to scan the data set once and let PeerSync’s real-time replication capabilities maintain a data replica on the target system. Confident that the target system was being kept up-to-date, the project team was able to greatly simplify cutover planning and execution. Originally estimated to take several weeks, this project was delivered over a weekend.

Check it out yourself. Download the success story and see how TD Auto Finance dealt with their “Migration Impossible”. While we can’t guarantee that your next client will take you out for a post-migration celebration, we can confirm that the project team at TD Bank enjoyed theirs.


Posted in All Blog Entries, Data Migration


Why You Should Care About File Locking

Distributed Global File Locking Powers Effective File Collaboration

When you first encounter the term “file locking” it really doesn’t generate much excitement. The concept is pretty basic, and you might even come to a quick conclusion that modern computing and data storage systems fully addressed file locking years ago. Or did they? Let’s take a closer look.

File locking is very useful; it restricts access to a computer file by allowing only one user or process access at a time, which enables busy file systems to maintain order. The concept of file locking is ignored by most users until they have a problem related to it. One common scenario for this is when they join project teams that share and contribute to a common set of project files, a process often referred to as file collaboration.
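On a single system, the basic idea can be sketched in a few lines of Python using an exclusive-create lock file (a simplified illustration; real operating systems and file systems offer richer locking primitives than this):

```python
import os

class FileLock:
    """Advisory lock: only one holder of <path>.lock can exist at a time."""

    def __init__(self, path):
        self.lock_path = path + ".lock"
        self.fd = None

    def acquire(self):
        try:
            # O_EXCL makes creation fail atomically if the lock file exists,
            # so only one user or process can hold the lock at a time.
            self.fd = os.open(self.lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            return True
        except FileExistsError:
            return False

    def release(self):
        if self.fd is not None:
            os.close(self.fd)
            os.remove(self.lock_path)
            self.fd = None
```

A second user attempting to acquire the same lock simply gets `False` back until the first releases it, which is exactly the "one writer at a time" guarantee file collaboration depends on.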

Small teams at one location using a single file server rarely encounter challenges with file collaboration. But as organizations evolve into extended enterprises, project teams often need to be quickly assembled from within the organization and with partners, typically in different locations with their own file servers.

Without the proper IT infrastructure to support file collaboration, larger more distributed teams will eventually experience overwritten or corrupted files as users attempt to update and share them. Frustration and lost productivity quickly follow.

File Locking is a Key Technology for Managing Project File Collaboration

As project teams collaborate with shared files, the need to prevent simultaneous edits to these files quickly becomes an enterprise-wide requirement. Two approaches to consider for reliably managing file access and updates include:

  1. Content management systems with check-in and check-out functionality. These systems tend to be centralized and are acceptable for teams with fast connectivity that collaborate on smaller files, but they become problematic with larger files that eat up bandwidth and add to network congestion, as each file has to be fully uploaded and downloaded from the central repository every time it is updated.

  2. Global file collaboration systems. These systems overcome the performance challenges of centralized content management systems by utilizing distributed file storage with real-time replication and distributed file locking to enable up-to-date project files to be stored on servers where project teams are located.

Working in concert with real-time replication, distributed file locking, sometimes referred to as global file locking, is a critical file collaboration technology that averts the threat of simultaneous edits. By locking replicas of a file stored on several servers when a user opens the file for editing, distributed file locking prevents the file from being modified at the same time by other users. As soon as the file is closed or saved, changes to the file are then replicated to the other servers and the locks are released.

When implemented properly, distributed file locking works automatically and seamlessly in the background. And, unlike check-in and check-out procedures in content management systems, it requires little to no additional effort from users. They continue to open, edit and save files on their local server, and the system automatically replicates changes to the other servers.
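The lock-edit-replicate-release cycle described above can be modeled with a toy in-memory coordinator. This Python sketch (the class and method names are hypothetical, and it glosses over networking and failure handling entirely) shows the essential behavior: while one site edits a file, other sites cannot open it for editing, and saving pushes the new version to every replica before the lock is released:

```python
class GlobalLockCoordinator:
    """Toy model of distributed (global) file locking with replication."""

    def __init__(self, sites):
        # Each site holds its own replica of every file: {site: {path: content}}
        self.replicas = {site: {} for site in sites}
        self.locks = {}  # path -> site currently holding the global lock

    def open_for_edit(self, site, path):
        """Lock the file's replicas everywhere; fail if another site holds it."""
        if path in self.locks:
            return False
        self.locks[path] = site
        return True

    def save_and_close(self, site, path, content):
        """Replicate the new version to every site, then release the lock."""
        assert self.locks.get(path) == site, "caller does not hold the lock"
        for replica in self.replicas.values():
            replica[path] = content
        del self.locks[path]
```

In a real product the lock grants and the replicated bytes travel over the WAN, but the ordering guarantee is the same: no two sites can modify the file simultaneously, and every site sees the saved version.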

Building Innovative File Collaboration Systems Since 2003

As one of the pioneers in developing global file locking systems for Microsoft server environments, Peer Software introduced its distributed file locking technology for file collaboration in 2003. Today, Peer's flagship product PeerLink is recognized as a class-leading global file collaboration solution for Microsoft and NetApp environments. Enabling secure multi-site file collaboration for leading design applications including AutoCAD, Revit, Bentley, CATIA and Adobe, as well as productivity applications like Microsoft Office, PeerLink is flexible, offers great performance and is remarkably easy to implement and manage.

Click here to learn more about PeerLink or to request a trial version of PeerLink.

Posted in All Blog Entries, File Collaboration


Why Hybrid Cloud Adoption is a Self-fulfilling Prophecy

Understanding the Factors that Drive Cloud Technology Adoption

With the escalating price war between Amazon, Google, Microsoft and other cloud vendors, the cost of utilizing their services has dropped to the point where you have to pause and seriously evaluate how the cloud will impact enterprise computing efforts today and into the future. You can’t ignore it anymore.

One way to put this scenario into perspective is by comparing it to the technology adoption lifecycle curve made famous by Geoffrey Moore in his book Crossing the Chasm. In this context, some examples of cloud innovators are the likes of Amazon, which built, consumed and marketed its cloud infrastructure, along with Salesforce, which pioneered and refined the concept of Software as a Service (SaaS). One could also argue that early cloud backup vendors such as Arsenal Digital and Mozy should be included in this group. Early adopter customers of these services were driven by need; they didn’t have the resources to deploy software and infrastructure to support their business requirements, or were innovators themselves who appreciated the flexibility and low entry costs that cloud-based infrastructure and applications offered.

Interestingly, backup may have been the killer app that contributed the most to powering early acceptance of the cloud, simply because it helped users get comfortable with their data residing outside of their firewall on public cloud storage. Once this milestone occurs in an organization, other opportunities to leverage the cloud are quickly investigated, including:

  • Utilizing cloud services to test new applications
  • Moving legacy or lightly used applications to the cloud
  • Storage tiering – moving infrequently accessed data and archives to lower cost storage alternatives vs. the high performance NAS/SAN systems on a local network
  • Business continuity, also known as disaster recovery
  • Replacing aging servers and storage systems with lower cost cloud-based infrastructure

As organizations discover the advantages of a hybrid cloud architecture that enables them to commingle existing storage with lower cost cloud-based storage capacity, storage tiering and disaster recovery (DR) become the next logical steps in cloud adoption. Storage tiering is fairly straightforward: it can be managed via software, and most users don’t even notice it is in place because they still have access to their files and data. DR is really interesting when it comes to cloud adoption because it is basically a Trojan Horse. It touches nearly everyone in an organization, and once cloud-based DR is implemented, overall cloud adoption becomes self-fulfilling: the learning curve for deploying mission critical applications and infrastructure in the cloud has been climbed, and the cloud becomes culturally accepted.
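Storage tiering's software-managed nature can be illustrated with a short Python sketch that sweeps a "hot" directory and moves files that haven't been accessed recently to a "cold" tier (a hypothetical helper written for this post, not a feature of any product mentioned here):

```python
import os
import shutil
import time

def tier_cold_files(hot_dir, cold_dir, max_idle_seconds):
    """Move files not accessed within max_idle_seconds to the cold tier."""
    moved = []
    now = time.time()
    for name in os.listdir(hot_dir):
        src = os.path.join(hot_dir, name)
        # Last-access time drives the tiering decision for each file.
        if os.path.isfile(src) and now - os.path.getatime(src) > max_idle_seconds:
            shutil.move(src, os.path.join(cold_dir, name))
            moved.append(name)
    return moved
```

Real tiering software adds a transparency layer (stubs or links) so users still see and can recall the moved files, which is why most users never notice tiering is in place.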

At this point, the organization will be in position to utilize the cloud for deploying new applications leading to increased agility, scalability and ROI. In fact, organizations that have gone down this path are reporting 60-80% cost savings, sometimes even more. As mentioned earlier, this is hard to ignore.

Ready to get started?

Moving to a hybrid cloud architecture requires three basic components:

  • Data management software to securely manage on-premises and remote backup, archiving, storage tiering, and data replication to support storage-centric productivity applications such as file sharing and collaboration
  • Storage hardware that is cloud-enabled and eases implementation of a hybrid architecture
  • A cloud infrastructure and computing platform vendor with a global network of data centers

A best-of-breed example of this combination is Peer Software and Microsoft StorSimple. We recently teamed up to deliver cloud-integrated storage (CiS) solutions that seamlessly integrate high performance on-premises storage with scalable, cost-effective public cloud storage services on Microsoft Azure.

Consolidating primary storage, archive, backup and disaster recovery into a CiS environment is a great way to leverage a hybrid cloud architecture, and best of all the price is right. Moving to the cloud is easier than ever starting today.

Contact the Peer team to learn more about the advantages of Cloud Integrated Storage.

Posted in All Blog Entries, Data Backup, Data Migration, File Collaboration


Just Say No to Technology Acronym Abuse

Increase Project Success Rates by Defining Requirements in Plain Language

Information technology suffers from a bad case of acronym abuse thanks to the rampant overuse of these abbreviations formed from parts of a word or phrase. A familiar example is DOS (disk operating system). Acronyms can make communications easier, but they are also a well-known source of confusion, and in worst-case scenarios they hinder productivity. This happens because people may not be familiar with an acronym, and it is human nature not to ask questions for fear of appearing uninformed. The problem is usually discovered only after it causes trouble.

Here is a hypothetical example. ABC Industries is a rapidly growing company involved in both design and manufacturing. ABC has multiple locations around the world, as well as partners and contractors. All of these locations are networked together, and collaborate on a regular basis.

Since both design and manufacturing are core competencies of ABC, they utilize the latest computer aided design (CAD) and computer aided manufacturing (CAM) technology. ABC's products are sophisticated, and their corresponding design and manufacturing files are large. This created a challenge related to sharing these files with other locations, including those outside of the corporate firewall.

At first, users tried to leverage existing infrastructure, emailing design files or using file transfer protocol (FTP) solutions. They discovered that the files were too big for email, that varying network quality caused FTP sessions to time out and sometimes corrupted files, and that the lack of version control meant nobody really knew who had the latest version of a file. This became especially stressful as deadlines approached.

Realizing that existing infrastructure was not going to work, the team got together and started to research solutions, which is when the acronyms – and the confusion – started to appear. They had CAD/CAM systems in-house that could also be considered CAE (computer aided engineering), or maybe it should all be referred to as CAx. It looked like they could benefit from either a PLM (product lifecycle management) system or a PDM (product data management) system to ease their file sharing and collaboration challenges, and to improve their connectivity a WAN (wide area network) accelerator should be considered. All of these solutions had benefits, but they were costly and complex, and required a level of investment the team determined was prohibitive.

Increase Project Success Rates by Avoiding Confusion From Acronym Abuse

Fortunately, the team recognized that vendor related spin, acronyms and technical jargon were only making things more complex and driving up potential solution costs. So they went back to the basics and simplified requirements as much as possible with no jargon and no acronyms. Here is a plain language example:

  • Project team members need to share, edit and maintain up-to-date files at multiple locations without worrying about version conflicts and file corruption
  • Power users need files available locally to maximize their productivity and should not have to depend on files stored at a central location on another continent
  • Any solution must be efficient with network utilization and resilient due to the varying quality of network connectivity between locations, including occasional outages
  • Must support multiple file types from multiple software vendors including design files and Office
  • Solution needs to be affordable both internally and to external partners and contractors
  • Solution should leverage existing infrastructure and not require additional hardware

Now, do these requirements sound familiar? Chances are your organization faces similar challenges, no matter how large or small. Visit our CAD file collaboration solutions page to see how Peer has solved project and design collaboration challenges for leading automotive, architecture, engineering, aerospace and national security organizations and agencies around the world, and can solve them for you as well.



Posted in All Blog Entries, File Collaboration


Have a Little Bit of Home While on the Road

Solving the Challenges of Home Directory Sync for Windows and NetApp

When on the road, nearly everyone likes to bring along something special from home: pictures of loved ones, a favorite pillow, a book or entertainment for a long flight. For users of information technology, that something special is the ability to conveniently and securely access their work files wherever they go. Their productivity depends on it.

Traveling users are not a new phenomenon. Whether corporate or government in nature, large organizations are likely to be geographically dispersed with multiple remote locations, and will have numerous users that regularly travel to these locations.

The Challenge

For both security and convenience, more IT groups are adopting the practice of establishing user home directories, which can include redirected folders such as My Documents and roaming user profiles, on a file server at each user's assigned home office. This makes it convenient for users to log in and store working files where they will be secured and backed up on a regular basis. But what happens when these users go on the road?

A challenge comes into play when users travel between locations, especially to remote sites with marginal connectivity back to their home directory server. These users may experience slow access to their files, and visiting desktop users may have a difficult time logging in. And to make matters even worse, some users will compensate by storing and accessing their work files via local storage, flash drives or cloud-based storage services, each with their own security, governance and data integrity related concerns.

Ideally, you would prefer a solution that can maintain the security and convenience of your single sign-on system while providing a synchronized copy of a user's home directory on servers they can access locally at LAN speed when they travel. This will also save them the extra steps of trying to connect to their home office by VPN along with other gymnastics such as mapping a drive from the local machine they are working on to their home directory, all just to ensure the integrity and continuity of their home directory while on the road.
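To make the idea concrete, the core of home directory synchronization can be sketched in a few lines of Python. This is a simplified, generic illustration, not PeerSync's actual implementation: it mirrors a user's home directory into a replica on a local server, copying only files that are missing or newer at the source.

```python
import os
import shutil

def sync_home_dir(source_root, replica_root):
    """Mirror files from source_root into replica_root, copying only
    files that are missing or whose size/mtime differ."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        rel = os.path.relpath(dirpath, source_root)
        target_dir = os.path.join(replica_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            src_stat = os.stat(src)
            if (not os.path.exists(dst)
                    or os.stat(dst).st_size != src_stat.st_size
                    or os.stat(dst).st_mtime < src_stat.st_mtime):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(os.path.join(rel, name))
    return copied
```

A production solution like PeerSync layers real-time change detection, bi-directional conflict handling and WAN resilience on top of this basic pattern, rather than rescanning the whole tree.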

Solving Home Directory Sync

If this scenario sounds familiar, then please take a look at PeerSync Server Edition (SE), the industry standard for bi-directional file replication and synchronization in WAN environments. For user home directory synchronization, PeerSync SE ensures that users have local access to their home directories at LAN speed regardless of which facility they are accessing their information from, all while maintaining an exact replica on all servers, including the one at the home office. PeerSync SE is fast, seamless and proven, and most importantly for users visiting remote locations, it is resilient. It gracefully handles marginal and slow WAN connections, ensuring that user home directory data is up-to-date and available.

Powered by DFSR+ technology, PeerSync Server Edition is part of the PeerSync family of solutions that benefits from over 20 years of continuous refinement, and is ideal for high-volume server transactions on Windows and NetApp systems over networks using LAN and WAN links.

PeerSync SE is a great solution for supporting your road warriors, and helping to keep them at peak productivity. Click here to learn more about PeerSync Server Edition and innovative solutions powered by the PeerSync product family.


Posted in All Blog Entries


Turbocharge Clustered Data ONTAP Transitions with Data Interoperability

It happens more often than you think. After analyzing options, an organization determines that they need to upgrade their IT infrastructure, but then transition related momentum stalls when the realities of implementation are discovered. A case in point is with NetApp's latest data storage technology, Clustered Data ONTAP (cDOT), and current customers that want to replace some or all of their existing Data ONTAP 7-Mode systems (7-Mode) with cDOT.

Transitioning to a new cDOT system with minimal disruption to end-users is challenging enough when performing the upgrade from one existing 7-Mode system to one new cDOT system. But for NetApp customers with multiple 7-Mode systems, simultaneously upgrading all of them to cDOT is often simply not possible, and in some instances not even desirable.

Data Interoperability Expands Transition Strategies and Accelerates Implementations

As cDOT is a relatively new platform from NetApp, selecting the right data management tools for transition-related tasks is the key to avoiding an all-or-nothing strategy and to building project momentum by expanding the options available when planning a transition to cDOT. So, how does this work in the real world?

By utilizing the PeerSync family of products to enable data interoperability between 7-Mode and cDOT systems, NetApp storage administrators with multiple 7-Mode systems can now opt for a gradual, phased approach to upgrading to cDOT. PeerSync is integrated with NetApp's FPolicy API for both 7-Mode and cDOT, and offers the unique capability to support both forward and backward compatibility between the two systems. In practical terms this means that real-time, continuous folder and file replication can occur from 7-Mode to cDOT, from cDOT to 7-Mode, or bi-directionally between any combination of 7-Mode and cDOT. This provides incredible flexibility in transition planning and execution, including:

  • Enabling cDOT backup to a 7-Mode system. SnapMirror and SnapVault are not backwards compatible from cDOT to 7-Mode, so we recommend PeerSync Backup Edition for NetApp, which can provide continuous data protection from cDOT to 7-Mode systems.

  • Branch offices with 7-Mode systems can backup to a disaster recovery (DR) site that recently upgraded to cDOT. PeerSync Backup Edition for NetApp can be used for continuous data protection from each branch office 7-Mode system to the DR site using cDOT.

  • Ability to stagger cutover of end-users. This is ideal for organizations that cannot simultaneously move users to a new cDOT system, and need both the old 7-Mode and the new cDOT systems in production. To enable this, we recommend PeerSync Migration Edition.

  • Peace of mind thanks to the ability of PeerSync to roll back data from a recently upgraded cDOT system to a 7-Mode system if unforeseen problems occur.
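At its core, the bi-directional replication described above must reconcile the state of two systems so that both converge on the most recent version of each file. A minimal, generic sketch of that "last writer wins" merge logic, representing each replica as a map of path to (timestamp, content), might look like the following in Python. This is an illustration of the concept only, not PeerSync's actual algorithm, which also handles real-time events, file locks and partial transfers.

```python
def merge_bidirectional(side_a, side_b):
    """Given two replicas as {path: (mtime, content)}, return the merged
    state both sides should converge to, using last-writer-wins."""
    merged = {}
    for path in set(side_a) | set(side_b):
        in_a, in_b = side_a.get(path), side_b.get(path)
        if in_a is None:
            merged[path] = in_b          # only side B has the file
        elif in_b is None:
            merged[path] = in_a          # only side A has the file
        else:
            # both sides have it: keep the more recently modified copy
            merged[path] = in_a if in_a[0] >= in_b[0] else in_b
    return merged
```

Applying the merged state to both systems leaves them identical, which is exactly the property that makes staggered cutovers and rollbacks safe.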

Learn More About PeerSync Backup and Migration Edition

PeerSync Backup Edition and PeerSync Migration Edition are members of the PeerSync product family, the industry standard for real-time file replication and synchronization in LAN/WAN environments. Benefitting from over 20 years of continuous enhancement, PeerSync is ideal for high-volume Windows Server, NetApp Data ONTAP 7-Mode, and NetApp Clustered Data ONTAP environments.

Click here to learn more about the PeerSync product family and to download fully functional evaluation software today.


Posted in All Blog Entries, Data Backup, Data Migration


How Can I Build a Business Continuity Solution on NetApp Clustered Data ONTAP Without OSSV?

We are hearing this question a lot since the introduction of Clustered Data ONTAP (cDOT), the latest data storage operating system from NetApp. As organizations migrate their data to this state-of-the-art, non-disruptive platform, many will want to adopt or continue with a centralized approach to backing up remote Windows servers and laptops. The benefits of centralized disk-to-disk backup are enormous, and with cDOT emerging as a preferred platform for this function, more organizations are deploying it every day.

Since cDOT is a new platform, you will need to evaluate and select data management solutions that are optimized for it, including a backup software alternative for the familiar OSSV. Fortunately, the answer to this is easy if you have seen PeerSync Backup Edition for NetApp in action.

Continuous Data Availability = Accelerated Business Continuity

Let's face it: rarely does one particular characteristic drive popularity and market adoption of a solution. It is usually a combination of how easy it is to implement and maintain, reliability, performance, support, overall value, and, most important of all, trust. OSSV met those standards, and as organizations adopt cDOT they will need a solution that meets the same criteria, especially for protecting their most vulnerable data residing in remote locations connected by a WAN.

So, while you are considering new data protection solutions, especially for the cDOT environment, please keep a couple things in mind:

  • Has the developer been in business nearly as long as NetApp itself with solutions installed by thousands of customers around the world?
  • Is the solution integrated at the FPolicy level to enable real-time file event detection that powers replication across multiple platforms including Windows, Data ONTAP and cDOT?
  • Is the solution WAN friendly while nearly eliminating backup windows by only replicating changed blocks of a file in real-time after an initial scan and sync is completed?
  • Does the solution work in concert with NetApp's dedupe feature or try to implement one of its own?
  • Is the solution integrated with Microsoft VSS to back up open files and supported databases?
  • Does the solution accelerate business continuity after a remote system crash because up-to-date files and folder structures are available on the target system without requiring a restore procedure?

One solution that answers all of these questions is PeerSync Backup Edition for NetApp, a great alternative to OSSV. It delivers the same benefits when it comes to protecting data on remote Windows servers, and more. Best of all, it is ready for cDOT today.

Learn More About PeerSync Backup Edition for NetApp

Click here to learn more or download a fully functional eval copy today.


Posted in All Blog Entries, Data Backup


MRS – A Key to Simplifying Storage Migration Projects

Migrating data from system-to-system is a common activity for nearly every organization. Some migration projects are part of a larger initiative that includes application updates and complex data transformations, while most are driven by storage platform upgrades and utilize a migration methodology that looks similar to the one depicted in the accompanying diagram.

Even though nearly every organization has tackled data migration challenges at one point or another, you would think each subsequent project would become easier. Why isn't that the case?

With more organizations depending on their IT systems 24x7x365, the most visible, and despised, pain point in a migration project continues to be the disruption caused by shutting down key information systems in order to move existing data to a new storage device or server, verify it, orchestrate a cutover, and then re-verify. This has become almost unacceptable, but what alternatives are there? Wouldn't it be nice to minimize or even eliminate system downtime during a migration project?

The answer is to replace the disruptive migration process of shutting systems down to copy or mirror data with MRS: Map source and target data, and then Real-time Sync. Powered by a combination of proven byte-level synchronization and CRC-based validation technology that keeps source and target data accurately replicated in real time, MRS is a continuous process that operates in the background, enabling migration project teams to maintain system uptime and ensure an orderly cutover with minimal impact on live system performance.
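As a rough illustration of the CRC-based validation step, the following generic Python sketch (an assumption for explanatory purposes, not Peer's actual implementation) streams the source and target copies of a file, computes a CRC32 checksum for each, and confirms they match before cutover.

```python
import zlib

def crc32_of_file(path, chunk_size=65536):
    """Stream a file in chunks and compute its CRC32 checksum."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def validate_copy(source_path, target_path):
    """Confirm a migrated file matches its source, to the strength
    of a CRC32 comparison."""
    return crc32_of_file(source_path) == crc32_of_file(target_path)
```

Running this kind of check continuously against files as they are synchronized is what lets a migration team cut over with confidence instead of scheduling a separate, offline verification pass.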

Learn More About MRS

Find out how email and messaging services provider SilverSky (formerly USA.Net) utilized PeerSync Migration Edition and MRS to migrate messaging files for millions of email users to a state-of-the-art NetApp storage platform for faster performance and increased user satisfaction.

Download the SilverSky Migration Success Story

Posted in All Blog Entries, Data Migration


The Hidden Costs of File Collaboration with SharePoint

The “centralization” trend has certainly received its fair share of attention, innovation and investment the past few years, and in many cases it has deserved it. The economies-of-scale promised by cloud computing and consolidating IT operations into fewer data centers has driven nearly every organization to commit to this strategy in some way. Riding the coattails of this trend is Microsoft SharePoint, by default the first document management and collaboration solution that organizations consider.

While it is true that SharePoint 2010 can address a wide variety of needs, those who lack hands-on SharePoint experience may be surprised by SharePoint’s complexity. When it comes to business file sharing and collaboration, many organizations quickly discover that the cost and complexity of deploying and maintaining SharePoint overshadow its benefits.

Beware of Centralization Pitfalls - Especially Over a WAN

The centralized nature of SharePoint poses unique challenges for file collaboration users who are geographically dispersed and connected by a WAN, which quickly becomes not only a performance bottleneck, but also a single point of failure. Other key considerations include:

  • Infrastructure requirements
  • Fault tolerance considerations
  • Administrative and security requirements
  • SharePoint alternatives for business file sharing and collaboration users

Learn More in Less Than Five Minutes

Find out how you can determine if SharePoint is right for you and what alternatives are available by reading Avoiding the Complexity of SharePoint Deployments, a white paper compliments of Peer Software.

Posted in All Blog Entries, File Collaboration