Your organization is kicking off a SharePoint unstructured data migration of documents, records and images from on-prem to cloud, or from a legacy unstructured data repository to SharePoint.

A lot is riding on this migration project and if you miss the deadline, a lot of people are going to be very unhappy. 

A lift and shift migration of unstructured data seems pretty straightforward, but there are five common exceptions to watch out for that can kill your project schedule and budget. In a recent Cloud Service Alliance survey, 74% of respondents were not able to meet their SharePoint data migration deadlines.

So what did the 26% of successful respondents do differently? 

5 Common SharePoint Data Migration Exceptions to Watch Out For

Here are five SharePoint migration killers to watch out for as you embark on your SharePoint data migration journey.

Minimize document version exceptions

Migrating document versions is a lot harder than it sounds. One frequent assumption is that all of the original master copies of documents, records and images currently live in a single repository (or a few select repositories) and that there are no duplicates of the versioned content.

Since COVID-19 and the subsequent focus on remote working, convenience copies and modified copies of versioned content can be scattered across MS Teams sites, personal drives, email attachments, shared drives, OneDrive and the Enterprise Content Management (ECM) system. The amount of unstructured data being created today is staggering.

Recommended Best Practice: Use automation to crawl your repositories, map which versioned content resides where, and visualize the results so you can identify duplicates and obsolete content to dispose of before migrating. Good automation solutions can migrate up to 5 versions of a document.

This puts you in a much better planning position: you have full visibility into the volume, file types and distribution of versioned content across all repositories, and the migration project is set up nicely for the next step, eliminating duplicate content before moving anything into the new SharePoint repository.
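As a rough illustration, here is a minimal Python sketch of such an inventory crawl. It assumes the legacy repositories are reachable as mounted file shares; the paths are hypothetical, and a production crawler would also handle permissions, remote APIs and version history.

```python
# Minimal inventory crawl sketch: count files by repository and type,
# and total up the volume, to size the migration. Paths are hypothetical.
from collections import Counter
from pathlib import Path

REPOSITORIES = ["/mnt/shared_drive", "/mnt/ecm_export"]  # hypothetical mounts

def inventory(repos):
    """Tally files by (repository, extension) and sum their sizes."""
    counts = Counter()
    total_bytes = 0
    for repo in repos:
        for path in Path(repo).rglob("*"):
            if path.is_file():
                counts[(repo, path.suffix.lower())] += 1
                total_bytes += path.stat().st_size
    return counts, total_bytes

counts, total_bytes = inventory(REPOSITORIES)
for (repo, ext), n in counts.most_common(10):
    print(f"{repo} {ext or '<no extension>'}: {n} files")
print(f"Total volume: {total_bytes / 1e9:.1f} GB")
```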

Eliminate duplicate and obsolete content exceptions

Moving terabytes of duplicate and obsolete files to the new SharePoint repository will just make your migration project take longer, potentially confuse user search queries and make retention rules much harder to manage.

Using file name, file size, or classification/meta-tag/categorization to identify duplicates or obsolete files is an inexact science, since users may rename content and store it in completely different repositories with different classification, meta-tag and categorization criteria. Preparing and cleaning that content by hand requires extensive intervention and oversight, which makes SharePoint migration projects take longer and is prone to errors.

Recommended Best Practice: Use automation to read the binary content of each document or record and then assign a unique hash code to each one. This will allow your team to quickly and easily identify duplicates no matter what they are named, or in which repository they reside, as well as apply configurable rules to eliminate duplicates and obsolete content.
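As a rough sketch of the idea, here is a minimal Python example of content-hash deduplication; the mount path is hypothetical, and a production tool would add configurable rules for choosing which copy to keep.

```python
# Minimal content-hash dedup sketch: SHA-256 of the raw bytes identifies
# duplicates regardless of file name, location, or metadata.
import hashlib
from collections import defaultdict
from pathlib import Path

def content_hash(path, chunk_size=1 << 20):
    """Hash the binary content in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(repos):
    """Group files by content hash; any group with >1 entry is a duplicate set."""
    by_hash = defaultdict(list)
    for repo in repos:
        for path in Path(repo).rglob("*"):
            if path.is_file():
                by_hash[content_hash(path)].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

for h, paths in find_duplicates(["/mnt/shared_drive"]).items():  # hypothetical mount
    print(f"{h[:12]}... appears {len(paths)} times: {paths}")
```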

Eliminating Redundant, Obsolete and Trivial (ROT) content typically shrinks the migration content set by 25% – 50%, dramatically accelerating your migration process and reducing the exception-handling administrative burden on your team by 2X – 3X.

Reduce classification, meta-tag and categorization exceptions

It is unlikely that your users have manually classified, meta-tagged and categorized all documents, records and images 100% correctly in the existing repositories. And if the metadata associated with that content is incorrect, the content will be harder to find in the new repository, which will lower SharePoint user adoption and encourage users to go back to making convenience copies, undoing a lot of your migration team's hard work.

Recommended Best Practice: Inheriting or including existing meta-tags, classifiers and categorization as part of the migration is fine, but it is even better to leverage the binary content crawl to validate and update existing metadata based on the actual content of each document, record and image.

This will ensure that content shows up in the new SharePoint repository with the correct metadata assigned and make it much easier for users to search and find content, improving user adoption and reducing the exception management and change management resources required.
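As a toy illustration of content-based validation, here is a minimal Python sketch that flags documents whose stored tag disagrees with what their text suggests. The keyword map is entirely hypothetical; real tools use much richer classification models than simple keyword counts.

```python
# Minimal content-vs-metadata check sketch. Assumes documents have been
# converted to plain text; the categories and keywords are hypothetical.
from pathlib import Path

CATEGORY_KEYWORDS = {
    "invoice": ["invoice number", "amount due", "remit to"],
    "contract": ["hereinafter", "term of agreement", "governing law"],
}

def suggest_category(text):
    """Return the category whose keywords appear most often in the text."""
    scores = {
        cat: sum(text.lower().count(kw) for kw in kws)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def validate(path, existing_tag):
    """Flag documents whose stored tag disagrees with their actual content."""
    suggested = suggest_category(Path(path).read_text(errors="ignore"))
    if suggested and suggested != existing_tag:
        print(f"{path}: tagged '{existing_tag}' but content suggests '{suggested}'")
```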

Cut down on failed content migration exceptions

Using file names, file size and classification/meta-tag/categorization to confirm that a file was migrated successfully does not guarantee success.

Users often move content, change folder names and include special characters, all of which can cause the migration process to fail. This often requires significant manual intervention to fix and further delays the SharePoint migration process.

Recommended Best Practice: Automation can read the binary contents of the documents, records and images and assign each a unique hash code before migration, then confirm the hash code in the destination repository and re-read the binary contents after migration to ensure that 100% of the content was migrated intact.
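Here is a minimal Python sketch of that verification step, assuming both the source and destination are reachable as file paths; a real SharePoint destination would be read back through an API such as Microsoft Graph instead, and the manifest would come from the migration tool.

```python
# Minimal migration verification sketch: compare content hashes between
# source and destination. The manifest mapping is hypothetical.
import hashlib
from pathlib import Path

def content_hash(path, chunk_size=1 << 20):
    """Hash the binary content in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_migration(manifest):
    """manifest maps each source path to its migrated destination path."""
    failures = []
    for src, dst in manifest.items():
        # Compare the binary content hash recorded at the source with a
        # fresh hash of the migrated copy in the destination.
        if not Path(dst).exists() or content_hash(src) != content_hash(dst):
            failures.append((src, dst))
    return failures  # an empty list means every file arrived intact
```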

This dramatically reduces the exception-handling admin burden on the migration team, making the project go faster and ensuring that ALL content successfully arrives in the new SharePoint repository.

Transition your SharePoint data migration project into a repeatable, sustainable process

One of the most common downfalls of SharePoint migrations is treating them as a project that, once completed, will result in a stable future state in the new SharePoint repository. The problem with this approach is that it doesn't account for what happens when users create new content and store it in exactly the same distributed legacy repositories, partially categorized and often duplicated, with or without proper governance and retention rules applied.

The ‘project’ approach almost guarantees that 18-24 months later, another SharePoint clean-up/migration project will be required.

Recommended Best Practice: Leave the same automation in place that you used for the initial SharePoint migration. As new content is created, the automation can read the binary contents of the documents, records and images, assign the appropriate metadata, eliminate duplicates, apply the correct governance and retention rules to the new content, and ensure it is stored in the correct repository.

This approach ensures that the SharePoint migration and clean-up work that produced a pristine content management state in the new repository is maintained in a repeatable, sustainable manner, with little or no user disruption or IT administrative burden.
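As a simple illustration, here is a minimal Python polling loop that applies the earlier steps to newly created files; the watched path is hypothetical, and production tools typically react to file-system events or webhooks rather than polling on a timer.

```python
# Minimal ongoing-automation sketch: periodically scan watched locations
# and run the pipeline on anything new. Paths and interval are hypothetical.
import time
from pathlib import Path

WATCHED = [Path("/mnt/shared_drive")]  # hypothetical drop locations
seen = set()

def process(path):
    """Placeholder for the earlier steps: hash, dedupe, tag, apply retention."""
    print(f"new content detected: {path}")

while True:
    for root in WATCHED:
        for path in root.rglob("*"):
            if path.is_file() and path not in seen:
                seen.add(path)
                process(path)
    time.sleep(60)  # re-scan every minute
```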

Full Steam Ahead!

A SharePoint unstructured data migration might seem like a daunting project, but with a few best practices and some automation help, you can ensure that your migration is accomplished on time and on budget, with clean, properly classified and tagged content that is easily searchable and has the correct governance and retention rules applied.

Shinydocs can take the pain out of your SharePoint migration. Our solution crawls, cleans, classifies and helps migrate your information with zero user disruption – users can even work on documents as they are migrated.

Transform your unstructured data into business results in less than 30 days.

Download our Shinydocs Content Landscape Assessment Two-Pager for details on pricing, timeline, and what you’ll need to get started.
