One of the most prevalent characteristics of a data migration project is the significant degree of continual change that must be managed, often late into the programme. This is understandable: despite best efforts to define and decide as much as possible up front, the business is generally not fully ready in the early stages. Because the expected target landscape cannot be completely comprehended and framed at the outset, decisions typically evolve organically as the programme proceeds. A further factor, from a data management perspective, is unknown data quality and modelling issues, which are discovered down the road only as the data and its relationships become clearer.
Coupled with the other challenges covered in my earlier articles, this all points to a distinct requirement for well-defined, solid processes to manage continual change within the data migration stream. It is therefore worth spending considerable time, as early as possible, defining the roles, ownership, processes, artefacts and technologies required to keep things under control. Particular attention must be given to the artefacts that record and help manage business and data rules across all migration activity, including not least extraction, exclusion, validation, cleansing, mapping and target front-end validation rules. Formal templates for these artefacts must be crafted, agreed and proven in use long before they are needed in earnest, when (controlled!) chaos becomes the norm down the road. It must be easy not only to keep track of all these changing rules, but also to understand the end-to-end impact of changes on the data models overall.

Complicating all of this is the fact that work takes place across various technologies and databases; across data areas such as source, landing and staging; and on multiple platforms (eg Dev, QA, Prod, DR), all of which must be kept in sync via a well-controlled release process.
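To make the idea of a rules-register artefact concrete, here is a minimal sketch in Python. The field names, rule types and IDs are purely illustrative assumptions, not taken from any particular template; the point is simply that each rule carries a stable identifier and a version, so that revisions add a new versioned entry rather than silently overwriting the old one:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class MigrationRule:
    """One entry in a data migration rules register (illustrative only)."""
    rule_id: str       # stable key used to cross-reference other artefacts
    rule_type: str     # e.g. "extraction", "validation", "cleansing", "mapping"
    source_field: str  # fully qualified source column
    target_field: str  # fully qualified target column
    description: str
    version: int = 1
    last_changed: date = field(default_factory=date.today)

# A change bumps the version rather than editing in place,
# preserving an audit trail of continual change.
rule_v1 = MigrationRule("R-0042", "mapping", "src.customer.cust_nm",
                        "tgt.party.full_name", "Trim and title-case names")
rule_v2 = MigrationRule("R-0042", "mapping", "src.customer.cust_nm",
                        "tgt.party.full_name",
                        "Trim, title-case and collapse whitespace", version=2)
```

Keeping the identifier stable across versions is what makes end-to-end impact analysis possible when a rule changes late in the programme.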
Whilst Excel is frequently the tool of choice for recording and managing business and data rules, a high degree of automation should be built in to ensure alignment across the multiple artefacts. It is very challenging to keep up manually with the rate of change and the resulting cross-dependency impacts, and anything manual is, of course, bound to be error prone.
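As a minimal sketch of the kind of automation meant here (the row layout and column name are assumptions, standing in for whatever the agreed templates define), each artefact can be exported to rows and cross-checked against the master register, flagging rule references that are not defined anywhere before they cause downstream surprises:

```python
# Cross-check rule IDs between a master rules register and the
# artefacts that reference them (layouts are illustrative only).
def find_orphan_refs(register_rows, artefact_rows, key="rule_id"):
    """Return rule IDs referenced in an artefact but absent from the register."""
    known = {row[key] for row in register_rows}
    return sorted({row[key] for row in artefact_rows} - known)

register = [{"rule_id": "R-001"}, {"rule_id": "R-002"}]
mapping_sheet = [{"rule_id": "R-001"}, {"rule_id": "R-007"}]

print(find_orphan_refs(register, mapping_sheet))  # ['R-007']
```

Run nightly over exports of every artefact, even a simple check like this catches drift between spreadsheets far more reliably than manual review.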
Finally, it makes sense to call on professionals who have already thought through the requirements, made the mistakes, and have as part of their data migration arsenal a well-defined and proven set of templates that can be put to confident use as early as possible in the programme. The subject of data migration is vast, and I hope that this short series of five articles has helped in understanding where to focus extra effort so as to ensure the success of the migration and, in turn, of the parent programme.