How to move from Legacy banking systems

Legacy migration projects are undertaken because they will support business objectives. There are costs to the business if the migration goes wrong or if the project is delayed, and the most important factor in ensuring the success of such projects is close collaboration between the business and IT.
Author:
Vaios Vaitsis
CEO and Founder, Validata Group
© 2011 Validata Group. All Rights Reserved
Contents
Executive Introduction
Data migration is a business issue
Lessons learned
   Data profiling
   Data quality
   Methodology
   ETL and Reconciliation Workflow
Best Practices Techniques
   Working with the business model
   Extracting the data from the source
   Full or delta migration
   Transforming data
   Currency conversions
   Handling code changes
   Loading data
   Scheduling and job sequencing
   Trap and handle errors caused by process failure
   Minimizing build and maintenance requirements
   Optimizing performance
   Data Migration Testing
   Data governance
   Reconciliation
Initial Steps
Summary
No part of this document may be reproduced or transmitted in any form or by any means, for any purpose,
without the express written permission of Validata Group.
Executive Introduction
This paper discusses why data migration is important to your business, why the actual process of migration needs to be treated as a business issue, what lessons have been learned in the last few years, the initial considerations that organisations need to bear in mind before undertaking a data migration, and best practices for addressing these issues.
Data migration is a business issue
If you’re migrating from one application environment to another, implementing a new
solution, or consolidating multiple databases or applications onto a single platform (perhaps
following an acquisition), you’re doing it for business reasons. It may be to save money or to
provide business users with new functionality or insights into business trends that will help
to drive the business forward. Whatever the reasons, it is a business issue.
Historically, certainly in 2007, data migration was regarded as risky. Today, nearly 62% of projects are delivered on time and on budget, so there is much less risk involved in data migration, provided that projects are approached in the proper manner.
Nevertheless, there are still risks, and 30% of data migration projects are delayed because of concerns over
these and other issues. These delays, which average approximately four months (but can be
a year or more), postpone the accrual of the business benefits that are driving the migration
in the first place. In effect, delays cost money, which is why it is so important to take the risk
out of migration projects.
It is worth noting that companies were asked about the top three factors affecting the
success of their data migration projects. By far, the most important factor was “business
engagement” with 72% of organisations quoting this as a top three factor and over 50%
stating this as the most important factor. Conversely, over one third of companies quoted
“lack of support from business users” as a reason for project overruns.
To summarise: data migration projects are undertaken because they will support business
objectives. There are costs to the business if it goes wrong or if the project is delayed, and
the most important factor in ensuring the success of such projects is close collaboration
between the business and IT. Whether this means that the project should be owned by the
business—treated as a business project with support from IT—or whether it should be
delegated to IT with the business overseeing the project is debatable, but what is clear is
that it must involve close collaboration.
Lessons learned
As previously noted, there has been a significant reduction in the number of overrunning or
abandoned projects. What drove this dramatic shift?
The answer lies in a very significant uptake in the use of data profiling and data cleansing tools, together with the fact that far more companies now use a formal methodology. With respect to methodology, there has been a significant move towards formalised approaches supplied by vendors and systems integrators rather than developed in-house. While we cannot prove a causal relationship between these changes and the increased number of on-time and on-budget projects, the figures are, at the very least, highly suggestive.
Since these are lessons that have already been learned, we will not cover them in detail here, but they are worth reviewing briefly.
Data profiling
Data profiling is used to identify data quality errors, uncover relationships that exist between
different data elements, discover sensitive data that may need to be masked or anonymised
and monitor data quality on an on-going basis to support data governance. One important
recommendation is that data profiling be conducted prior to setting your budgets and
timescales. This is because profiling enables identification of the scale of the data quality,
masking and relationship issues that may be involved in the project and thus enables more
accurate estimation of project duration and cost. Exactly half of the companies using data profiling tools use them in this way, and their projects are more likely to run on time and on budget: 68% do so, compared with 55% otherwise.
While the use of data profiling to discover errors in the data is fairly obvious, its use with
respect to relationships in the data may not be.
Data profiling tools need to be able to represent data at a business level, so that they support use by business analysts and domain experts, as well as in the entity-relationship form used by IT. Ideally, they should enable automatic conversion from one representation to the other to enable the closest possible collaboration.
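As a flavour of what a profiling pass looks for, the short Python sketch below scans a hypothetical customer extract for null rates, format violations and orphaned references to an account extract. It is an illustration only, not the Validata profiling tooling, and the file and column names are assumptions:

import pandas as pd

customers = pd.read_csv("customer_extract.csv")   # assumed columns: CUSTOMER_ID, EMAIL, STATUS
accounts = pd.read_csv("account_extract.csv")     # assumed columns: ACCOUNT_ID, CUSTOMER_ID

# 1. Basic quality metrics: null rates per column and duplicate primary keys
null_rates = customers.isna().mean()
duplicate_keys = int(customers["CUSTOMER_ID"].duplicated().sum())

# 2. Pattern profiling: how many e-mail values fail a simple format rule
email_ok = customers["EMAIL"].fillna("").str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
bad_emails = int((~email_ok).sum())

# 3. Relationship profiling: do all accounts reference a known customer?
orphan_accounts = int((~accounts["CUSTOMER_ID"].isin(customers["CUSTOMER_ID"])).sum())

print("Null rate per column:\n", null_rates)
print("Duplicate customer keys:", duplicate_keys)
print("Invalid e-mail addresses:", bad_emails)
print("Orphaned account records:", orphan_accounts)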
Data quality
Poor data quality can be very costly and can break or impair a data migration project just as
easily as broken relationships. For example, suppose that you have a customer email address
with a typo such as a 0 instead of an O. An existing system may accept such errors, but the
new one may apply rules more rigorously and not support such errors, meaning that you
cannot access this customer record. In any case, the end result will be that this customer will
never receive your email marketing campaign.
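A minimal sketch of such a rule, assuming a simple e-mail format check and a hypothetical heuristic for the 'O'/'0' typo described above:

import re

VALID_EMAIL = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def check_email(address):
    # Return a list of issues that a stricter target system might raise
    issues = []
    if not VALID_EMAIL.match(address):
        issues.append("does not match the target system's e-mail format rule")
    domain = address.partition("@")[2]
    if "0" in domain.split(".")[0]:
        # a zero in the domain name is often a mistyped letter 'O'
        issues.append("suspicious '0' in domain, possible 'O'/'0' typo")
    return issues

print(check_email("jane.doe@acme0nline.com"))   # flags the possible 'O'/'0' typo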
Poor data quality or lack of visibility into data quality issues was the main contributor
(53.4%) to project overruns, where these occurred.
Methodology
Having an appropriate migration methodology is quoted as a critical factor by more
respondents than any other except business engagement. It is gratifying to see that almost
all organisations now use a formal methodology. For overrunning projects, more than a
quarter blamed the fact that there was no formal methodology in place.
In terms of selecting an appropriate methodology for use with data migration, it should be
integrated with the overall methodology for implementing the target business application.
As such, the migration is a component of the broader project, and planning for data
migration activities should occur as early as possible in the implementation cycle.
V-Conn adopts the following migration approaches depending on the business goals,
measured risks, migration volumes and technical infrastructure:
Big Bang Migration
Following this approach, migration and reconciliation of all bank branches and modules happen in one weekend. V-Conn is used both for migration processing and for building the technical and financial reconciliation reports. It can also be used as a platform for building an archiving solution for the legacy system's historical data, taking advantage of the relational information between legacy and TEMENOS T24™ objects established during data transformation. The time needed to migrate the full volumes is a challenge when following the Big Bang approach, usually because of poor hardware performance.
Because of its flexibility, V-Conn offers a variety of approaches to mitigate performance risks. For example, it can be used to migrate static data in advance, and the delta and financial data during the migration weekend.
Validata is also a data management solution which is flexible, fast and easy to use, enabling truly thorough testing for your TEMENOS T24™ project.
Validata's Archiving approach enables you to achieve the best possible performance from your databases by archiving information according to your chosen criteria, without losing access to your data.
A full set of audit trail and version control features is available. Routines and data records are stored in a centralized database that keeps track of all updates, supports multiple check-out and parallel development, and has a full set of release management features.
Phased Migration
There are different factors that can make the Big Bang approach problematic: there could be issues with training the employees, the data volumes could be too large to be migrated in one weekend, or the full implementation of the system could simply take too long. In such cases, V-Conn supports a 'progressive' (phased) migration approach, where migration of data happens in several runs over several weekends without interrupting the bank's workday operations. Using V-Conn as a staging area allows the next portion of data to be migrated while migrating only the delta for the data that already exists, i.e. only the additions or amendments accumulated since the last migration.
Phased migration consists of two different approaches:
 Branch-based migration – migration is done branch-by-branch; all modules for a single branch are migrated over one weekend.
 Module-based migration – migration per product is done for all branches in a single weekend. In this case, V-Conn's staging area plays an important role for the delta migration of the modules migrated during the previous weekend(s).
ETL Approach
The migration process is managed in V-Conn using Job and Task objects. Each task represents a single atomic operation of one of the following types: Extract and Validate, Transform, Load and Reconcile (additionally, there is a "custom" job type). Migration jobs can consist of several tasks in limitless combinations. Whenever a job or a task starts executing, its current definition is stored in the database for audit purposes, together with the start and end time and the user that started the operation. All records inserted or updated by a task keep a pointer to that task for the needs of the audit trail and reporting. All tasks and reconciliation reports are reusable, and they can be re-executed on demand or scheduled.
In more detail, the data flow follows the order below (a conceptual sketch of this flow appears after the list):
1. Data is extracted via a Validata adapter. If the legacy data is provided to the migration team in the form of CSV files, the CSV Adapter is used. If it is stored in an MS SQL or Oracle database, the SQL Adapter is used. If the legacy system is a version of T24, the T24Adapter can be used.
2. Extracted data is passed to the Validation engine. Records which fail validation are considered rejected: they are "stopped" and do not continue to the next stage. Records which successfully pass validation are passed on to the Transformation engine.
3. Successfully validated data is processed by the Transformation engine. Records which fail transformation are "stopped" and do not get imported into the staging area. Successfully transformed records are stored in the staging area.
4. Generation of Validation reconciliation reports.
5. Generation of Transform reconciliation reports.
6. The transformed data stored in the staging area is exported and sent to the target system via the Validata adapter (T24Adapter).
7. Generation of Load reconciliation reports.
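The Python sketch below is a conceptual illustration of this flow, not V-Conn code: records that fail validation or transformation are stopped, successfully processed records reach the staging area, and simple counts are produced for reconciliation. The record layout, the validation rule and the status mapping are assumptions made for the example:

from dataclasses import dataclass, field

@dataclass
class StageResult:
    passed: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

def extract(rows):
    # Stage 1: extraction via an adapter (here the rows are assumed to be read already)
    return list(rows)

def validate(rows):
    # Stage 2: records failing validation are "stopped" and do not continue
    result = StageResult()
    for row in rows:
        (result.passed if row.get("CUSTOMER_ID") else result.rejected).append(row)
    return result

def transform(rows):
    # Stage 3: records failing transformation never reach the staging area
    result = StageResult()
    for row in rows:
        try:
            result.passed.append({"CUSTOMER_ID": row["CUSTOMER_ID"].strip(),
                                  "STATUS": {"A": "1", "B": "5", "Z": "999"}[row["STATUS"]]})
        except KeyError:
            result.rejected.append(row)
    return result

def load(staged):
    # Stage 6: send staged records to the target system via its high-level interface
    return StageResult(passed=list(staged))   # placeholder for the real adapter call

# Steps 4, 5 and 7: per-stage reconciliation counts
source = [{"CUSTOMER_ID": "100045 ", "STATUS": "A"},
          {"CUSTOMER_ID": "", "STATUS": "B"},          # fails validation
          {"CUSTOMER_ID": "100046", "STATUS": "X"}]    # fails transformation
validated = validate(extract(source))
transformed = transform(validated.passed)
loaded = load(transformed.passed)
print("extracted:", len(source), "validated:", len(validated.passed),
      "transformed:", len(transformed.passed), "loaded:", len(loaded.passed),
      "rejected:", len(validated.rejected) + len(transformed.rejected))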
Best Practices Techniques
Working with the business model
The meta-data modelling and maintenance is an important part of any data migration process. It requires the data structure of all involved systems to be extracted and stored in a central repository.
There are different ways to model the meta-data in Validata SAS. One possible solution is to
define the entities and their attributes manually using the Business Model Manager UI.
Another option is to extract and load the meta-data information using the supplied Validata
tools.
Depending on the data source, the following tools and approaches could be used:
 Relational Databases (MS SQL Server, Oracle) – use Validata ER Metadata
Extractor utility to generate XSD files for each entity and load these XSD files
into Validata using the Validata Metadata Updater utility
 CSV files – use the Validata CSVtoXSD utility to generate XSD files for each
entity and load these XSD files into Validata using the Metadata Updater
utility
 TEMENOS T24™ – use the Validata Metadata Updater utility to directly
extract the meta-data from the TEMENOS T24™ system and load it into
Validata SAS
Although manual metadata creation and maintenance sounds acceptable when the metadata volumes are relatively low, it is still better practice to use the automatic approach whenever possible, in order to minimize human mistakes.
After the meta-data is created initially, each subsequent update of this meta-data is tracked automatically by Validata SAS. The differences in the structure of each entity at different points in time can be analyzed using the Time Variance functionality.
It is good practice, after each meta-data update, to check the differences in the source and target systems' metadata using the Time Variance functionality, in order to pinpoint potential issues due to added, removed or renamed attributes.
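For illustration only, the sketch below infers a minimal XSD for one entity from the header and a sampled row of a CSV extract, which is conceptually what a CSV-to-XSD step does. It is not the Validata CSVtoXSD or Metadata Updater utility, and the file name and type-inference rule are assumptions:

import csv

def infer_xsd(csv_path, entity):
    # Build a minimal XSD for one entity from a CSV header and one sampled row
    with open(csv_path, newline="") as f:
        sample = next(csv.DictReader(f), {})
    elements = []
    for column, value in sample.items():
        value = str(value or "")
        xsd_type = "xs:decimal" if value.replace(".", "", 1).isdigit() else "xs:string"
        elements.append(f'        <xs:element name="{column}" type="{xsd_type}"/>')
    body = "\n".join(elements)
    return (
        '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">\n'
        f'  <xs:element name="{entity}">\n'
        "    <xs:complexType>\n"
        "      <xs:sequence>\n"
        f"{body}\n"
        "      </xs:sequence>\n"
        "    </xs:complexType>\n"
        "  </xs:element>\n"
        "</xs:schema>"
    )

print(infer_xsd("customer_extract.csv", "Customer"))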
Extracting the data from the source
V-Conn can extract data from multiple sources using components called Adapters. These
components provide online or offline data extraction.
The most frequently used online extraction adapters are:
 SQL Adapter – provides data extraction from relational databases (MS SQL
Server, Oracle, MS Access)
 TEMENOS T24™ Adapter – provides data extraction from TEMENOS T24™
systems using the Open Financial Services (OFS) interface
The offline extraction is usually performed with one of the following adapters:
 CSV Adapter – provides data extraction from CSV (comma separated values)
files
 FLT Adapter – provides data extraction from fixed length (usually called
ASCII) files
The V-Conn framework allows the creation of additional adapters when needed. However, in most cases the off-line adapters provide the required support for data extraction. It is better practice to work with CSV files, as the structure of fixed-length (ASCII) files is more error prone.
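To make the contrast concrete, the sketch below reads the same three fields from a CSV file and from a fixed-length (ASCII) file. The column offsets used for the fixed-length layout are assumptions; a one-character error in such a layout silently corrupts every record, which is exactly why that format is more error prone:

import csv

# CSV: the delimiter carries the structure, so field boundaries are explicit
def read_csv_rows(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)   # e.g. CUSTOMER_ID,NAME,BALANCE

# Fixed-length: the structure lives in an out-of-band layout definition (assumed here)
FLT_LAYOUT = [("CUSTOMER_ID", 0, 10), ("NAME", 10, 40), ("BALANCE", 40, 52)]

def read_flt_rows(path):
    with open(path) as f:
        for line in f:
            yield {name: line[start:end].strip() for name, start, end in FLT_LAYOUT}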
Full or delta migration
Depending on the requirements, migration can be performed as a one-off process or it can be a frequently performed operation.
When data migration is performed frequently, it is better practice to perform a full data migration initially, followed by subsequent delta migrations.
A full data migration is performed by loading all data from the source systems and then exporting all of it to the target systems.
A delta migration extracts only the changed and new records from the source system and then loads only those into the target system.
Usually the delta extraction is performed by defining a filter in the import tasks, specifying that only data with today's timestamp should be extracted.
When the source systems do not support such timestamp markers, V-Conn is capable of
finding the differences between the extracted data and the data already stored in V-Conn as
part of previous extractions. This is performed using the consolidation engine, which
updates the already existing instances in Validata, creates the new ones and marks them as
updated/new instances that have to be loaded later into the target system.
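The two variants can be sketched as follows; the key and timestamp column names are assumptions, and the comparison shown is a simplified stand-in for the consolidation engine described above:

from datetime import date

# Delta by timestamp: only records changed today are extracted
def delta_by_timestamp(rows, today=None):
    today = today or date.today().isoformat()
    return [r for r in rows if r.get("LAST_UPDATED", "").startswith(today)]

# Delta by comparison: when the source has no reliable timestamp, compare the new
# extract with what was stored during the previous extraction (keyed by CUSTOMER_ID)
def delta_by_comparison(new_extract, previous_extract):
    previous = {r["CUSTOMER_ID"]: r for r in previous_extract}
    new, updated = [], []
    for row in new_extract:
        old = previous.get(row["CUSTOMER_ID"])
        if old is None:
            new.append(row)          # not seen before, insert into the target
        elif old != row:
            updated.append(row)      # changed since last run, update in the target
    return new, updated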
Transforming data
The data transformation is defined using Validata Mapping Schemas, which store the mapping and business transformation rules.
The mapping schemas are created using the Translation Wizard interface, which allows users to define the transformation rules without writing a single line of code. This allows non-technical people, such as business analysts, to define, maintain and supervise the data mapping and transformation.
Data transformation is handled using mapping functions supplied with V-Conn or external functions created for specific needs. In most cases the pre-packaged functions are sufficient to build the data transformation process.
These functions are specified through the Translation Wizard UI in an intuitive way that does not require technical knowledge.
The transformations can be performed as part of the data extraction process, as part of the data loading process, or both.
It is usually good practice to use V-Conn as a mirror of the data that has to be loaded into the target system. This is achieved by performing all transformations as part of the extraction process and storing the data in V-Conn in exactly the same way as it will be stored in the target system. This data is later loaded into the target system without any further transformation and can be imported back when reconciliation is needed.
Currency conversions
In some cases it might be required to perform currency conversion as part of the data
migration process.
These conversions can either be driven by off-line conversion tables or performed through live connectivity to exchange rate management systems. The exchange rates from such systems can be accessed either through SQL stored procedures or through Web Services.
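A minimal sketch of the off-line-table variant, assuming a hypothetical rates.csv holding source currency, target currency and rate on each row:

import csv
from decimal import Decimal

# Off-line conversion table, e.g. rates.csv with rows such as: USD,EUR,0.92 (illustrative rate)
def load_rates(path):
    with open(path, newline="") as f:
        return {(src, dst): Decimal(rate) for src, dst, rate in csv.reader(f)}

def convert(amount, src, dst, rates):
    # Convert a balance as part of the migration, quantised to 2 decimal places
    return (Decimal(amount) * rates[(src, dst)]).quantize(Decimal("0.01"))

rates = load_rates("rates.csv")
print(convert("1500.00", "USD", "EUR", rates))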
Handling code changes
A common task in the data migration process is handling changing codes.
For example, in the source system the customer status could be represented as A, B and Z, while in the target system it could be 1, 5 and 999.
This should be implemented using the Lookup function and conversion tables. The conversion tables are defined in CSV format, which makes them easy to create and maintain. The conversion table for the above example would be:
ReturnValue,InputValue1
1, A
5, B
999,Z
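A minimal sketch of how such a lookup can be applied, assuming the table above has been saved as a hypothetical customer_status.csv file (this illustrates the idea rather than the V-Conn Lookup function itself):

import csv

def load_conversion_table(path):
    # Load a CSV conversion table with a header of ReturnValue,InputValue1
    with open(path, newline="") as f:
        return {row["InputValue1"].strip(): row["ReturnValue"].strip()
                for row in csv.DictReader(f)}

def lookup(value, table, default=None):
    # Translate a source code into the corresponding target code
    return table.get(value, default)

status_map = load_conversion_table("customer_status.csv")  # the A/B/Z table above
print(lookup("B", status_map))   # prints 5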
Loading data
The data is loaded into the target system using the already mentioned online and offline
adapters. All these adapters are bidirectional and can be used not only to extract, but also to
load data.
It is good practice always to use an adapter that targets the high-level interface of the target system, instead of trying to push the migrated data through a low-level interface such as a direct database connection.
For example, when loading data into TEMENOS T24™ it is better to use the TEMENOS T24™ OFS adapter than to try to load the data directly into JBase. It is also important to use the proper version of each application in order to pass the data through the proper validation.
Loading the data through high-level interfaces provides validation of this data before it is stored in the target system. This way, the data quality and the proper functioning of the target system with the loaded data can be ensured.
Referential integrity is evaluated as part of the target system analysis in order to perform proper data loading. For example, the loading of the Account static data should be performed only after the loading of the Customer data, as the Account records depend on the existence of the relevant Customer records.
In order to handle self-references, the data loading may have to be executed as a two-stage process. For example, if the Customer records are self-referencing via a field such as Empowered Person, the records should first be exported without the data for this field (i.e. without the relational data) and, as a second step, only this field (the relations) should be exported.
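The sketch below illustrates dependency-ordered, two-stage loading; the entity names, field names and the load_record placeholder stand in for the real adapter call and are assumptions:

# Sketch of dependency-ordered, two-pass loading
LOAD_ORDER = ["Customer", "Account"]          # parents before children

def load_record(entity, record):
    # Placeholder for the call to the target system's high-level interface (e.g. OFS)
    print(f"load {entity}: {record}")

def run_load(staged):
    # Pass 1: load each entity in dependency order, withholding self-referencing fields
    deferred = []
    for entity in LOAD_ORDER:
        for record in staged.get(entity, []):
            self_ref = record.pop("EMPOWERED_PERSON", None)
            load_record(entity, record)
            if self_ref is not None:
                deferred.append((entity, record["ID"], self_ref))
    # Pass 2: every record now exists, so the self-references can be applied safely
    for entity, record_id, self_ref in deferred:
        load_record(entity, {"ID": record_id, "EMPOWERED_PERSON": self_ref})

run_load({
    "Customer": [{"ID": "100045", "EMPOWERED_PERSON": "100046"},
                 {"ID": "100046"}],
    "Account":  [{"ID": "ACC-1", "CUSTOMER": "100045"}],
})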
Scheduling and job sequencing
The data migration jobs usually consist of several import and export tasks which have to be performed at a specific time and in a specific order.
V-Conn provides job scheduling functionality with the ability to execute a single task, or a list of consecutive tasks, at a specific time. The migration jobs can be scheduled for a single execution at a specified date and time, or as repetitive executions at a specified time and day of the week (e.g. 21:00 every Monday, Tuesday, Wednesday, Thursday and Friday). This approach can also be used to keep the environments synchronised, especially during the period in which the two systems run in parallel, until the final drop of the legacy system.
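As a rough illustration of the scheduling idea (not the V-Conn scheduler), a minimal loop that fires a job at 21:00 on weekdays might look like this:

import time
from datetime import datetime

WEEKDAYS = {0, 1, 2, 3, 4}          # Monday to Friday
RUN_AT = "21:00"                    # as in the example above

def run_migration_job():
    print("running the scheduled delta migration tasks in sequence...")

def scheduler_loop():
    # Very small scheduler: fire the job at 21:00 on weekdays, at most once per day
    last_run_date = None
    while True:
        now = datetime.now()
        due = now.weekday() in WEEKDAYS and now.strftime("%H:%M") == RUN_AT
        if due and last_run_date != now.date():
            run_migration_job()
            last_run_date = now.date()
        time.sleep(30)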
Trap and handle errors caused by process failure
The errors caused by the import/export tasks are stored in two different formats – XML and
CSV.
The XML format allows third party components or applications to load the errors gathered
by V-Conn as part of the execution and analyze them further.
The CSV format allows easy exchange between team members and is preferred by business
analysts using MS Office products.
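A minimal sketch of writing the same error list to both formats, using only the Python standard library; the field names are assumptions:

import csv
import xml.etree.ElementTree as ET

errors = [
    {"task": "Transform-Customer", "record": "100046", "message": "unknown status code X"},
]

# CSV: easy to share with business analysts working in MS Office
with open("migration_errors.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["task", "record", "message"])
    writer.writeheader()
    writer.writerows(errors)

# XML: easy for third-party components to load and analyse further
root = ET.Element("errors")
for err in errors:
    ET.SubElement(root, "error", attrib=err)
ET.ElementTree(root).write("migration_errors.xml", encoding="utf-8", xml_declaration=True)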
Minimizing build and maintenance requirements
The process of requirements gathering and technical specification requires much less overhead than with traditional data migration solutions, as a significant part of the specification is stored in V-Conn (meta-data definitions, mapping and transformation rules, etc.).
The history and audit trail functionality keeps track of specification changes and allows the migration project to keep the specification and the build synchronised at all times, unlike in-house built systems, where the specification and the build usually go out of sync very easily.
Optimizing performance
V-Conn provides execution reports showing the duration of each import/export task as part
of the data migration job. This information allows the administrators to tune the source and
the target systems in order to improve the overall migration performance.
Data Migration Testing
The data migration testing is performed as:
 Unit Testing - testing the metadata definitions and mapping rules
 Integration Testing – testing the whole migration process, including the
external tools and dependencies and the manual steps involved
 Data Quality Testing – testing the quality of the data extracted from the
source systems using the rules defined in the target system
This testing process ensures that the changes required in the core application modules in the
target system will be identified early, even before starting functional testing on the target
system.
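As an illustration of the unit-testing level, the sketch below tests a single mapping rule (the customer status lookup used as an example earlier in this paper); the test framework and rule shown are assumptions, not the Validata test harness:

import unittest

# Unit test for a single mapping rule: the A/B/Z to 1/5/999 status lookup
STATUS_MAP = {"A": "1", "B": "5", "Z": "999"}

def map_customer_status(source_code):
    return STATUS_MAP[source_code]

class MappingRuleTests(unittest.TestCase):
    def test_known_codes_are_translated(self):
        self.assertEqual(map_customer_status("A"), "1")
        self.assertEqual(map_customer_status("Z"), "999")

    def test_unknown_code_is_rejected(self):
        # Data quality testing: a code the target system does not define must fail loudly
        with self.assertRaises(KeyError):
            map_customer_status("X")

if __name__ == "__main__":
    unittest.main()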
Data governance
As noted, it is imperative to ensure good data quality in order to support a data migration project. However, it is all too easy to think of data quality as a one-off issue. It isn't. Data quality is estimated to deteriorate by between 1% and 1.5% per month: conduct a one-off data quality project now and in three years you will be back where you started. If investing in data quality to support data migration, plan to monitor the quality of this data on an ongoing basis and remediate errors as they arise.
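A rough back-of-the-envelope calculation illustrates the scale of that compounding:

# At 1% to 1.5% degradation per month, how much of a cleansed data set is still clean
# after three years? (Illustrative compounding only.)
for monthly_rate in (0.010, 0.015):
    still_clean = (1 - monthly_rate) ** 36
    print(f"{monthly_rate:.1%} per month -> {still_clean:.0%} still clean after 36 months")
# Roughly 58% to 70% remains clean, i.e. around a third of the records have degraded.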
In addition to addressing the issues raised in the previous section, consider using data migration as a springboard for a full data governance programme, if one is not already in place.
Having appropriate data governance processes in place was the third most highly rated
success factor in migration projects, after business involvement and migration methodology.
While we do not have direct evidence for this, we suspect that part of the reason for this
focus is because data governance programmes encourage business engagement. This
happens through the appointment of data stewards and other representatives within the
business that become actively involved not only in data governance itself, but also in
associated projects such as data migration.
In practice, implementing data governance will mean extending data quality to other areas,
such as monitoring violations of business rules and/or compliance (including retention
policies for archived data) as well as putting appropriate processes in place to ensure that
data errors are remediated in a timely fashion. Response time for remediation can also be
monitored and reported within your data governance dashboard.
In this context, it is worth noting that regulators are now starting to require ongoing data
quality control. For example, the EU Solvency II regulations for the insurance sector mandate
that data should be “accurate, complete and appropriate” and maintained in that fashion on
an ongoing basis. The proposed MiFID II regulations in the financial services sector use
exactly the same terminology; and we expect other regulators to adopt the same approach.
It is likely that we will see the introduction of Sarbanes-Oxley II on the same basis. All of this
means that it makes absolute sense to use data migration projects as the kick-off for
ongoing data governance initiatives, both for potential compliance reasons and, more generally, for the good of the business.
Reconciliation
One of the challenges in any data migration project is the process of comparing source data
with migrated data in order to ensure that data have been converted and migrated properly
and that no data have been left behind or altered in any way. This process is known as data
reconciliation. Although reconciliation is initially seen as a simple activity, it is in fact complex, due to the number of data transformations necessary to accommodate the internal data structures of the target system. Additionally, if legacy data needs to be consolidated during the migration, the reconciliation complexity increases significantly.
Reconciliation through Validata can be done at a technical and at a business level. Each ETL stage can be reconciled, or the reconciliation can cover the overall process end to end (E2E).
Figure 7: Reconciliation Flow Diagram
V-Conn makes reconciliation simple by providing a portfolio of pre-defined reconciliation reports for every migration step, i.e. Extraction, Transformation and Loading. Additionally, it is bundled with a number of reports that can be used for financial reconciliation as well. Where additional reports are necessary, V-Conn includes a reconciliation engine and a reconciliation report writer that enable business and IT to define new reports simply and without coding, dramatically reducing time and cost compared with traditional approaches such as custom-made reconciliation programs and manual reconciliation.
Different reconciliation reports are developed, and can be exported to Excel, to satisfy
reconciliation requirements as follows:
Multistage End To End Reporting:
This report category aims to allow a quick check of the total number of records processed successfully or unsuccessfully in the various migration and reconciliation stages.
Figure 1: Four Stage (Import, Transform, Export, Reconcile) Reconciliation report
The four-stage reconciliation report allows reconciliation of financial objects such as contingent/non-contingent entries, total debit/total credit entries, GL, Journal, etc. The Reconcile stage extracts all data from T24 and compares it with the data in Validata, and it is fully automated.
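To make the idea concrete, the sketch below assembles a minimal end-to-end count report of the kind described above; the figures, and the assumption that the target is re-extracted after loading, are illustrative only, not output from V-Conn:

# Minimal sketch of a multistage end-to-end reconciliation report: counts per stage
# must tie back to the source, and loaded totals are compared with the target extract
stage_counts = {
    "imported":    10_000,
    "validated":    9_950,
    "transformed":  9_940,
    "loaded":       9_940,
}
rejected = stage_counts["imported"] - stage_counts["loaded"]
target_extract_count = 9_940     # re-extracted from T24 after loading (assumed figure)

print(f"{'stage':<12}{'records':>8}")
for stage, count in stage_counts.items():
    print(f"{stage:<12}{count:>8}")
print(f"{'rejected':<12}{rejected:>8}")
print("end-to-end reconciled:", stage_counts["loaded"] == target_extract_count)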
Error Reports
This report category aims to allow easy review of the errors encountered during the
different stages. There are three sub-categories:
Error – Summary
Error – Consolidated
Error – Detailed
Passed Reports (Quality Reports)
This report category aims to allow detailed review of the migration data record by record
and field by field for the successful records.
Statistics Reports
Statistics Reports address specific reconciliation requirements, for example financial ones. They allow grouping and summing by specific criteria.
Audit
The audit trail report gives an overview of task completion and results. It can drill down to field level.
Initial Steps
We have discussed what needs to be considered before you begin. Here, we will briefly
highlight the main steps involved in the actual process of migration. There are basically four
considerations:
1. Development
Develop data quality and business rules that apply to the data and also define
transformation processes that need to be applied to the data when loaded into the target
system.
2. Analysis
This involves complete and in-depth data profiling to discover errors in the data and
relationships, explicit or implicit, which exist. In addition, profiling is used to identify
sensitive data that needs to be masked or anonymised. The analysis phase also includes gap
analysis, which involves identifying data required in the new system that was not included in
the old system. A decision will need to be made on how to source this. In addition, gap
analysis will be required on data quality rules because there may be rules that are not
enforced in the current system, but compliance will be needed in the new one. By profiling
the data and analysing data lineage, it is also possible to identify legacy tables and columns
that contain data, but which are not mapped to the target system within the data migration
process.
3. Data Cleansing
Data will need to be cleansed and there are two ways to do this. The existing system can be
cleansed and then the data extracted as a part of the migration, or the data can be extracted
and then cleansed. The latter approach will mean that the migrated system will be cleansed,
but not the original. This will be fine for a short duration big-bang cut-over, but may cause
problems for parallel running for any length of time. In general, a cleansing tool should be
used that supports both business and IT-level views of the data (this also applies to data
profiling) in order to support collaboration and enable reuse of data quality and business
rules, thereby helping to reduce cost of ownership.
Below is an example of data cleansing using specific validation rules.
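The sketch that follows is illustrative only: the fields, the standardisation step and the validation rules are assumptions, not the Validata rule set:

import re

RULES = {
    "EMAIL":    lambda v: bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", v)),
    "MNEMONIC": lambda v: v.isalnum() and 3 <= len(v) <= 10,
}

def cleanse(record):
    # Standardise a record and report which validation rules it still violates
    cleaned = {k: v.strip().upper() if k == "MNEMONIC" else v.strip()
               for k, v in record.items()}
    violations = [field for field, rule in RULES.items()
                  if field in cleaned and not rule(cleaned[field])]
    return cleaned, violations

record = {"MNEMONIC": " smith01 ", "EMAIL": "j.smith@bank"}
print(cleanse(record))
# The missing top-level domain in EMAIL is reported as a rule violation.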
4. Testing
It is a good idea to adopt an agile approach to development and testing, treating these
activities as an iterative process that includes user acceptance testing. Where appropriate, it
may be useful to compare source and target databases for reconciliation.
These are the four main elements, but they are by no means the only ones. A data integration (ETL: extract, transform and load) tool, for example, will be needed in order to move the data, and it should also have the collaborative and reuse capabilities that have been outlined.
A further consideration is that whatever tools are used, they should be able to maintain a
full audit trail of what has been done. This is important because part of the project may need
to be rolled back if something goes wrong and an audit trail provides the documentation of
what precisely needs to be rolled back. Further, such an audit trail provides documentation
of the completed project and it may be required for compliance reasons with regulations
such as Sarbanes-Oxley. Other important capabilities include version management, how data
is masked, and how data is archived.
Clearly it will be an advantage if all tools can be provided by a single vendor running on a
single platform.
Summary
Data migration doesn’t have to be risky. The adoption of appropriate tools, together with a
formal methodology has led, over the last four years, to a significant increase in the
successful deployment of timely, on-cost migration projects. Nevertheless, there are still a
substantial number of projects that overrun—more than a third. What is required is careful
planning, the right tools and the right partner. However, the most important factor in
ensuring successful migrations is the role of the business. All migrations are business issues
and the business needs to be fully involved in the migration—before it starts and on an ongoing basis—if it is to be successful. As a result, a critical factor in selecting relevant tools will
be the degree to which those tools enable collaboration between relevant business people
and IT.
Finally, data migrations should not be treated as one-off initiatives. It is unlikely that this will
be the last migration, so the expertise gained during the migration process will be an asset
that can be reused in the future. Once data has been cleansed as a part of the migration
process, it represents a more valuable resource than it was previously, because it is more
accurate. It will make sense to preserve the value of this asset by implementing an on-going
data quality monitoring and remediation programme and, preferably, a full data governance
project.