Client background
The client is a global reinsurer that provides its customers with a diversified and innovative range of solutions and services to control and manage risk.
The business challenge
The client faced a significant operational challenge: transitioning its legacy policy administration system to a new, more modern platform. The process required migrating a large volume of historical data from the old system to the new while maintaining data integrity and causing minimal disruption to ongoing operations. The complexity was compounded by the sheer volume of data involved, over 10 terabytes (TB) of critical business information, and by the need for streamlined processes to minimize downtime.
Strategy and solution
Baker Tilly developed a comprehensive data migration strategy to ensure a smooth transition of the historical data. By focusing on the interoperability of the two systems, our team designed a plan to handle the large-scale migration with precision. Key components of our solution included:
- Identifying critical entities: We mapped the key entities shared by the existing and new systems and carefully planned the order of implementation to prioritize high-impact data (a simplified ordering sketch follows this list).
- Creating hardware specifications: We developed detailed hardware specifications to accommodate the transfer of massive data volumes, ensuring the systems on both sides could sustain the migration efficiently.
- Leveraging existing metadata models: We used the existing metadata models to apply consistent design patterns and ETL automation across the various entities, leaving less room for error during the migration (see the metadata-driven code generation sketch below).
- Streamlining data migration: We designed a process capable of moving 10 TB of data across two servers, running as many parallel processes as the hardware could sustain to accelerate the transfer and minimize downtime (see the parallel-transfer sketch below).
- Implementing a metadata repository: To ensure consistency and improve scalability for future migrations, we implemented a metadata repository. The repository provided a central framework for integrating the various data entities and enabled integration code to be generated from metadata, resulting in a more stable and maintainable solution going forward.
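
The entities and priorities in the client's model are not public, so the following is a minimal sketch, assuming a hypothetical set of reinsurance entities (Client, Treaty, Policy, Claim, Payment) and illustrative impact weights, of how a migration order can respect dependencies between entities while still moving high-impact data as early as possible:

```python
from graphlib import TopologicalSorter

# Hypothetical entities; each maps to the entities it depends on
# (parents must exist in the new system before their children).
DEPENDENCIES = {
    "Client": set(),
    "Treaty": {"Client"},
    "Policy": {"Client", "Treaty"},
    "Claim": {"Policy"},
    "Payment": {"Claim"},
}

# Illustrative business-impact weights: when several entities are
# ready to migrate, move the highest-impact one first.
IMPACT = {"Policy": 5, "Claim": 4, "Treaty": 3, "Client": 2, "Payment": 1}

def migration_order(deps, impact):
    """Order entities so dependencies are respected, breaking ties
    in favor of high-impact data."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    order = []
    while ts.is_active():
        for entity in sorted(ts.get_ready(), key=lambda e: -impact[e]):
            order.append(entity)
            ts.done(entity)
    return order

print(migration_order(DEPENDENCIES, IMPACT))
# ['Client', 'Treaty', 'Policy', 'Claim', 'Payment']
```

In practice the dependency graph and weights would come from the metadata repository rather than being hard-coded.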
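The structure of the client's metadata repository was not disclosed. As a rough sketch of metadata-driven code generation, the snippet below assumes a hypothetical EntityMapping record (source table, target table, column mapping) and shows how a consistent INSERT ... SELECT loader can be generated from metadata instead of being hand-written for each entity:

```python
from dataclasses import dataclass

@dataclass
class EntityMapping:
    """One hypothetical repository record: how a legacy table maps
    onto its counterpart in the new platform."""
    source_table: str
    target_table: str
    columns: dict[str, str]  # target column -> source column/expression

def generate_insert_select(m: EntityMapping) -> str:
    """Render the same INSERT ... SELECT pattern for any entity, so
    every load follows one tested template instead of bespoke SQL."""
    targets = ", ".join(m.columns)
    sources = ", ".join(m.columns.values())
    return (f"INSERT INTO {m.target_table} ({targets})\n"
            f"SELECT {sources}\nFROM {m.source_table};")

policy = EntityMapping(
    source_table="legacy.POLICY_MASTER",
    target_table="new_system.policy",
    columns={"policy_id": "POL_NO",
             "client_id": "CLNT_NO",
             "inception_date": "EFF_DT"},
)
print(generate_insert_select(policy))
```

Because every entity's loader comes from the same template, a fix to the pattern propagates everywhere at once, which is the kind of stability and maintainability benefit a central repository provides.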
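The actual transfer tooling is likewise not described in the case study. As a sketch of the parallelization idea, the code below splits a large table into key ranges and copies the ranges concurrently; copy_range is a hypothetical stub standing in for the real extract-and-load step, and MAX_WORKERS would be tuned to the hardware specification:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_WORKERS = 16  # in practice, tuned to CPU, network, and disk throughput

def copy_range(table: str, lo: int, hi: int) -> int:
    """Hypothetical stub: extract rows with lo <= key < hi from the
    legacy server and bulk-load them into the new one. Here it only
    simulates the work and reports the row count."""
    return hi - lo

def migrate_table(table: str, max_key: int, chunk: int = 1_000_000) -> int:
    """Split the table into key ranges and copy them in parallel."""
    ranges = [(lo, min(lo + chunk, max_key))
              for lo in range(0, max_key, chunk)]
    moved = 0
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = [pool.submit(copy_range, table, lo, hi)
                   for lo, hi in ranges]
        for fut in as_completed(futures):
            moved += fut.result()  # surfaces per-range failures early
    return moved

print(migrate_table("legacy.POLICY_MASTER", max_key=10_000_000))
```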