Data center migration

A data center migration, or the interconnection of two infrastructures, is a common project. It may be driven by business continuity, modernization, or decommissioning.

Whatever the scenario, the same major technical challenge arises:
how do you keep production running and migrate the data with a minimum of constraints?
The question matters to operations teams and customers alike, and it is essential to avoid any changes to application configurations.

Case study

A major hosting provider in France decided to rationalize its infrastructures. Its goal was to consolidate and then decommission a small edge data center in favor of larger sites.

This data center hosted a large amount of data and several critical applications for which any production downtime was effectively unacceptable.

The goal of the operation was to design an architecture able to migrate the data safely, preserve private and public IP addresses between the sites, and minimize technical interventions.

For reasons of confidentiality, all information in this article is hypothetical.
Only the technical concepts and generic network notions have been kept to illustrate the topic.

Technical context

Both data centers are equipped with Cisco hardware (Nexus, Catalyst, and ASR). For simplicity, we will call them DC A and DC B. Both are LIRs, each with its own IP prefixes, ASN, and IP transit providers.

The distance between them remains reasonable. The project had no specific latency constraints. We were therefore able to use the available technologies to design a suitable transition architecture.

Requirements

The project had to meet several key requirements:

  • Preserve existing configurations: IP addresses (public and private) as well as VLAN IDs.
  • Enable progressive migration: workloads had to be moved gradually, with multi-site operation during the transition.
  • Operational simplicity: network operations had to remain accessible to an administrator not expert in multi-site infrastructures.
  • Minimal downtime: service continuity had to be ensured with very limited downtime.
  • Redefinition of the Internet transit point: data center B had to become the main transit, allowing the termination of subscriptions on data center A.
  • Management of RIPE resources: the resources had to be transferred to the LIR organization of data center A.

First step: connect the two sites

The first action was to interconnect the two data centers with two distinct LAN-to-LAN links, each following a physically separate path. These links had to support both VLAN encapsulation and the transport of LACP frames.

To meet these requirements, the infrastructure provider Covage was chosen, as it offered the necessary guarantees in terms of quality and resilience.

Second step: exchange public IP prefixes

Preserving VLANs and internal IP segments is usually straightforward. The real challenge was migrating the public IP addresses along with their workloads, while ensuring routing continuity between two different ASNs.

To address this issue, the natural choice was the BGP routing protocol, which is well suited to this type of multi-site topology.

The two ASes exchange their respective prefixes through the two LAN-to-LAN links.
The routers of data center B announce a full table to the routers of data center A, while the latter advertises its X/22 prefix to the Internet via data center B.
From that moment, DC B became the main IP transit provider for data center A.
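As a rough sketch, the session on DC A's edge router might look like the following (the AS numbers, neighbor address, and the 100.64.0.0/22 prefix are placeholders, not the provider's real values):

```
router bgp 64501
 ! eBGP session to DC B over one of the LAN-to-LAN links
 neighbor 10.255.0.2 remote-as 64502
 ! originate the /22 toward DC B, which re-advertises it to the Internet
 network 100.64.0.0 mask 255.255.252.0
```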

Then, for each workload migration, a /32 route is created in the backbone of data center B. Thanks to the longest-prefix-match principle, the routers of both data centers route the traffic to the correct site.
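The longest-prefix-match behavior described above can be sketched in a few lines of Python (the prefixes are placeholders, not the provider's real addresses):

```python
import ipaddress

# Hypothetical routing table for illustration. DC A still originates the
# /22, while DC B injects a /32 host route for each migrated workload.
routes = {
    ipaddress.ip_network("100.64.0.0/22"): "DC A",   # original aggregate
    ipaddress.ip_network("100.64.1.10/32"): "DC B",  # migrated workload
}

def lookup(destination: str) -> str:
    """Return the site of the most specific route covering the destination."""
    addr = ipaddress.ip_address(destination)
    matching = [net for net in routes if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("100.64.1.10"))  # migrated host, matched by the /32 -> DC B
print(lookup("100.64.1.11"))  # not yet migrated, matched by the /22 -> DC A
```

Traffic for a migrated host matches its /32 and lands in DC B; everything else still falls under the /22 and stays in DC A.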

Data center B is also configured to aggregate the /32 prefixes into a /22, in preparation for the decommissioning of DC A.
This approach makes it possible to advertise the /22 directly on the Internet, without service interruption.

However, a drawback appears: the aggregation command automatically installs a local Null0 route for the prefix, with a default weight of 32768. This route would take precedence over the /22 still learned from DC A.

The solution is to raise the weight of the /22 learned from DC A (for example, to 40000). The aggregate then only takes over once the routers of DC A are stopped, ensuring a smooth switchover.
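On the DC B side, a minimal sketch of this arrangement might look as follows (addresses and AS numbers are placeholders): the aggregate creates the local Null0 route with weight 32768, and the inbound route-map raises the weight of the /22 learned from DC A so that it stays preferred until DC A's routers are stopped.

```
router bgp 64502
 ! generates the /22 aggregate (and a local Null0 route, weight 32768)
 aggregate-address 100.64.0.0 255.255.252.0
 ! prefer the /22 still advertised by DC A until it is shut down
 neighbor 10.255.0.1 route-map PREFER-DCA in
!
route-map PREFER-DCA permit 10
 set weight 40000
```

Since weight is local to the router and locally originated routes default to 32768, raising the received route's weight above that value is what keeps DC A's advertisement preferred during the transition.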

Conclusion

This topology allowed the hosting provider to easily move all its servers from one infrastructure to another, while preserving both private and public IP addresses.

A major advantage for technical teams: sensitive configurations (firewalls, VPNs, DNS records) did not need to change, avoiding modifications that could have had considerable impact during the migration.
