
From drawing board to production: DCI implementations - the final link

An AMS-IX Story

Tim Vriend


Continuous Improvement Specialist

Back in 2015, AMS-IX tested a series of DWDM equipment, categorised as Data Centre Interconnect (DCI), from different vendors in an effort to optimise the core network.

On September 1st 2016, Henk Steenman (CTO at AMS-IX) published a blog post about choosing the Huawei OSN range. As Henk states in his blog: "DCI solutions have two great benefits for us and our members and clients. First of all, this solution will bring down implementation time for additional bandwidth. Secondly, it will help us reduce costs in comparison to the use of our current solution: dark fibre."

First in the field

Early October 2016, we received the first delivery of the Huawei DCI equipment, which was quickly unpacked and tested. During this week we started experimenting with the configurations and created documentation for the installation and configuration procedures.

Shortly after, a delegation of Huawei engineers came to support us with the installation and configuration, and at the same time provided some initial training on the equipment and its troubleshooting. During this week we also found some points for improvement, mainly in the firmware, which were addressed and pushed out as firmware updates. We were amazed by the speed of these improvements: every time we found a bug or suggested an improvement, the engineers would contact the R&D department in China and a new firmware update was sent with the fix. Often on the very same day!

The first sites we equipped with the DCI equipment were Equinix AM3, Equinix AM5 and Nikhef, as these had the highest backbone loads at the time and would be the first candidates for extra capacity.

On October 14th 2016, after installing and testing the new links for several days, the first backbones at Equinix AM3 were migrated to the newly installed Huawei OSN range and started pushing traffic without any major issues: first only to a single core switch, and shortly after to all core switches. The week after, we also migrated Equinix AM5 and Nikhef to the OSN range, again without any major issues. We did run into some smaller issues during the installation process, mainly because of TX/RX swaps in the dark fibre network, but we quickly learned to deal with these efficiently and adjusted our documentation to reflect these procedures as well.

Over the next few months we equipped multiple locations with DCI, and as expected performed our first backbone upgrade in one of these. We also added a new PoP location at Digital Realty AM1, which was equipped with DCI from the start.

Speeding up backbone upgrades


In February 2017, during a backbone upgrade, one of the biggest advantages became clear. Before we had the DCI, any implementation that involved new dark fibres would take multiple months due to the delivery times of the fibres. Now, with the DCI in place, the full upgrade took us just 2 days, without any issues.

One year later, on February 2nd 2018, this 2-day record was crushed in an attempt to go even faster, and we managed to double the capacity at Nikhef in a whopping 4 hours! Details on this stunning event have been blogged by Stavros, one of the NOC engineers involved, and can be found here.

14 PoPs in 24 months

Less than 2 years after starting this implementation, on the 2nd of August 2018, the last PoP location in the Amsterdam area was equipped with the Huawei OSN range, finalising this massive migration. A true moment of pride for the entire AMS-IX family!

To reflect on what we did during these 22 months:

  1. Migrated a total of 12 PoP Locations
  2. Installed an additional 2 PoP Locations
  3. Performed 8 backbone upgrades
  4. Reduced dark fibre usage from 288 pairs to 30 pairs
  5. Partially migrated management network to DCI connections
  6. Partially migrated from LR4 to SR4 optics

In the end, a lot of costs were saved through reducing our dark fibre usage by 90%, the ability to use SR4 optics, and the reduction in time spent on expanding our core network.
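As a quick sanity check on that 90% figure, the reduction from 288 dark fibre pairs to 30 (the numbers from the list above) works out as follows:

```python
# Dark fibre pairs before and after the DCI migration (figures from the list above)
old_pairs = 288
new_pairs = 30

# Fractional reduction in dark fibre usage
reduction = (old_pairs - new_pairs) / old_pairs
print(f"Dark fibre reduction: {reduction:.1%}")  # Dark fibre reduction: 89.6%
```

So the saving is 89.6%, which rounds to the 90% quoted.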

To visualise it all, below is our Network Topology of 2015:

And here you can see our current Network Topology:

