
30 years and a broken marriage!

An AMS-IX Story

Tim Vriend

Continuous Improvement Specialist

While this year is all about celebrating the 30-year anniversary of AMS-IX, our technical team is working to migrate our Amsterdam network to a new stack of Juniper switches. The migration is divided into multiple smaller projects, each covering two migrations per physical location, but one of them is a bit different from all the others.

History

In 1994, 30 years ago, AMS-IX started as a pilot project with two physical locations, one at Nikhef and the other at SARA on the Science Park in Amsterdam. Over the years, AMS-IX has grown to many more locations and the network has evolved in many ways, but these two locations have always been our flagships, being among the locations with the most traffic and the highest number of connected ports.

Fast forward to 2019, the year of our 25th anniversary, when we announced the completion of a massive project in which we combined these two locations as if they were one. The short distance between them and the higher density of the newer Extreme SLX switches we were implementing at the time made it possible to connect them together so they could act as each other’s backup. The result became known as what we like to call “The SuperSite”, with nearly twice the traffic and ports of any other location, and this marriage worked quite well. It saved us from installing two additional switches, which not only lowered the investment costs for equipment, but also made our operations more sustainable by eliminating the power that equipment would have consumed, and it made more efficient use of the available port density. Fast forward another five years to the present day, and we had to make the tough decision to break up this golden combination.

Breaking a happy marriage

Over the last few years, we have seen enormous growth in the number of ports at this location, mostly at the higher port speeds, up to the point where the port density of these switches was no longer sufficient. Back in 2022 we already announced that we would break up the happy marriage as part of the migration project in which we would replace all the older Extreme switches with new Juniper MX10K8s, but due to global delivery issues this was postponed until further notice. In mid-2023 we finally received all the required equipment and started the first migrations, and while AMS-IX was preparing for its upcoming 30-year anniversary, part of the team was preparing for what would later become known as “The Monster Project”: splitting up Nikhef and Digital Realty AMS9 (the former SARA).

In a normal migration project, we would migrate all the customers to one of the redundant switches so we could safely replace the switch that was not handling traffic, and then repeat the same trick the other way around to finalise the migration. Given the size, complexity, and age of the “SuperSite”, however, this straightforward approach was not possible. We came up with a new plan to first split the two apart, which meant both locations would run without redundancy at the same time. At AMS-IX we do not like losing redundancy, especially not at two locations at once, let alone the two biggest locations in our network, and certainly not for a longer period of time. In short, this split needed to be done fast.

Since we knew this split would happen at some point, we had already prepared all the cabling in a way that allowed us to do a lot of the work ahead of time, without touching the existing network. Given the complexity of the connections between these two locations, however, we took extra time to document each existing connection, making sure we wouldn’t disconnect the wrong cables. At the same time, we were able to install the two new Junipers, which would soon be connected and become part of our production network.

At the end of January, while the organization was having a party to celebrate our anniversary, a small team of technicians was preparing for the Monday after, when we would break redundancy and start the work to connect and configure the two new switches. The team was split in two: one team working at Nikhef and the other at Digital Realty AMS9. The work went less smoothly and quickly than we expected, and in the end we needed a few additional days to get both switches up and running, but on February 10th redundancy was restored at Nikhef, and a few days later Digital Realty AMS9 had its redundancy restored as well. The marriage was officially broken, with each location running all member traffic on the newly installed Juniper MX10K8 and the older Extreme SLX9850 acting as backup in case of need.

We didn’t take a lot of time to recover, as we still had to replace the Extreme switches as well. Since Nikhef had already been running on the Juniper longer, we decided to migrate this location first, starting in the early morning of February 12th. Here too we experienced some delays due to the number of connections that needed to be replaced, but in the end we succeeded in finalising the physical migration by Wednesday the 14th of February, followed by configuration work and the migration of all customer traffic on February 16th, marking the end of the migrations at Nikhef and leaving only the task of load balancing traffic over the two switches.

The week after completing Nikhef, we continued with the final migration of the Extreme switch at Digital Realty AMS9. Given the lower number of connections and the lessons learned from the previous work, we were able to complete all the physical work and testing the next day, February 20th, followed by configuration and the migration of member traffic on February 23rd. That same day we load balanced the traffic at Nikhef, which will be followed by the load balancing of member traffic at Digital Realty AMS9 on March 1st, marking the completion of the project!

New tech and greener!

Next to this split and the double migration being an achievement on its own, it also marks the beginning of new technology and improved sustainability!

First, the older Extreme SLX switches did use slightly less power than the double Juniper MX10K8 setup we have now, but they had run out of capacity to grow, meaning we would have needed to double up soon. In short, the inevitable setup based on the Extremes would have used around 50% more power than the Juniper setup we are now running, resulting in a saving of around 8 kW.

On top of this, and already tested at a smaller location, both Nikhef and Digital Realty AMS9 have been upgraded with 400Gbit point-to-point backbone links to our core locations. The total installed capacity for these two locations is a whopping 17.6 Tbit of backbone capacity! Half of this capacity is connected via 400Gbit-LR4 optics, while the other half needs to travel a longer distance and uses 400Gbit-ZR+ (High Power) optics over passive (de)muxes.
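As a quick sanity check on the figures above (an inference on our part, assuming the installed backbone capacity is made up entirely of 400Gbit links as described, not an official link count), 17.6 Tbit works out to 44 point-to-point links across the two locations:

```python
# Rough sanity check: number of 400Gbit backbone links implied by the
# stated 17.6 Tbit of installed capacity. The link count is inferred
# from the article's figures, not an official number.
link_speed_gbit = 400            # one 400Gbit-LR4 or 400Gbit-ZR+ link
total_capacity_gbit = 17_600     # 17.6 Tbit installed capacity

links = total_capacity_gbit // link_speed_gbit
print(links)        # 44 links in total
print(links // 2)   # 22 per optic type, if split exactly in half
```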

Because of these point-to-point connections, we eliminated the need for the Huawei OSN902 DCI equipment, lowering our power consumption by around 12 kW for the backbone connections alone. This means a total power saving of around 20 kW for these two locations, which works out to a massive saving of around 175 MWh per year!
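The yearly figure follows directly from running the roughly 20 kW saving around the clock; a minimal back-of-the-envelope check, assuming the saving is constant all year:

```python
# Back-of-the-envelope check of the yearly energy saving from a
# continuous ~20 kW power reduction (assumed constant all year).
power_saving_kw = 20
hours_per_year = 24 * 365        # 8760 hours, ignoring leap years

saving_mwh = power_saving_kw * hours_per_year / 1000
print(saving_mwh)   # 175.2 MWh per year, matching the ~175 figure
```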

Needless to say, we are extremely proud of the achievements made during this project and will continue our efforts to improve our sustainability during the following migrations!

Again, I take my hat off and bow to the team of professionals who performed this migration; you can read their quotes at the end of this article!

Words from Team AMS-IX

Leroy: “What a challenge, what an achievement!”

Michael: “A project like this gives you a great sense of responsibility and pride at the same time! What a team!”

Mohamed: “One of the most drastic and stressful migrations in my career, but super enthusiastic with the results! Up to the next migration!”

Stavros: “It was a long and eventful migration that exhausted our energy. However, we managed to stick to the deadlines and complete it, while at the same time our machines were reaching a new all-time peak record. Couldn’t be more challenging than that.”

Karol: “For the first time in my career I was involved in a project that required connecting so many fiber optical cables. At first it was a bit overwhelming, but the final result gave me a lot of satisfaction.”

Stavros: “@Tim You remember 6 years ago when we broke the record of Backbone expansions to 4 hours only instead of weeks? I believe now we can do it in MAX 1 hour”

Tim: “Pff... With a team like this... Peanuts!”
