13 May 2025
RIPE 90
2:00pm
Main Hall
FRANZISKA LICHTBLAU: Hello. Good afternoon everyone. I hope you enjoyed your break. My name is Franziska, this is Kevin, he comes from the Programme Committee, and we are going to be your session chairs for the next one and a half hours.
First, speaking of the Programme Committee: if you want to work with us on creating amazing, interesting RIPE plenary agendas, you can put your name forward on the RIPE PC list until the end of the session, so until 3:30 you can volunteer to run for one of the two seats that will be open this round. If you are interested, have a look at the website; it's all detailed there.
And let's get started. We will have two full presentations, which will focus on the general situation of internet infrastructure in Ukraine, and two lightning talks in this round. Our first speaker is Antonios Chatzivasileiou. He is a research assistant at the University of Crete and ICS-FORTH. His research focuses on interdomain routing, infrastructure and AS relationships, with a special interest in infrastructure resilience, and here today he will talk about the Ukraine conflict and how it has affected network peering in Russia and Ukraine, using BGP and DNS data as references. So welcome Antonios.
(APPLAUSE.)
ANTONIOS CHATZIVASILEIOU: So, hello everyone, my name is Antonios and I am going to present our work on how Russia's invasion of Ukraine impacted the internet peering of the conflicted countries. This is joint work with others.
First of all, the timeline of the war. As you will know, the conflict started when Russia invaded Crimea in February 2014. And several years later, the USA reported that Russian and Belarusian forces were mobilising near the borders of Ukraine.
And two months later, Russia invaded Ukraine.
Now, the invasion was initially directed towards Kyiv, the capital of Ukraine, but after two months of trying, Russia shifted from the north towards the southeast part of Ukraine.
So some related work. Douzet et al. reported rerouting of traffic from Ukraine through Russia-based ISPs when the invasion of Crimea happened.
The impact of the first three months of the war was studied in terms of routes and latency, and Cloudflare detected changed traffic patterns and high numbers of DDoS attacks. In particular, Cloudflare detected an increased amount of traffic in the western part of Ukraine, which was due to people moving from the east to the west in order to save their lives.
Finally MANRS also reported DDoS attacks and potential BGP hijacking events in the region.
Now, the focus of our paper and work: we studied the impact of the conflict on internet peering for the period of April 2021 until January 2025, in three-month intervals. I have an asterisk here on January 2025 because some plots, as you will see, especially the ones involving validation, stop at October 2023.
So we are going to see the AS organisation country changes from Ukraine to Russia and vice versa, and the churn of foreign ASes in the IXPs and facilities of the two countries.
The actual status of Ukrainian IXPs, where we are going to try to validate the data sources, and finally the AS relationships between the countries.
The datasets we are going to use are the AS-to-organisation data for both countries, Ukraine and Russia; the AS members in IXPs and facilities; traceroutes, which are going to be used to validate the IXP status (this happens only for Ukraine); and also AS relationship data.
How we collected our datasets: for the AS-to-organisation data, we combined the data sources from RIPEstat for the routed ASes only. For the IXP dataset, we retrieved the data from PeeringDB, Packet Clearing House and Hurricane Electric. It's worth noting these three databases do not provide historical data, but CAIDA does: every few months it collects the data from these three sources. For the co-location facilities, CAIDA also has historical data, for PeeringDB only. Finally, we used RIPE Atlas to get the traceroutes in order to validate the activity of IXPs.
So at first we are going to see the increased number of AS changes. First of all, we showed that the number of ASes registered as Ukrainian and as Russian decreased in the period from April 2021 until 2025.
But more interesting is that many ASes, especially in the period from October 2022 until April 2023, changed their country code from Russia to Ukraine and vice versa.
And this is pretty strange because most of the ASes didn't change their organisation name. So I wonder how easy it is to change the country code of an organisation.
Moving on. So here we can see the infrastructure of Russia: basically, for every time stamp, we have the number of foreign ASes that are connected to Russian IXPs and facilities. What is interesting is that from April 2021 until the last time stamp, which is December 2023, the number of foreign ASes inside the Russian infrastructure decreased.
But even more interesting is that we have a big number of disconnections from April 2021 until July 2021: 54 ASes disconnected, which is 7.4 times above average, and 53.7% of these disconnections were Ukrainian. Even more interesting is the time stamp from October 2021 until January 2022, so, as we noted, before the invasion happened; in this time stamp we have a massive number of ASes disconnecting from Russian infrastructure: 133 ASes disconnected, which is 18.3 times above average, and 64.6% of these were Ukrainian.
Most of the other ASes were European ASes.
And finally we tried to classify these disconnected ASes and we found that 76% of these were ISPs and 5% cloud providers.
So we did the same, of course, for Ukraine. For Ukraine, again, we can see that in the time stamp right before the invasion we have a big number of disconnections: specifically, 45 ASes disconnected, which is 6.3 times above average, and 82% of these ASes were Russian. And again, this time stamp is before the invasion.
And in the next time stamp, which is during the invasion, so from January 2022 until April, we have 60 ASes that disconnected, and 92% of these ASes were Russian. Again, the classification is about the same as the previous one.
So we have this data and it shows something very interesting, but how sure are we that these data are actually valid? For example, we know for sure that some parts of Ukraine got bombed and many buildings were destroyed, so we tried to visualise where these IXPs in Ukraine actually are. Most of the IXPs are located in Kyiv or Crimea, but there are some IXPs that are very close to the Russian border, and we know that those areas were heavily bombed. So for IXPs such as Cloud-IX in Kharkiv and MESH-IX, we would like to see whether they were indeed active in each time stamp and whether there were disconnections.
So we emailed every Ukrainian IXP in order to find out if it had been offline and for how long.
We also asked if it was possible to get the list of AS members of those IXPs. Of course no one answered; the last thing these people had in mind was answering a mail from a student. But luckily for us, one IXP responded, MESH-IX, and they informed us that their building was destroyed in March 2022.
So from March 2022 until the last time stamp, this IXP was offline. The problem for us is that in our datasets MESH-IX appeared to be online, and that's a problem: if this one is offline, how do we know that any other IXP is not also offline, and that we have a valid dataset? Luckily for us, MESH-IX is a very small IXP with 327 AS members, so even if it goes offline, the impact on our results won't be so important.
But even so, we would like to validate our sources. The first thing we tried was to collect satellite images for the locations of those IXPs, but we couldn't get a good number of photos per time stamp in order to have a nice sequence.
So the other way we tried was to retrieve all available traceroutes from RIPE Atlas for the first 15 days of each time stamp. And for each traceroute, if we found an IXP's IP in the path, we saved its source and destination IP. So, for example, here we have this traceroute and we matched this IP with this one from the Giganet IXP, so we save in our pair list the source and destination of that route, and we collect all traceroutes that have a source and destination IP from that list.
And now, if no IXP IP is found on these traceroutes for other time stamps, then we consider that the IXP is maybe inactive for that time stamp.
And we say maybe because we cannot be sure: if you don't see it in the traceroute, it doesn't mean it doesn't exist. But if we do see it in the traceroute, we know for sure that the IXP was active in that time stamp.
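A minimal Python sketch of this matching step, assuming a simple traceroute representation; the IXP prefix, field names and helpers below are placeholders for illustration, not the authors' actual pipeline:

    import ipaddress

    # Placeholder peering LAN; real prefixes would come from PeeringDB/PCH/Hurricane Electric.
    IXP_PEERING_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

    def hop_in_ixp(hop_ip):
        # Return True if a traceroute hop falls inside a known IXP peering LAN.
        try:
            addr = ipaddress.ip_address(hop_ip)
        except ValueError:
            return False  # '*' or otherwise unresolvable hops
        return any(addr in net for net in IXP_PEERING_PREFIXES)

    def pairs_crossing_ixp(traceroutes):
        # Collect (source, destination) pairs whose path contained an IXP peering IP.
        # Each traceroute is assumed to look like {"src": ..., "dst": ..., "hops": [...]}.
        return {(t["src"], t["dst"]) for t in traceroutes
                if any(hop_in_ixp(h) for h in t["hops"])}

    def ixp_seen(traceroutes, pairs):
        # In a later time stamp, check whether any traceroute between the saved pairs
        # still crosses the IXP; if none does, the IXP is only *possibly* inactive.
        return any((t["src"], t["dst"]) in pairs and any(hop_in_ixp(h) for h in t["hops"])
                   for t in traceroutes)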
So based on this information, we got this table. Here are all the Ukrainian IXPs, sorted starting from the ones closest to the Russian border. The green blocks are the IXPs that were matched with an IXP IP in that time stamp, so we know for sure the IXP was active then. For the red ones we cannot be sure: a red block for an IXP in a time stamp means that we detected a source-destination pair whose route matched that IXP before or after, but not in that specific time stamp.
The grey blocks are for the IXPs that do not exist in that specific time stamp in our datasets, and the dark red ones are the IXPs that existed but for which we didn't match any traceroute.
Now, if we look at this table, we can see that MESH-IX, which we know for sure was destroyed in March 2022, is indeed offline until the last time stamp.
But for the other three IXPs we cannot say.
Luckily for us, these three IXPs, which are closest to Russia and appear to be offline, are very small in size, so they don't affect our results much. And luckily for us, the big IXPs of Ukraine, the ones that hold the majority of autonomous systems, appear online, for example Giganet, DTEL-IX and so on.
Also, here you can see that the Giganet IXP, after April 2022, got split into three different IXPs, and the prefixes for that IXP went only to this one; that's why there is no prefix for the other two.
And after we validated that the IXPs were active, we needed to do the same for the AS members, and in order to do that we used PTR records. Basically, a PTR record looks something like this, and here this PTR record indicates that a router is registered at Giganet in Ukraine and is connected to a network named TeNeT.
So for each Ukrainian IXP, we retrieved all the available PTR records, which were around 1,500, and we manually matched the organisation name or AS name to each PTR record.
This was, yes, it was hard. After this validation, we came to this table, which shows the five biggest IXPs of Ukraine. Here we can see how many ASes we validated out of the ASes that our datasets indicated were members.
So we got an accuracy of around 75%, which is actually pretty good if you think about it: we only counted PTR records as linked to AS numbers when it was absolutely clear that the PTR record was connected to that ASN, and we discarded many links that would probably be valid ASN-to-PTR-record matches.
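As an illustration of that manual matching, a small sketch; the PTR naming convention, the sample record and the ASN values are invented for this example (documentation ASNs), not taken from the actual dataset:

    import re

    # Placeholder mapping from network names to ASNs.
    NAME_TO_ASN = {"tenet": 64496, "examplenet": 64497}

    # Assume a PTR naming convention of "<member>.<ixp>.ua", e.g. "tenet.giganet.ua".
    PTR_PATTERN = re.compile(r"^(?P<member>[a-z0-9-]+)\.(?P<ixp>[a-z0-9-]+)\.ua\.?$")

    def asn_from_ptr(ptr):
        # Return the ASN we can confidently attribute to a PTR record, or None if unclear.
        match = PTR_PATTERN.match(ptr.lower())
        if not match:
            return None
        return NAME_TO_ASN.get(match.group("member"))

    print(asn_from_ptr("tenet.giganet.ua"))  # 64496 under the assumptions above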
Moving on, we also decided to retrieve the AS relationship data. From CAIDA, we retrieved the relationship data for both countries, for peer-to-peer and provider-to-client, and we see about the same results here. In the first plot, the red line is the number of foreign ASes peering with Ukrainian ASes, and we can see that after October 2022 it almost tripled in size; that means that probably many other ASes peered with Ukrainian ASes in order to help with connectivity. But at the same time, we see that the percentage of Russian peers among the foreign peers also decreased.
Now for the Russian ASes, the red line again indicates the number of foreign peers of Russia, and we see that the number of peers decreased over time: Russia had around 3,500 foreign peers before the invasion, and by around July 2022 it was around 3,000.
And we can see a slight decrease in the percentage of Ukrainian peers out of that number. Moving on, this is the provider-to-client plot. Here we can see again that the red line is the number of foreign ASes that are clients of Ukrainian providers, and we can see that this number also decreased, from 160 to around 100. But we also have a massive decrease in the percentage of that number coming from Russian ASes.
And in the final plot, we can see something very strange: the number of foreign ASes that are clients of Russian providers decreased until October 2023, but then you can see a very big increase in that number of clients, while the percentage of Ukrainian clients of Russian providers stays roughly stable for most of the plot.
So yeah, in conclusion, depeering activity took place months before the invasion and continued during the first months after the invasion.
Parts of the peering infrastructure in eastern Ukraine were destroyed and remain disconnected to this day.
Peering between Russian and Ukrainian networks was significantly impacted.
And there was an increased number of country-code changes for the conflicted countries.
So, thank you very much. I will be happy to answer any questions, and if you have any suggestions for my work, I will be pleased to hear them. (APPLAUSE.)
FRANZISKA LICHTBLAU: Thank you.
Do we have any questions in the room? We don't have anyone in the queue. I would be curious what is ‑‑ I mean, this paints us a picture, but do you plan to do any follow up work, maybe some active measurements, to see how things develop in that direction? Or did you have a look at that already?
ANTONIOS CHATZIVASILEIOU: We don't do active measurements at the moment, but right now we are trying to correlate this and see if there is any connection between these infrastructure changes and geopolitical ones. For Ukraine it's pretty obvious, but for other countries that may have increased or decreased peering, it may not be so obvious.
FRANZISKA LICHTBLAU: That makes a lot of sense, especially with your finding that the depeering actually started way ahead; it could be interesting to expand on that view. Thank you so much.
ANTONIOS CHATZIVASILEIOU: Thank you very much.
(APPLAUSE.)
FRANZISKA LICHTBLAU: Next on the agenda we have a remote presentation that the presenter has nicely already recorded for us, just in case something goes wrong. If you had a close look at the list of IXPs in the previous presentation, the presenter we are hearing now, Pavel, is CTO at 1-IX, and that was one of the IXes in that measurement. What he is presenting is his experience of developing distributed ISPs during the war in Ukraine, while dealing with physical attacks, power blackouts and severe resource shortages. So let's have a look at his presentation.
PAVEL KOROTEEV: Hello, good afternoon, my name is Pavel, I am from the 1-IX company, and this talk is about network resilience and our experiences of survival and development during the war in Ukraine.
So what is 1-IX now? It is 20 cities across ten countries. 240,000 prefixes inside the network. Double or triple redundancy of everything: equipment, PoPs, data centre nodes, everything. More than 30 data centres. Around 140 participants now. More than three terabits per second of traffic now.
What were the main challenges for our company, our project? I will try to describe them in a few words, in a couple of minutes.
Physical destruction of the infrastructure and electricity blackouts, which led to an inability to rely on the stable operation of Tier 1 and Tier 2 providers. Lack of resources: reduction in inventory and inability to replenish it, broken logistics, limited funding and investment. There were challenges both in the supplier logistics chains and in the provider's warehouse-to-PoP logistics chains.
Cyber attacks: escalation in both the frequency and the impact of targeted attacks.
Telecom operators are also among the primary targets of cyber attacks, along with financial institutions, energy companies, government services and registries.
On December 12, 2023, Kyivstar, the country's largest telecommunications company, was hit by a massive cyber attack that knocked out mobile and internet services used by millions of people and disrupted air raid siren systems in parts of Kyiv. Overreliance on government and international support: grants may be delayed or insufficient during emergencies. The human factor, the most critical one: shortage of skilled personnel, no interchangeability.
I would like to invite you to watch a short feature about how Ukrainian telecom operators and the government are working every day to maintain connectivity, and the real challenges they face in doing so.
(Video plays)
February 24th, 2022, a day that forever changed Ukraine.
Every city and village, every family and business found itself under the impact of war. It brought not just physical destruction but forced society to adapt to a harsh new reality ruled by explosions and air raid sirens.
One of the most critically important sectors during the war became telecommunications infrastructure. Connectivity emerged as one of the most vital resources for effective military operations.
It was essential for co-ordination between units, rapid transmission of commands and information, controlling drones and providing aid in crisis situations. For civilians, communication became a lifeline. It allowed people to stay in touch with family, even if they remained in conflict zones or fled abroad. It facilitated remote work, online education and access to reliable information amidst propaganda and chaos.
Telecommunications workers, recognising the significance of their mission, began operating under extremely dangerous conditions. They repaired networks under fire, established autonomous power sources to sustain critical nodes, and despite losing revenue due to shrinking customer numbers, continued their efforts for Ukraine's benefit.
Every day became a battle for connectivity because communication meant life, safety and hope for millions of Ukrainians.
These are the heroes at the forefront of Ukraine's battle for connectivity.
The director of an internet provider who has been maintaining connectivity for the community and soldiers since 2014, when the city was liberated from occupation, despite constant shelling.
A representative of 1-IX, a distributed internet traffic exchange point that connects major operators, enhancing efficiency, connectivity, reliability and accessibility of the internet.
The owner and CEO of an internet provider who has transformed 60% of his network to a different technology due to energy supply issues.
A member of the Ukrainian Parliament who chairs the committees responsible for digital transformation, defence, and cyber security.
The director of the largest fibre supplier for Ukrainian telecom operators and chairman of the Internet Association of Ukraine, representing hundreds of electronic communications operators.
The manager responsible for the operation of the main distribution network of 1‑IX overseeing the central and southern regions, some of which were under occupation and are now liberated.
These stories are not fiction. They are real. And the lessons learned are invaluable.
A thousand times more valuable to hear than to experience firsthand.
Friends, I understand that right now it may seem that the war is far away in Ukraine and will never reach you. You might think that if rockets start falling outside your window and the sound of machine-gun fire becomes a reality, connectivity will be the last thing on your mind. But trust me, connectivity will be the first thing you think of. You will want to know where your loved ones are and what is happening. Connectivity will be the first thing you remember as an operator, because it will become your weapon, your means of protecting yourself and your country. Back in February 2022 we lost connection too; our employees were scattered across Ukraine, cut off from each other. We decided to create a kind of back-up internet to provide at least basic messaging, television and communication for our families, our citizens and our entire country. We created 1-IX. In two years of full-scale war, we have come to manage around 150,000 prefixes, distributed across 50 independent data centres, and administer seven cross-border connections.
We all know the golden rule of network administrators, if it works, don't touch it. But believe us, it's far better to plan upgrades, deploy new uplinks and expand your infrastructure when it is quiet, when no explosions are heard outside, and when there are no emergencies. Each new provider is a new experience, an opportunity to learn how to enhance and protect your network.
We strongly recommend holding network security days once a month or once a quarter, testing new providers, connecting to new exchange points, trying new data centres and establishing new routes.
Unfortunately, the city of Kramatorsk has been suffering from the Russian invasion since 2014: from April to July 2014 the city was under occupation, and in 2015 it was struck by MLRS rockets, causing numerous deaths and injuries. Since the full-scale invasion on February 24, 2022, the city has been bombarded with dozens of rockets, infrastructure has been destroyed and lives have been lost. Our company once served nearly 15,000 subscribers. Due to forced evacuations and ongoing hostilities, we lost over 50% of our clients, even as the cost of maintaining and restoring the network under shelling skyrocketed. Behind me stands the hotel that once housed one of our main technical sites.
A rocket struck right next to the building, damaging all our main communication lines and equipment. It took more than two days to restore connectivity. Since February 24th, 2022, we have restored over 400 kilometres of optical backbone and client networks.
In terms of backbone channels, we were better prepared this time. In 2014, we had only two independent internet channels. By 2022, fearing the outbreak of hostilities, we had six. Yet even this was not enough to maintain connectivity. Just 18 kilometres behind me is the front line. Unfortunately, the clothing I wear has become a uniform for our technicians, who must restore connectivity after each attack.
Over this period, we have laid more than 80 kilometres of new underground backbone lines to our technical sites ensuring independent redundant routes in case of damage. We have also purchased dozens of powerful batteries and inverters to power our sites.
Moreover, we are building a new optical network using fibre-to-the-home technology in the city's high-rise buildings to keep the city connected.
Currently we are in an underground shelter, protected by nearly a metre of reinforced concrete.
This is where we now have to relocate our equipment to shield it from shrapnel and missile strikes. I sincerely wish our experience was unnecessary for anyone. But war is a terrible thing.
Here are a few tips that might help an operator in a crisis situation.
Reserve everything you can. Have backup equipment in different locations. Secure as many internet channels as possible. Even two or three may not be enough during active hostilities.
Keep a large stock of materials for repairs, especially when courier services cannot operate.
Invest in energy efficient technologies like PON and new energy saving and generation solutions.
Remember, your main resource is your employees, leadership must stand with them during crises.
Founded in 2002, Articom occupies a critical position in Ukraine's telecommunications sector. It specialises in backbone fibre optic communication lines, providing both its own and third-party network maintenance. The performance of these lines directly impacts the operation of nearly every telecommunications company in Ukraine.
Before the war, we carefully analysed potential risks and implemented a strategic programme. The first and foremost concern was the safety of our personnel, physical safety. We didn't know where the rockets would fall, where explosions would occur or where mines might be. Our employees travel across the country, restoring communication lines, constantly exposed to these dangers.
To protect them, even before the war, we provided tactical medical training, equipped them with specialised first aid kits and for those in high risk areas, we supplied helmets and body armour.
Another critical risk we foresaw was personnel loss due to mobilisation. Companies like Articom are essential for the state's functionality. And our employees who maintain backbone lines must remain in their roles. Securing military exemptions for those employees was a necessity and though it was challenging we gradually resolved these issues but the challenges we faced during the war were beyond what any one organisation could handle alone.
Solving them required effective co-operation between state authorities, central and local, and businesses. I am the head of the line department for 1-IX in Kyiv; my team and I are responsible for maintaining the backbone and distribution networks here.
We manage high-speed transit channels running from the centre to the south for telecom operators, as well as distribution networks for businesses and residential users.
When the war began, we immediately faced severe problems with the backbone lines. Regional network nodes, where major internet providers and mobile operators were concentrated, came under direct missile strikes. Many of these vital points were completely or partially destroyed. As the region saw intense fighting with heavy artillery shelling, the local population built defensive structures: trenches, bunkers and fortifications.
Excavators, tractors and shovels dug up the ground everywhere, often severing our fibre optic lines multiple times a day.
We found ourselves in a situation where our main lines were either damaged physically or went off‑line due to power outages.
To ensure the stable operation of our network, we made a strategic decision to relocate and duplicate all major backbone sites. Allow me to show you one example of our experience, moving backbone equipment to a safe location where the risk of missile strikes is minimal and where we are entirely independent of the power grid.
Here we stand in one of our key locations, a regular building that now houses our passive backbone communication set-up. All fibre optic cables, including backup routes laid along alternative paths, converge here. We dug several stretches around 10 kilometres long, connecting to two different main communication channels and bypassing all primary backbone points to create an independent site. We also relocated some of our active equipment here, the equipment that survived missile strikes. The remaining equipment was distributed among similar secure locations. Since no technical manuals exist for such a situation, we had to make decisions on the fly.
So what did we achieve? The minimum required equipment fits in three telecom cabinets. Our DWDM systems deliver one terabit per second, while the current traffic on this equipment is around five hundred gigabits per second.
It can operate without electricity, our fuel reserve lasts for months. It can run on solar energy for at least ten hours without fuel or grid power. Only two technicians are needed for one hour daily to maintain this site.
Inside, this site meets all the latest EU data centre standards, in current conditions, it can operate for at least six months without electricity.
Autumn 2022 marked another turning point, as massive attacks on Ukraine's energy infrastructure resumed and the term blackout returned to daily vocabulary.
Across many regions, emergency and planned power outages became commonplace. Both major cities and small towns experienced extended periods without electricity; in some areas, power was supplied for only two to four hours, followed by longer outages. To cope with the new challenges caused by widespread power outages, Ukrainian internet providers had to reconsider their internal structures.
Many companies established specialised units dedicated exclusively to maintaining stable network operations during crisis conditions. Their tasks included analysing, purchasing, testing and upgrading equipment, particularly batteries and uninterruptible power supply systems. The key requirements were fast charging, extended autonomous operation and adaptability to unstable voltage. It became a new area of activity that previously did not require such extensive efforts. Our engineers and specialists urgently needed a solution, and we found it. We developed custom circuits and upgraded power boards in network switches. Electricity was on for four hours and then off for four hours; our main task was ensuring the batteries could fully recharge within these four-hour windows so the devices would operate reliably during subsequent outages.
At one point we even started importing battery cells from abroad, as they charged faster. We purchased battery management systems (BMS) and assembled batteries ourselves, directly in our company's office.
Technical upgrades to charging stations and imports of lithium batteries allowed us to partially stabilise telecom performance, but these measures couldn't guarantee stability during sudden changes in schedules: just a slight extension of a power outage, from the planned four hours to six, could push network nodes to the brink of failure.
To fundamentally solve this problem, some operators made a strategic decision: transitioning from active FTTB networks to passive optical networks (PON). Due to the absence of active equipment between nodes and users, these networks consume minimal energy and are significantly more resilient during prolonged blackouts. Although this transition is technically challenging and costly, it greatly reduces dependence on backup power supplies. We significantly accelerated the deployment of PON networks.
If at the start of the power outages roughly five to ten percent of our network was built using PON technology, over the next two years, we tripled that percentage of users. When I first joined Parliament, I worked in the digitalisation committee, where we brought about a revolution in our country.
With the full scale invasion, I joined the defence committee, because cyber security and cyber defence are now among the most critical issues.
We experience kinetic attacks on our infrastructure firsthand. Behind me, you can see how Ukraine is now filled with generators. Our operators and service providers withstood this kinetic assault.
Some operators ceased operations, but most, especially small and medium businesses, survived and continued working. Why? The key was having a three-layered power redundancy system, from uninterruptible power supplies to multiple generators, and multi-layered redundancy of fixed-line communication channels. We have a unique number of telecom operators, over 2,000; thanks to this diversity, I cannot recall a single instance where fixed communication for Ukrainian citizens disappeared completely.
Today nearly every Ukrainian is fighting. Maintaining the electronic communications sector is a constant challenge, but we have implemented a system for reserving key technical staff and emergency recovery teams, so that even during full-scale war the digital communications sector continues to function. But our soldiers, our men, are not increasing in number, and unfortunately this prolonged war drags on. I don't want to make predictions, but it may only be a matter of time before Ukrainians are no longer enough, and then you will have to defend your own.
Massive shelling, energy instability, logistical challenges and shortages of quality equipment became a true stress test for Ukraine's telecom industry, but these very conditions forced operators to adapt quickly: create new specialised departments, re-equip networks, transform access architecture and reserve critical resources. The cost of this adaptation is enormous, but the results are gradually becoming evident. More and more nodes keep operating even in darkness. The number of passive optical networks is steadily increasing. Backup channels are becoming standard, making connectivity more reliable. The Ukrainian internet is learning to survive wartime conditions, becoming one of the most resilient networks in the world.
(End of video)
PAVEL KOROTEEV: Based on everything we learned through extensive conversations with many partners in Ukraine, we have compiled some valuable insights which I am now going to share with you exclusively.
Main conclusions. What should we strive for?
Redundancy of at least three times for uplinks, traffic paths, PoPs, data centres, servers and all critical components.
Minimum inventory of required equipment and materials sufficient for two or three months; this is the minimum, the absolute minimum. The planning process must guarantee interchangeability from one vendor to another. Autonomous power supply, backup generators and fuel reserves are critical. Give priority to energy-efficient technologies. Keep in mind the equipment's cooling needs and temperature limits.
Secure offline backups. Every network operator should develop a system of secure backups isolated from the network. A disaster recovery plan with all necessary details is essential, taking into account the presence or absence of staff.
Don't rely solely on financial aid. Emergency spare parts on site are more valuable than waiting for external funds.
Human factor is key. Scout for talent, allocate resources for training programmes, establish interchangeable job roles and develop co‑operation frameworks with public and military sectors. We connect networks to increase the efficiency, reliability and availability of internet resources, providing a direct connection between CDNs, content providers, transit providers, eyeball networks.
Enhance efficiency and reliability, direct data exchange with leading operators and content providers.
Higher value and security: double or triple redundancy to minimise downtime risks.
Flexible BGP management, optimised traffic routing and exchange based on your needs.
More than 24/7 support, expert assistance, rapid response and additional consultations for network management.
We greatly appreciate any support. We route traffic to and from Ukraine and we need peers. Therefore, we kindly ask our colleagues in the industry, Tier 1 and Tier 2 providers, content providers, transit and eyeball networks, to respond and get in touch with us. By doing so, you will help us fulfil our mission and support Ukraine.
Thank you very much. Very proud to be here, to share the knowledge. Thank you very much for all the internet community, providers community, internet exchange community, for the maintainers of the OpenSource programmes and systems. See you soon. Bye.
(APPLAUSE.)
SPEAKER: OK. So thank you very much to Pavel; you will understand why this was recorded as a video. I do believe he is online and we can take some questions if there are any from the audience. Actually, I just want to check: yes please, by the way, Pavel, can you hear us?
PAVEL KOROTEEV: Yes, yes, I am here thank you.
AUDIENCE SPEAKER: My name is... thank you for your perfect presentation. A lot of our colleagues here have asked about 1-IX. We are the Ukrainian delegation and we just want to invite you to NOG UA, because 1-IX will be there and other IXes will be there too. It will be held in Lviv from the 13th to the 16th of November. Lviv is a safe city, a very beautiful city, and the stakeholders from electronic communications will be there, the best ones, and representatives from the government too, and you can receive the best operator practices first-hand. Welcome to Ukraine. Thank you. (APPLAUSE.)
KEVIN MAYNELL: OK. Any other questions? Any other questions? If not, I have a question. So you talked quite a lot about power-efficient fibre optic technologies, but I am wondering about wireless communications as well. How does that factor into the planning and resilience? And also, how is this different to the fibre networks with respect to power efficiency?
PAVEL KOROTEEV: Excuse me, could you please clarify wireless networks, what do you mean, what are you asking for?
KEVIN MAYNELL: Are you using wireless networks as well as fibre optic networks?
PAVEL KOROTEEV: No, we are not because we are dealing with a big amount of traffic, wide data channels so not, no.
KEVIN MAYNELL: OK. All right. There's nothing on the chat, OK. So Pavel, we'd like to thank you very much, thank you for your presentation. And yeah, we'll move to the lightning talks.
(APPLAUSE.)
PAVEL KOROTEEV: Thank you.
KEVIN MAYNELL: OK, first up, I think Shane probably needs no introduction, but he is working as an engineer for IBM, having previously worked for ISC, the RIPE NCC, ARIN and probably some other places as well. His lightning talk is about whether it's possible to get a random sample of internet packets using only tcpdump. Shane, you have ten minutes.
SHANE KERR: All right, thank you, I don't have much time, I am going to get into it.
This talk is about random sampling using tcpdump. I was doing some traffic analysis, doing the normal things you would expect, packet capture and things like that. The type of analysis I was doing was such that I didn't want every packet of the particular type I was looking at, so my thought was that I could get, say, one in a thousand of these or something like that. I usually use tcpdump, mostly because I am lazy: it's everywhere, you don't have to write any software, you just write a pcap filter and you are done.
But as far as I can tell, there is no way in tcpdump to do this kind of random sampling, so I basically resigned myself to having to write a program to do this, copy it out and things like that. Then I thought maybe there is a way to do this, so I thought about it, and while the application itself doesn't have a way to pick a sample of packets, there is a source of entropy inside the packets themselves: in IPv4, there's the header checksum. Now this is, as far as I can tell, the same checksum algorithm as when IPv4 was first introduced decades ago. But it's there, and like most checksums and CRC-type things, it's designed to be kind of randomly distributed depending on the input that you give it.
So it's really old and it's not cryptographically sound or anything, but I did some analysis and basically it seems to be uniform, and that's really good for what I was trying to do with random sampling: if you have a randomly distributed input, it's really easy to get a subset of it with just a simple comparison. You can see on the screen here, with tcpdump filter rules you can actually just say I want to look for IP packets, point to the actual offset within the packet of the field you are looking at, do a bitwise AND with a mask, and if that comes up with the value you are looking for, then it's a match. Basically, based on powers of two, right there that's about one in every 4,096 packets, which is what I was using. You can use this for any power of two; everything basically works. I am pretty confident that if you wanted to do more, you could actually pick anything: if you really wanted one in every 17 packets or something, you could do the math. I will leave that as an exercise for the student. So awesome, great, we are done, right?
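A rough sketch of the kind of filter being described, assuming the checksum field is close to uniformly distributed; the interface name is an example and the exact expression on the slide may differ:

    import subprocess

    def ipv4_sample_filter(bits=12):
        # Match roughly 1 in 2**bits IPv4 packets by comparing the low `bits` bits of
        # the IPv4 header checksum (bytes 10-11 of the IP header) against zero.
        mask = (1 << bits) - 1
        return f"ip and (ip[10:2] & {hex(mask)}) = 0"

    # About 1 in 4,096 packets; "eth0" is just an example interface.
    cmd = ["tcpdump", "-n", "-i", "eth0", ipv4_sample_filter(12)]
    print(" ".join(cmd))
    # subprocess.run(cmd)  # uncomment to actually capture (needs sufficient privileges)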
Except that we really want to do IPv6 as well, and so you can look carefully at the header format for IPv6 and you will see the checksum is -- oh, there's no checksum. So I guess we are just not going to do IPv6, OK, I guess I am done. No, we really want to do IPv6. The thing is, in IPv6 the decision was made that the higher-level protocols are the ones that are supposed to handle packet integrity.
Now in TCP it's sort of required, it's built into it. In UDP, checksums were optional in IPv4, but they were promoted to being mandatory in IPv6 because of this restriction. So I am going to show you how we do this with UDP and IPv6, because I am a DNS person and I mostly care about UDP. You can do the same type of thing for TCP; it's a slightly different offset in the packet, but it's fine, it's also a 16-bit checksum.
And presumably if you are using a more modern protocol like QUIC, it would work as well, because QUIC is based on UDP so it should just be able to use the UDP stuff. Luckily UDP is simple: source port, destination port, length and checksum. The checksum is right there, great.
Now, I did some crazy tcpdump work about eleven years ago, I did a presentation I think at RIPE, and at that point you couldn't use the udp[] construct within IPv6, and it seems like that is still the case. It's OK though: the headers are such that you can just use ip6[] offsets instead of udp[], and you can see this looks very similar to what we saw in the IPv4 case, except that we are indexing into the IPv6 packet (the UDP checksum sits 46 bytes in: a 40-byte IPv6 header plus six bytes into the UDP header), and we are going to do the same thing.
That's basically it. It gives us similar sampling in IPv6 to what we saw in IPv4. You put it all together: simple, easy peasy, right, there it is.
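And the IPv6 counterpart, again only a sketch: it assumes there are no IPv6 extension headers, so the UDP header starts at byte 40 and its checksum at bytes 46-47 of the packet.

    def ipv6_udp_sample_filter(bits=12):
        # Match roughly 1 in 2**bits UDP-over-IPv6 packets using the UDP checksum.
        # Assumes no extension headers: Next Header (ip6[6]) is UDP (17), and the UDP
        # checksum sits at bytes 46-47 from the start of the IPv6 header.
        mask = (1 << bits) - 1
        return f"ip6 and ip6[6] = 17 and (ip6[46:2] & {hex(mask)}) = 0"

    print(ipv6_udp_sample_filter(12))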
I don't have a script you can download or anything like that; that's the basic approach, and I was just pleased as punch to be able to do this. If you ever come across a place where you have this need, you can use it, except of course it may not be exactly what you need in all cases, because this is probabilistic sampling. If you really want to pick every tenth packet or every 10,000th packet, this isn't going to give you what you want, because it's random-ish. I say random-ish because it's not cryptographically sound. It seems to be, based on my non-scientific analysis, a pretty good approach, but it also depends on the contents of the packet, so I wouldn't advise putting this in your data pipelines, because it means an attacker who knows you are doing this can influence the results and either hide traffic or fill up your logs or buffers and things like that.
So don't do this for serious things, but if you are doing ad hoc measurement, a quick review of traffic and things like that, it's a really useful technique. And that's it.
(APPLAUSE.).
KEVIN MAYNELL: Thank you very much. So we actually do have quite a bit of time for questions as it turns out. I believe there's one -- there's nobody in the queue? Warren is over there, OK. Warren?
AUDIENCE SPEAKER: I am assuming I just missed it somewhere but doesn't this not work if there are IPv6 extension headers?
SHANE KERR: So don't use IPv6 extension headers, I think is the... yeah.
JIM REID: I like it. One thing I would say is: have you done any kind of analysis of the results you get from this approach by, say, capturing the whole packet stream and comparing it against what your sampling produced, to see if the two methods end up producing similar results? Have you found a way of trying to prove your sampling technique is producing the same results?
SHANE KERR: I haven't done any kind of confirmation or checking like you describe, so big scary caveats: double-check it's useful for you. But it seems like it's pretty good, and if you want to use it for more serious ad hoc checks, then you can maybe try this kind of check, and if it doesn't work, please let me know and I will stop doing it!
AUDIENCE SPEAKER: Cool idea. Second part: if you are looking for something more than getting a feeling, if you are looking for errors, please include ICMP.
SHANE KERR: Ah, that's ‑‑ I actually don't know if you can do this with ICMP, I don't think ‑‑ is there any kind of checksums?
AUDIENCE SPEAKER: All ICMP, if you are looking for errors, all of ICMP.
SHANE KERR: I see, OK. OK. Yes.
KEVIN MAYNELL: All right. No more, no one else in the queue so again, thank you very much, Shane. (APPLAUSE.).
Right. That brings us to our final lightning talk and the final talk of this session. It is going to be given by Maynard Koch, a research associate at the Distributed Network Systems Group at the Technical University of Dresden. His research is focused on DNS and scalable IPv6 scanning, and this lightning talk will cover how routing loops are becoming an increasing problem and how they can be mitigated.
Maynard, over to you, you can have more than ten minutes if you want, but only a bit more.
MAYNARD KOCH: Thank you for the introduction. Yes, I am Maynard from TU Dresden, and today I am going to talk about amplification through routing loops in IPv6.
OK. So imagine you send a single ping request to an arbitrary IPv6 destination. What would you expect? You would expect to get at most one reply. But now imagine that you send this request and you receive about 250,000 TTL exceeded messages, and that's exactly what we observed. So a single ICMP echo request can trigger more than 250,000 TTL exceeded messages from the same router. And don't worry, we know how to increase this even further.
So why is that? It comes from the interplay of two issues. Both issues have been known and discussed for years, but with increasing IPv6 deployment they become more and more prevalent.
So the first one is routing loops, which are basically a misconfiguration, and I will come to that in a few seconds. And the second one is TTL exceeded amplification, which is a software bug that leads to duplication of the ICMP request: the packet gets looped, it gets amplified or duplicated, and there you have a perfect example of amplification.
To start, what is a routing loop? Let's consider the following example. We have two routers; for simplicity, let's say R1 is the provider and the provider assigns a less specific, let's say covering, /32 prefix to one of its customers, which we call R2. But R2 does not use the complete address space; R2 just uses a tiny fraction of it and only assigns two more specific downstream routes. For the remaining part of the address space, it just has a default route configured, which routes the packet back to R1.
So that's where you can maybe see the routing loop, where it emerges. Let's consider the case where you send an ICMP echo request to a destination that lies within the /32 covering prefix but does not belong to the more specific announcements of router R2.
So the packet starts looping, because it gets routed to R1; R1 has a route configured for this prefix, so it sends it back to R2; R2 sends it back to R1, and the packet keeps looping.
And those deployments easily occur when providers assign provider-aggregatable address space to customers, and this is far more likely in IPv6 than in IPv4. What finally happens is that the routing loop ends when the TTL is exceeded: the TTL reaches zero and an ICMP time exceeded message is triggered, which gets sent back to the victim.
Sorry, not the victim but the client, though it could be a potential target.
But in our case, it's not a single request: one of the routers starts to duplicate the ICMP echo request as it gets routed back, and in the next loop iteration all the incoming, already duplicated echo requests get duplicated again. So you create a huge load of traffic between these routers, and you also create a huge number of TTL exceeded messages, because each individual echo request triggers a TTL exceeded message. Amplification at its best, and we got that confirmed by Juniper.
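As a rough illustration of how one might observe this from the outside, here is a minimal Scapy sketch; the target address is a documentation-prefix placeholder and the timeout is arbitrary, so this is not the authors' measurement code:

    import time
    from scapy.all import IPv6, ICMPv6EchoRequest, ICMPv6TimeExceeded, AsyncSniffer, send

    TARGET = "2001:db8:ffff::1"  # placeholder; a looping destination would sit in unused covered space
    HOP_LIMIT = 64               # a modest hop limit, in line with the scanning advice below

    def probe_for_loop(target=TARGET, wait=5.0):
        # Send one ICMPv6 echo request and count Time Exceeded replies (needs root).
        # One reply suggests a routing loop on the path; many replies suggest the
        # looping packet is being duplicated, i.e. amplification.
        sniffer = AsyncSniffer(filter="icmp6",
                               lfilter=lambda p: p.haslayer(ICMPv6TimeExceeded))
        sniffer.start()
        time.sleep(0.2)  # give the sniffer a moment to start
        send(IPv6(dst=target, hlim=HOP_LIMIT) / ICMPv6EchoRequest(), verbose=False)
        time.sleep(wait)
        sniffer.stop()
        return len(sniffer.results)

    if __name__ == "__main__":
        print(probe_for_loop())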
Yes, how can you prevent that? It's an easy fix: you can just set another route on the customer router so that traffic for the unused address space gets blackholed. This prevents the loop from emerging, and therefore we have no amplification.
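To make the loop and the fix concrete, here is a small self-contained simulation using Python's ipaddress module; the prefixes come from the documentation range and the tables simply mirror the R1/R2 example above:

    import ipaddress

    def lookup(table, dst):
        # Longest-prefix match over (prefix, next_hop) entries; None if nothing matches.
        addr = ipaddress.ip_address(dst)
        matches = [(net, nh) for net, nh in table if addr in net]
        return max(matches, key=lambda entry: entry[0].prefixlen)[1] if matches else None

    net = ipaddress.ip_network
    R1 = [(net("2001:db8::/32"), "R2")]                  # provider: whole covering /32 towards the customer
    R2 = [(net("2001:db8:1::/48"), "customer-lan"),      # customer: two used subnets ...
          (net("2001:db8:2::/48"), "customer-lan"),
          (net("::/0"), "R1")]                           # ... and a default route back to R1
    R2_FIXED = R2 + [(net("2001:db8::/32"), "discard")]  # null route for the unused space

    def path(dst, customer_table, hop_limit=8):
        # Bounce the packet between R1 and the customer router until it exits or the hop budget is spent.
        where, steps = "R1", []
        for _ in range(hop_limit):
            nh = lookup(R1 if where == "R1" else customer_table, dst)
            steps.append(f"{where}->{nh}")
            if nh not in ("R1", "R2"):
                break
            where = nh
        return steps

    print(path("2001:db8:ffff::1", R2))        # R1->R2, R2->R1, ... : a routing loop
    print(path("2001:db8:ffff::1", R2_FIXED))  # R1->R2, R2->discard : the loop is prevented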
To better understand these deployments in IPv6, we conducted some measurements in the wild. What we found in November 2024 were 141 million looping /48 subnets, which means that if you send a packet to a destination within such a /48 subnet, it triggers a routing loop.
This number increased by up to 15% by April 2025, and because of time constraints I cannot go into the details now, but feel free to join the working group session on Thursday morning, where I can go into a bit more detail on the underlying results.
Of these routing loops, we observe amplification for 7.4 million subnets in 2024 and in 2025, this also increased up to 10 million.
So to conclude: loops are bad and amplification is worse. IPv6 deployment makes routing loops far more likely than in IPv4, some IPv6 router implementations do duplicate these looping packets, and we can expect an increasing threat potential because of increasing IPv6 deployment.
OK. So if you operate an IPv6 network and you also use default routes, please also install null routes.
If you do IPv6 scanning, like we do, please exclude networks that lead to routing loops. We don't want to have a high impact on network load because of our scans, so if you need any data, come to us, we can provide it.
Last but not least, do not use unnecessarily high IP TTL values when scanning. For example, a value of 64 should be sufficient, and I would also argue that lower values are sufficient.
Yeah. So that's it, thank you for listening. Happy to answer any questions.
(APPLAUSE.)
All right, thank you. Yes, a question there.
AUDIENCE SPEAKER: Thank you for your work. My name is Antonio Prado, Fiber Telecom. I received your email, OK? You said "we can provide full access to all data", OK; you also wrote "we would be grateful for any feedback, can you confirm our observations?" OK, I wrote back to you, and then you replied "thank you very much for reaching back to me, in case you already fixed the observation", etc., "we received more emails than we can answer individually". So will we receive any insights? Any detail? Just asking.
MAYNARD KOCH: Yeah, so there will be one more slide in the working group session about this mail campaign. We created a lot of noise, so sorry for that. I think the second mail should contain data, at least for some examples, so you can confirm, but we can check. I can send you the full data for your AS if you want, that's not a problem -- but it's hard to just send a thousand mails with loads of data. I mean, we can do it, but to be honest, we didn't expect that mail campaign to create that much noise. So yeah, thanks for your comment.
OK. A question there?
AUDIENCE SPEAKER: Just wondering out of all of the ISPs that are, you know, officially supporting MANRS, have you noticed any of them actually experiencing this issue? Because theoretically, it should stop it, right?
MAYNARD KOCH: Can you repeat it.
AUDIENCE SPEAKER: Are there any ISPs that claim to be fully compliant with MANRS that are actually impacted by this? Because MANRS-like filtering should be stopping it.
MAYNARD KOCH: Yes, it should. I mean but...
AUDIENCE SPEAKER: Were there any that you noticed or did you not look into whether or not people were claiming to be MANRS compliant?
MAYNARD KOCH: I did not look into that yet, but that's a good point.
AUDIENCE SPEAKER: I just have a question about this software bug: is it more like a general bug or is it specific to a particular vendor or something?
MAYNARD KOCH: We found that the high amplification rates of more than 200,000 replies for a single request come from Juniper routers. They asked their chipset provider, and they just replied that it looks like it's something with their chipset. They didn't go into much detail, but they said they won't fix it, which is bad, but yeah.
AUDIENCE SPEAKER: It seems to be like hardware related. OK. Interesting. Thank you very much.
KEVIN MAYNELL: All right, there's no one else in the queue, I don't think. So again, thank you very much, Maynard.
(APPLAUSE.)
And that brings us to the end of the session. I'd like to remind everybody to rate the talks as ever. The speakers were incredibly efficient with time, so I think you have got about ten minutes extra for coffee, so please go and enjoy. Thank you.