RIPE 90

Daily Archives

RIPE90.
Friday 16th May 2025.
Main room.
9am





Plenary.



BABAK FARROKHI: Good morning, hello and welcome to the final day of the RIPE meeting in Lisbon. This is the first plenary session of the morning; we will have three lightning talks first and then one talk afterwards before the coffee break.

I hope everyone is well rested and caffeinated this morning, and with that I would like to invite Christopher Dziomba, Cloud architect from Deutsche Telekom, to talk about BGP EVPN to the Kubernetes host.


CHRISTOPHER DZIOMBA: Good morning. Over the last couple of years we have used BGP EVPN down to the Kubernetes host, and I will be telling you today what we have done and how we have done it.

So first of all, how do we use Kubernetes at Deutsche Telekom and specifically in my team? We have a platform that we call T-CaaS, formerly known as Das Schiff, and it's a Kubernetes platform for Cloud native or containerised network functions, such as the 5G standalone core, telephony services such as IMS, and some OSS systems around it, basically.

We have built that with vanilla Kubernetes and a lot of open source components. We focus mostly on bare metal, we do a few VMs, but really it's a bare metal platform, so we have to worry about bare metal networking in the end, and it's becoming the common platform for all of the DT national companies here in Europe.

So why am I speaking about Kubernetes at RIPE, and specifically here? Well, we all love BGP, I think it's a great protocol, and it's also a great protocol in the data centre. Not only do I think that: there is that RFC, written by Facebook and Arista as far as I recall, and BGP is a well known protocol in the data centre sphere. This work was also inspired by a presentation from Attilla de Groot back in 2018, who did something similar with BGP EVPN to the host. So why are we doing all of that, why are we doing BGP with EVPN and Kubernetes together? Well, Kubernetes pods are essentially layer 3 endpoints.

So they have an IP address and they can be fully routed; no one cares about layer 2 any more, well, hopefully. Some of the CNFs we run do still require layer 2 because of the vendors, it's just a requirement for some weird protocol on top.

And the thing is, we said hey, EVPN can actually solve both problems here: it has type 2 MAC/IP advertisements and type 5 IP prefix routes, and it's also well known for virtual machines; for example Proxmox has shipped EVPN as an SDN solution since 2019, as I checked yesterday.

So we are looking at Kubernetes pods now. I have two pods here and a layer 3 VRF; we assign one cluster VRF to each cluster and this one is used to establish connectivity between the pods but also between the nodes. So what happens if we now need to communicate with some outside system or some other VRF? We actually do local route leaking on the nodes themselves, so we have some kind of data centre gateway or border leaf that exports routes from our backbone, and we can locally leak those routes on the host.

But we can not only do local route leaking on the host to establish connectivity to the outside world, we can also do a layer 2 VNI with an Anycast gateway to build on-the-fly layer 2 networks on top of the hosts without reconfiguring anything on the network fabric.

So, how do we do that?

So this is kind of a new design, our old design looked a bit different. On the left side we have a kind of standard non-Kubernetes installation; the CNI we use is really, really standard here. But the thing is, we have this router container, sometimes called from the outside a containerised routing agent, also installed and sitting on the nodes. We either supply it directly from the provisioning of that node, so it's there from the beginning and it can do all the connectivity of the node, we don't need any other connectivity any more, or as another option, it can be installed later on top of that Kubernetes cluster as well, to build EVPN services on an already existing Kubernetes cluster.

It takes over one of the network cards from the host and it peers with our EVPN-ready data centre fabric here.

One thing to mention is that of course this containerised router needs to be administered, so there is an API to do that from the local host; it's not like there is a central orchestration system, everything is inside the Kubernetes cluster. One caveat that exists: when a pod needs SR-IOV connectivity, we unfortunately still need to rely on layer 2 through to the fabric, as currently there are no SmartNICs or DPUs in place. So how does it look from an automation perspective?

We have a solution called the network operator which relies heavily on Kubernetes custom resources, which is how you define resources inside Kubernetes, and you have the option to configure layer 2 network configurations but also layer 3 ones, so we cover layer 2 VNIs and layer 3 VRFs. We actually roll it out node by node and if it fails we roll back. And we think that we want to go to a pluggable architecture here, because if you remember the last slide, you have this containerised router and theoretically it can be swapped for another system if you integrate its APIs, which makes it pluggable. We use FRRouting for that and we are very happy with it. Everything here is open source, you can look at the link down below and try it out yourself.
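To give an idea of what driving such an operator through custom resources can look like, here is a minimal sketch using the Python Kubernetes client. The group, version, kind and field names are hypothetical placeholders, not the actual schema of the Deutsche Telekom network operator, so treat it as an illustration of the pattern rather than a working configuration.

# Minimal sketch: creating a hypothetical layer 3 VNI custom resource via the
# Kubernetes API. Group/version/kind and field names are illustrative only;
# consult the network operator project itself for the real CRD schema.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

l3vni = {
    "apiVersion": "example.networking/v1alpha1",   # hypothetical group/version
    "kind": "Layer3VNI",                           # hypothetical kind
    "metadata": {"name": "cluster-vrf-demo"},
    "spec": {
        "vrf": "customer-a",                 # VRF to instantiate on each node
        "vni": 100123,                       # EVPN layer 3 VNI
        "importRouteTargets": ["65000:123"],
        "exportRouteTargets": ["65000:123"],
    },
}

# The operator would watch these objects and roll the configuration out node
# by node, rolling back if a node fails to apply it.
api.create_cluster_custom_object(
    group="example.networking", version="v1alpha1",
    plural="layer3vnis", body=l3vni,
)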

However, looking at future work: the network operator is not completely finalised yet; we can deploy it internally but there is not so much documentation yet that you can easily give it a go. There is a project by Federico Paolinelli, who is the maintainer of MetalLB; he has developed OpenPERouter, which does something very similar and was inspired by this work, so if you want to try EVPN with a Kubernetes cluster, I highly recommend that one.

We also collaborate with the Sylva initiative, which is a collaboration of telcos in Europe around open source, Cloud native Kubernetes, and we bring this concept to them as well. And, because you have heard me talking about SR-IOV earlier, we are also looking into DPUs and IPUs so that we no longer rely on a network fabric that has to be reconfigured.

So with that, I am done. And thank you, I will happily take your questions or comments.

(APPLAUSE.)



BABAK FARROKHI: Thank you very much, very interesting presentation. Any questions in the room? Yes, we have one.

AUDIENCE SPEAKER: We have one online.
"The custom resource definition configuration format appears to be an excellent solution. Have you thought about automating provisioning of multiple layer 3 VPN instances at once using for example single Kubernetes CRD? Do you need CRD creation using KSAPI?

CHRISTOPHER DZIOMBA: So we don't really care if you create one layer 3 VNI or more of them, or layer 2 VNIs as well; it's just the API that's there. If you put more instances of it into the Kubernetes API, it will provision all of those instances as well. Yeah.

BABAK FARROKHI: I don't think we have time for more questions, but you can find Chris in the hallway. Thank you very much.

Thank you, Chris.

(APPLAUSE.)


BABAK FARROKHI: Before inviting the next person, I would like to remind you to rate the talks to help us do better next time, both the Programme Committee and the presenters. Talking about recent events, Emile from the RIPE NCC will now tell us how RIS saw the Iberian power outage.

EMILE ABEN: This is a talk about how RIS, the Routing Information Service, saw the power outage here. Locals and others have probably noticed it or have heard about this event: there was a great power outage here, a grid power outage. I have seen a lot of talk about what it did to traffic, how many eyeballs dropped off; you can see for instance in this graph from DE-CIX Madrid the traffic dropping here. We also see this in RIPE Atlas: here you can see the number of connected probes in Portugal and in Spain, and at 10.30 UTC, so 11.30 Portugal time and 12.30 Spain time I think, there is just a steep drop. So no end users, so traffic drops. I was really interested in what RIS, our routing information system, saw in terms of how much connectivity was there, how the internet saw Spain and Portugal.

So, first cut: did the system by which we observe routing see any effects itself? This is a graph from our route collectors, we have one at CATNIX in Barcelona, and in total we have over 1,400 peerings, and here we basically see nothing, right, it just kept performing, and I actually find that fascinating. As you will see in a later talk, it's different with the RIPE Atlas anchors, which did experience power outages, so it's more difficult to measure there. Who here knows RIS? Who here doesn't know RIS?

Who is not awake!

OK. If you don't know RIS, check it out, ris.ripe.net, and if you feed routing information to us, thank you for doing that.

First cut: did BGP become quieter or noisier, what's actually happening here? I was able to do all of these graphs because my colleague Ties de Kock is working on a better way of crunching RIS data and I am an alpha tester for that, and I hope that content will come to the next RIPE meeting.

What we see here is the total update volume per five-minute interval, and notice it's a log scale. I don't see any significant effects at the level of the whole internet, but yeah, BGP is super noisy, right, this is on the order of 10^7 updates in a five-minute interval. So what would you expect for an outage? If you have prefixes going down, you would expect less noise, but there is also link instability, so there are updates for specific prefixes and that maybe creates more noise?
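As an aside, counting RIS update messages in five-minute bins is the kind of thing you can sketch with pybgpstream; this is a rough illustration, not the internal RIS-processing prototype mentioned above, and the collector choice and time window are only examples.

# Sketch: count RIS BGP update messages per five-minute bin with pybgpstream.
# Collector and time window are illustrative; the talk's graphs come from an
# internal RIS-processing prototype, not from this exact code.
from collections import Counter
import pybgpstream

stream = pybgpstream.BGPStream(
    from_time="2025-04-28 00:00:00", until_time="2025-04-30 00:00:00",
    collectors=["rrc00"],            # e.g. the RIS multihop collector
    record_type="updates",
)

bins = Counter()
for elem in stream:                  # announcements ("A") and withdrawals ("W")
    bins[int(elem.time) // 300] += 1

for b in sorted(bins):
    print(f"{b * 300} {bins[b]}")    # bin start (unix time), update count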

So then I dug down into the per-country level of this. Oh yeah, for context, because I think 3D graphs are cool, this is the AS graph of Portugal on the left and the AS graph of Spain on the right, and you can see quite a different complexity. The bigger the node, the more important it is; the home-country ASes are in light blue here and the ASes where they link up to the rest of the internet are a different colour.

Yeah, and this basically is to show the difference in complexity between the two networks.

If you then look at the origin ASes for the messages we see in BGP, you actually see this effect, which I find fascinating. Before the event, and this is a two-day interval, all my analyses are over two days, starting at midnight on 28 April and ending on 30 April.

You see a lot more volatility in Portugal and you see a level shift in Spain, so there are really different effects going on; I don't know what caused this but I find it fascinating. Then if I just look at the biggest blue nodes for each country, the apex ASes for a country, you see all kinds of different effects; it's like there's not a single thing happening, there are multiple things going on all at the same time.

And then I actually dug into what this looks like for individual prefixes, were they up or were they down. Each line here is an individual prefix and you can see lots of different behaviours, and these are just two examples for Spain. A lot of them just stayed up, prefixes that kept being visible from the rest of the internet; another category, I would say, is prefixes that went down right when the power outage happened, there's some of that down here for instance.

But yeah, the fascinating part is the ones that stayed up for a while and then went down, right. This all has to do with having backup power or not, and how long that backup power lasted.

And this is the same graph for Portugal, the same type of effect: lots of stuff just stayed up, so from the rest of the internet, Spain and Portugal mostly looked online.

And then I was looking at how the traditional Tier 1s actually saw this. What I did in the RIS data: you have specific BGP communities, sometimes they are stripped but let's forget that, with which some of the Tier 1s actually tag a route as learned in Spain.

And if I just plot the ones that have the highest percentage of this, you can see some fascinating different patterns for all three of them.

Let me zoom in.

For instance the top one here is Cloudflare via Cogent, and you just see that it kept working, and then some other behaviours, some short drops; for instance in the third one here the Cogent connectivity drops for a little bit and the visibility also drops for a little bit. These are all fascinating patterns. And that brings me to the conclusion. Well, there are lots of stories about the traffic level drops; in RIS we actually see that a lot of the interconnect infrastructure is more resilient than the traffic levels alone would suggest. So it brings me to a sort of philosophical thing: what is the internet? In as far as it's the interconnected network of networks, that kept functioning for the most part. I hear horror stories here about data centre generators blowing up, but I am actually fascinated by the part of this that kept working so well. Of course the internet needs power, and when your grid power goes off you have an issue, but this also needs people, this needs capacity, this needs people who plan ahead. Those six hours where a prefix kept up and then went down: there's a lot of planning to actually make that happen, and especially for the prefixes that kept up, for the part of the internet that just kept working, there was a lot of planning on how to deal with these types of situations, and probably a lot of scrambling in times of emergency, and people who kept their heads cool and acted.

And what I would love to see is that we capture the stories behind these successes and failures, because this is just a data view of it; it is so much stronger if it also has the stories behind it, what worked and what failed, so we can learn from it.

And yeah, it's actually great that we have the next talk about exactly that, the actual stories of what happened on the ground. 18 seconds left. So that's 14 seconds for questions!

(APPLAUSE.)

BABAK FARROKHI: Thank you very much.

I have a couple of questions for you but I don't think I can squeeze them into 14 seconds, I will catch you later in the hallways.

Thank you. Next speaker.

Still talking about the recent outage, we heard a lot about BGP prefixes disappearing and so on, but the next speaker, Amedeo from ESPANIX, is going to talk about different dimensions of the same outage; the talk is titled the Portugal and Spain blackout and its recovery. Please.

AMEDEO BECK PECCOZ: Yes, thank you, hello everybody. Sorry, how does this thing work? OK, I found out.

So I am going to give you a view from another perspective, in the sense that I am going to tell you the facts, what we have seen, not what has happened, in the sense of the reasons behind the power outage, because I am not an expert on this topic, I don't really know. I am going to tell you what we have seen and share that with you, and then I will also tell you a couple of anecdotes from that day and how we learned to change and, possibly, to improve for the future, OK.

So what happened on that day? I put 12.33, that's Spain time: 15 gigawatts apparently disappeared in five seconds. Just to give you an idea, 15 gigawatts is about 12 DeLoreans, and I hope you get the reference.

It's slightly over 12 DeLoreans, so it is a lot.

The good thing is that apparently France realised that something was not going right and so they cut the connection, because Spain is connected to France, Portugal and Spain are connected together, and Spain is also connected to Morocco. France disconnected, and that's why the blackout didn't propagate northwards.

Portugal got disconnected, possibly the problem originated in Spain but as I said, I am not an expert so I am not going to say much more on this topic.

So what went off immediately? Anything that works on electricity. The most important things are trains, trams, the Metro, lifts and traffic lights.

Now, this is not just a list: imagine all the people trapped in the underground or locked in lifts, and everything happening all of a sudden.

At the same time no payment was possible. And also telecommunication services went down, now this is completely in line with the presentation we just heard from Emile because the access networks went down but the core kept on working, somehow.

Over time, meaning in the following hours, other services started going down. The first thing that went down was the cell network: data communication lasted for two hours, and it was interesting because you could see that 5G fell first and then the other Gs went, so in the end you could only have maybe a 3G connection and eventually that went out too. Then, after approximately four hours, calls went down as well. Small data centres went down. The thing is that everybody is prepared for this kind of event for a limited amount of time; everybody thinks yes, OK, there's going to be an outage, but the outage will be short in time and localised in space. Nobody is prepared for a long duration over a wide area, country-wide, and we are going to see these effects. OK, this is the first interesting thing I will share with you: one of my colleagues was stuck in a traffic jam, of course, going from one data centre to another, and he was listening to a local radio station. The speaker suddenly said, OK guys, sorry, I got a message from our technicians: we have approximately two minutes of power left to broadcast. And two minutes after that, they cut the conversation, so it was real, OK.

So one of the key factors was logistics: the gas stations were out of service, there were huge traffic jams because the traffic lights went off, so it was really impossible to refill any tanker truck, even if you have a contract. And this is something we experienced ourselves, because ESPANIX also has a data centre and of course we have a very good contract with a fuel supplier, and they are bound to provide us fuel whenever there's a shortage.

OK. We have an autonomy of around one week, so it was nothing urgent, but still we placed a call and said, hey, can you bring us some fuel? And they said no, no way. Yes, but we have a contract with you. It doesn't matter, it is physically impossible for us to bring you the fuel, because the traffic jam is everywhere and there is no electricity to fill up the truck. Normally we would have taken fuel from a nearby region where there still is power, even if it's, I don't know, a one-hour drive: the truck drives for one hour and in one hour you have the fuel. But if the outage is country-wide, we cannot have a truck coming in from, I don't know, France.

So power generators eventually went empty, at least those who were not big enough, OK.

At the same time some other services kept on working: we have airports and hospitals, national TV and radio. Now the thing is that they were broadcasting but not many people were actually able to receive it; we found out that everybody still has a radio in their car. The water supply kept on working, the network core, as I said, was still functioning, and the main data centres, yes, unfortunately with one notable exception, as far as I know kept on working.

OK. Now, the situation in the afternoon was the worst one because we had no phone: mobile and also landline went off. No public or private transportation, traffic jams, no fuel for generators; that's when creativity kicks in. And so these are the interesting parts, OK.

In a small data centre in Madrid, I won't reveal the name, they have a single generator, but they pulled fuel out of employees' cars; of course they asked the employees first!

And they said: OK guys, you are not going anywhere in any case anyway, you know. And the power came back when they had approximately 20 minutes of power left. They were lucky.

A public water company in Spain, I will also not share the name, had a tanker truck stuck in traffic, so they called the police and said: hey, we are the water company, can you send a police escort to rescue one of our trucks? No. Why not? Well, because we are the police, we have other things to do. OK, but then we are going to have to cut the water to the whole population. OK, then we'll do it. And they did it!

OK. Yes, satellite phones; everybody says, why don't you use satellite phones? Actually we do, and that's funny, we have one satellite phone which can be used for emergencies. Completely useless! Because you cannot call anyone except another satellite phone, so either you provide a satellite phone to every person that you need to communicate with, or it is useless, OK.

With no phone coverage, this was really creative: one of our clients, one of our members, had two guys in two different data centres and they managed to communicate over the router management port, the VRF of the management port. Creativity, you know.

OK. The last three slides, and then I am almost done, show the traffic. This is absolutely in line again with what Emile said: the first time the line goes down is when we had the power cut. And then it kept going up and down; the power was restored at about the middle of the picture, which is around midnight. The traffic still kept going down a little bit and then actually recovered the following morning.

One of the ESPANIX PoPs went to zero; this is another data centre and they had zero, you can see the zero, so when the graph goes down it is really zero, OK.

And then the ESPANIX data centre, where we remained up. So recovery took approximately 18 hours. Some got it back earlier; Portugal recovered autonomously, and France and Morocco helped Spain, thank you very much. All the politicians said it wasn't their fault and that they did a great job; no guys, you did nothing, OK. What have we learned? Changes in our contingency plan: the key people of the company have to go to our data centre in case of power failure, and in case of power failure we communicate via our customer portal, of course if we can access the customer portal. And that is it. Thank you.

(APPLAUSE.)

BABAK FARROKHI: Thank you very much, fascinating presentation. Unfortunately we don't have time for questions again, but we know where to find you, thank you very much.

(APPLAUSE.)

Now that we are talking about outages, let's talk about outages in the sea: Emile again, thank you very much, and his colleague Joaquin are going to talk about submarine cable outages.

EMILE ABEN: Thank you. Yeah, Emile again, my apologies, but this time I am joined by a fresh face, my talented student Joaquin, and we are going to tell you about the collaborative work we did on how the internet routed around, in this case, cable damage in the Baltics.

Yes. First, we have already published two RIPE Labs articles on this, so if you want more depth, that's the place, and it could also be a great place to collect the stories about all of these outages. So, the Baltic Sea: with all of the things going on on the internet you might have forgotten, but between November and January there were a couple of Baltic Sea cables that were damaged. This is a timeline of the things that we analysed, and as you can see, we focused on these three cables. One thing you can notice here is that when cables have an outage, and in this case there were probably a lot of repair ships in the neighbourhood, they got repaired quite quickly.

So, a lot of media coverage on this, lots of speculation, and if you were in the Cooperation Working Group yesterday, you also saw a nice presentation by a retired Navy captain with more depth on the vulnerability of these cables, and also speculation on whether this was sabotage or not.

And on the other hand, I also hear stories, mostly from the cable industry, saying yeah, cable damage happens all the time; we plan for this, we know this happens, we have repair ships. And maybe to disappoint you a little, this talk will not have any new information about whether it was one or the other. What we actually do is use neutral data to see what the effects of these types of events are. So we use RIPE Atlas, surprise!

And I am guessing there must be at least 100 RIPE Atlas probe hosts in this room, but for those who don't know it, it's a very big measurement network, over 13,000 nodes, and you have the global picture here of where we are deployed now. Out of these, over 800 are anchors, and these are the bigger measurement devices and they are super interesting because they measure in a mesh: any time a new one comes online, all the others start measuring towards it, so we basically have a good global mesh of all kinds of latency and traceroute data about the internet.

So it's a really rich dataset and we dug deeper into it specifically. How we did this: for these events we take the anchors in one country and in another country, which are typically the two sides of a cable, and just see what the latency data, the ping data, tells us around the time of these events. It could be that, for instance, if you look at Finland to Germany, that route goes via Sweden, this way, so not necessarily via the cable. And another thing to notice: the orange dots here are the locations of the anchors and the table here has the numbers. It's really nice to have a larger number of anchors, because if the number is too low your data becomes a bit more anecdotal; Lithuania had five, which is borderline for what would be nice to have for these types of analyses.

So yeah, for the first two events we created some visualisations, and I am handing over to Joaquin who will tell you how the visualisations work. Yes.

JOAQUIN VAQUERO ORTIZ: Thank you for the introduction. Now we'll go over the first event, which is the BCS East-West cable outage. Before I start, I do want to explain what these plots mean, as they have got a lot of data, and graphs in general, so I want to go over them. First of all, keep in mind that on the X axis we have time, I don't know if you can hear me as well, and on the Y axis paths between anchor pairs, and the darker the colours in the graph, the higher the round-trip-time increase over the baseline. So yeah, we just normalised across paths.

So now that we know what this means, we can see where the event happened: there's a clear shift in latency across around 20% of the paths. At the top of the plot, in orange, we can see the average latency increase at each time step, combined with a plot of the percentage packet loss, and if you look at this we can see a clear step, or increase, in latencies overall.

It is important to point out that we do not see any significant increase in packet loss in this event. I want to zoom in a bit more on the graph to visualise what it means: here we zoom into a line and we can see that the lighter colours are basically no increase over the baseline, or very little increase, and then if we move on we can see a clear jump where it goes to a bit of a darker colour. Hopefully that paints a clearer picture of what it means.
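The per-path normalisation behind these heat maps can be sketched roughly as follows with pandas, assuming you already have a table of (timestamp, path, RTT) samples from the anchor-mesh pings; the input format, file name and event time are assumptions for illustration, not the actual plotting code behind the figures.

# Rough sketch of the per-path normalisation behind the heat maps: for each
# anchor-pair path, take a pre-event baseline RTT and express every sample as
# an increase over that baseline. The input format is assumed, not the real one.
import pandas as pd

# Expected columns: "timestamp" (datetime), "path" (src->dst anchor pair), "rtt" (ms)
df = pd.read_csv("mesh_pings.csv", parse_dates=["timestamp"])  # hypothetical file

event_time = pd.Timestamp("2024-11-17 08:00:00")   # illustrative event time
baseline = (
    df[df["timestamp"] < event_time]
    .groupby("path")["rtt"]
    .median()
    .rename("baseline")
)

df = df.join(baseline, on="path")
df["increase_ms"] = df["rtt"] - df["baseline"]

# Pivot into the heat-map shape: one row per path, one column per time bin.
heat = df.pivot_table(
    index="path",
    columns=pd.Grouper(key="timestamp", freq="15min"),
    values="increase_ms",
    aggfunc="median",
)
print(heat.head())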

So we'll move on.

So this is the next event we are going to talk about, which is the C-Lion1 cable outage. In this event we can also see a very clear jump in latencies across paths; it happened at approximately 2am UTC on the 18th November, approximately one day after the previous event. If we look up top here, we can also see a small step in latency, not a giant one, actually pretty small, but again we do not observe any increase in packet loss either, which is surprising.

And if you wonder what this is, it's probably some congestion over here, but we don't have any clear evidence for that, it's just speculation at this point.

So moving on.

So this cable was repaired on the 20th November at around 7.30 UTC, when the cable repair ship announced a successful repair. Here your eyes may be drawn to the large spike over here in latency and loss: you can see there's around 20% packet loss, which is quite a lot. We don't actually know what it means; if there are any optics or undersea cable experts in the crowd, we would be happy to have a chat with you, maybe you could clear up some doubts or explain what it means.

But what I actually find fascinating about the graph is that we can see parts of the mesh slowly going back to baseline latency, probably being rerouted through the repaired cable. We don't really know this for sure, but eventually all the paths go back to their baseline latency. So yeah.

Moving on. So summing up, we did see a small but visible increase in latency across around 20 to 30% of the paths for both events, but we did not see any noticeable increase in packet loss. So in conclusion the internet routed around the damage, a clear testament to the resiliency and adaptability in this part of the world, but as we will see, that's not always the case; resilience is not always guaranteed.

Also, moving briefly to what we talked about before: we have been trying to adapt our measurement techniques to other types of events. We have been doing cable outages but now we wanted to look at power outages as well. Here you can see the data from the Spain and Portugal power outage; we created a mesh combining Spain and Portugal as if they were one entity, which gives us a better view of the event. We can see that there's not really much going on for a couple of hours after the event, which suggests the use of generators or backup power supplies, and after that we see a lot of anchor loss, which is this whole grey blob. This brings me to the point that measuring and analysing these types of events with the anchor mesh is not really feasible, simply because the infrastructure we use to measure the event is taken down by the event itself. So it's not really ideal, but Emile will go a bit further into this now. Yeah, I will pass on to Emile for the deeper dive.


EMILE ABEN: Yes, thank you. We did a second RIPE Labs article. Everything up to this point was based on latency data, ping data, but we also have traceroute data, so we have the IP addresses in the path between our anchor nodes, and we can actually do a deeper dive into how things were rerouted and what could explain some of the resilience. We divided this up into basically three different types of rerouting. The first is inter-domain rerouting, your classical BGP: a route goes down, you have a backup path and stuff gets rerouted, and you can see this if you compare the sets of IP addresses before and after the event, do an IP-to-AS mapping, and see that the AS path between two nodes changed.

The second is intra-domain rerouting: you don't see different ASes but you do see different IPs, so something within an AS broke and the AS rerouted around it. And the third is circuit-level rerouting: we see a latency shift but we just see the same IP addresses in the path. Long story short, we saw all three of these. If we take a three-millisecond cut-off as constituting a significant latency shift, here you can actually see the ECDFs of all of these. More details in the Labs article if you are interested.
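A simple classifier along those lines could look like the sketch below, assuming you already have the pre- and post-event IP paths for an anchor pair, an IP-to-AS lookup and the median RTT shift; the three-millisecond threshold is the cut-off mentioned above, and the function and parameter names are just illustrative.

# Sketch of the three-way rerouting classification described above, assuming the
# pre/post IP paths, an IP-to-AS lookup and the median RTT shift are given.
from typing import Callable, List

def classify_rerouting(
    ips_before: List[str],
    ips_after: List[str],
    ip_to_as: Callable[[str], int],
    rtt_shift_ms: float,
    threshold_ms: float = 3.0,
) -> str:
    as_before = {ip_to_as(ip) for ip in ips_before}
    as_after = {ip_to_as(ip) for ip in ips_after}

    if as_before != as_after:
        return "inter-domain"        # the AS-level path changed (classic BGP rerouting)
    if set(ips_before) != set(ips_after):
        return "intra-domain"        # same ASes, but different router hops inside them
    if rtt_shift_ms >= threshold_ms:
        return "circuit-level"       # same IP path, yet the latency shifted anyway
    return "no significant change"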

And another thing we did was look at other events. Now that we basically have the methodology, and RIPE Atlas data goes back years, there was a really interesting event of a submarine landslide that damaged a lot of cables off the coast of Africa. We have enough anchors in South Africa and we have anchors in the UK, and they are probably routed through these cables, so we can actually see what happened. This is a rather messy picture; the line gives the time of the event and, as you can see here, if you look at the axis there was significant packet loss, like 40 to 60%, so this is a case where we see a latency shift that is accompanied by packet loss: resilience is not guaranteed.

In conclusion, the internet routed around the damage; there are three levels of rerouting and we see all of them happening, at least in the case of the Baltic Sea cable outages. To get a better picture of this we would like to keep monitoring, measuring and understanding it better, and for that we are really happy with the RIPE Atlas coverage, but in some locations we would love to have more. So we are now looking at this on a country level: we would like to have a certain baseline of deployment, especially for countries that have cable landings. We created an online tool for this; it has the number of landing points per country, that's this column and this column, and then, as a straw-man minimum deployment, five anchors in three different locations and three different ASNs; if a country has that, it lights up green, and we have a lot of green countries in this part of our service region. I have to point out Portugal: we have anchors in two cities, not three, so it would be great if we could up that, or figure out that it doesn't make sense because everything in Portugal happens in Lisbon and Porto, maybe. That part of the Europe part of our region is quite well covered; the Middle East part is a bit less so. So for those who are able to deploy RIPE Atlas anchors, we have Michele sitting here who is in charge of that, and she would be happy to talk with you about deploying new anchors, specifically in the countries where we don't have this diversity yet, because we'd love more diversity in this. And that brings me to the questions. So, thank you.
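The straw-man coverage rule could be written down as a simple check like the one below; the anchor metadata fields and the example values are assumptions for illustration, and the actual tool may compute it differently.

# Straw-man coverage rule from the talk: a country "lights up green" with at
# least five anchors, spread over at least three cities and three ASNs.
# The anchor dictionary fields and example values are assumed for illustration.
from typing import Dict, List

def country_has_baseline_coverage(anchors: List[Dict]) -> bool:
    cities = {a["city"] for a in anchors}
    asns = {a["asn"] for a in anchors}
    return len(anchors) >= 5 and len(cities) >= 3 and len(asns) >= 3

# Example: a Portugal-like case with anchors in only two cities (ASNs made up).
pt_anchors = [
    {"city": "Lisbon", "asn": 1111},
    {"city": "Lisbon", "asn": 2222},
    {"city": "Porto", "asn": 3333},
    {"city": "Lisbon", "asn": 4444},
    {"city": "Porto", "asn": 5555},
]
print(country_has_baseline_coverage(pt_anchors))  # False: only two distinct cities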

(APPLAUSE.)



BABAK FARROKHI: Thank you very much for another interesting and useful presentation, and I see we have questions in the room.

AUDIENCE SPEAKER: Will, I am sorry this is going to be a comment, I feel... (Inaudible)

All right, other questions from the audience in the room or maybe online? That gives us a few extra minutes for a coffee break. Thank you very much for attending this session.



SPEAKER: Just a quick reminder we'll have the GM election results announced during the coffee break, if you would like to grab a coffee and come back for that?



BABAK FARROKHI: And also the PC election results.



Coffee break.