Thursday, 9 a.m.
RIPE 90
IPv6 Working Group
Main room, 15 May 2025
RAYMOND JETTEN: Good morning everybody. Nice to see the whiskey BoF is over; I hope you all had some sleep. I did, but not too much! Welcome to the IPv6 Working Group this morning. We have four presentations. First, however, a word about etiquette: if you come to the microphone and have something to say, please state your name and your affiliation, and please remember we have a Code of Conduct as well.
The approval of the RIPE 89 minutes. They have been on the website for I don't know how long. Long enough. Thank you to the RIPE NCC for making them. We haven't seen any comments, which is usual; I don't know if anybody actually reads them, but that's your choice. Does anyone have anything to say about them? No?
Then they are approved. Thank you for that.
Then first talk now.
MAYNARD KOCH: Good morning everyone. Welcome to my talk about amplification through routing loops in IPv6. This is the extended version of the lightning talk I already presented on Tuesday, so if you listened to that, the next few slides might look familiar to you, but I do have some more slides and insights I want to share.
So, let's start.
Imagine you sent a single ping request to an arbitrary IPv6 destination address. What would you expect? You would expect to receive at most one reply for your single ping request. But now imagine that, instead of receiving one reply, you receive hundreds of thousands of ICMP Time Exceeded messages. And that's exactly what we observed: a single ICMP echo request triggers more than 250,000 replies from the same router, and don't worry, we also know how to increase this even further. Why is that? It is the interplay of two issues, both of which have been known and discussed for a few years. The first one is routing loops, which lead to unnecessary traffic between the looping routers, and the second one is a software bug causing TTL exceeded amplification: packets get duplicated, causing even more network load between the routers and also a huge load at the client that sent the request.
So, what exactly is a routing loop? Imagine you have two routers, R1 and R2. The first one, R1, is the provider and assigns a less specific covering prefix, let's say a /32, to its customer R2. R2, however, only uses a tiny fraction of the address space, so it only configures routes for a few more specific /48s, and for all remaining destinations it just has a default route. So each packet that does not match one of the configured prefixes is simply routed back to R1 via the default route.
And that's exactly where the routing loop emerges. Imagine you want to check whether a host in one of the unused subnets is up, so you send a ping request. R2 does not know where that host is, so it just uses the default route to R1, but R1 has the less specific covering route pointing back to R2, so the packet starts looping. And that's actually pretty common in IPv6, because you have this provider aggregatable address space, you get large chunks of address space, and if your customers don't care that much, or maybe do not know how to properly configure the routes, that's where the routing loops emerge.
So, in the end, an ICMP Time Exceeded message gets triggered when the hop limit of the packet reaches zero, and it is sent back to the original source of the request. But in our case, it's not a single reply but 250,000 or even more, and that's because of a router bug that duplicates the incoming ICMP echo request: it sends back two ICMP echo requests, and in the next loop iteration these two arrive at the router and are doubled again, because each individual echo request is duplicated. That's an exponential increase. On the one side, the exponential increase is between the routers, so you start to create huge loads of traffic between them; but finally, each individual echo request triggers an individual ICMP Time Exceeded reply, and these are all sent back to the original source, which is also a nice target for attackers who want to flood a victim with ICMP Time Exceeded messages.
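The exponential growth described here can be sketched with a toy model. This is a hedged illustration only: the two-hop loop length, the duplication threshold `dup_start`, and strict per-pass doubling are assumptions for the sake of the example, not measured router behaviour.

```python
def amplified_replies(hop_limit: int, hops_per_loop: int = 2,
                      dup_start: int = 10) -> int:
    """Toy model of the duplication bug: a packet bounces between two
    routers, and after `dup_start` loop traversals the buggy router
    starts doubling every echo request it forwards. Each copy whose
    hop limit finally expires triggers one ICMP Time Exceeded reply
    back to the original sender."""
    in_flight = 1
    traversals = hop_limit // hops_per_loop
    for i in range(traversals):
        if i >= dup_start:
            in_flight *= 2  # duplication bug: every request is doubled
    return in_flight

print(amplified_replies(4))   # loop expires before duplication kicks in -> 1
print(amplified_replies(64))  # default hop limit: millions of replies
```

Even in this crude model, a probe sent with the default hop limit of 64 yields millions of replies, while a small hop limit keeps the loop harmless, which is one reason low probe TTLs matter for scanners.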
How can you prevent that? It's actually pretty easy; there are several ways. One of the simpler ways is to require your customers to configure null routes for the address space assigned to them: if all other traffic gets blackholed for the assigned address space, no routing loop can emerge.
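As an illustration, a sketch in Junos-style syntax of what such a null route could look like on the customer router R2; the prefix is the documentation example from the earlier slide, and real deployments will differ:

```
# Discard anything in the assigned /32 that is not covered by a more
# specific route; the configured /48s still win by longest-prefix match.
set routing-options rib inet6.0 static route 2001:db8::/32 discard
```

With this in place, a packet for an unused subnet is dropped at R2 instead of being bounced back to R1 by the default route.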
To better understand the deployments, we conducted some measurements, and we found 141 million looping /48 subnets, which means that if you send a request to one of these /48 subnets, you trigger a routing loop. That increased by 15% in the second measurement, up to 162 million.
Here you see on the Y axis the number of looping /48 subnets, and on the X axis we rank the AS numbers that have routing loops. In total, we find more than 5,000 ASes affected by these routing loops, distributed over 155 countries. 18 percent of all looping subnets concentrate in only five ASes, so there is a dense spot, and 55 percent of all looping subnets concentrate in only five countries.
So, how many of these looping subnets also amplify the responses? We found that 7.4 million of the /48 subnets also amplified in November 2024, and this too increased, to 10 million in April 2025.
This plot is pretty similar, but instead of the looping subnets it now shows the amplifying subnets. We find about 1,900 ASes to be affected, distributed over 100 countries. 9% of all amplifying subnets concentrate in ten ASes, and interestingly, 82% of all amplifying subnets concentrate in only five countries, with Brazil alone accounting for 72%.
So what about the amplification factors?
You might notice most of the amplification factors are rather low: 99% are below six. But the remaining 1% go up to 10,000, 50,000, 250,000, and we found that eight routers are especially affected and reach these amplification factors of more than 250,000. They are operated by a single German ISP. So we contacted them and found out that these are Juniper Layer 3 switches of the EX product line. That's not good, and it's even worse that it won't get fixed that easily.
So, what else did we try to do? We tried to inform all of the affected network operators, and we started an e‑mail campaign, and it kind of backfired. First of all, sorry for all the noise we created. To motivate the operators to reply to our mail campaign, we kept the initial mail intentionally vague, without any specific details of affected devices. In the end we received more than 1,000, actually I think about 1,500, replies, including phone calls to my office. Very nice: if you put your affiliation in the signature, people will actually try to find you and check if it's legit. That's good, to be honest, and we definitely didn't expect that. While the majority appreciated our work, and we replied thank you very much, we also received a lot of complaints about spreading panic by not sharing specific information at first hand. The lesson here, or a promise for the future: we will include all important details in the first mail, of course. But please reply, even if you have already fixed the issue; we rely on your help.
So, the impact of the e‑mail campaign: it actually worked out. We got fewer loops; we reduced the number of routing loops by about 7.7 million, and 134 ASes now have no more routing loops, so very nice. We also see less amplification: about 54 ASes reduced the number of amplifying subnets just by setting null routes, and of those, 32 removed amplification completely.
So, great! An overflowing inbox, but still some impact. But there is still a lot to do. I mean, you saw that the number of routing loops increased, so we need to work together and fix the issue, in particular given that this is already a known issue.
So, in conclusion: loops are bad, amplification is worse, and IPv6 deployments make routing loops far more likely than in IPv4. Some IPv6 router implementations duplicate ICMP requests, creating huge loads of traffic at potential targets of such a flooding attack, and we can expect an increase in attack potential because of increasing IPv6 deployment.
So, if you operate an IPv6 network, and you use a default route, please also install null routes.
We also kindly ask providers to teach their customers how to properly set these null routes. And if you do IPv6 scanning, like we do, we always want to keep the impact and the load on the networks low, so try to exclude the networks that lead to routing loops; we can provide data for that, if that helps. And please do not use unnecessarily high IP hop limit values, because the higher the hop limit, the more amplification you get. A value of 64 should be sufficient in most cases.
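Capping the hop limit on probe packets can be done with a standard socket option; a minimal sketch (the value 8 here is an illustrative choice for the example, not the speaker's recommendation):

```python
import socket

# Cap the hop limit on an IPv6 probe socket so that a probe caught in
# a routing loop expires after a few bounces instead of circulating
# until the default hop limit (typically 64) runs out.
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_UNICAST_HOPS, 8)
print(sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_UNICAST_HOPS))  # 8
sock.close()
```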
That's all I have for you today. Thanks for listening. I am happy to answer any questions.
(Applause)
AUDIENCE SPEAKER: Jen Linkova. Thank you very much, very interesting. I am thinking about routing loops. I suspect, and I am wondering if you looked into it, that part of it might be broken CPEs which do not install a null route for delegated prefixes. CPEs are supposed to do this, but I wouldn't be surprised if they are not, especially when you see so much concentrated in specific countries; maybe they are using a specific vendor popular in that country. I am wondering if you looked at the interface IDs of responding routers to see if you see any pattern in the EUI‑64 based interface IDs. That might be why you see this more in v6 than in v4, because in v4 we don't use prefix delegation. I am wondering if you looked into that.
MAYNARD KOCH: We did not look into that but that's a good point for future measurements, so I will keep that in mind, thank you very much.
RAYMOND JETTEN: Then we have a question online from Alexander ‑‑
AUDIENCE SPEAKER: Brian Candler. So, I'd just like to understand a little bit more about exactly what circumstance causes the packets to be duplicated? Is this only ICMP echo requests that are affected and does it happen at the point when the TTL decrements from one to zero or at some other time?
MAYNARD KOCH: The duplication does not happen when the hop limit reaches zero. It also does not happen at the beginning. We are not sure when it happens; maybe some queue is overflowing or full, and then there is a duplication process. If you send these packets, you see that at first you only receive these ICMP Time Exceeded messages, and then, after let's say X iterations, they start to get duplicated. We don't know exactly the underlying issue. We got in contact with the ISP, who contacted Juniper, who in turn contacted their chip manufacturer, and they say it looks like a hardware related issue somehow, so they won't fix it.
AUDIENCE SPEAKER: Do you know if it's only ICMP or if you send a UDP packet and that was looping would that be affected too?
MAYNARD KOCH: Currently we only know it for ICMP. I did not have the time to verify whether it might also work with UDP or TCP, so we need to check that.
RAYMOND JETTEN: So, nobody at the mic any more. Alexander is asking: "Have you tried to identify the unique loops? I mean, when the loops occur along the same path, or at least tried to aggregate them somehow? Because it is obvious that multiple /48 loops could be caused by a single misconfiguration." He also notes: "From my experience, because loops cause excessive traffic between the routers, for a larger TTL the number of times the packet bounces between the misconfigured routers increases, and so does the probability of it being dropped. So when scanning with a higher TTL, the probability to receive the reply drops. Also, some routers do not emit errors, so you may not get a reply with one TTL but will get a reply with TTL plus one."
MAYNARD KOCH: Again, you get a TTL plus one, what was the last part?
RAYMOND JETTEN: That was the last part.
MAYNARD KOCH: Okay. First, yes, only about 60,000 router IPs are affected, and of course these have more than one looping /48 subnet each, so if you can fix all these router IPs, you will also remove many more looping subnets. And for the other part, yes, I think that might be an issue, but we have not looked into that in more detail currently.
RAYMOND JETTEN: Then, over to Andre.
AUDIENCE SPEAKER: Yes, hello. I am speaking with my IPv6 trainer hat on. What time unit is TTL measured in? Is it seconds, minutes, years?
MAYNARD KOCH: What do you mean?
AUDIENCE SPEAKER: It's time to live, the TTL? Basically no ‑‑
MAYNARD KOCH: In IPv6, it's the hop limit, right.
AUDIENCE SPEAKER: Exactly. That was my point, basically. The whole presentation is about v6 and you talk about TTL, which is terminology from IPv4. Calling it hop limit makes much more sense; there is no timing in this.
MAYNARD KOCH: Yes, thank you for pointing that out.
RAYMOND JETTEN: I am going to close the lines, because we have only less than a minute. Gert Döring is commenting: "I want to point out that BCP 38 validation on the ISP side... ISP and a customer just fine. So please deploy BCP 38."
MAYNARD KOCH: That's exactly right, and what we also would like to have. You can also use reverse path filtering; these are all options that would stop these routing loops from emerging, and they don't require interacting with the customers. So there are other options as well. Thanks for pointing that out.
CHRISTIAN SEITZ: Thank you. We have to switch the order of presentations because we are still waiting for Fernando to join Meetecho. So our next speaker is Ondrej Caletka who is talking about challenges of running IPv6‑only internal services.
ANDREJ CALETKA: Except that these unexpected changes are a little bit tricky for the tech support, to get the right slides on the screen. So, going once, going twice...
And here they are!
Okay, I am from the RIPE NCC, and apart from sometimes being a little bit picky about terminology, I also really like IPv6, and I would like to tell you a story today. The story is about a failed attempt, so it's one of the stories that didn't end well, but I hope we can still learn from it.
So, first of all, what are internal services? You can probably guess that something internal is something accessible only internally, only for employees of the RIPE NCC in this case. Historically it was very easy: back in the old days there was an office, the office had computers, and if somebody wanted to access internal services they just had to be in the office and use the computers there. Later, VPNs came, so you didn't have to be in the office, but you still used the VPN to access the network of the office where the services were deployed. Fast forward to now, where internal services are even hosted somewhere in the Cloud, on the Internet, and basically using a VPN doesn't make much sense anymore, but you still have to log in with some good way of proving that you really are an internal RIPE NCC employee.
How does it work in the RIPE NCC? We moved about ten years ago to the new office at the central station of Amsterdam, and in that office there is basically no privileged network anymore: if you go there and connect to the wi‑fi, all you get is Internet access; there is no special privilege you get just by being in the office. Every device of every employee has a VPN client that is always on, and this VPN client pushes some routes, it's split tunnelling, to the privileged networks of the RIPE NCC, so we can access the internal services.
We use a split tunnel on purpose, because of the office in Dubai and some remote workers; being in Dubai you would not be very happy to have everything going to Amsterdam and back.
Also, of course, the VPN supports IPv4 and IPv6, both for the tunnel itself and inside the tunnel, so it's very safe to assume that every single employee has IPv6 routes towards the private resources.
As I said, we are also somehow interacting with the Cloud, and for that we have this ongoing migration from on‑site LDAP‑based login to something more secure, a single sign‑on solution, because an SSO assertion is much more secure than typing the same password into different services, especially if you use it with the Cloud; that would be a complete disaster, to have such a login in the Cloud.
Also, one more thing. If a service is running in the Cloud, it's of course completely useless to use IP‑based authentication to figure out whether you are coming from a specific network, because the service is somewhere in the Cloud, there is the public Internet in front of it, and IP addresses are not encrypted or authenticated, so it doesn't make any sense.
So, seeing this situation, it's sort of logical that if we are going to build a new internal service, using IPv6‑only makes very much sense, because everybody has the VPN, the VPN is pushing the routes for the v6 resources no matter whether your own connectivity is IPv4 or IPv6, and every employee has it on their computer. So what could possibly go wrong? And here is where the story starts. This is the story of my colleague from software engineering. He joined the RIPE NCC, and he remembered from the past this T‑shirt that I have on me right now, it's from around 2010, called "IPv6 act now". And because he was eager to do things the right way, he just asked his co‑workers: we have this new service, everybody has this IPv6 route, what if we do it IPv6‑only? And the response was: well, we never did it, but I guess we can. Let's do it.
So he started figuring out how to do it, and he found some roadblocks: our configuration management system expected that everything is dual stack, or at least has working v4, and our monitoring system expected that every single machine has IPv4. But all of these could be fixed, and actually were fixed, so it was just a minor issue.
Then the first prototype was deployed, and he sent a message to the users of that new service and asked them for feedback. The first co‑worker, in the office, said yeah, it looks good, it works well. Another one, working from home, said yes, it works well. And a third one said: it doesn't work at all, why are you sending this? Basically, what employee number 3 saw is what you see on the screenshot on the right: "This site cannot be reached, name not resolved." So, quite a sequel to the previous Working Group yesterday: it's always DNS.
This is basically what happened, and you can probably already guess why the situation differed between employees. This is what employee number 3 actually saw: you can ping6 that service and it will respond, so it normally works; then you try curl, and curl will tell you it could not resolve the host. It doesn't make sense, does it? Well, it doesn't make sense, but at least we now know why: it's caused by stub resolvers. Operating systems are trying to be too smart, and at least Windows and macOS have this feature where the stub resolver tries to be smart about whether it is worth asking for AAAA records or not. On macOS you can see it in the output of the command scutil --dns: if you are connected to an IPv4‑only network, it will have just the flag "Request A records", whereas if you are connected to a dual stack network you will see "Request A records, Request AAAA records". The problem is that the fact that our VPN is pushing some specific routes does not convince the operating system resolver to start requesting AAAA records. And this is basically the end of the story: we cannot force the resolver to resolve AAAA records if the main connectivity of the computer is IPv4 only, at least on macOS and Windows. I haven't found any clear documentation of the rules for filtering those queries or not.
On macOS you can also see inconsistent behaviour depending on which resolving API is used; that's the reason why ping6 works but curl or the browser does not.
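To illustrate the scutil --dns flags mentioned above, here is a small sketch that checks resolver entries for the AAAA flag. The embedded sample is a hypothetical snippet modelled on the relevant lines of macOS output, which in reality contains many more fields:

```python
# Sample modelled on `scutil --dns` output: one resolver learned from an
# IPv4-only interface, one from a dual-stack interface.
SAMPLE = """\
resolver #1
  nameserver[0] : 192.0.2.53
  flags    : Request A records
resolver #2
  nameserver[0] : 2001:db8::53
  flags    : Request A records, Request AAAA records
"""

def resolvers_requesting_aaaa(scutil_output: str) -> int:
    """Count resolver entries whose flags include AAAA requests."""
    return sum(
        1 for line in scutil_output.splitlines()
        if "flags" in line and "Request AAAA records" in line
    )

print(resolvers_requesting_aaaa(SAMPLE))  # 1: only resolver #2 asks for AAAA
```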
This blue thing on the slide is a link to a gist on GitHub where somebody wrote a very long article on how to use some undocumented functions to make macOS resolve AAAA records even if your v6 connectivity is only provided by a VPN. I tried that and it didn't work for our setup, because of the split tunnelling, I guess. So that's the not‑so‑happy ending of this story.
We had to roll back to dual stack, so we have a dual stack service that is accessed over v6 by all the users that have v6 at home, and over v4 by the users that have only v4; even though those users also have v6 reachability to it, the resolver will not send them to the v6 address.
But at least we are now prepared for IPv6‑only servers, so if we manage to fix or work around this issue somehow, we might finally be able to go with IPv6‑only services at some point in time.
And by the way, we also identified additional roadblocks that are not that obvious, but of course they are there as well, and something probably has to be done about them too. First of all, our single sign‑on is called Okta. This diagram shows how complicated the DNS relationship between the domain name and the IP address is, but it is missing something important: AAAA records. So, just the standard situation in the Cloud: in the infrastructure of the service, they are IPv4 only, they haven't figured out that somebody would need IPv6. Actually we asked: if you Google "IPv6 Okta" you will find a message from my colleague who asked whether they are going to do IPv6, and got the answer that we should submit it as an idea that other people using Okta can share or vote on. So we submitted that idea and it got zero upvotes. Nobody using Okta cares about v6, apparently.
But what changed about three weeks ago is that Okta actually reached back to us and told us: yeah, you are going to be among the first ones to get IPv6 support. Not because we care about you, but because we care about the US Government, and according to the US Government, 50% of the government resources are supposed to be IPv6‑only already, and we are scared that at one point in time they will just try to replace Okta with something else because we don't do v6. So basically, this was finally the business case that sunk in, and they decided that the RIPE NCC would probably be a good customer to pilot this on. We will watch this closely, and maybe I can tell you in a year or so that we have resolved this by asking them to implement v6. Maybe they should pay us for that, I am not sure; for now we pay them. And it's not the only bug.
The second one is something that falls a little bit under RIPE‑772, you know, the document that defines the requirements for IPv6 in ICT equipment. It's a SIEM system: you heard yesterday that, as part of critical infrastructure, we are required to do security monitoring. And how do we do it? We bought this software called InsightIDR from Rapid7. It has an agent on each server that is collecting logs, so it's not in any way in the path of the packets; it's not like a network device. But still, this agent that we install there can only talk IPv4 to its backend. So, of course, the service would work even without it, but we have a sort of policy that every server has to run it so we have a complete picture. That's another thing; for InsightVM this was the only relevant link I found: somebody asked in March 2024, and since then complete silence. Of course our security department also reached out to them, and the story is similar to Okta: on the one hand they don't really see an issue in not supporting v6, but at the same time they are aware that this US Government thing might force them to do it. Unfortunately, I cannot share any further development in this area.
So, to wrap this up. I was always wondering: is this just us? Nothing that I presented here is unique to the RIPE NCC, I guess; other companies also have VPNs, they probably also have v6 in their VPNs, they probably also use Okta or something like that, because we are not the only ones. And it's 2025; we cannot even say that we are early adopters. Are we? And according to this US Government mandate, by the end of fiscal year 2024, which I guess has already happened, it's mid‑2025 now, at least 50% of government IP‑enabled assets should be IPv6‑only. So I wonder how they selected those 50% to make it, you know, still work if there are so many IPv4‑only services. Anyway, we keep trying: we have our backend systems ready and we are trying very hard to remove all the v4 dependencies. Though of course, some products like SSO or SIEM are small markets, so either you will not find anything that supports v6 at all, or you trade v6 support against other, much more important features, because in the end, even though I really care about v6, not the whole RIPE NCC cares about IPv6, let's say it like this.
So, some people just prefer more important features than IPv6 support.
And that's everything from me. Thank you very much and if you have any questions, I think we have still some time.
(Applause)
CHRISTIAN SEITZ: Thank you for sharing your experiences. Questions?
AUDIENCE SPEAKER: Jen Linkova. Thank you very much, very interesting. First of all, a comment: did you just say that some good could come out of US regulations?
ANDREJ CALETKA: It's from the previous government.
AUDIENCE SPEAKER: Whatever. So, my actual question is: if I understand correctly, part of the problem is that when your VPN is up you do not have the right DNS configuration. If your VPN config pushes DNS to the client, does your problem go away, or not?
ANDREJ CALETKA: Not really. It pushes the DNS route.
AUDIENCE SPEAKER: Not DNS route.
ANDREJ CALETKA: A DNS server. A DNS server over the VPN. It even has IPv6 addresses, but still... well, to be precise, the resolver of macOS is super complicated and it does some sort of domain name routing: basically, if you are connected to the VPN, everything ending in ripe.net is supposed to be resolved by the resolver behind the VPN, over v6 and v4. The rest keeps being resolved by your normal resolver.
AUDIENCE SPEAKER: That's what I'm saying. So I guess if you completely override it and tell it to resolve everything via the VPN resolver, then your problem goes away?
ANDREJ CALETKA: Maybe. I haven't tried that; it could be the case. The question is, I am not really sure what the rules are for requesting AAAA records; I am only guessing that it's when you have a default route, but I really don't know.
AUDIENCE SPEAKER: Just speaking from experience, you can override it; for example, mine always resolves through Google DNS, in v4 and v6, so even on a v4‑only network I would see the AAAA flag in scutil, and that might be one of the approaches to fix that.
ANDREJ CALETKA: That's good to know, thank you very much. I also wanted to say something that I forgot: I am thinking that maybe an IETF draft called something like "further AAAA records considered helpful" would be helpful, and I am trying to start with that one, and I know that triggers you, Jen.
AUDIENCE SPEAKER: Actually I realised I forgot to ask you one thing. Could you please send a PR for v6 only operations considerations document talking about this, because we will forget.
ANDREJ CALETKA: Good to know, yes, thank you.
AUDIENCE SPEAKER: Hello. Thank you very much for sharing this experience. Jen pretty much covered me, but I was wondering whether you tried a different VPN software: could it perhaps be that the VPN software does not interact correctly with the underlying macOS mechanism? And was it only on macOS that you saw this, or also on Linux or Windows, for example?
ANDREJ CALETKA: The second part first: as far as I know, Linux does not do any AAAA filtering, they just request AAAA all the time, so there shouldn't be an issue. We use only macOS in the RIPE NCC, so I really don't care about Windows, but I am aware that Windows does some sort of filtering as well. Regarding VPN software, this is quite tricky to just try, because our VPN software was chosen about ten years ago, and it was exactly the case that we filtered VPNs by the features we wanted, including IPv6 support, and we ended up with just one product we could use. So that was not really a choice, basically, if we wanted IPv6 support back then plus the corporate features of VPNs. Different VPNs behave differently; some of them implement that fix I mentioned on the slide, with the scutil magic that somehow makes it resolve AAAA, but yeah, it's tricky.
AUDIENCE SPEAKER: The only question is: where do you actually complain? Do you complain to Apple or to the VPN provider?
ANDREJ CALETKA: I would say complain to Apple. If I'm an operating system and I have at least one route towards the global unicast IPv6 range, I should not be filtering AAAA queries, because I never know what the result of a query will be; maybe it will be inside the address range that I have a route for, and therefore it's valuable. If I don't have any route towards IPv6 global unicast space, then I think it's safe to assume that you can filter AAAA records.
CHRISTIAN SEITZ: We now have to close the microphone queues, because we are already over time. Thank you.
(Applause)
And the next speaker is online.
FERNANDO GONT: I don't see the slides. I guess I can control the slides from here?
So, hello everyone. This presentation is about IPv6 support for multiple routers, multiple interfaces and multiple prefixes; this is related to work that we are pursuing at the IETF.
Let's start with a super brief introduction to what we mean by multi‑router, multi‑interface or multi‑prefix scenarios. Throughout this presentation I will refer to them as multi‑IPv6, just as a shorthand.
So essentially, what we call multi‑IPv6 are scenarios where you have a combination of multiple routers, multiple interfaces and/or multiple prefixes. These scenarios are common. Some examples: a laptop that connects to the Internet via an ethernet and a wi‑fi interface; a mobile device that connects with 4G and wi‑fi; a small office that connects to the Internet via two different ISPs, with two CPE routers, things like this. These scenarios are common. However, support for them has been very poor, if present at all. What normally happens, for example in the case of mobile devices, is that you actually employ only one connection at a time; even when you have multiple interfaces, you are only using a single one at a time.
And if you try to do something like connect a local network to two different ISPs, via two CPE routers, you'd be lucky if it actually works.
Obviously, from my perspective, and I believe this view is shared by many people, this warrants that support for these scenarios is improved in IPv6.
So, the best way to see or figure out the challenges that are involved is to illustrate a super simple multi-IPv6 scenario. What we have in this slide is essentially a local network, say a switch that you have there, and let's say that you have a small office and you want to do a simple multihoming setup: you want to connect your local network to two different upstream ISPs. What you do is you connect each of the CPEs to your local switch.
So, what would happen in that scenario is that of course each of the CPEs from each of the ISPs would start sending RAs with all the network configuration information. Of course we are assuming that SLAAC is being employed. And what will happen is that all of the hosts that are connected to the local network will essentially aggregate the information that has been advertised by these two routers. That's what is, you know, in orange, at the bottom, and hopefully can be seen. So, in, let's say, a single data structure, you get the information from all of the routers that are on that link.
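The aggregation behaviour described here can be sketched as a toy model in a few lines of Python. This is purely illustrative: the class, field names and the example addresses are my own inventions, not any real stack's data structures.

```python
from dataclasses import dataclass, field

@dataclass
class AggregatedHostConfig:
    """Host-wide SLAAC state: information learned from every router on
    the link is merged into one flat structure."""
    default_routers: list = field(default_factory=list)  # link-local next hops
    prefixes: list = field(default_factory=list)         # autoconf prefixes
    rdnss: list = field(default_factory=list)            # recursive DNS servers

    def merge_ra(self, router, prefixes, rdnss):
        # After merging, the host no longer remembers which router
        # advertised which prefix or DNS server.
        self.default_routers.append(router)
        self.prefixes.extend(prefixes)
        self.rdnss.extend(rdnss)

host = AggregatedHostConfig()
host.merge_ra("fe80::a", ["2001:db8:a::/64"], ["2001:db8:a::53"])
host.merge_ra("fe80::b", ["2001:db8:b::/64"], ["2001:db8:b::53"])
print(host.rdnss)  # ['2001:db8:a::53', '2001:db8:b::53']
```

Once both RAs are merged, nothing in the structure says which RDNSS belongs to which ISP, which is exactly the problem discussed next.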
Now, it's useful to consider some of the steps that take place. In the scenario that you have in this slide, you see the two ISPs; each of the ISPs provides a recursive DNS server and, just for the sake of the example, each has a web cache and each of them is, you know, hosting an instance of the website, www.example.com.
So, it's useful to, you know, consider the steps that need to take place, from the perspective of the host, before you can actually visit, for example, example.com in this scenario. So the first thing you have to do is pick one recursive DNS server; in this case you have two, you got one from one ISP, another from the other ISP. Once you have picked a recursive DNS server, then you should decide what source address you are going to use for your DNS queries, okay.
Once you have that, then you probably decide what next hop you are going to use to send your packets.
And assuming that DNS resolution succeeds, you know, you have resolved example.com into a AAAA record, into an IPv6 address, then the next thing would be to select a source address that you would use to send the HTTP queries to the web server, and finally, before the packets actually get there, you obviously have to decide what next hop you are going to use.
Now, obviously when you have a non-multi-IPv6 scenario, a simple scenario with just a single upstream router, you normally have these same steps, but, you know, there are no choices to be made because you normally have one single option. Now the problem or the challenge here is that, as a result of the multi-IPv6 scenario, you now have choices for many of these things: choices in terms of the addresses, choices in terms of the next hop, and so on.
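The explosion of choices can be illustrated with a tiny Python sketch. The labels are purely symbolic (they stand in for the RDNSS addresses, source prefixes and next hops from the two ISPs in the slide):

```python
from itertools import product

# With one upstream there is one option at each step; with two ISPs the
# naive host faces a cross-product of choices, most of which are invalid.
rdnss     = ["RDNSS_A", "RDNSS_B"]
sources   = ["prefix_A", "prefix_B"]
next_hops = ["router_A", "router_B"]

combinations = list(product(rdnss, sources, next_hops))
print(len(combinations))  # 8 possible (RDNSS, source, next hop) tuples

# Only the all-A and all-B tuples are safe in the general case: every
# element of the tuple must come from the same ISP (same letter suffix).
safe = [c for c in combinations if len({x[-1] for x in c}) == 1]
print(safe)
# [('RDNSS_A', 'prefix_A', 'router_A'), ('RDNSS_B', 'prefix_B', 'router_B')]
```

With two routers, only 2 of the 8 combinations keep everything within one ISP; the other 6 are the mix-ups described in the failure scenarios that follow.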
So, now, this is kind of a description; we have not yet got to the actual problem. But you might wonder, okay, what is the problem, or what's the underlying thing that ends up causing problems?
Well, I believe the underlying reason is that, you know, SLAAC essentially operates on the premise that each router will advertise network configuration information, and that the information they advertise is valid for any system or, you know, through any router that is on your local network. Hosts are expected to aggregate this information; as I mentioned before, you don't keep per-router information, it's actually, let's say, global information within your system.
And essentially, you know, it's up to the host to figure out how to use that information. So you get a lot of information from different systems, and you are supposed to figure out, you know, how to use that information in a way that works.
Now what happens in practice, is that that last part, using the information in a way that works, is not trivial.
So, I am going to cover a few scenarios where it becomes evident that things break. I will start with the simple scenario that I was describing before. Essentially we have a local network. You have attached two CPE routers to your local network, from two different ISPs. You get configuration information from these routers and you aggregate all the information, as SLAAC assumes you would be doing.
So what are some of the things that could happen?
One of the things that could happen is that, for example, you start sending packets using the address space from ISP_A, that is prefix A, through router B, which is the router that connects to the Internet via ISP_B. If they employ or implement ingress filtering, it's super likely that your packets are going to be dropped, okay? Another thing that could happen is that, you know, since you have two DNS servers, because each ISP, you know, advertises their own, you decide to send queries to RDNSS B, that is the recursive DNS server hosted by ISP_B, but using addresses from prefix A. Unless, let's say, ISP_B runs an open resolver, most likely they enforce ACLs so that they will only provide DNS service to their own clients, meaning you should be sending your queries from, you know, ISP_B's address space.
Another thing that could happen is that, for example, when you want to resolve example.com, you use DNS server B for resolving that domain name into an IP address. In this particular case, where we are saying that, you know, each ISP has their own copy of this website, if you use RDNSS B for resolving that name, the resulting IPv6 address is going to be that of instance 2, that is the copy of example.com that is being hosted on ISP_B. Then you might end up sending your packets using the address space of the other ISP, okay? So you resolve the name with the recursive DNS server of one of the ISPs, it maps that domain name to an address that is being hosted by that ISP, and they could have, as in the previous scenario, policies such that they only want to use their cache to serve their own customers; but if you end up sending the HTTP queries with the address space of the other ISP, then your packets might simply get dropped. So, what is probably evident from all these discussions is that, at the end of the day, mixing up all the information that you receive from SLAAC routers will only work in super specific scenarios. For the general case, you should actually use the information provided by each router only as, let's say, an atomic unit. So if you are going to use, let's say, RDNSS A for name resolution, then afterwards, everything you do for that (the next hop that you use for the DNS queries, the source address that you use for the DNS queries, and also the IP address and next hop that you use for sending the HTTP requests) should all belong to the same group of information, in this case the one advertised by ISP_A.
So, what could we possibly do to, you know, start improving the situation?
One thing, the most basic thing that we could do, is essentially say: well, rather than SLAAC hosts aggregating all the information and, you know, mixing up everything, we'll just keep the state per router, so you don't mix it up. That's the first step to doing what I was describing before. So, now you have in the same package, in the same data structure, so to speak, the recursive DNS servers, the next hops, the local prefixes and so on from one ISP, and in a different structure, those of the other one.
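The per-router alternative can be sketched the same way, again as a toy model (names and addresses are illustrative, not a proposed API):

```python
from dataclasses import dataclass, field

@dataclass
class RouterInfo:
    """One bundle of SLAAC information, kept per advertising router
    instead of being merged into host-wide state."""
    next_hop: str
    prefixes: list = field(default_factory=list)
    rdnss: list = field(default_factory=list)

# Each ISP's advertisements stay together as an atomic unit.
routers = {
    "fe80::a": RouterInfo("fe80::a", ["2001:db8:a::/64"], ["2001:db8:a::53"]),
    "fe80::b": RouterInfo("fe80::b", ["2001:db8:b::/64"], ["2001:db8:b::53"]),
}

def pick_bundle(router_id):
    """Once a bundle is chosen, the RDNSS, source prefix and next hop all
    come from the same ISP, avoiding the mix-ups described above."""
    info = routers[router_id]
    return info.rdnss[0], info.prefixes[0], info.next_hop

rdnss, prefix, next_hop = pick_bundle("fe80::a")
print(rdnss, prefix, next_hop)
```

Choosing a bundle up front is what makes "use the information as an atomic unit" mechanical: every later step just reads from the same `RouterInfo`.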
Obviously this requires a lot of changes that I'm not going to go into in detail because, you know, we would run out of time. There is of course a modification to SLAAC that needs to be made. You probably need to implement policy routing so that, depending on, for example, the source address, you use different next hops and so on. What may be more tricky is being able to handle the DNS part together with the rest of the information, okay? Like, for example, making sure that if you use for name resolution the DNS server provided by ISP_A, you also make sure that when you eventually use the resolved IP address for sending HTTP queries, you use next hops and IPv6 address space from the same ISP. There are some things that, you know, we might possibly do for that part. None of them looks super trivial, but that's the problem that we have on the table, I would say.
So, we are now going to move to, you know, another scenario. The previous scenario kind of highlights the general things that we should do to solve this problem. But then there is another problem. For example, in the multi-router case, like in the same example as before, you want to have two upstream routers so that if one goes down, or there is a problem with the ISP, you end up using the other one. Well, you know, according to the current specifications, you might end up in a situation where you pick, for example, a source address from ISP_B even when that next hop is not reachable, because reachability is not a factor when performing source address selection. So if you pick the source address without knowing whether your next hop is reachable, then once you have picked the source address, obviously you are tied to the router that you should use based on the considerations of the previous slide, meaning that if you decide to use as a source an address from prefix B, then you would end up using router B, because it's the only router that, you know, can successfully route those packets. Whereas, if at the point of doing source address selection you had considered whether the next hop is reachable, you might have made a better choice.
One of the ways in which this could be improved is to add yet another rule to the source address selection RFC, RFC 6724, so that's a problem that is, let's say, easier to address.
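The idea of making reachability a factor can be sketched as follows. To be clear, this is a hypothetical rule of my own phrasing, not the actual text proposed for RFC 6724; the candidate list and reachability set are illustrative:

```python
def select_source(candidates, reachable):
    """candidates: ordered (source_prefix, next_hop) pairs, as today's
    source address selection would rank them. reachable: the set of next
    hops currently known to be reachable (e.g. via neighbour
    unreachability detection). Hypothetical extra rule: prefer a source
    whose associated next hop is reachable."""
    for prefix, next_hop in candidates:
        if next_hop in reachable:
            return prefix
    # No next hop is known reachable: fall back to the original ranking.
    return candidates[0][0]

candidates = [("2001:db8:b::/64", "fe80::b"), ("2001:db8:a::/64", "fe80::a")]
# Router B is down: picking prefix B would tie us to an unreachable next hop.
print(select_source(candidates, reachable={"fe80::a"}))  # 2001:db8:a::/64
```

Without the rule, the host would commit to prefix B first and only then discover that router B, the one router that can route those packets, is down.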
Another scenario, and when you look at the numbers, you know, you might have realised that I skipped to scenarios 3 and 4. This is because I am following the scenario numbering that we have in our document, which I will reference at the end of the presentation; there is a discussion of these scenarios in the IETF draft that we produced.
The other scenario that we are going to consider is one where you have your local network, you have, let's say, two different ISPs, or it could be, you know, essentially two connections to the same ISP, and let's say that, you know, both of the CPEs advertise the same piece of information, okay, or at least some piece of information they advertise is the same. That could be a recursive DNS server or it could be, for example, a prefix, it doesn't matter. The point here is that you have multiple routers on your local network advertising the same information.
Now, what would happen if, for example, one of these routers were to advertise that piece of information with a lifetime of zero? According to SLAAC, that would cause the information to be completely discarded, and the reason is that according to SLAAC you have to kind of aggregate all the information together; you mix everything into the same thing. So, as long as a single router on your local network advertises a piece of SLAAC information with a lifetime of zero, that information should be removed, okay?
That's obviously bad from my perspective. The fact that, for example, router B starts advertising either a recursive DNS server or a prefix with a lifetime of zero should only affect, you know, the usage of that information via router B. But if, from the perspective of router A, the information being advertised is valid, then there is no reason for which you should discard that information.
So this is something that, you know, is not supported today. It's broken. And if you were to maintain per-router state, keeping the advertisements from each router rather than aggregating them, you could solve this problem.
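The lifetime-zero case shows the difference most directly. A minimal sketch under the per-router model (the function and the example addresses are illustrative):

```python
def withdraw(routers, router_id, rdnss_addr):
    """A lifetime-0 advertisement from one router removes the item only
    from that router's bundle; other routers' copies stay valid."""
    bundle = routers[router_id]
    if rdnss_addr in bundle:
        bundle.remove(rdnss_addr)

# Both routers advertise the same RDNSS address.
routers = {"fe80::a": ["2001:db8::53"], "fe80::b": ["2001:db8::53"]}

# Router B withdraws it with a lifetime of zero.
withdraw(routers, "fe80::b", "2001:db8::53")

# Under aggregated SLAAC state the address would now be gone entirely;
# with per-router state it survives via router A.
print(routers)  # {'fe80::a': ['2001:db8::53'], 'fe80::b': []}
```

Under today's aggregated model the equivalent of this operation deletes the single shared copy, which is exactly the breakage described above.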
So, ongoing work. What are the things that we are doing? A disclaimer: This is not the first time that somebody thinks about this scenario and this is not the first time that somebody tries to come up with a solution, okay?
What we tried to do in this particular case is two things. First, you know, document the problems, so to speak; I mean, the problem in a way is known, but what we specifically wanted to do here is to define a specific set of cases that any solution in this space should address. Meaning, if there are other problems that are not solved, okay, that's still good enough. So, the goal here, or the hope, is that we might not get a perfect solution that solves all of the problems that could come to mind, you know, with multihoming and everything that you could think of, but that we do come up with a solution that, at the very least, you know, addresses and solves the scenarios that I was discussing throughout this presentation.
Those, let's say, test cases are documented in the first draft, and there is a second document where we try to specify the protocol updates that would be necessary to actually address all of these scenarios.
So, conclusions from my side. I believe it's well known that multi-IPv6 scenarios are broken. Whenever we get to talk to people about this, it's like: yeah, well known, we know it's broken. But in a way we haven't yet figured out a workable solution to, at the very least, address the low-hanging fruit scenarios that I was describing throughout this presentation.
Obviously it's my belief that support for multi-IPv6 scenarios should be improved, not necessarily made perfect, but at least able to address those scenarios. And the thing that should be noted here is that, whether we like it or not, quite a few of these problems essentially go off the table when you implement other solutions that, probably, in this Working Group we don't want to consider.
So that's it from my side. I guess we have some time for comments and questions?
(Applause)
RAYMOND JETTEN: We have about a minute and a half. Thank you Fernando for joining us all the way from Argentina.
AUDIENCE SPEAKER: I will skip the part where your presentation is not quite right because there are two questions that I would really like to ask the room here.
The first one is: what do we actually need to support in terms of complexity of the network that has these multiple uplinks? In your slides you only have the two routers. But do we need to support this for small enterprise networks that have a few, like a handful of, routers that need to coordinate this between them? This is something that the IETF really needs input from operators on. We who are participating in this discussion can't answer that, and that's why I would like to pose this to the room.
The second question I would like to pose to the room is: what level of functionality do the operators and everyone here need from the hosts in selecting among these multiple uplinks? Because that's also open work that we have not done, and we need input here from operators and people running the actual networks. And you can send an e‑mail to the IETF lists; you don't need to keep reading the IETF lists, just send a one-shot mail and forget about it later, but please participate in the IETF discussions about this, we need your input. Thank you.
RAYMOND JETTEN: Thanks for putting it to the mailing list, because we don't have time to do it here.
AUDIENCE SPEAKER: Jan Zorz. So, I was wondering, if you add to all these complexities also the problem that we are trying to solve when the operators are deciding to change the prefixes every 24 hours, how can this even be solved at the end of the day?
FERNANDO GONT: I say it's ‑‑ inaudible.
RAYMOND JETTEN: Okay. We really have to be short now.
AUDIENCE SPEAKER: Warren Kumari: Isn't most of this solved by provisioning domains and virtual sub‑interfaces?
FERNANDO GONT: Is it actually implemented and deployed? So, from what I have seen, like at least at home, not really.
AUDIENCE SPEAKER: Jen Linkova. A quick comment. We did have a discussion about treating every RA from, I think, a single router as an implicit PVD. If you have two routers, you get two implicit PVDs. However, that causes a problem if I have a device which, for example, does not want to be a full router but wants to send RDNSS separately, and I have another router which is a router but is not capable of sending that. So there are scenarios where it would not work as an implicit...
AUDIENCE SPEAKER: Rodrigo. In scenario 1, which operating system do you use? Because I am one of the users who has this problem: I have two ISPs with IPv6 prefixes, and what happened to me is also first come, first used. And afterwards, I tried to use both (inaudible) in the network, and that was difficult to do.
So, in the first scenario you just described, where you use one prefix and another router, in what operating system do you do that?
FERNANDO GONT: So, I have checked with multiple of them, like virtually all of them, and, you know, at the end of the day they end up doing what I described: mixing up all the information, and you get lucky if they use all the information together. A lot of times what happens is that you just get one of the ISPs used and the other one is forgotten completely.
RAYMOND JETTEN: Okay. Thank you all for your questions. And then we have our next talk.
ERIC VYNCKE: Thanks for the introduction. The two of us were both at industry events, big ones, and we are trying to replicate there what Ondrej has been doing here for many, many years. So, industry events: one from Cisco, I work for Cisco, and one from the IETF, which Warren will talk about. A lot of the slides come from a colleague of mine who is part of both the Cisco Live NOC and the IETF NOC. I got some time to prepare the slides and I enjoy playing with AI, so you will see some very naive AI-generated images.
So, first AI-generated image. The goal is at the top of the hill, the mountain: IPv6‑only, where the sun is shining. And you are coming from IPv4‑only at the bottom. Typically you will go dual stack and then IPv6‑mostly, which is near the top. Note that I was not able to fix the typos generated by the AI; both "dual stack" and "mostly" are misspelled.
WARREN KUMARI: It doesn't actually get to the top.
ERIC VYNCKE: It was my colleague's, right, this one. Anyway, so you know about IPv6‑mostly. There are basically two classes of hosts there: one which is IPv6‑only capable, and they will not use any IPv4, that's the trick; and the other one, either IPv4‑only or dual stack.
The goal basically for this step is to foster IPv6 adoption. That's part of my role in Cisco, to talk to customers, and going from IPv4‑only to IPv6‑only, that's a no‑go, right? We tell them dual stack is kind of complex, so what we are pushing right now when you talk to customers, your enterprise customers, big banks or whatever, is to go with IPv6‑mostly, because that's doable, basically.
So, another AI-generated image; if somebody in the room does not recognise himself or herself, then I have failed. The goal is to reach IPv6‑only.
IPv6‑only capable hosts, if you are doing this, rely on the client-side translator, CLAT, and they receive the NAT64 prefix. The best way to do it, and there are documents in v6ops on this, is to use only the RA option for the distribution of the NAT64 prefix, to be honest. And those hosts can also decline to use an IPv4 address by using option 108 in the DHCPv4 request. If the DHCP server replies yes, you can run without IPv4, then they do it.
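Option 108 is the IPv6-Only Preferred option from RFC 8925: a host includes it in its DHCPv4 request, and a server that supports it answers with a wait time (in seconds) instead of committing an IPv4 lease. A minimal sketch of the client-side decision; the function and the option dictionary are illustrative, not any real DHCP client's API:

```python
OPTION_IPV6_ONLY_PREFERRED = 108  # RFC 8925 option code

def wants_ipv4(dhcp_offer_options):
    """Given a dict of option-code -> value from a DHCP offer, decide
    whether the client should complete IPv4 configuration. If the server
    echoes option 108, the host can run IPv6-only (with CLAT) and should
    not ask for IPv4 again for the returned number of seconds."""
    if OPTION_IPV6_ONLY_PREFERRED in dhcp_offer_options:
        v6only_wait = dhcp_offer_options[OPTION_IPV6_ONLY_PREFERRED]
        return False, v6only_wait  # decline the IPv4 address
    return True, 0  # server is v4-only: configure IPv4 as usual

use_v4, wait = wants_ipv4({OPTION_IPV6_ONLY_PREFERRED: 1800})
print(use_v4, wait)  # False 1800
```

This is what makes IPv6-mostly a single SSID: legacy hosts never send option 108 and get IPv4 as usual, while capable hosts opt out per-client.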
There is a draft in the IETF in v6ops, again by the big players, right: Ondrej is here, Jen is here, and Nick, who lives in the US, is not here and not remote as far as I know.
This slide is basically recognition for those three people, again AI-generated; you are there somewhere in the picture if you can recognise yourselves. I cannot make the AI draw you accurately, but generative AI is just like that. So Jen at Google, Ondrej here; for roughly three years we have had the benefit at RIPE of being both the tested and the testers.
But there is also another big deployment at a university in London; it was explained at the UK IPv6 Council last year, and it's about 20,000 students using IPv6‑mostly. Which is cool. It's typically something you can deploy. But this was just a refresher. What you want to know is what we did at the industry events, which, unlike here, are not full of people like us.
Cisco Live is an event organised by Cisco. It took place in Amsterdam earlier this year. About 20,000 attendees, including possibly 3,000 from Cisco, but a lot of people like customers and so on. They go there for technical sessions; if you have been there, normally you like them because it's mostly technical, and there is also an exhibition and so on.
One key point: for a network company like Cisco, it's pretty important to have decent wi‑fi, right, because guess the brand of the wi‑fi access points? Guess the brand of the routers used in the premises. So it must be running well, and we have a team that prepares this months in advance. At some point in time there was a wi‑fi problem, and they were able to change a lot of access points during the night. So it is a big, big operation. So, we prepared. But you will see we have at least one or two lessons learned; even with the best preparation, there are some failures sometimes. Not big failures.
And, I am pretty proud to say, the default SSID is IPv6‑mostly. Pretty much like here.
We have public IPv6 space, easy to get of course, and private IPv4 inside, right, over the wi‑fi.
The core team, most of them you know; the one in the middle, that's a colleague of mine, normally he is online as well. So, that's it.
Edge design. So between basically the Internet and all the wi‑fi on the campus, two cores, right. We are also using ASR1Ks, both for doing the NAT44 and the NAT64, and you will see there was something that was kind of a small issue there; you have the bandwidth on the bottom right. NAT64 design: no big surprise. NAT64 translator, as I said, the ASR1K; we have a DNS64, we are using Umbrella (Umbrella is basically OpenDNS turned into a product); and we are using CLAT and option 108 on the DHCP server. And we are using the well-known prefix for NAT64.
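The well-known NAT64 prefix mentioned here is 64:ff9b::/96 from RFC 6052. What a DNS64 does for an IPv4-only name is embed the A record's address in the low 32 bits of that prefix to synthesise a AAAA record. A small self-contained sketch using Python's standard `ipaddress` module:

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052).
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal):
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix,
    producing the synthetic IPv6 address a DNS64 would return."""
    v4 = int(ipaddress.IPv4Address(ipv4_literal))
    return ipaddress.IPv6Address(int(WKP.network_address) | v4)

# 192.0.2.1 is a documentation address (TEST-NET-1).
print(synthesize_aaaa("192.0.2.1"))  # 64:ff9b::c000:201
```

The NAT64 translator performs the reverse mapping on the wire: it extracts the embedded IPv4 address from the destination and forwards a translated IPv4 packet.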
So, nothing very specific here.
Two minor problems. We were planning for failures; we were not planning for success, right. The NAT64 pool was exhausted too quickly. This is a nice thing in a way, right, so I would not say it's a failure. But it was a small issue. So, they changed the timers, increased the pool and so on, and everything was fixed overnight. This was working nicely.
Also a minor problem: some technical sessions included live demos. When we do a live demo, we connect over a VPN tunnel to a data centre in California. And pretty much like Ondrej explained about the DNS thing, there was some split tunnelling configuration; each and every demonstration was using a different kind of split tunnelling policy, which was badly configured for IPv6‑mostly when running on a Mac (on a Mac you can typically use IPv6‑mostly). So, at least we had a fallback plan: you could connect to the "Cisco Live legacy" SSID, where you find a normal dual stack network, and everything was fine.
That's basically the two issues we got.
Numbers:
Remember, about 20,000 people, but many, like me, need three addresses: one for my Android phone, one for my iPad and one for my laptop, right. So these numbers are devices connected, based on the MAC address: 20,000 devices used IPv6‑mostly, and 24,000 were using dual stack. We all know there is a major operating system that does not support IPv6‑mostly; I guess these are mostly those ones, okay.
These are last year's 2024 numbers. The top line is the traffic in IPv4, and the bottom one is IPv6. So, last year, in 2024, we were at about 44% of v6 traffic over the wi‑fi; we were really quite happy with this. But this year: 44 terabytes of IPv4, and 45 terabytes of IPv6. So, slightly more than 50%, with normal users, right, using normal applications. Which is pretty cool.
Of course, my feeling as well is that most of us now are using zero-trust networking, or using for instance the big e‑mail providers that are v6 capable, and VPNs are also IPv6 capable, so that's cool.
And in total over the connection, this time not over the wi‑fi but on the uplink, just to give you an idea, it's also about 90 terabytes. But this was just one event, right; we intend to do it more and more. Another big event that I like as well...
WARREN KUMARI: So, I am Warren, I am one of the IETF NOC volunteers. A little bit of background: we are almost entirely a volunteer-driven NOC. We do three meetings a year, in North America, Asia and Europe, and we have between 720 and 900 people at each event. For a very long time we have been doing a default SSID which is dual stack, but we have wanted to do an IPv6‑mostly solution for a long time. It's taken us longer than we would like; to some extent it's because a lot of us are volunteers. We also have most of our equipment donated, and so we needed to replace Juniper routers with firewalls to do the NAT64 functionality.
But at the meeting before last, in Dublin, we did an initial test and had some working IPv6‑mostly, and then we decided that in Bangkok we would do a much larger test. The way that we generally build this is that on the Tuesday or Wednesday before the meeting we come along and we build and set up the network, and that's what we did, and IPv6‑mostly was just a small slice of this, but we tested it and everything seemed to be working fine. And then on the Sunday evening before the meeting started, one of the NOC volunteers connected and their device wouldn't actually connect to v6‑mostly, and what we discovered was, because Apple devices do the IPv6‑only stuff really well, we hadn't actually built an IPv6‑mostly network; we had built an IPv6‑only network, we just hadn't noticed.
It turns out we weren't forwarding the DHCP discovers, and so we fixed that on Monday morning. We needed to add a particular bit to the NAT config; it shouldn't be needed to make the DHCP discovers work, but it was needed, even though it hadn't been needed in previous testing. Anyway, we did that, asked a bunch of people to connect, and we had some fairly good usage. Pretty much everybody who used it was happy with it. And so our plan for the upcoming meeting in Madrid is to make the default network IPv6‑mostly, and hopefully everything will be good. We did notice a few small issues. Some people who have sort of corporate protection software on their laptops don't seem to be able to load images on Slack. We also discovered weirdness with the Mac native version of SSH: if you have both an A and a AAAA record and you try to force SSH to use IPv4, it doesn't actually work.
But we believe that that will be fixed in the next macOS. Tommy Pauly from Apple managed to fix the weird native macOS SSH client, and I believe that's it. Questions?
ERIC VYNCKE: So, summary:
If you pay attention, everyone can deploy IPv6‑mostly. It's not that hard. There are some minor differences, so you need to learn; we have learned. Some issues and limitations: some split tunnel policies; Slack, as Warren just explained. Some of our device management also prevented us from using the right DNS64; that doesn't work too well. And we are all expecting a new ChromeOS release, so we can make this work at 100%; that would be pretty cool. And a nice, successful AI-generated image, right, this one.
RAYMOND JETTEN: There is a question from David.
AUDIENCE SPEAKER: We deployed this during the event, the hackers' event. Usage of v4 addresses dropped to 25%, so 75% of devices did not request an IPv4 address anymore. Why did we only deploy this during the event? Our routers did not support RFC 9047 for VPN-enabled discovery. If you try this and use a VPN, make sure your routers do 9047. It was very painful.
ERIC VYNCKE: Thank you. Do you have any idea why 25% were still using IPv4? Did the hosts not support it?
AUDIENCE SPEAKER: This is a hacker event, there are a lot of weird devices, so I would assume that the 75% is all Apple phones, Android phones and normal laptops, and the 25% is like an ESP8266 doorbell, you know, that kind of thing.
ERIC VYNCKE: I was guessing the same. Thank you.
AUDIENCE SPEAKER: Jen Linkova. Thank you very much for telling people that it actually works. I'd like to go back to this amazing story of fixing SSH on macOS, because I was there troubleshooting that stuff. I can't even imagine how we could have fixed that any other way, because the right people were in the room; we spent a couple of hours looking at it, and within about an hour a commit was sent out for review, and it was quite tricky, hidden deep inside a system library. It's very important to do that kind of stuff in places where all the right people are present before we actually roll things out. And we just got it recorded that you kind of committed to it; thank you, Warren.
WARREN KUMARI: Hoping so. This was a really good example. What it was is: we had some weird geeky people with weird SSH configs that people at Apple would probably never have seen. We ran into this thing at the IETF; we happened to have Tommy Pauly, who is one of the kernel people who works on the Apple stack; we got those people together because we did v6‑mostly at an industry event, found the issue, fixed the issue. You know, Tommy had his editor open, we fixed it while people were standing around, and hopefully in the next version of macOS it will be fixed in production.
AUDIENCE SPEAKER: Brian Candler. So, Warren, I noticed on your slides you show DNS64, and there is DNS64 running on the IPv6‑mostly network here. Is that still required? For anybody who uses option 108, do you still need to fake DNS responses? Because it has caused me a few problems: I need the real IPv4 address, not a fake IPv6 one, for my VPNs.
JEN LINKOVA: Yes, it depends. We tried, like, a plethora of operating systems; those which use option 108 would be happy with PREF64 being advertised. What would be broken is devices which do not process PREF64; currently, I believe, if you take a Chromebook and enable option 108, it would not take PREF64. So, Apple and Android would be happy. Or if you have, for example, a Linux laptop and all your applications are happy with v6-only but you do not have... sorry, basically it would mostly work, and we actually have a document in v6ops currently which emphasises: please, please, if you develop your stuff on the end host, please take PREF64.
AUDIENCE SPEAKER: If it's Windows, for example, it's going to ignore PREF64.
RAYMOND JETTEN: I am very sorry, Ondrej, but we're over time already, we're in the break.
AUDIENCE SPEAKER: I have basically many comments. Thank you very much for your presentation. I am really delighted that it's not just the weird RIPE meeting network, that other events are doing the same thing, so people cannot claim it's your broken meeting network; it's also "broken" at Cisco as well, which is good to know. It actually works. My comment is that at the RIPE meeting it works; of course it's smaller than Cisco Live, we have about 40K NAT64 sessions as a maximum over the week, which means it will easily work with a single IPv4 address. So, I believe that the fact that you ran out of v4 addresses in your NAT64 pool probably says something about the quality of the NAT64 engine, which probably uses too many IPv4 addresses.
WARREN KUMARI: Just as a very last comment: thank you very much to RIPE for being the leaders in this and being the first big conference to have done it.
RAYMOND JETTEN: It's time for coffee. Get out of here.
Somebody has found a necklace, if you are missing one, go to the desk. Thanks, and remember to rate the talks.
Coffee break.