RIPE 90
Database Working Group
Main room
15 May 2025
11 a.m.


WILLIAM SYLVESTER: Good morning. Welcome to another Database Working Group. We have a really great setup today: we have our updates from the RIPE NCC and Ed, we have some updates on authentication, and we have some NWIs, where we have a few things going on. Then we'll jump into some other business that has come up.

So is everybody awake this morning? I think we're good.

With that, we'll kick this off. Ed, with our operational update.

ED SHRYANE: Good morning. Thank you William.

For anyone who doesn't know me, I am Edward Shryane I work as a product owner on the database team for the RIPE NCC. Here is the operational update for the RIPE Database.

So in this update I'll be going through progress since the last RIPE meeting, planned work between now and the next meeting, and I also have two proposals to share with you.

Here is the rest of the team. Just to acknowledge their hard work since the last RIPE meeting, that's gone into the update.

Progress:
We have had four Whois releases. The first, in January, added API key authentication as an alternative to the MD5 hashed passwords that we're phasing out. We also added an NRTMv4 client to complement the server we're running. And we got feedback from the Working Group to extend the maximum length of an e‑mail address to the maximum allowed by the RFC.

In February we fixed a small bug parsing API key details. In March, we implemented relation searches in RDAP, which is the equivalent of hierarchical queries in Whois. So that closed a gap between the two protocols.

We added a warning when mbrs‑by‑ref is changed on a set object. If your aut‑num is a member of a set and the mbrs‑by‑ref attribute is changed, it can break authentication of your updates, and because there can be a gap between the change and the failure, it's hard to figure out what happened. There is now a warning in the update response when the set is changed.

Lastly, in April, in Whois 1.117, we made some changes to the inetnum status hierarchy validation and some other improvements to NRTMv4.

To be more specific about the inetnum hierarchy changes: with the help of the registry team, we looked at the inetnum and inet6num status hierarchies in the database and made changes to align them with the policy text. This is our interpretation of what the text says. There is now only one level of AGGREGATED‑BY‑LIR allowed, to be consistent with assignments. LIR‑PARTITIONED PA is meant for LIRs to divide their internal network, so it doesn't make sense to allow it under SUB‑ALLOCATED PA, which is intended for third parties.

Finally, we no longer allow LIR‑PARTITIONED PA under ASSIGNED PA; it doesn't make sense under an assignment, which should be at the very bottom of the tree.

It does affect a handful of data in the database. The registry team are following up and working on that.
And just to be clear about the Whois implementation: it's very important that it complies with the policy text, because users will take it for granted that if Whois allows something, then it must be right.

But if you do notice an inconsistency, please let us know, directly or via the Working Group. Denis has started a discussion on the list recently, so if you are interested, please follow up there.

Unfortunately, there were three Whois outages since the last meeting, and apologies for that. In March, there was an interruption to mail updates. We did a MariaDB upgrade to the latest long-term support version and we dropped some mail messages: the script that we're using on the command line was returning an error, which meant that mail messages were not being put into the database and were dropped. We're going to fix that by replacing the command line script with integrated SMTP handling, so each of the mail servers will talk directly to Whois. We hope this will pay off in the future as well: when we run Whois as a managed service, we can use SMTP there too.

In February, there was a mail to the list about an increase in domain update failures, with long-running DNS tests. We believed that was a spike at the time, but it has been happening consistently since then. We have tried to address the capacity issues by doubling the worker pool in the backend; Zonemaster is responsible for validating the DNS information in domain objects. We also made a small change to the web application, so we wait long enough for Whois to tell you exactly what happened, making the error message a bit more meaningful. We are also planning to wait for the maximum time for Zonemaster to return. But still, there are some open issues: we want to make the queueing a bit fairer, and also drop long-running tests, because if you have been waiting for a long time there is no point continuing to wait. So we're talking to the DNS team about those improvements.

Finally, we do a switchover of the primary database in production every month, to patch servers and also to test disaster recovery. Unfortunately, in March, we had a few minutes of downtime for updates, and we're looking into the root cause.

Statistics on Whois. Nearly 1,000 ALLOCATED-ASSIGNED PA inetnums, in particular 500 out of 12,000 /24 allocations, so the new status seems to be doing its job; there seems to be good uptake.

There are 84,000 abuse‑c addresses. We're a bit behind on validations; we need to validate all of those every year. We have already increased the rate, and we are working with the registry team on getting on track to do that.

And just over 10% of LIR organisations are synchronising their users to the maintainer, so they don't have to maintain their users in two places. A useful feature.

We reached a milestone in the NONAUTH database: we're finally below 50,000 objects, an 11% decrease in the past year. So it is slowly decreasing.

Just over 100 inetnums with AGGREGATED‑BY‑LIR, which was introduced last year. The intention of this status is to reduce the effort required to register and maintain assignments. It's a useful status that lets you aggregate assignments, and given there are more than 4 million assignments in the database, it seems like a good way to reduce that number.

Finally, we reduced the number of undeliverable e‑mail addresses in the database. We are dealing with soft bounces, where mail delivery failed temporarily; we now retry those after a day. Also, if an e‑mail address is marked undeliverable but is no longer referenced in the database, there is no point in keeping it recorded as undeliverable, so we have removed those as well. And there is an ongoing effort across the company to deal with undeliverable e‑mail addresses, so we will be seeing some improvements; we are working on it, not just in Whois.

There were some refinements to the acceptable use policy. There was a discussion last year about how we implemented the daily limit according to the acceptable use policy, and that was clarified; we got Board approval to make some changes. There are also some minor changes that we made at the same time to clarify the text of the acceptable use policy. It's now live on the website, and because our documentation repository is public, you can do a diff on the changes as well. They are all listed there.

We added a feedback questionnaire. This is something that's already live in the RPKI dashboard, and we're also adding it to the query page. We want to hear your feedback. There is a feedback form that asks a couple of questions: you can rate us out of 10, and there is space to put some text in too, to give us some feedback.

We're only showing it to logged-in users; we don't keep your e‑mail address, it's anonymous. We will be aggregating the scores across the company and feeding them into the company NPS scores that Hans Petter Holen was talking about yesterday. We are very interested in feedback to improve the service, so you don't need to wait for a RIPE meeting to let us know how we are doing.

API keys.
A very important alternative to authenticate with, available since January. You can create them using your RIPE NCC Access account. There is a new menu on the left-hand side called API keys, which allows you to create, list and revoke keys to authenticate updates. You link the API key to the maintainer using your SSO attribute, and you can use it with HTTP basic authentication in place of a password. Keys are restricted to the environment they are created in: you can also create them in test, RC and training, and the keys you create there won't be usable in production.
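
To make that concrete, here is a minimal sketch of an authenticated lookup over the REST API, assuming the key is supplied as HTTP Basic credentials as described above; the key ID/secret layout and the maintainer name are hypothetical, so check the database documentation for the exact form:

    import requests

    # Hypothetical credentials created via the API keys menu (not the real layout).
    API_KEY_ID = "example-key-id"
    API_KEY_SECRET = "example-key-secret"

    # Look up a maintainer object via the RIPE Database REST API.
    resp = requests.get(
        "https://rest.db.ripe.net/ripe/mntner/EXAMPLE-MNT",
        headers={"Accept": "application/json"},
        auth=(API_KEY_ID, API_KEY_SECRET),  # HTTP Basic auth, per the talk
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["objects"]["object"][0]["primary-key"])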

We do notify you before and after a key expires. And we have created extensive documentation: there is a Labs article, we have integrated it into the database documentation, and our learning and development colleagues have created a micro‑learning course on how to use API keys.

Upcoming changes. This is provisional; I am looking for feedback, and I'd be happy to make changes according to the feedback and the preference of the Working Group.

So, most important: we are getting rid of MD5 hashed passwords, and we have published an updated timeline. Now we are working on removing inactive passwords, that is, passwords that haven't been used since the beginning of 2024. About 17,000 maintainers are affected. We will start processing those next week; we have split them into four batches, and we will let those maintainers know in advance that this is happening, to give them an opportunity to remove the password or switch to a replacement.

About 2,000 of those have no alternative; if they do not take action they will be locked out of their maintainer. In order to deal with that, we have improved the forgot maintainer password process. Two thirds of requests to regain access to a maintainer are already successful, but we want to reduce the load on our maintainers and on the registry team, so we have made some improvements in that process to increase that rate.

After the RIPE meeting, we'll be working on active passwords: there are about 2,000 maintainers using passwords actively, and that accounts for just over a third of all updates. So it's important to give this group as much notice as possible, and after the RIPE meeting we'll be e‑mailing them as well, letting them know that passwords are going away and they need to switch to an alternative. There are some related changes: we won't be asking for passwords, we will be removing password authentication from mail updates, and we will be dropping plain HTTP from syncupdates.

Just a table of progress so far this year: authentication methods in use. A couple of things to note. Password authentication is not really going away; it is still stubbornly at just over a third of all updates. Interestingly, there has been a real increase in the uptake of client certificates. Unfortunately, API key usage remains at about 1%, so there is some work to be done to encourage adoption of alternatives before passwords go away.

OAuth 2.0 support: in the database team we are integrating support for OAuth 2.0 into Whois.

For RDAP, as I have said, we completed relation searches. Next we're going to work on reverse search; this maps to inverse queries in Whois, so something similar will be available in RDAP. We also want to work on bulk RDAP: we'll be providing a daily database dump, in JSON, of objects in RDAP format. The advantage is that it will be completely RDAP-native JSON, and an alternative to using NRTMv4 or the existing database dumps.
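
As an illustration, a plain RDAP lookup against the RIPE service looks like the sketch below. It only uses the standard RFC 9082 lookup path; the relation and reverse searches mentioned above add further paths (per RFC 9536) that I have not spelled out here:

    import requests

    # Standard RDAP lookup of an IP network (RFC 9082 path segment "ip"):
    r = requests.get("https://rdap.db.ripe.net/ip/193.0.0.0", timeout=30)
    r.raise_for_status()
    net = r.json()

    # RDAP responses are JSON (RFC 9083); an IP network object carries these fields:
    print(net.get("handle"), net.get("startAddress"), net.get("endAddress"))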

Transparency report. There was a request last year to report on the gaps between RDAP and Whois. We found the biggest gap is the lack of support for IRR types, and we are investigating with the other RIRs how to close that gap, but there's nothing to report yet.

Administrative status: this has been an issue with our implementation for a while; it's listed on the repository.
For lookups on space that is in-region but not allocated, and so reserved in the extended delegated stats, we will now return the parent IANA allocation /8 with status administrative, rather than not found.

And finally, there is another scenario where you may still get a not found in your RDAP responses: for resources transferred out of region, you may still get a 404, because the other region's delegated stats have not been updated yet. Right now we are only updating that list once a day, but since the other RIRs update their stats at different times of the day, we are going to move to checking hourly, to pick those up as soon as they are available.

NRTMv4 mirroring. I mentioned we implemented a client, and we consider it feature complete. The server has been running in production since 2023. We also submitted an implementation report for the IETF, and the draft RFC text is in last call, so there's been good progress there. One upcoming change is that we are going to add CDN support, because we expect increased uptake of this protocol and some of the resources are quite big, the snapshot in particular. The upcoming IRRd software will support NRTMv4, but you will need to make a choice.

One thing we have wanted to do for quite a while is to not apply the daily limit to your own data; it makes sense that users should not be accounted for their own personal data. If you are logged in with SSO or authenticated using an API key, we know who you are, so we will not count your own personal data against the daily limit. It's supported by the REST API, RDAP and the database query page. It feels like an obvious thing to do, but in reality it's not going to help in a lot of cases, because most queries are unauthenticated on port 43. But this is a reason to move to using authentication for queries.

The activity plan, just to mention our commitments for this year. There are a couple of things there; I think I have mentioned everything, but it looks like we are on track and we are addressing the commitments that we made for this year.
So, finally, I have six minutes left, and I just want to talk quickly about two proposals.

Firstly, looking at the statistics for the NONAUTH database: it's been around since 2018, and it contains non‑authoritative data, objects that were created without authenticating against any resources in the database. It contains a lot of aut‑num, route and route6 objects. No new objects can be added, there are very few updates every year, and it has slowly decreased in size, but it's not going away by itself.

So, I'd like to ask a question: is anyone still depending on the data in the NONAUTH database? The objects are returned by default, so the NONAUTH database is included in query responses: it will be checked if you are querying directly for the key, and you will get that data. And if you query for a prefix, for a resource, and there is an associated NONAUTH object, you will still get it. But counting the query responses, you are very, very unlikely to see an object from the NONAUTH database unless you are looking for it. We have separate database dumps and split files for NONAUTH, and we have a separate NRTM source. Finally, ARIN retired their NONAUTH database in 2022. I'd like to ask the community: is it time to retire RIPE-NONAUTH, or is it a bit premature?
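
For anyone who wants to check their own dependence on it, here is a small sketch of querying the RIPE-NONAUTH source explicitly on port 43; the "-s" flag selects the source and "-r" suppresses related contact objects:

    import socket

    def whois_query(query: str, host: str = "whois.ripe.net", port: int = 43) -> str:
        # Send one query over the plain whois protocol and read until EOF.
        with socket.create_connection((host, port), timeout=30) as s:
            s.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # Ask the NONAUTH source directly for objects covering an example prefix.
    print(whois_query("-r -s RIPE-NONAUTH 193.0.0.0/21"))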

So I'd appreciate your feedback on that. Secondly, UTF-8. I presented this at RIPE 89; thank you for your feedback. The most frequent feedback from the community was to focus on a subset of UTF-8, to pay attention to normalisation, and that human interoperability is very important. So, in supporting UTF-8, we don't want to make it harder to interoperate using the database.

There has been a lot of history around internationalisation in the database, and I'd just like to point out that clearly, over the years, there has been a lot of interest in it. We haven't made a lot of progress, but I believe we can learn from the feedback that we got in the past, and whatever solution we come up with needs to take this history into account.

And finally, the proposal: I'd like to propose that we allow UTF-8 somewhere, to begin with in the descr and remarks attributes. I promised a problem statement at the last RIPE meeting, and that is something I will publish after this RIPE meeting. The functional value of this change would be to allow operators to add localised messages in their own language for their region. The technical reason would be to put UTF-8 somewhere in the database, but at minimal scope and impact for the RIPE NCC and for the community.
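
To illustrate why the normalisation feedback matters, here is a small sketch: the same visible text can be encoded with different Unicode code points, so a database would likely normalise values (for example to NFC) before storing or comparing them; whether the RIPE Database would pick NFC is an assumption here:

    import unicodedata

    # "é" as one precomposed code point versus "e" plus a combining accent:
    remark_a = "r\u00e9seau de test"    # U+00E9
    remark_b = "re\u0301seau de test"   # U+0065 followed by U+0301

    print(remark_a == remark_b)         # False: different code points, identical look
    print(unicodedata.normalize("NFC", remark_a) ==
          unicodedata.normalize("NFC", remark_b))   # True after NFC normalisation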

We want to deliver something that's of value to our users. And we want to gain some experience with UTF‑8.
And to stress, names and addresses are not affected; we do not want to touch those yet, as that is going to require a lot of work. And specifically, it's not intended to internationalise personal data. You should already not be putting personal data into descr or remarks attributes, because we're not accounting for them in the daily limit and they are not filtered in responses.

We do not want to impact RIPE NCC operations, so we propose a review before we make any changes internally. And we don't expect an impact on database functionality, but this is something for discussion.

As I said, I'd like to publish a problem statement on this for discussion, and leave future work to support other data types or attributes. So, I'd welcome any feedback.

So that's it. Thank you for your time. Questions and comments?

(Applause)

I don't see anybody. Before I disappear.

AUDIENCE SPEAKER: Kenji Shioda, online. You were explaining ‑‑ you were explaining a lot of reasons to discard the NONAUTH database. Can you think of any reasons to keep it? Thank you.

ED SHRYANE: That's a good question, I don't think it's for me to say, there is an overhead in keeping it, so there are reasons to get rid of it as you said. But I'd like to know if anyone is depending on it to come forward and let us know. It's already been retired from the ARIN region, seemingly without impact. So, it would be good to know before we do similar, what the impact could be in this community. Thanks for the question.

RUDIGER VOLK: I was a former user of this. Since ARIN only very recently introduced their RPSL IRR, their NONAUTH was also really recent, so that very short-term, temporary thing I think does not predict anything about the longstanding data that has been split out into RIPE NONAUTH. And since I am not in operations anymore, I cannot tell whether the customer relations that actually meant NONAUTH was a usable database are still in place. The RIPE NCC has a tradition of looking at shared data and actually analysing whether something that's supposed to go away is still being used, and to what extent. I would suggest considering what you can do in that direction, or else I would see a pretty big danger that someone is actually depending on it and will not notice that you are asking whether anybody is using it.

ED SHRYANE: Thank you for the context, Rudiger, I think it's very important. We can only see from the query rate how much of the data is being returned. But there has been a lot of data there for a long time, and it's not going away by itself, so we do need to be very careful before we take action.

RUDIGER VOLK: If ‑‑ and I don't remember whether you decided to deliver the NON‑AUTH objects by default when querying the database ‑‑ it probably would be a useful thing to not deliver them by default. If users have to actually ask for it, you would see that. I am sure the tools that I left behind, which probably are still used in some way, actually decided explicitly per customer relation whether that customer relation was using NON‑AUTH or not. And you would see that in your logs.

ED SHRYANE: Yeah, we could certainly do that. One change we could make is to not return the related route object, for example; something the user may not have asked for or want. But it means there would then be a behaviour change between the RIPE Database and the NON‑AUTH database. So, something to be considered. Thank you.

AUDIENCE SPEAKER: We are talking about other registration data access services, and we have this problem in a more global way, because the structure, or the idea behind the service, is to centralise data to allow bulk access. So we are moving data out of the original legislation area and putting it in another one, and that's quite difficult. We have the difficulty at ICANN that we can't introduce a system like IANA is doing, where the customer refers to the contract they have and says: here you have the next server, please have a look there. That works much better.
It's possible as a mid-term consideration, not short-term. It would be fine if RIPE offered, for its region, because we also have different legislation areas here, the ability to store some information in delegated form, so that we can show that a distributed database is possible. That would influence the discussion about the Whois information, or the registration data information, at the ICANN level.

ED SHRYANE: Okay, if there are no more questions, I'll leave it at that.

DAVID TATLISU: We have our next speaker up now.

ADONIS STERGIOPOULOS: Good morning everyone. I work for the RIPE NCC; I am part of the software engineering team, and today I want to talk to you about OAuth. It was mentioned on a slide yesterday at the NCC Services Working Group, so let's see what OAuth is.

So, OAuth stands for open authorisation, and it allows certain parties to access and manage resources on behalf of other users without exposing their credentials. You have probably used OAuth in some way yourself, with or without realising it. An example is if you are trying to use Spotify: instead of creating an account with a username and password, you use another provider, for example Google, to sign in. Your credentials are with Google and never leave their environment.

So, this is a standard. As Ed mentioned, since this year, with the deprecation of passwords, we offer API keys, and as an alternative to that, after discussions on the mailing list and speaking with members of the community, we also want to enable an OAuth solution. We see those two services living next to each other; one is not a replacement for the other.

So, the difference between API keys and OAuth. First, with API keys, a user generates a key themselves and then shares it, which may cause problems if it's shared with unauthorised parties or by other means. With OAuth, this process is done through an authorisation flow, so no actual key is shared, and it's less prone to this type of error.

When it comes to application identity, with OAuth each client app gets a client ID and potentially a client secret, so we know who is accessing what and there is an audit trail. With API keys, there is no such built-in mechanism to see who is doing what.

With scopes, both API keys and OAuth have fine-grained permissions, such as read-only or write access for specific parts of the RIPE Database. In terms of lifetime, API keys are configurable up to one year; with OAuth, we use tokens: an access token can have a lifetime of up to one hour, and a refresh token of up to one year.

A little bit more on OAuth. It works with authorisation flows: this is the process by which a client app obtains authorisation from a user to access protected data on a resource server. In the case of the RIPE Database, that means the steps a client app takes to access and edit the data of the RIPE Database on behalf of a user.

So, in the past few months we have conducted some user interviews to see what members of the community would like from an OAuth solution. They expressed that they would like to authenticate on behalf of other users, make changes through a web application, have support for simple command line scripts, have minimal user intervention, and have different scopes; the scopes are the different permissions.

And based on these requirements, we looked at the authorisation flows that are available, and I'm going to go through each one in more detail. The first one, on the left, is the authorisation code flow with PKCE. This is recommended for web applications, mobile applications and single-page applications. It is where a user logs in via a secure browser redirect and the app exchanges a special code for access and refresh tokens. This flow is also protected against code interception attacks via PKCE, which ensures that the application that starts the authentication flow is the same one that finishes it.
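
For reference, the PKCE part is a small amount of client-side code; this sketch follows RFC 7636, where the challenge is the base64url-encoded SHA-256 of a random verifier:

    import base64
    import hashlib
    import secrets

    # Random verifier (43-128 chars allowed; 32 random bytes gives 43 chars).
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

    # challenge = BASE64URL(SHA-256(verifier)), sent with code_challenge_method=S256.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

    # The app sends code_challenge on the authorisation request, and later proves
    # possession by sending code_verifier on the token request.
    print(code_challenge)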

Then the second one is the device code flow, which is designed for tools or services that don't necessarily have a browser. The app shows a specific URL and a code, and the user has to use another device to authenticate and grant access.

There are some security concerns with this option, but one of the good things, though some might say it's a bad thing, is that you can't authenticate on behalf of other users. You have seen it, for example, on TVs, where you are trying to log in and you have to use your phone or another device to do so.

Then the third flow, the client credentials flow with token exchange, is for machine-to-machine communication. The app authenticates itself and requests a token, and that token is then exchanged for a token with different permissions. This flow has a lot of flexibility, and it can be set up in a way that the user can have edit or write access with different policies, but this flexibility comes with a cost, because it requires quite some development work on your end and also from the RIPE NCC. Looking at these three flows, the authorisation code flow with PKCE and the device code flow work out of the box with the Keycloak that we use, but the client credentials flow would have to be built from scratch.

So, based on the requirements that I mentioned on the previous slide, we matched the different flows, and all of them can work. All of them can be adopted, but of course there are pros and cons to each one. From our standpoint, and from the Open House last month and the demos and conversations we have had here, we are thinking of starting with the authorisation code flow with PKCE as the first solution, because it offers interactive and offline use, and the sessions can be app-specific or for all RIPE NCC apps. It's quite easy from a user point of view: they log into a portal, provide their credentials, and then authorise the client app to make changes on their behalf. For example, here is a preview of what this authorisation window can look like.

In terms of development, as I said, this comes with Keycloak, so it sort of works out of the box, and we also want to offer support for command line scripts. In terms of security, it has an additional layer by using PKCE. We can go ahead and offer it for use at the end of the year. I am going to go into a little bit more detail about this one.

In terms of how it usually works: you have an end user that initiates the process, and they visit a client app; that app needs to be developed by you. If they are not logged in to this app, they will be sent to the authorisation server of the RIPE NCC, and then they click on the button as I showed earlier, where they have to log in with their RIPE NCC Access account. They will enter their username and password, go through the 2FA process, and then they will have to give consent to the client app to perform these actions. Of course, if they deny, the flow stops.

So, after they give their consent, the authorisation server checks if the authorisation went well, and then a two-way interaction starts between the client application and the authorisation server: an authorisation code is sent back and forth, and PKCE secrets are exchanged. If everything is found to be secure by Keycloak, the authorisation server will return an access token and a refresh token, which are stored in the application. I am going to explain more about these tokens in a minute.

So, when an access token has expired, you can use the refresh token stored in the client application to request a new access token. And if you have a valid access token, you can call any of the RIPE Database API endpoints: provide the access token in the HTTP Authorization header, and you'll get a response.

So, with OAuth there are two main types of token. The first, which is the default, is the access token. This is a short-lived credential that grants access to protected resources; it's sent with each API request, and the proposed lifetime for us is one hour. Then we also want to enable a refresh token, also called an offline token, which is a long-lived credential used pretty much to get another access token. This is stored in a safe location in the client application that I explained earlier in the architecture, and the proposed lifetime for it is 365 days, which matches the API keys' lifetime.
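
As a sketch of the refresh step: the token endpoint below follows Keycloak's standard layout, but the hostname, realm and client_id are assumptions, not the real RIPE NCC values:

    import requests

    TOKEN_URL = "https://auth.example.net/realms/example/protocol/openid-connect/token"  # assumed
    CLIENT_ID = "example-client"  # assumed

    # Exchange the long-lived refresh token for a fresh short-lived access token.
    tokens = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "client_id": CLIENT_ID,
        "refresh_token": "stored-refresh-token",  # kept safely in the client app
    }, timeout=30).json()

    # Use the access token as a Bearer credential in the HTTP Authorization header.
    resp = requests.get(
        "https://rest.db.ripe.net/ripe/mntner/EXAMPLE-MNT",
        headers={"Authorization": f"Bearer {tokens['access_token']}"},
        timeout=30,
    )
    print(resp.status_code)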

When it comes to the expiration time of these tokens, there is no standard; it's always a trade-off between security and usability. A short expiry means high security, because a stolen token can only be used for a short period of time. The con is that with more token requests there is more network traffic to generate those access tokens.

On the other hand, a long expiry is better for users, because they are logged in for an extended period of time, reducing the risk of disruptions. But again, that comes with a cost: if a token is stolen, it can be used for longer, and it's harder to detect that it has been stolen.

So, we would like your input: we are still gathering requirements to see if this solution would work for you, not just the authorisation code flow but an OAuth solution altogether. We are currently looking for volunteers for the first flow, because that's almost ready to go and we would like some testers, so please get in touch with us. But we also still want feedback on the rest of the authorisation flows.

We are also doing some demos this week, so you can book a demo here in Lisbon and we can show you a dummy client app that we have created and how that can work. Or, from next week, we can do this online: you can scan the URL and book a slot.

So, any questions or comments?

PETER HESSLER: Please come up if you want to make a comment or have a question, and state your name and affiliation. I see nobody rushing to the microphones; it sounds like it was very clear and everyone understood perfectly. Thank you.

(Applause)
Hello everyone. I am going to go through the NWI review real quick. As you may remember, last year during the summer we started reviewing our existing NWIs. There was a discussion around NWI 2 and NWI 17, both of which concern historical data within the RIPE Database. Unfortunately, as you may have noticed, no progress was made on this. What I'll be doing next week is going through and summarising the discussion that we had last year, and then bringing that to the Working Group next week or the week after, so we can hopefully finish these up and continue the discussion.

If you go to the Database Working Group web page on the RIPE NCC website, there is an entry on the left-hand side called "Action list". This list was last updated on 7 May 2009, and I am proposing that we retire it. I will send a link out to the mailing list so you can review it; you have until the end of Friday next week, so you don't have to check it during the RIPE meeting.

As far as other NWIs that we have not yet discussed: NWIs 15, 16, 17 (which I already mentioned) and 18 are NWIs created via the Database Task Force recommendations. We would like you to review these in your own time, and once we have finished with NWI 2 and NWI 17, we would like to take these up together.

Potentially, we will combine the discussion around NWI 15 and NWI 18: NWI 15 is baseline requirements for registration information, and NWI 18 is operational content information, so these are somewhat related.

Does anyone have any comments they want to make on any of the existing NWIs, or any NWIs you would like to create within the near future?

Sounds like everyone is pretty happy, which we love to see.

Is there any other business people would like to cover or discuss? Oh yes, thank you. Randy Bush has sent an e‑mail to the mailing list about publishing end-site prefix lengths. This is work being done in the IETF; right now there is a draft, and Randy thankfully sent a link to the draft that's being worked on. We are quite comfortable recommending that Ed and the team follow the IETF process on this. There are some minor changes to database attributes requested as part of it, to be implemented once there is consensus at the IETF. Is the Working Group comfortable with this? Does everyone like it? Looks like everyone agrees, so thank you.

SPEAKER: If Randy wants to talk to his proposed NWI a little bit, we have a lot of time on the agenda.

RANDY BUSH: I thought I had escaped there for a minute. How many people here remember geofeed? That's interesting. It's in addition ‑‑ let me back up.

Providers of captchas, the web thingy that says "are you human" and has you educate their AI on what a bicycle is, and people who are sending content like video streams etc., want to know if this user is an abuser or legitimate. So the reputation issue can be tied to address space. And if I am attributing reputation to an address, is that just that /32 or /128, or is it the prefix, and how long is that prefix? So as a provider, I would like to tell that video site: hey, I slice up my network by /24s in every city, right, or whatever the width is. So what is proposed here is, like the geofeed hack, that the RPSL, the inetnum or the inet6num, points to a file which lists the address ranges and the prefix length I am using in each address range. And then there is a tool ‑‑ you don't want the video stream provider hammering the RIPE Database for every access ‑‑ a tool which the video provider, or whoever, can use, which gets the bulk JSON over RDAP or FTP, if you are still doing that, and provides the video provider a bulk summary of the data for the entire, all five regions, etc. So what this is doing to the database is essentially adding an attribute, call it "prefix-len", which has an HTTPS link to a file, and that file is a description of the address ranges and prefix lengths. And if you want to see how, we have already done it with the geofeed attribute, so if you want to understand the hack, go to geofeed in the RIPE Whois documentation. My friend Rudiger wants to adjust my attitude?
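
For a sense of what this could look like, here is a sketch of an inetnum object. The geofeed attribute already exists in the RIPE Database (RFC 9092); the prefix-length attribute name and file URL below are hypothetical, based on the draft as described, and may differ from what is eventually specified:

    inetnum:     192.0.2.0 - 192.0.2.255           # documentation range, example only
    netname:     EXAMPLE-NET
    geofeed:     https://example.net/geofeed.csv   # existing mechanism (RFC 9092)
    remarks:     the next attribute is hypothetical and its name may differ
    prefix-len:  https://example.net/prefixlens.csv
    source:      RIPE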

RUDIGER VOLK: No. A question: for geofeed, we have the definition and semantics in an RFC, if I am right.

RANDY BUSH: It's in a draft for prefix length.

PETER HESSLER: There is a draft on the IETF mailing list right now. Randy, which Working Group at the IETF is it, do you remember?

RANDY BUSH: Not fair asking me questions. The Ops Area Working Group. It's been accepted and we are just in the final stages.

PETER HESSLER: There is a link to the data tracker.

RANDY BUSH: But if you want I can publish it to the list.

PETER HESSLER: I believe I did this week. I do have one question, and I don't know if it's in scope for this proposal or not: do you also include how many CGNAT users may be behind a prefix?

RANDY BUSH: Yes, there is a section on CGNAT.

PETER HESSLER: Excellent, because I know that's been difficult to communicate.

RANDY BUSH: Yes.

PETER HESSLER: Thank you very much. Okay. David, please go ahead.

DAVID TATLISU: We have got a little bit of an organisational announcement. This is William's last term as a Database Working Group co-chair, so Peter and I will be continuing as a two co-chair team. And as a little bit of a thank you for 17 RIPE meetings, or eight and a half years of database chairing, I have got a couple of nice things for you. I walked around this entire conference asking basically everybody for some nice words and some little things of appreciation for you.

(Applause)

WILLIAM SYLVESTER: So, when I joined the Database Working Group, we had a lot of turmoil on the mailing list and a lot of other things going on, and at the time we had a policy that said when you choose chairs, if there were more candidates than the number of seats available, they were drawn out of a hat. That was something we changed very quickly afterwards. But at the time Piotr, who is currently on the Board, was the only chair who had any experience, and he was not the one that got selected out of the hat. And so the rest of us have been sort of figuring things out over the better part of a decade, and I think we have accomplished a lot of great things. They were talking about geofeed and a few other things; that was NWI 13, for those who were paying attention. We have accomplished a lot, and I think Peter and David are definitely well equipped and better off than we were when we started. Thank you everyone; it's not an end but a beginning, so thank you.

(Applause).

PETER HESSLER: Okay. And before we finish up, there is a few announcements I'd like to make.

The NomCom will hold office hours during the second half of all lunch breaks, from 13:15 until 14:00 local time, in the room upstairs where lunch is: go through the glass doors and follow the noise.

Speak in person to the committee if you would like to.

Additionally, you can share feedback on ICP-2 at the meet-and-greet desk. The RIPE PC election vote will close at 17:00 local time today; please remember to vote. You only need a RIPE NCC Access account, which is free and does not require membership.

Speaking of which, if you are a member, please remember to vote in the GM. Additionally, PT NOG will have a feedback meeting in the side room, over there, from 12:30 to 13:00. We would like to thank, of course, Ed and Adonis from the RIPE NCC for their presentations and work on the database.

I would like to thank Hans for being the scribe, and Ties for the chat monitor. Of course I'd like to thank the tech team and the stenographers. And we'd like to thank you for attending the RIPE Database Working Group at RIPE 90, and we look forward to seeing you next year.
Lunch break