03/24/2026 | Press release | Distributed by Public on 03/23/2026 23:24
For some time, I have been looking after a routing analysis report called the CIDR Report. I'd like to explain the reasons for this report, what is in the report, and share some thoughts as to its usefulness today to the Internet routing community.
Why the CIDR Report?
To place this into context, we need to head back to the Internet of the 1980s, when a US ARPA-funded research project investigating wide-area packet switching technologies transitioned into the core platform of the entire global digital communications environment. The US National Science Foundation decided to act as the lead funding agency for a national research backbone computer network. US academic and research institutions were connected to national supercomputer centres and each other via a two-tier structure of eleven regional networks and the NSF-supported NSFNET backbone, all using the IP protocol.
It wasn't the only national research and scientific backbone network at the time - NASA operated the NASA Science Internet (NSI), and the US Department of Energy supported the High Energy Physics Network (HEPNET). Both initially ran on the DECnet protocol, but were upgraded with multi-protocol routers to add IP support. This rapid expansion created new routing challenges, and the early Gateway-to-Gateway Protocol (GGP) was struggling with growing pains.
In January 1989, IBM's Yakov Rekhter and Cisco's Kirk Lougheed came up with the Border Gateway Protocol (BGP). It was a simple distance vector protocol with four critical aspects:
The protocol design of BGP resonated with the emerging Internet. Each component network could operate autonomously, including the choice of internal routing protocol, and could use BGP to set inter-network routing policies used in connecting to other networks. In this sense, they were 'autonomous' networks. The self-organizing structure of peer networks in BGP was a very good match to the needs of the emerging Internet, and this has continued to this day.
Addresses - from class to classless
A routing protocol passes reachability information about endpoint addresses, and the internal structure of the network's address plan matters to the architecture of the routing protocol.
The Internet's address plan was like the E.164 international numbering plan for the global telephone network, where:
The difference between these two address plans lies in the definition of their component networks. In the E.164 telephone address plan, each component network was a national telephone system, so there was a 'natural' limit of about 200 such network prefixes, corresponding to the number of national telephone networks. The Internet started from a different point, where component networks were interconnected local area networks. If an entity, such as a university or research institute, operated several distinct local area networks, then it used several distinct network prefixes.
A more significant point of difference was IP's choice of a stateless packet forwarding paradigm, avoiding the support of dynamic virtual circuits. This allowed for far simpler (and cheaper) networks, as they did not incur the overhead of supporting the operation of virtual circuits as overlays on the underlying network. Stateless packet forwarding requires that every packet have its destination address inscribed into the IP packet header. The constraints of packet forwarding capacity came down on the side of a fixed-length IP address that contained both the network prefix and the local device address.
The network's architects were faced with an almost intractable problem: How to partition a fixed-sized address into network prefix and device address parts so that the diversity of local networks (large and small) could be encompassed.
Before 1994, the address plan used by the Internet was what we called class-based addressing, and the 32-bit address pool was divided into three sizes of address blocks. There were 126 distinct Class A 8-bit network prefixes, each able to encompass 16,777,216 individual device addresses; 16,384 Class B 16-bit network prefixes, each with 65,536 device addresses; and 2,097,152 Class C 24-bit network prefixes, each with 256 device addresses.
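The class of an address, and hence the split between network prefix and device address, was implied by the leading bits of the address itself. A minimal sketch of this classful decoding (the function name is my own, purely for illustration):

```python
def classful_prefix(first_octet):
    """Return (class, prefix-length) for an address under the
    pre-1994 classful plan, given its first octet."""
    if 1 <= first_octet <= 126:      # 0 and 127 were reserved
        return ("A", 8)              # 126 networks, 2**24 device addresses each
    if 128 <= first_octet <= 191:
        return ("B", 16)             # 2**14 networks, 2**16 device addresses each
    if 192 <= first_octet <= 223:
        return ("C", 24)             # 2**21 networks, 2**8 device addresses each
    return None                      # Class D/E: multicast and experimental space

print(classful_prefix(18))    # ('A', 8)
print(classful_prefix(130))   # ('B', 16)
print(classful_prefix(200))   # ('C', 24)
```

The rigidity is the point: a site too big for a Class C had no option between 256 and 65,536 addresses, which is why Class B space was consumed so quickly.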
As the Internet expanded into the research community in the late 1980s, Class B addresses were consumed rapidly and faced exhaustion by the end of that decade. We needed to shift away from a static class-based address plan within the Internet, including changing the routing protocols that we were using.
Scaling routing
The number of routes (network prefixes) being passed into the inter-domain routing space was rising rapidly. The stateless datagram IP architecture meant that every BGP router that connected to the inter-domain space was carrying a full routing table in its forwarding memory structures.
As the number of routes increased, routers needed more memory to hold them and more time to perform address lookups. At the same time, increasing circuit speeds in wide-area networks were reducing the available per-packet processing time for routers, requiring more work in less available time.
A view of the size of the Internet's routing tables up to the end of 1994 is shown in Figure 1. The period from 1988 to January 1994 uses monthly reports of the size of the BGP routing table in the NSFNET from Merit, the operator of the NSFNET backbone. From January 1994, the data series shifts to an hourly collection. This six-year view includes a significant transition for the Internet in March 1994, when BGP-4 was widely deployed, doing away with the class structure in IP addresses in favour of 'Classless Inter-Domain Routing' (CIDR).
The period from January to March 1994 saw a 25% growth in the size of the routing table, and the introduction of CIDR in around March 1994 saw the size of the routing table drop from 20,000 entries to 18,000 over a couple of weeks.
From 1995 to 1999, the routing table continued to grow, but the growth rate of the routing table was far lower than the peak growth rate observed at the start of 1994. The growth trend from 1995 until 1999 was a largely linear trend, growing by about 8,000 additional routing entries per year. Slower growth in the routing table could be accommodated within the operational life cycles of core routing equipment, so up until the start of 1999, the growth rate of the routing system was not a cause for significant concern (Figure 2).
The picture changed once more at the end of the 1990s. The two-year period across 1999 and 2000 was the peak of an Internet boom, and the routing table had resumed an exponential growth trend. This was a short-lived period of Internet euphoria, and the ensuing bust appeared in the routing table in 2001, where the routing table held steady at 105,000 entries across all of 2001.
This period of stasis was short-lived, however, and routing table growth resumed exponentially in 2002. One driver at this time was the adoption of DSL 'always on' residential services in the consumer markets of many economies at the start of that decade. In 2007, the Apple iPhone was launched, expanding the Internet into mobile services. The Internet's routing table grew by 100,000 entries, a growth rate of 50%, between 2007 and 2010.
More specific routes
What was driving this growth in the routing system? The explosive introduction of mobile devices into the Internet was one of the driving factors, but this growth pressure was moderated by the increasing use of Network Address Translation (NAT) in these mobile access networks. Increasing use of overlapping more specific route advertisements, a routing practice that is enabled by CIDR, was an important factor. For example, a route for a /24 prefix drawn from within a /16 address block is a more specific route of the covering /16 aggregate.
Why would network operators do this? The BGP path selection algorithm will select a more specific route in favour of a covering aggregate route. For example, a network with external connections to providers A and B may want to balance the incoming traffic across the two connections.
One way of achieving this is for the local network to advertise the covering aggregate across both connections, but then augment this with more specific route advertisements to the provider where you want to increase the incoming traffic volume. Where a network has a rich set of external connections and wants to optimize the incoming traffic volume across these connections, the use of more specifics can be a very effective means of traffic engineering.
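The mechanism that makes this traffic engineering work is longest-prefix matching: of all the routes covering a destination, forwarding follows the most specific one. A small sketch, using hypothetical provider labels and documentation prefixes of my own choosing:

```python
import ipaddress

def best_match(table, addr):
    """Longest-prefix match: of all routes covering addr,
    select the one with the longest prefix length."""
    addr = ipaddress.ip_address(addr)
    candidates = [(p, nh) for p, nh in table.items() if addr in p]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

# A network advertising its aggregate to both providers, plus a
# more specific to provider B only, to pull traffic towards B:
table = {
    ipaddress.ip_network("192.0.2.0/24"): "provider-A-or-B",   # covering aggregate
    ipaddress.ip_network("192.0.2.128/25"): "provider-B",      # more specific
}

print(best_match(table, "192.0.2.10"))    # provider-A-or-B
print(best_match(table, "192.0.2.200"))   # provider-B: the more specific wins
```

Traffic to the lower half of the /24 is balanced across both providers by the aggregate, while traffic to the upper half is steered to provider B by the /25.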
Defensive routing provides another motivation. An attacker may inject more specific routes for a target's address prefixes into the routing system to divert traffic, and thereby steal all the traffic being directed to these addresses, taking advantage of BGP's preference to use more specifics over aggregates. By defensively advertising more specific routes, the potential damage radius of a hostile more specific route can be minimized.
We can look at the total number of more specifics in the BGP routing table between 2000 and 2009 in Figure 4.
In the year 2000, more specifics accounted for 55% of the total count of routes in the Internet's routing table, yet they encompassed less than 10% of the advertised address span. Had this trend continued unabated, the routing table would quickly have grown to sizes that exceeded the capabilities of the routing hardware then available to network operators, which would have been a crisis point for the Internet.
This is a classic form of the 'Tragedy of the Commons'. A network operator's individual self-interest lies in exercising greater control over traffic flows, and greater resilience against hostile routing attacks, through the advertisement of more specific routes into the inter-domain routing system. The collective outcome of these individual actions is a bloated inter-domain routing space, leading to a routing table large enough to make the Internet simply unrouteable.
There are a few ways that we can exercise control over this form of behaviour by network operators. Don't forget that the design of BGP means that there is no overall control function and no one is in control. It's a network of peer networks. Collective action by tens of thousands of individual network operators to limit the extent of advertisement of more specific routes across the global Internet is not an available option.
The CIDR Report began as a way of increasing the level of public awareness of network operators' routing practices, illustrating their networks' contribution to the current state of the Internet's routing table. If we can't directly control the use of more specific advertisements in the inter-domain routing space, we can expose the extent of this behaviour and name those network operators that are excessively advertising these more specific routes.
The hope here is that some form of self-moderation or peer pressure might influence these network operators to reduce the impact of their network's advertisements within the larger picture of the Internet's inter-domain routing environment.
And that's the rationale for the CIDR Report.
Let's now move on to the report itself.
What is in the CIDR Report?
The CIDR Report was introduced in the late 1990s. As we've noted, the intention of this report was to make the impact of the advertising of more specific routes into the global BGP routing system more visible, and to identify networks whose routing practices were significantly bloating the Internet's inter-domain routing system.
The CIDR Report was first produced by Tony Bates, then taken on by Philip Smith, who then passed the baton to me. These days, the report operates on a platform provided by APNIC, using snapshots of the global routing system assembled by a BGP speaker at AS 131072, an Autonomous System Number (ASN) used by APNIC Labs. The report is produced daily.
A snapshot of the header of the report is shown in Figure 5.
The report has five parts:
Status summary
The status summary is a condensed summary of the state of aggregation in the BGP routing table. The table shows, for each day over the past week, the size of the BGP IPv4 routing table and the size it would be if all redundant more specifics were removed, together with a plot of the hour-by-hour total size of the routing table for that week (Figure 6). There is an IPv6 version of the CIDR Report, containing the same information for the IPv6 inter-domain environment.
A redundant more-specific route is a prefix advertisement that has an identical AS-PATH to that of its immediately encompassing aggregate route. In terms of the application of local routing policies, the more specific route does not have any tangible impact on the local forwarding function. In March 2026, 461,596 advertised routes fall into this class of more specific routes.
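This test can be expressed compactly: a more specific is redundant when its closest covering route carries an identical AS-PATH, since removing it would not change any forwarding decision. A sketch of the check, using ASNs from the documentation range as hypothetical path elements:

```python
import ipaddress

def redundant_more_specifics(routes):
    """routes: dict mapping ip_network -> AS-PATH (tuple of ASNs).
    Return the prefixes whose immediately encompassing aggregate
    carries an identical AS-PATH."""
    redundant = []
    for prefix, path in routes.items():
        # Find the most specific covering route other than the prefix itself
        covers = [p for p in routes if p != prefix and prefix.subnet_of(p)]
        if not covers:
            continue
        aggregate = max(covers, key=lambda p: p.prefixlen)
        if routes[aggregate] == path:
            redundant.append(prefix)
    return redundant

routes = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 64501),
    ipaddress.ip_network("203.0.113.0/25"): (64500, 64501),    # same path: redundant
    ipaddress.ip_network("203.0.113.128/25"): (64500, 64502),  # different path: kept
}
print(redundant_more_specifics(routes))   # [IPv4Network('203.0.113.0/25')]
```

The report applies this logic across the full table to produce the counts quoted above.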
This initial section of the report also contains some statistics relating to the number of ASes in the routing table (Figure 7).
Aggregation summary
The next section of this report lists the 30 networks that have the highest count of these redundant more specific route advertisements (Figure 8).
Each AS in this list points to an individual report that shows suggested route withdrawals that could minimize the routing footprint of this network, while preserving the intended functionality of the network's traffic engineering policies.
Last week's changes
The next section looks at the changes to the routing table that occurred in the past seven days. This is a section with multiple lists.
The first list shows those ASNs that originate routes that were not visible in the routing table seven days ago, together with the number of routes they are currently originating. The second lists those ASNs that have increased their total number of originated routes over the past week, ordered by the count of additional routes, and the third lists those ASNs that have reduced their total number of originated routes, ordered by the count of withdrawn routes. The next two are ordered lists of the networks that have added and removed routes in the past week.
Finally, this section looks at the total number of route additions and removals by prefix size.
More specifics
A list of route advertisements that are more specific than the original class-based prefix mask, or more specific than the registry allocation size. There was a view at one point that routing announcements should align with the prefix sizes that were allocated or assigned to the network. This view is no longer held within the routing community.
Possible bogon routes and AS announcements
A bogon is a route describing an address block that is not currently assigned or allocated by a Regional Internet Registry (RIR) to any network. This section of the report lists all those route objects that contain an address prefix that is not currently assigned or allocated. The route descriptions also list the network AS numbers that originate these routes into BGP.
The second part of this report section lists bogon ASes, and the immediate upstream AS that is observed to be propagating a path for the bogon AS.
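The prefix-level bogon test itself is a simple coverage check against the set of RIR-delegated blocks. A sketch, using a deliberately tiny hypothetical sample of allocated blocks (the real delegation list is far longer and is published by the RIRs):

```python
import ipaddress

# Hypothetical sample of RIR-allocated blocks, for illustration only
allocated = [ipaddress.ip_network(p) for p in ("192.0.0.0/8", "203.0.0.0/8")]

def is_bogon(prefix):
    """A route is a bogon if no allocated block covers its prefix."""
    net = ipaddress.ip_network(prefix)
    return not any(net.subnet_of(a) for a in allocated)

print(is_bogon("203.0.113.0/24"))   # False: inside an allocated block
print(is_bogon("10.0.0.0/8"))       # True, relative to this sample set
```

The report performs the equivalent check against the current delegation data, and records the originating ASN alongside each offending route.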
And that's the CIDR Report.
Evaluating the CIDR Report
Has the CIDR Report made a difference to the state of the inter-domain routing system over all this time?
One way to evaluate this is to look at the data for more specific routes. We can look at the long-term ratio of the number of more specific routes to the total route count, and the plot of this data over the past decade is shown in Figures 9 and 10.
These numbers make it clear that the prevalence of more specific routes is not improving in either IPv4 or IPv6. This suggests that whatever influence the CIDR Report once had on routing practices more than twenty years ago has largely faded.
This report was examined in detail in a 2011 academic study by Stephen Woodrow at MIT, with a summary presented at NANOG 53 that same year. He concluded that while the CIDR Report had a meaningful impact in its early years, its influence had steadily declined, leaving it with limited relevance in the routing community even in 2011 (and arguably, the report has far less relevance 15 years later).
The simple observation is that we've managed to come to terms with the Internet's large routing table. While a smaller routing table could generate some efficiencies in routing and forwarding, there is no collective incentive strong enough to motivate individual networks to strictly manage their announcements or significantly reduce the number of more specific routes.
Part of the larger routing scaling issue was attempting to perform destination address lookups in routing hardware within the elapsed time of processing each packet. The combination of higher line speeds and larger lookup tables exacerbated this problem. Faster hardware in routers, particularly relating to high-speed content-addressable memory, made things better.
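As a back-of-the-envelope illustration of that shrinking time budget, assume worst-case minimum-sized 64-byte Ethernet frames plus 20 bytes of preamble and inter-frame gap, so 84 bytes (672 bits) on the wire per packet:

```python
# Per-packet processing time budget at various line rates, assuming
# back-to-back minimum-sized frames: 64 bytes of frame plus 20 bytes
# of preamble and inter-frame gap = 84 bytes (672 bits) per packet.
WIRE_BITS = 84 * 8

for gbps in (1, 10, 100, 400):
    pps = gbps * 1e9 / WIRE_BITS                 # packets per second
    print(f"{gbps:>4} Gb/s: {pps / 1e6:8.1f} Mpps, "
          f"{1e9 / pps:6.2f} ns per packet")
```

At 100 Gb/s this works out to roughly 149 million packets per second, leaving well under 10 ns to complete each destination lookup, which is why content-addressable memory and similar hardware assistance became essential.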
A more recent response by the vendors of large high-speed routers is to perform their own form of in-router aggregation by using a technique termed 'Forwarding Information Base (FIB) compression'. This is a form of proxy aggregation of route objects, where adjacent address prefixes that share a common forwarding state in the route can be aggregated into a single address prefix within the router's internal forwarding tables.
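The core idea of FIB compression can be sketched as repeated sibling merging: whenever both halves of a parent prefix share the same forwarding state, they can be replaced by the parent. This is only a minimal illustration of the principle; production compression schemes are considerably more sophisticated:

```python
import ipaddress

def compress_fib(fib):
    """Repeatedly merge sibling prefixes: if both halves of a parent
    prefix map to the same next hop, replace them with the parent."""
    changed = True
    while changed:
        changed = False
        for prefix, nh in list(fib.items()):
            if prefix.prefixlen == 0 or prefix not in fib:
                continue                          # already merged away
            parent = prefix.supernet()
            siblings = list(parent.subnets())     # the two halves of parent
            if all(fib.get(s) == nh for s in siblings):
                for s in siblings:
                    del fib[s]
                fib[parent] = nh
                changed = True
    return fib

fib = {
    ipaddress.ip_network("198.51.100.0/25"): "if0",
    ipaddress.ip_network("198.51.100.128/25"): "if0",   # same next hop: mergeable
    ipaddress.ip_network("203.0.113.0/25"): "if1",
    ipaddress.ip_network("203.0.113.128/25"): "if2",    # different next hops: kept
}
compress_fib(fib)
print(sorted(str(p) for p in fib))
# ['198.51.100.0/24', '203.0.113.0/25', '203.0.113.128/25']
```

Because the merge changes only the internal forwarding table and not what is advertised in BGP, it relieves hardware pressure without touching routing policy.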
There is a further factor at play here that I would term 'the death of transit' in the Internet. These days, content and services are extensively replicated across the network using data centres located close to populations of end users. The majority of traffic volume from user-triggered Internet transactions does not involve packets being passed through long-haul transit routes. The routing that manages these long-haul paths is nowhere near as critical as it was a couple of decades ago. Instead, the Internet is largely being used as a collection of single-hop last-mile edge networks. The 'glue' that defines a common Internet environment does not lie in a common address system or a common routing environment, but in a common name system.

So, where does that leave the CIDR Report? I'd call it a largely historic artefact with little in the way of important direct relevance to the business of operating today's Internet. We've just moved on from inter-domain routing in the Internet.