In the second part of a five-part series of articles released by Verisign on its blog, Danny McPherson writes about the possible dangers of the impending rollout of the new gTLD program.
Basically this post discusses the DNS system and the root servers, notes that delegating 1,000 new gTLDs every year is a huge increase over what has been minimal growth for years, and urges ICANN to put in place an “early warning system … to ensure that we have a well vetted ‘brakes’ framework codified to be able to halt changes, rollback, and recover from any impacts that the delegation of new gTLD strings may elicit.”
Here are some highlights from the post:
“The DNS is on the cusp of expanding profoundly in places where it’s otherwise been stable for decades and absent some explicit action may do so in a very dangerous manner.”
“The DNS root server system serves a critical function in the stable operation of the Internet. Made up of 13 root servers operated by 12 different organizations, the root server system makes available the DNS’s root zone, which holds the lists of all domain names and addresses for the authoritative servers for all 317 existing top-level domains (TLDs), like .com, .net, .gov, .org, and .edu, to name a few.
Every time an Internet user accesses information on the Internet, their application (e.g., a web browser) talks to their local operating system (e.g., Windows) that works to resolve the name of the computer where the information resides to an Internet address in order to enable access to the information.
If the Internet address that maps to the name isn’t known already, the program will ask domain name servers on the Internet where to find it. Those domain name servers start at the root, which is the authoritative source for the global DNS, and follow a series of delegations that proceed downward until they get the IP address that maps to the domain name they desire.
Only then can the application connect and obtain the information the user desires. And this all happens in a fraction of a second.
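To make that resolution walk concrete, here is a minimal sketch in Python using the dnspython library. It starts at one of the root servers (a.root-servers.net, 198.41.0.4) and follows referrals downward, just as the passage describes. It's a bare-bones illustration only: it skips CNAME chasing, TCP fallback, DNSSEC, and referrals whose nameservers come without glue addresses.

```python
# A bare-bones iterative resolver: start at the root and follow
# referrals down the DNS tree. Requires dnspython (pip install dnspython).
import dns.message
import dns.query
import dns.rdatatype

ROOT_SERVER = "198.41.0.4"  # a.root-servers.net


def resolve_iteratively(name: str, server: str = ROOT_SERVER) -> str:
    """Walk from the root downward until an A record is found."""
    query = dns.message.make_query(name, dns.rdatatype.A)
    query.use_edns(0)  # EDNS0, so referrals fit in one UDP response
    response = dns.query.udp(query, server, timeout=3)

    # If this server answered with the A record, we're done.
    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.A:
            return rrset[0].address

    # Otherwise it handed us a referral: take a glue A record from the
    # additional section and ask the same question one level down.
    for rrset in response.additional:
        if rrset.rdtype == dns.rdatatype.A:
            return resolve_iteratively(name, rrset[0].address)

    raise RuntimeError("referral without glue; this sketch gives up here")


print(resolve_iteratively("verisign.com"))
```

In practice a user's stub resolver hands all of this off to a recursive resolver, which also caches the intermediate answers; that is why, as the post says, the whole walk usually completes in a fraction of a second.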
Even though the Internet has grown exponentially in the last decade, throughout this immense growth period the DNS root zone contents have been quite stable, with an average growth rate of less than 5 net new TLDs per year.”
“The growth rate of the root zone has been fairly modest over the past 15 years, adding only 66 net new TLDs since 1999 (45 of which are internationalized domain names, or IDNs).
Twice per day, whether there have been root zone changes (e.g., new name servers, IPv6 addresses, new TLD delegations, etc.) or not, Verisign generates a new root zone file from this database and makes it available to all the root server operators.
Between June of 2008 and March of 2013 there were 1,371 total changes (~0.8 changes/day average), adding only 37 net new TLDs.
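A quick back-of-the-envelope check of that churn figure (the exact endpoint dates below are my assumption; the post only gives months):

```python
from datetime import date

# ~4.75 years between the quoted endpoints
span_days = (date(2013, 3, 1) - date(2008, 6, 1)).days  # 1,734 days
print(1371 / span_days)  # ~0.79, i.e. the quoted ~0.8 changes/day
```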
“The relatively low rate of churn in the root zone and slow growth has contributed to the stability of the zone over the years. However, the proposed rate of introduction of new gTLDs (and their corresponding name server, IPv4 and IPv6 addresses, and DNSSEC records) being discussed today introduces much more volatility to the root zone and will likely increase the root zone size considerably.”
“The current tide seems to suggest that a maximum of 1,000 new gTLDs per year would be an acceptable provisioning threshold into the root once delegations begin. We used that number as a basis to estimate a rate of 20 per week of additional TLDs and simply projected the scale of the root zone contents from June 2013 out 50 weeks (leading to a total of 1,317 total TLDs and associated resource records at completion of the exercise).
Over the past 14 years the average rate of introduction of new gTLDs was ~0.12 TLDs/week (66 total).
Our 20 new delegations per week is only an assumption as the mechanisms and policy framework to manage this are not clearly apparent (i.e., not available or even published) at this time.
Yet, this increases the rate of growth by a factor of 174.
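Running the quoted numbers bears the projection out (all figures come from the post itself; the 20-per-week rate is Verisign's stated assumption):

```python
existing_tlds = 317  # TLDs in the root zone as of June 2013
weekly_rate = 20     # assumed new delegations per week
weeks = 50

print(existing_tlds + weekly_rate * weeks)  # 1,317 TLDs, as quoted

# Against the rounded historical rate of ~0.12 TLDs/week this is a
# roughly 167x jump; the post's "factor of 174" presumably uses the
# unrounded historical figure.
print(weekly_rate / 0.12)
```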
While in and of itself an increase of ~1,000 delegations doesn’t represent a large zone file relative to other zones and systems in the Internet, it’s certainly a marked one for the root zone itself that could have far-reaching implications – there’s a lot at risk if something breaks in the root, as everything below it (i.e., everything) could be impacted.
Couple this risk with the fact that the root server system is operated by 12 different organizations with vastly different capabilities, that it’s dependent upon hundreds of discrete servers globally to operate correctly, and that no holistic system exists to monitor, instrument, or identify stresses or potentially at-risk strings.
Some would argue that the stability and security of the entire new gTLD program (and existing registries) was predicated on the existence of such an early warning system and instrumentation capability; after all, nearly all navigation on the Internet ultimately begins at the root.
However, to begin understanding the effects of a much more volatile root zone, we must first understand the current query dynamics to the root server system for each proposed new gTLD string before these rapid changes occur; we must put an early warning system in place, as has been called for by all the requisite technical advisory committees to ICANN; we must specify a careful new gTLD delegation and corresponding impact analysis capability; and we must ensure that we have a well vetted “brakes” framework codified to be able to halt changes, rollback, and recover from any impacts that the delegation of new gTLD strings may elicit.”