As the comment period for the Name Collision report came to an end, plenty of comments were submitted by applicants and others.
Many of the comments simply endorsed the letter the NTAG submitted to ICANN during the comment period, while others simply asked for the comment period to be extended.
Here are some of the more interesting submitted comments:
The Afnic/CORE consortium, the registry service provider for the .paris TLD, submitted this comment on behalf of the City of Paris, the applicant for the .paris TLD:
“In the Interisle study, the .paris TLD was categorized as an “uncalculated risk” in ICANN’s Proposal to Mitigate Name Collision Risks.”
“This seems to be based on the following findings:
1) The 2013 DITL data contains 80,000 occurrences of the string “paris” as a TLD;
2) In the list of internal X.509 certificates issued by well-known certification authorities involving an applied-for TLD on the top level, there are three involving “.paris”, two of which expire in 2013 and the remaining one in 2015.
“We find that it is necessary to state – even before any further analysis of the data is performed – that the label “uncalculated risk” is unfortunate in that it overstates by several orders of magnitude any conceivable security risk to any party.”
“In particular, it must be noted that DITL data for recent introductions of new TLDs (such as .asia) prior to their launch had proportionally higher “as-TLD” counts. None of these TLD introductions have caused any problem.”
“The threshold below which the Interisle study applies the “low risk” category is a count of 50,000 “as-TLD” queries in the 2013 DITL data. The .paris TLD has 90,000, whereas applied-for TLDs with a count of up to 19.8 million – i.e. more than 200 times higher – are in the same category.”
“We understand that the point arbitrarily selected to set the threshold was the one dividing the statistical population between 80% and 20% of the applied-for TLD strings. A look at the data suggests, however, that it would have been much more appropriate to put the threshold at a point where the typical step change in the underlying measurement value from one rank to the next is more significant.”
“This would be the case, for instance, with a 95/5 or with a 97/3 split.”
“At that point, the typical step of change in the underlying measurement value from one rank to the next is in the order of 2%-3% of the category maximum.”
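To make Afnic/CORE's percentile argument concrete, here is a minimal sketch using synthetic, heavy-tailed query counts (the real 2013 DITL figures are not reproduced here, so the numbers are illustrative only). It shows that moving the cutoff from an 80/20 split toward a 95/5 split lands it where the step change between adjacent ranks is a larger fraction of the category maximum:

```python
# Sketch of the threshold argument, on hypothetical as-TLD query counts.
def split_threshold(counts, keep_fraction):
    """Count value separating the top `keep_fraction` of strings from the rest."""
    ordered = sorted(counts, reverse=True)
    cutoff_rank = int(len(ordered) * keep_fraction)
    return ordered[cutoff_rank]

def step_change(counts, keep_fraction):
    """Step between adjacent ranks at the cutoff, as a fraction of the maximum."""
    ordered = sorted(counts, reverse=True)
    i = int(len(ordered) * keep_fraction)
    return (ordered[i - 1] - ordered[i]) / ordered[0]

# Synthetic heavy-tailed distribution, roughly the shape of query-count data:
counts = [int(20_000_000 / rank**2) for rank in range(1, 1001)]

for frac in (0.20, 0.05, 0.03):
    print(f"top {frac:.0%}: threshold={split_threshold(counts, frac)}, "
          f"step={step_change(counts, frac):.2e}")
```

On a distribution like this, the per-rank step at the 95/5 cutoff is orders of magnitude larger than at the 80/20 cutoff, which is the shape of the consortium's point about where a non-arbitrary threshold would sit.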
B) Comment on the mitigation proposal
There are many ways to improve the present Mitigation Proposal for the low-risk category without additional risk.
One such solution is to allow TLD operators to request more detailed data from the 2012 and 2013 DITL studies. On this basis, the TLD operators can make their own studies and contact potentially affected parties much earlier. The 120-day waiting time after gTLD Registry Agreement signature can be waived in those cases. Please note that especially TLDs with a strong and credible governance environment, such as those supported by government authorities, are subject to possible delays in contract signature due to stringent contracting rules.
For most of the TLDs, including most of those in the “uncalculated risk” category, the amount of data involved is quite small if it simply contains the raw DNS query data where the string appears as a TLD in the query.”
Uniregistry, an applicant for over 50 new gTLD strings, said in part:
“The data does not suggest any answer to whether the concerns raised by the report are actual threats (i.e. that they will happen and will harm people), conjectural threats (i.e. that they could happen but it is unclear whether they will happen either by accident or intent or whether anyone would be harmed), hypothetical threats, or not threats at all. DNS-traffic does not equal a “security problem.”
The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.
Our proposal is to continue to move forward with the current timeline, and to include a trial delegation in an ICANN-controlled environment with external observers. This will allow for additional traffic collection which would lead to a real assessment of the risks associated with the new TLD and to the implementation of reasonable measures to mitigate them. This can all be done in a timely manner with minimal or no impact on the current timeline.”
Donuts, the largest applicant for new gTLDs, wrote in part:
“The potential for serious collision involving certificates and non-existent domains (NxDs) has been overstated and can be remediated without delaying any new gTLDs.
Certificate collision is very unlikely unless a precise series of unlikely actions intending harm is put into place, any one of which, if interrupted, removes the risk of harm. Successful delegation of previous gTLDs with pre-existing NxD traffic, and the everyday registration of SLDs with pre-existing NxD traffic by existing registries (such as .COM), have proven that NxD traffic does not cause public end-user harm.
Donuts believes the process leading to the study is as flawed as the staff’s recommendations based on it. The collision issue has been examined for many years, and a last-minute report—produced with no community input—raises significant competition concerns. Applicants will document, in the next 21 days, data that will be far more indicative of the minimal scope of any problem. ”
“Upon review of that data, the ICANN Board should elect to proceed with delegation of all approved gTLDs.”
As context for our comment on NxDs, we restate here some important notes from the NTAG comment:
• The problem is overstated. Overall there are very few NxD requests in applied-for TLDs. Only 3% of the total requests conflict with strings that are actually being considered under the new TLD program. Even this 3% figure may be overstated due to the difference in TTL and the behavior of caching resolvers. Additionally, the 3% figure is further overstated because over 40% of the 3% is caused by the Google Chrome browser performing DNS lookups for random 10-character names. The report also did not compare the 3% NxD traffic to all applied-for TLDs with the NxD traffic that currently exists in existing TLDs, such as .COM. Is the number of NxD queries to all of the applied for TLDs more or less than the same number for one existing TLD? This would be a very useful comparison to gauge whether or not the problem is overstated.
• There is little focus on the real risks. In Section 5 of the report, “Name Collision Etiology”, Interisle attempts to explain the origin of the spurious requests for non-existing domains. We believe that merely counting the number of requests for each string is completely insufficient when judging risk, and that any reasonable conclusions made from the data must take into account the true origin of the “collision”.
• Previous expansion caused no known issues. Analysis using data from January 2006, prior to the launch of several active TLDs, found that .XXX (as but one example) received more pre-delegation queries than any other new TLD. However, .XXX was launched without incident, with no identified technical issues since. Other TLDs with pre-existing NxD traffic (.ASIA, .KP, .AX, .UM, .SX—even .CO, similar as it is to .COM) have launched with a) none of the problems predicted by Interisle and b) absolutely no approval delays due to “name collisions.”
• Risks already exist. Interisle’s and Verisign’s identified risks exist today, including NxD traffic in .COM. Verisign allows SLDs with pre-existing NxD traffic to be registered every day in .COM, yet ICANN is not asking for a suspension of registrations in .COM while such “risks” are mitigated. Neither should a delay be requested in new TLD delegation.
• Risk measurement is easily tampered with.
Any model of risk measurement based on total query counts is flawed.
Donuts believes the counts are not accurate in the first place; we further assert that data collected after the TLD strings were made public reflects query rates higher than they previously were, exacerbating the “negative” result.
Donuts also agrees the Interisle report makes no mention of investigating the possibility that some of these requests were issued intentionally. ICANN should be wary of the precedent that such an easily manipulated metric will be used to make multi-million dollar business decisions.
Donuts points out that the report is missing some critical data. For example, it did not look at NxD traffic in existing TLDs, including .COM.
Nor did it examine specific subdomains that receive NxD traffic in the so-called problem TLDs (the 20%).
If a small number of SLDs in any TLD receive NxD traffic, and if it is deemed that pre-existing NxD traffic is an unknown risk (even though it’s currently ignored in .COM), then those few SLDs could very easily be blocked from registration by the registry, allowing other SLDs in these TLDs to exist and deliver the goods to the public for which this whole program is designed.
It’s a remarkable leap to make decisions without such important data.”
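The NTAG point that Donuts restates above — that Chrome-style lookups for random 10-character names inflate the raw NxD counts — can be illustrated with a small sketch. The probe heuristic below (a single all-lowercase 10-letter label) is an assumption for illustration, not the report's methodology:

```python
import re

# Assumed heuristic for Chrome-style random probe queries: a bare,
# 10-character, all-alphabetic label. Real probes vary; this is illustrative.
PROBE_RE = re.compile(r"^[a-z]{10}$")

def filter_probes(queried_names):
    """Split observed NxD query names into likely random probes and the rest."""
    probes = [q for q in queried_names if PROBE_RE.fullmatch(q)]
    rest = [q for q in queried_names if not PROBE_RE.fullmatch(q)]
    return probes, rest

# Hypothetical sample of NxD query names seen at the root:
queries = ["home", "corp", "qwhzkplmvt", "mail", "xjqzuvwpnd", "paris"]
probes, rest = filter_probes(queries)
print(f"{len(probes)} of {len(queries)} queries look like browser probes")
```

Any risk metric built on total query counts would need a filter of this kind before the counts mean anything, which is one reason the commenters call a raw-count metric easy to manipulate.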
Digicert said in part:
“We believe that delaying 20% of the new gTLD strings for further consideration is an unnecessary and overly cautious approach. Although we agree that some of the strings need further evaluation, releasing many of the strings categorized as unknown risk will have little impact on Internet security. This overly cautious approach is a result of purely considering the number of potential gTLD collisions without factoring in the other information provided in the Interisle report. We believe that, when the additional data on certificates, SLD information, and total number of domains are considered, only a handful of strings truly need further consideration.
The data presented by Interisle reveals that corp and home account for almost all of the potential collisions at both the SLD and TLD levels. Of the remaining, only the top 30 have more than 1 million queries at the gTLD level.
If the total potential for collisions is evaluated, instead of purely evaluating gTLDs, then ice (21m), global (41m), ads (25m), mail (61m), google (651m), sap (20m), yahoo (65m), apple (34m), amazon (24m), sina (22m), baidu (70m), black (25m), and microsoft (160m) all stand out among the 20% as having a significant number of collisions at any level. Of these, google, sap, yahoo, apple, amazon, sina, baidu, and microsoft are all well-known trade names of organizations. Because all of the queries are likely related directly to the gTLD applicant and the applicant can likely easily remedy the potential for collisions by controlling the gTLD, these strings are considerably lower risk than the other strings.
This leaves ice, global, ads, mail, and black as higher risk strings. Of these, only black is not in the top 30 for gTLD occurrences. If the information in Appendix C is weighed into consideration, then corp, mail, ads, global, and home stand out as creating a significant risk for potential conflicts with other networks. Mail is especially concerning since it has the second highest number of certificates, which likely corresponds to a large number of internal networks and widespread use.
Based on our interpretation of the data, we agree with ICANN that corp and home are high risk. However, we believe this list should be expanded to potentially include mail considering the large number of internal networks using this name.
Of the remaining 20%, only ads and global stood out as having both a significant number of certificates and a significantly high potential for collisions. However, despite the low number of certificates, we believe that ice should be included in the unknown category since its potential for collisions at the gTLD level is high compared to other applicants.
We believe ICANN is taking an overly cautious, yet commendable, approach. Except for the six domains (corp, home, mail, ice, global, and ads) listed in this letter, the proposed strings present a low risk to security. Because this risk is low, we believe that ICANN may proceed with processing the remaining applications and move forward towards approval.”
Verisign submitted three comments, in addition to the comment it added to the Registry Stakeholder Group’s, basically taking the position that if the root system breaks, don’t blame us:
“Faced with growing evidence of broadly recognized name collision risks and potential SSR issues arising from a premature delegation of new gTLDs into the root zone, including advice from ICANN’s own Security and Stability Advisory Committee (“SSAC”), ICANN has now presented its “New gTLD Collision Risk Mitigation” proposal that, if implemented, would shift the responsibility to ensure the stability and security of the DNS to hundreds of new gTLD applicants after delegation and activation of new gTLDs into the root zone.
Under its proposal, ICANN would effectively wash its hands of the security concerns and the operational, technical or financial responsibility to address them. We believe this shift of responsibility undermines ICANN’s core mission and conflicts with ICANN’s Articles of Incorporation, Bylaws, Code of Conduct and its contractual commitments under the Affirmation of Commitments (“AoC”) with the United States Department of Commerce. Further, we believe ICANN is best positioned to mitigate the risks of naming collisions. ICANN, and not the applicants, should bear the financial costs and retain the legal and reputational risks associated with possible naming collisions.”
ICANN has proposed to shift the entire burden of mitigating the risks associated with naming collisions to the new gTLD applicants. Under ICANN’s proposal, applicants are obligated to detect potential naming collisions, to provide notice to impacted parties, and to offer “customer support” to these parties. These burdens and obligations belong to ICANN and cannot and should not be shifted to applicants.
First and foremost, ICANN’s approach will not yield a consistent and effective risk mitigation program.
Applicants will each develop different notice techniques and will offer varying levels of “customer support.”
“Furthermore, under ICANN’s plan, an applicant could learn through its notice program that many end users will experience harm once the new gTLD is activated.
Nevertheless, ICANN imposes no requirement to mitigate the harm prior to delegation.
Worse, under ICANN’s plan, the applicant is not even required to tell ICANN that it has learned, during the notice and customer support functions, that the new gTLD string will be harmful to end users. The applicant may simply proceed to activation without any further steps. We do not believe ICANN’s plan is likely to lead to effective notice or mitigation.
Moreover, while ICANN has been on notice since at least 2009 that these kinds of risks were possible, and, as noted above, retains sufficient funds to remediate the harm from its new gTLD program, applicants who applied for the new gTLD strings were completely unaware that ICANN would shift these costs and the associated risks to them.
We believe that ICANN’s proposal creates substantial new legal risk to applicants and we believe these risks should be borne by ICANN and not shifted to the applicants.
For example, should ICANN’s proposal be adopted, applicants will have a duty to provide notice of possible risks arising from the activation of the new gTLD. Applicants who fail to effectively perform this duty will face increased legal exposure should the activation cause harm or damage to parties unaware of the potential risks. Similarly, ICANN’s proposal requires that applicants provide “customer support.”
It is likely that some applicants do not have sufficient expertise to perform this task appropriately. Any failure to provide effective assistance could substantially increase an applicant’s legal exposure if end user systems are damaged by ICANN’s new gTLD string. Further, ICANN’s plan shifts the reputational harm that might arise to applicants even though ICANN itself established the new gTLD program and established the Initial Evaluation criteria, and it has been ICANN that has approved each and every string for delegation. It is therefore ICANN, and not individual applicants, who should bear the legal risks and reputational harm that might arise from the notice and mitigation.
We believe ICANN’s risk mitigation proposal should be rejected. The proposal if adopted would undermine ICANN’s mission and other governing documents by shifting the obligation for ensuring security and stability of the DNS to new gTLD applicants. Further, ICANN’s proposal would not create a unified and consistent risk mitigation regime and would be unlikely to be effective. Finally, ICANN should not be permitted to shift the costs and risks, both legal and reputational, to applicants. ICANN has the remit, is best positioned and has the funds to address naming collision mitigation. ICANN should retain responsibility for addressing naming collision mitigation and should bear the associated risks and costs from any failures in this regard.”
The Association for Competitive Technology (“ACT”) comment to ICANN in part said:
“Companies, large and small, have set up their intranets to make use of internal TLDs (iTLDs) with the expectation that certain strings would not be valid DNS TLDs. Because a TLD like .dev has not been assigned, it can be used for internal-use-only network names for servers that provide essential infrastructure such as database, email, and document-sharing servers. Critical company resources are built with iTLDs as the only way to access them. These iTLDs are often hard-coded into customized software, frequently created by small businesses, which is used to access internal networks.”
Changes which affect the stability of access to that system can have devastating impacts on those varied industries. The current pace of releasing DNS root zone TLDs has the potential to affect that stability.
An iTLD may be used by employees to access a company intranet to check their company email while sitting in a coffee shop or access their files while in an airport. Employees often connect to corporate resources through the creation of a Virtual Private Network (VPN) which creates a private tunnel for the IP traffic between the remote location and the company’s network.
When the employee is not connected to the VPN, the iTLD will not resolve to anything, which is expected.
However, once the string resolves as a new TLD, non-company servers will direct a user to the new owner of the TLD. Furthermore, DNS caching may cause the results of DNS queries to point to entirely different services depending on the specific sequence in which domains were queried and how long ago they were queried. Suddenly, an employee’s email stops working outside the office when he or she is not on the company network. Indeed, once the employee returns to the office, the email problem may suddenly be resolved with no clear explanation as to the cause. For many, even the most technically savvy, this problem could go on for a significant period of time before it is diagnosed. In the meantime, there is no reliable access to services like email which are vital for many businesses.
Many systems administration manuals suggest that Local Area Networks (LANs) should configure iTLDs as local DNS TLDs for strings that do not exist in the DNS. It’s a widely used and endorsed technique for managing networks. Millions of businesses, from the giants of the industry to small mobile app companies with fewer than ten employees, depend on reliable access to their intranet and the internet to run their business. ”
“If these services suddenly become unreliable, it will create confusion and uncertainty in the business community and has the potential to cause serious financial damage.”
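The resolution flip ACT describes can be captured in a toy model. All names and addresses below are hypothetical, and a real resolver also involves search lists and caching, but the sketch shows why the failure is intermittent: nothing changes on the VPN, while the same off-VPN lookup goes from NXDOMAIN to a stranger's server the day the TLD is delegated:

```python
# Toy model of pre- vs post-delegation resolution for an internal name.
# Names and addresses are made up for illustration.
INTERNAL_ZONE = {"files.dev": "10.0.0.5"}         # company LAN / VPN view
PUBLIC_ROOT_BEFORE = {}                            # .dev not yet delegated
PUBLIC_ROOT_AFTER = {"files.dev": "203.0.113.7"}   # after .dev is delegated

def resolve(name, on_vpn, public_zone):
    """Resolve `name`; None models an NXDOMAIN response."""
    if on_vpn and name in INTERNAL_ZONE:
        return INTERNAL_ZONE[name]
    return public_zone.get(name)

# Before delegation: off-VPN lookups fail, which the employee expects.
assert resolve("files.dev", on_vpn=False, public_zone=PUBLIC_ROOT_BEFORE) is None
# After delegation: the same lookup silently succeeds against an outside server.
assert resolve("files.dev", on_vpn=False, public_zone=PUBLIC_ROOT_AFTER) == "203.0.113.7"
# On the VPN nothing changes, which is what makes the symptom so confusing.
assert resolve("files.dev", on_vpn=True, public_zone=PUBLIC_ROOT_AFTER) == "10.0.0.5"
```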
B. Create Security Risks
In addition to confusion, release of TLDs currently used as iTLDs could cause significant security risks. A common tool used to breach security today is to set up a phishing website similar to a legitimate website, like an online banking website, to trick users into entering personal data. Security measures can be put in place, including use of a security badge such as https:// to guard against this type of domain name forgery. The SSL certificates that are used for https encryption are typically granted by verifying the ownership of a particular domain.
If TLDs are released which match those used as iTLDs, the domain name confusion in servers could result in personal information being sent to the owner of the TLD even if security measures are put into place. For example, a mail server, not understanding the change in the status of the TLD, could automatically transmit information like username and password when an individual connects to their email. Because SSL certificates match the entire domain and are signed by a universally accepted signing authority, they would only reinforce a false sense of security, which is the opposite of what they were designed, and depended upon, to do.”
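ACT's certificate point comes down to how hostname verification works: the client compares the name on the certificate against the name it asked for, with no notion of whether ".corp" is an internal network or a newly delegated public TLD. A minimal sketch of RFC 6125-style matching (simplified — real implementations restrict wildcards to the leftmost label, among other rules):

```python
# Simplified hostname matching, for illustration only. Note that nothing in
# the comparison depends on who controls the ".corp" top-level label.
def hostname_matches(cert_name, requested_host):
    cert_labels = cert_name.lower().split(".")
    host_labels = requested_host.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    return all(c == "*" or c == h for c, h in zip(cert_labels, host_labels))

# A certificate issued for an internal mail.corp name matches the request
# identically before and after .corp becomes a public TLD:
assert hostname_matches("mail.corp", "mail.corp")
assert hostname_matches("*.corp", "mail.corp")
assert not hostname_matches("mail.corp", "mail.example")
```

This is why the letter argues such certificates would "reinforce a false sense of security" rather than protect the user.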
ACT suggests the following recommendations:
ICANN resources should be dedicated to a public awareness campaign of potential problems resulting from a string resolving to a different TLD. A significant danger in assigning a new TLD is the confusion caused, as described in section II.A. A public awareness campaign could work to reduce any harmful effects caused when a query for a TLD string – one that has historically resulted in a negative response – begins to resolve to a new TLD.
ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect. The snapshot approach used to classify the TLDs does not adequately assess the risk, and the proposed 120-day delay is not sufficient to inform consumers of the potential problem and allow resolution of the issue. We request that additional time be given in order to resolve these problems.
ICANN should consider reserving specific TLDs permanently for internal use. In order to allow for the consistency the market needs, there needs to be TLDs which can be reliably used for internal use only. As previously mentioned, making the changes required by the release of a TLD will take significant resources. By marking TLDs for internal use only, it ensures that these changes need only be made once and they can be relied upon going forward.”
The At-Large Advisory Committee of ICANN (ALAC) said this in part:
“The ALAC wishes to reiterate its previous Advice to the Board that, in pursuing mitigation actions to minimize residual risk, especially for those strings in the “uncalculated risk” category, ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users.”
“The ALAC remains concerned that this matter is being dealt with at such a late stage of the New gTLD Process.”
“The ALAC urges the Board to investigate how and why this crucial issue could have been ignored for so long and how similar occurrences may be prevented in the future.”
“The ALAC advises that it is in general concurrence with the proposed risk mitigation actions for the three defined risk categories.
In doing so, the ALAC recognizes that the study, its conclusions, and ICANN’s risk mitigation recommendations are based on analysis of a limited data set of query volume metrics, i.e. how many times queries occur for a proposed new gTLD.
As acknowledged in the study, such metrics are only one perspective of risk and do not reflect other risk that may arise through complex interactions between the DNS and applications at the root level. In particular, the ALAC wishes to reiterate its previous Advice to the Board that, in pursuing mitigation actions to minimize residual risk, especially for those strings in the “uncalculated risk” category, ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution and ensuring that the DNS under ICANN’s auspices remains highly trusted.”
Neustar, the registry operator of .us and .biz, as well as the back-end provider for many new gTLD applications, said in part:
“While we agree that it is important to address potential collision issues head on, on balance we believe that ICANN’s response should be more proactive, better reflect the need to execute with urgency, and take into account mitigation efforts already underway.
In particular, ICANN’s 80/20 division of applied-for strings appears to be entirely arbitrary, and arbitrarily high.
Staff’s response to the Interisle report appears to be overly conservative, involving potentially significant delays even in cases where the risk of collision appears to be extremely low.
It is time for ICANN to roll up its sleeves and work with applicants to develop a focused and efficient plan of attack to identify and address real risks and to remove roadblocks to launching new gTLDs where no material risk exists. Neustar urges ICANN to pursue the alternative approach to mitigation outlined in the NTAG response to this consultation, which is both pragmatic and sufficiently conscious of the security and stability issues presented by new TLDs.”