Just one day after Kurt Pritz took over direct oversight of the entire new gTLD program, ICANN announced that it has suspended the Digital Archery portion of the New Generic Top-Level Domain Program “pending further analysis of the process,” throwing the whole batching process for new gTLD applications into limbo.
“The primary reason is that applicants have reported that the timestamp system returns unexpected results depending on circumstances.”
“Independent analysis also confirmed the variances, some as a result of network latency, others as a result of how the timestamp system responds under differing circumstances.”
“The timestamp window was due to close on 28 June.”
“As of 23 June, approximately 20 percent of applications had a registered timestamp.”
“Given public comment regarding the timestamp process and that many applicants had yet to register a timestamp, the decision was taken to suspend the system now, pending further analysis of the process.”
“The evaluation process will continue to be executed as designed. Independent firms are already performing test evaluations to promote consistent application of evaluation criteria. The time it takes to delegate TLDs will depend on the number and timing of batches.”
“The suspension provides time to investigate technical concerns. ICANN’s staff and Board will continue to listen to community comment about digital archery and batching.”
“The information gathered from community input to date and here in Prague will be weighed by the New gTLD Committee of the Board.”
“The Committee will work to ensure that community sentiment is fully understood and to avoid disruption to the evaluation schedule.”
Many companies launched Digital Archery services for applicants who wanted to improve their chances of landing in the first batch. Most, if not all, of these were to be paid only on successfully getting the application into the top one or two batches.
That business model may be dead, and those who paid in advance may want to ask for a refund.
Joe says
What a mess! The whole new gTLD thing is getting ridiculous.
Jp says
Coming from a programmer,
Digital archery was/is a very cool idea. In theory it works. In practice it requires a lot of technology to work “correctly”, and by correctly I mean perfectly.
Technology rarely, if ever, works perfectly. If there is one thing you can count on with technology, it is that it will break. Smart design plans for what to do when technology breaks, or compensates for its imperfections.
In other words, a precision system that depends on technology is inherently flawed. There were/are more reliable options, I’m sure.
Worse yet, even if digital archery ends up being used and goes off perfectly, if someone questions it there is no real way to undeniably prove that the system worked perfectly and precisely. It would be their word against ICANN’s.
Michael H. Berkens says
Jp
Well, one problem with this is that there were at least five companies doing Digital Archery, all claiming they could get results down to 000 milliseconds, meaning the whole process would have been mitigated. And of course, since achieving this required extra funds, better-funded applicants could make use of the technology while less-funded applicants would get pushed further to the back.
Jp says
@mhb
Yep, and those are the socioeconomic flaws with it. I don’t know enough about the process to say whether the discounted/financial-aid applicants were also included in the digital archery process, but if they were, they certainly don’t have the resources to compete.
“May the smartest win” is OK.
“May the richest win” is not as socially popular.
That aside, even among these companies and their “perfect” systems, at least one of them is/was bound to experience some sort of unfortunate technical difficulty. Maybe the power will go out; maybe their Internet link will go down. It doesn’t have to be a problem in their own facility, either. There could be a DoS attack elsewhere on their ISP’s network, making their route slower than expected. Somebody somewhere else in the world could simply make some sort of mistake. ICANN’s servers could have technical problems of their own. Technology is not to be relied upon for precise things.
For a wild story: about a year ago here in Indonesia there was a big storm. Lightning struck the building that houses the country’s main hub for Internet infrastructure. The building burned. There was almost no Internet in the entire country for two days. The backup routes were flooded and couldn’t handle the traffic. Only cell-phone Internet was working, and barely; routes were given preference to vital institutions. All it was, was a single lightning strike.
ICANN.REFUND says
“those who paid in advance may want to ask for a refund”
100% refunds are in order for any ICANN applicant, unless they want to stay in for the class-action lawsuits now being prepared.
You could visit the ICANN.REFUND website, but that of course is censored.
Me says
Independent analysis has shown the variances that customers have been reporting. Basically, no matter who claims they can hit the mark every time or reliably obtain 000ms results, ICANN has announced that its own system has been shown to be inaccurate. Anyone guaranteeing 000ms every single time is lying.
000ms results are possible; I’ve seen them, and at least one company has published a video showing it getting more than one 000 in a row. Unfortunately, ICANN’s current system is not robust enough to reliably allow 1,900 applicants to shoot at their targets without interfering with each other’s times.
When you analyze the basic setup of ICANN’s system (compared to the highly scalable systems used by the likes of Google, Twitter and Facebook), you can clearly see the potential for timing issues.
ICANN is using a Citrix-based virtual environment / remote desktop system whereby the applicants have to log into the remote desktop of a virtual machine (using the same proprietary technology used by GoToMeeting), launch an application running from a headless Internet Explorer window, and click a button at the right time for it to presumably POST to a backend webserver (likely IIS running .aspx) which likely also uses a database (perhaps Oracle) to record the time.
If it is the central database which is recording the timestamp, this may depend on acquiring a lock on a table. If several hundred people are aiming for the same millisecond, these database inserts and resulting timestamps cannot help but interfere with each other.
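To make that concrete, here is a minimal sketch (my own illustration, not ICANN’s actual code) of what happens when the timestamp is read inside a contended critical section such as a table lock: every waiting shot inherits the queueing delay of all the shots ahead of it.

```python
import threading
import time

db_lock = threading.Lock()     # stands in for a table/row lock on the insert path
recorded = []

def record_shot():
    with db_lock:              # every "archer" serializes here
        ts = time.monotonic()  # timestamp read AFTER acquiring the lock
        time.sleep(0.002)      # ~2 ms of simulated insert work
        recorded.append(ts)

# 100 applicants all "fire" at the same instant
threads = [threading.Thread(target=record_shot) for _ in range(100)]
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()

offsets_ms = sorted((ts - start) * 1000 for ts in recorded)
print(f"first shot stamped at {offsets_ms[0]:.1f} ms, last at {offsets_ms[-1]:.1f} ms")
```

The spread between the first and last recorded time grows linearly with the number of simultaneous shots, which is exactly the kind of load-dependent variance applicants have been reporting.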
Weigh all the variables involved: virtual machines, a proprietary remote desktop protocol, remote control of a web browser (and IE, of all browsers, is not the most responsive), form-posting to an internal server, and likely a database component on top of it. It’s surprising that people are getting 000ms results at all.
How can this be fixed? First of all, based on the numbers people have been seeing, it is clear that ICANN did not allocate enough resources to handle the load of several hundred people hitting their system (which isn’t a lot for most systems, but is a lot for remote-desktopping into a farm of virtual machines). The key issue seems to be that ICANN is using a centralized database to generate the timestamps.
If everyone is supposed to be aiming at their own personal target, then the assigned virtual machine server they are hitting should be what generates the timestamp. That way the time would not depend on a secondary (or perhaps even tertiary) external system.
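For contrast, here is a minimal sketch of that alternative (my assumptions about the setup, with db.insert a purely hypothetical persistence call): the VM host reads its local clock the instant the shot arrives, then hands persistence off to a queue, so a slow central database can no longer pollute the recorded time.

```python
import queue
import threading
import time

write_queue = queue.Queue()

def handle_shot(applicant_id):
    ts = time.time()                     # local clock, read immediately on arrival
    write_queue.put((applicant_id, ts))  # enqueue; no blocking database call here
    return ts

def db_writer():
    # A single background writer drains the queue; however slow the central
    # database is, the timestamps above were already fixed at arrival time.
    while True:
        applicant_id, ts = write_queue.get()
        # db.insert(applicant_id, ts)    # hypothetical persistence call
        write_queue.task_done()

threading.Thread(target=db_writer, daemon=True).start()
print(handle_shot("applicant-0042"))
write_queue.join()
```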
The businesses that have invested heavily in this competition deserve a fair shot. Hopefully ICANN can get its act together.
D. T. Manaker says
I hope that ICANN pulls the DA system. “Digital Archery” is a brilliant concept with many applications way beyond what ICANN was prescribing it for. Look for a new cottage industry to start up with many “Digital Archery”-type apps, games, contests, lotteries, skilled competitions, etc. ICANN may have accidentally invented an entire digital industry. Wouldn’t that be something? And nothing about it is proprietary at this point (other than the folks who own the Digital Archery domain names).
76 says
how fair is it when one applicant can get “closer to the target” than another? closer in physical terms. are the icann servers anycasted?
certain applicants have a huge advantage. e.g. those registrars who have played the drop for many years. they position their servers close to the registry’s. and they can bring DOS-level saturation when registering names.
no one can compete with them on drop catching because they are geographically closer and can effectively saturate the registry’s resources. not to mention the private agreements some have with some registries.
how is this d.a. program any different?
the system is biased toward certain applicants. if you are one of those, of course you think this is fair, whatever your reasons.
but if you are not one of these applicants, the picture looks much different.
d.a. may be a nice concept. but in practice it will be anything but “fair”.
physical distance and computing power make a difference.
DA dev says
I don’t think the DA system uses a central timestamp. Reconnecting to TAS generally gives a different clock offset. I would like to see how they are technically implementing their app. It is impossible to give millisecond-accurate results to multiple clients hitting the server at the same time from a regular web app.
It is very possible to get 000 ms shots, even multiple in a row, but you cannot guarantee or predict the result. You might get dialed in and then get a 030 shot next. At times it seems completely random.
Without revealing details, any decent programmer with a small amount of resources can mitigate most of the “unfairness” of the system claimed in other comments; a rough sketch of one common approach follows.
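For illustration only, here is the kind of mitigation I mean (not any vendor’s actual tool): sample the server clock repeatedly, NTP-style, and trust the sample with the smallest round trip, assuming the server read its clock halfway through it. fetch_server_time is a hypothetical stand-in for whatever request returns the server’s clock.

```python
import time

def fetch_server_time():
    """Hypothetical stand-in; a real client would query the DA server."""
    return time.time()

def estimate_offset(samples=50):
    best_rtt, best_offset = float("inf"), 0.0
    for _ in range(samples):
        t_send = time.time()
        server_ts = fetch_server_time()
        t_recv = time.time()
        rtt = t_recv - t_send
        if rtt < best_rtt:
            # Assume the server read its clock halfway through the round trip.
            best_rtt = rtt
            best_offset = server_ts - (t_send + rtt / 2)
    return best_offset

print(f"estimated server clock offset: {estimate_offset() * 1000:.3f} ms")
```

With a good offset estimate you can schedule the click so it lands on the server at the top of the second; the residual error is then dominated by jitter you cannot control.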
dArchery NOT needed because Batches NOT needed says
Do you really think ICANN is going to “vet” Google, Amazon, et al. hundreds of times?
Facebook, Twitter and eBay chose not to play.
The U.S. Government has said the GAC limit is 100 new gTLDs. The insiders should be able to come up with that list in a few hours, along with the Registries that want a cut of the action.
ICANN should be able to toss their 350+ page Applicant Guide and get on with the task of getting something done. ICANN does not need to “batch” their stonewalling.
Firing 80% of the Staff would be a good start for the new CEO. Refunding Applicants would be another major priority.
Launching 100 gTLDs should be trivial with the Applicants assembled.
Jp says
Yes, there is essentially no such thing as parallel computing (with the exception of some very special computers that I doubt ICANN is using). If two people hit the target at precisely the same time, the system will basically arbitrarily (but not so arbitrarily) decide which person gets committed to whatever storage medium first. Regarding what someone said about database locks: very true as well, but I think it also depends on how the queries are written. I don’t think an insert locks more than a row, if anything. You can’t set an isolation level for an insert statement.
ICANN could have a separate computer for each parallel DA contestant, but then we’d have a lot of computers that nothing could be allowed to go wrong with, and besides, how can you be so sure they are all perfectly identical in performance? Not all hard drives, fans, etc. are created equal.
Still DA is such a cool idea, it just needs to find its proper application first.
Me says
Evidence has been collected showing that ICANN’s system is struggling under the fire of too many applicants aiming at the same millisecond target. This may not prove that they are using a central database, but whatever the case may be, when several hundred people are trying to get their click in at millisecond zero, an interference pattern emerges, with timestamp variations between 30 and 50ms, and sometimes even over 100ms. There are simple tests which clearly display these patterns and the conditions under which they occur.
There is such a thing as parallel computing: multiple systems operating independently, not depending on a shared resource (i.e. a central database). For a mere 1,000 people hitting a farm of servers around the same millisecond, simply recording the current local server time before executing any other processing would not produce significant offsets or interference. The only explanation is that processing is occurring before the timestamp is recorded, and that this processing is visibly affected by load, as the toy model below illustrates.
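A toy queueing model makes the point (my own numbers and assumptions, not ICANN’s code): 200 shots all arrive at t = 0 at a single-threaded backend that needs 0.5 ms per request. Stamping on arrival records roughly zero for everyone; stamping after the work spreads the recorded times across 100 ms of pure queueing delay.

```python
# Toy model: 200 shots arrive in the same millisecond at a backend that
# processes them one at a time, taking 0.5 ms each.
def recorded_stamps(stamp_on_arrival, n_shots=200, work_ms=0.5):
    arrivals = [0.0] * n_shots                          # same-millisecond volley
    completions = [(i + 1) * work_ms for i in range(n_shots)]
    return arrivals if stamp_on_arrival else completions

for stamp_on_arrival, label in ((True, "stamp on arrival"),
                                (False, "stamp after processing")):
    stamps = recorded_stamps(stamp_on_arrival)
    print(f"{label}: first {stamps[0]:.1f} ms, last {stamps[-1]:.1f} ms")
```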
Me says
BTW, one of the testing methods involved purposely aiming off the mark (i.e. at seven seconds and zero milliseconds as opposed to zero seconds and zero milliseconds). Across a wide sample alternating between aiming for 00.000 and 07.000, the resulting variation in server-reported timestamps was more than double when aiming for 00.000 (when everyone else was firing) compared to 07.000 (when nobody else was firing). Clearly their system was unable to scale and operate reliably under real-world load.
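The comparison itself is trivial to compute. The numbers below are made-up placeholders purely to show the shape of the test; the real measurements came from alternating aim points as described above.

```python
import statistics

# Hypothetical server-reported offsets in ms; real values would come from
# alternating shots between the crowded 00.000 mark and the quiet 07.000 mark.
crowded = [12, 47, 33, 105, 61, 38, 54, 29, 88, 41]
quiet = [8, 14, 11, 19, 13, 10, 16, 9, 12, 15]

for label, sample in (("00.000 (everyone firing)", crowded),
                      ("07.000 (firing alone)", quiet)):
    print(f"{label}: spread {max(sample) - min(sample)} ms, "
          f"stdev {statistics.stdev(sample):.1f} ms")
```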