AISB Committee member, and Philosophy Programme Director and Lecturer, Dr Yasemin J. Erden was interviewed for the BBC on 29 October 2013. Speaking on the Today programme for BBC Radio 4, as well as the Business Report for BBC World News, Dr Erden discussed CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) technologies, by which websites can differentiate between humans and spambots. They do this by both generating and then assessing simple tests, designed to be relatively easy for humans to pass but difficult for many computer programs. Typically this relies on the use of distorted text and unusual fonts and layouts.
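The generate-and-assess pattern described above can be sketched minimally as follows. This is an illustrative assumption rather than any site's actual implementation: the function names are hypothetical, and a real CAPTCHA would render the challenge as a distorted image rather than plain text.

```python
import secrets
import string

def generate_captcha(length=6):
    """Generate a random challenge string.

    In a real deployment this string would be rendered as a distorted
    image with unusual fonts and layouts; here plain text stands in.
    """
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def check_captcha(expected, response):
    """Assess the user's answer against the stored challenge,
    ignoring case and surrounding whitespace."""
    return response.strip().upper() == expected
```

The server would store the generated string alongside the session, serve the distorted rendering, and call the check function on the submitted form value.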
Renewed media interest in this technology has arisen because a new software company called Vicarious claimed in recent days that their algorithms could now "reliably solve modern CAPTCHAs, including Google’s reCAPTCHA", with surprisingly high success rates of "up to 90% on modern CAPTCHAs from Google, Yahoo, PayPal, Captcha.com, and others".
Yasemin explained to the BBC correspondents that even if these levels of success had been achieved (and this is difficult to assess because the company has so far refused to disclose full details of the algorithm and information pertaining to these results), this does not mean that there are no alternative methods available for CAPTCHA tests. These include pattern recognition (e.g. "put the food products into the shopping basket"), trivia questions (e.g. "what colour is grass?") and very simple mathematical puzzles (e.g. 1 + 3 = ?).
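The last of those alternatives, a simple mathematical puzzle, can be sketched in a few lines. The function name and question wording are illustrative assumptions, not any particular site's scheme:

```python
import operator
import random

def arithmetic_captcha():
    """Return a (question, answer) pair for a simple maths puzzle,
    e.g. ('What is 1 + 3?', 4)."""
    ops = {"+": operator.add, "-": operator.sub}
    a, b = random.randint(1, 9), random.randint(1, 9)
    symbol = random.choice(list(ops))
    return f"What is {a} {symbol} {b}?", ops[symbol](a, b)
```

The question is shown to the visitor and the stored integer answer is compared against what they type in.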
In addition to this, some far stronger claims were made on the back of these apparent developments by Vicarious, including that the company had created technology that could pass the Turing test, or that would lead to greater Artificial Intelligence (AI) capabilities. To this Yasemin responded that, first of all, CAPTCHA technologies are rightly considered 'reverse' Turing tests, since the original test was supposed to assess whether a machine could fool a human into believing that it too was human (or in fact, that it was female), whereas the goal of CAPTCHA technologies is to allow a computer to distinguish between humans and machines. It is in fact far easier to fool a machine in these respects than it is to fool a human. Secondly, no evidence has been given as to how successes such as these (if indeed there has been success--see comment above) would support any claims about better AI systems broadly speaking. Until we see further evidence for this (including peer-reviewed research), there is as yet little reason to accept such claims.
Elie Bursztein, Matthieu Martin and John C. Mitchell, 'Text-based CAPTCHA Strengths and Weaknesses', ACM Conference on Computer and Communications Security (CCS), 2011. Online: http://cdn.ly.tl/publications/text-based-captcha-strengths-and-weaknesses.pdf