"Can machines think?"
That was the question posed in a seminal paper by Alan Turing, the British mathematician and cryptographer widely considered to be the godfather of artificial intelligence, back in 1950. In "Computing Machinery and Intelligence," Turing proposes a scenario in which "imaginable digital computers"--as he describes them--could convincingly imitate the intelligence of a living, breathing human.
Now, in what's been deemed a historic first, the 64-year-old test has been passed by a program built by a team of two Russian programmers and one Ukrainian programmer participating in the Turing Test 2014 Competition at the Royal Society in London. There, one in three judges was fooled into thinking that "Eugene Goostman," a 13-year-old, non-native-English speaker from Ukraine, was a real boy--passing the Turing Test's 30% benchmark for the first time in history. Eugene was sprinkled into a mix of 25 real humans, as well as a few other artificial intelligence programs. In all, 150 five-minute conversations took place.
Update: It turns out that the judges and test were as artificial as Eugene. The event was organized by a well-known prankster. We regret having been duped. On the other hand, chatbots on Twitter are fooling humans into thinking they are real all the time.
Some have gone so far as to call it a "new era of computing." (That isn't quite right; we'll get into why in just a sec.) On paper, Eugene's backstory is nuanced enough to tilt toward convincing: His father is a gynecologist. He owns a pet guinea pig. His family hails from Odessa, the third largest city in Ukraine.
Eugene, however, isn't a supercomputer capable of "deep learning," or even a computer. He is a chatbot, developed originally in 2001, that runs on simple scripts, not unlike the strangers spamming you on Twitter or on AOL Instant Messenger back in the day.
In fact, as some critics have pointed out, Eugene's youth gives the chatbot a decided advantage, since he can be forgiven for not knowing the answers to certain questions. As the bot's creator, Vladimir Veselov, puts it: "Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality."
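Eugene's actual scripts have never been published, but the mechanism described above--canned pattern-matched replies plus an age-appropriate deflection for anything the bot doesn't recognize--can be sketched in a few lines. Everything below (the patterns, the replies, the `reply` function) is invented for illustration, not taken from Veselov's program.

```python
import random
import re

# Illustrative pattern -> canned-response rules, in the style of a
# simple scripted chatbot. These are made up, not Eugene's real rules.
RULES = [
    (re.compile(r"\bhow old\b", re.IGNORECASE), "I am 13 years old."),
    (re.compile(r"\bwhere .*from\b", re.IGNORECASE), "I live in Odessa, in Ukraine."),
    (re.compile(r"\bpet\b", re.IGNORECASE), "I have a guinea pig. He is very funny."),
]

# Deflections lean on the persona: a 13-year-old can plausibly not know
# the answer, which is the "advantage" critics pointed out.
DEFLECTIONS = [
    "I don't know, I am only 13. What do you think?",
    "Hmm, my teachers never told me about that. Why do you ask?",
]

def reply(message: str) -> str:
    """Return the canned response for the first matching pattern;
    otherwise deflect the question instead of answering it."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return random.choice(DEFLECTIONS)
```

The fallback branch is the whole trick: the script never has to demonstrate knowledge, only to stay in character while dodging.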
So the question then becomes: Is a groundbreaking AI built on a foundation of ignorance still groundbreaking? Regardless of your answer to that, the Turing Test's administrators see it as an important next step. "The Test has implications for society today," says Kevin Warwick, a visiting professor at the University of Reading. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime. The Turing Test is a vital tool for combating that threat."
[Flickr user zanaca]