Humans Mistake Humans For Machines During Turing Tests

No machine has yet managed to reliably convince people it's human, and as a new study points out, a few humans have failed at it, too.


Many are familiar with the Turing Test, named for computing pioneer Alan Turing, in which a machine attempts to pass as human in a written chat with a person. Despite a few high-profile claims of success, the machines have so far failed — but surprisingly, a few humans have failed to be recognized as such, too. A new paper presents several instances during official Turing Test chats where the "judge" incorrectly identified the chat partner as a machine.

Reading the transcripts, it's easy to see why. The "hidden humans" are alternately guarded, humorless, uninformed and bad typists, leading judges to conclude that they are machines attempting to avoid detection. The study, in the Journal of Experimental and Theoretical Artificial Intelligence, proposes various reasons why judges fell prey to this curious underestimation of their chat partner's abilities, called the "confederate effect." It's an interesting flaw, but work goes on regardless, as one of the journal's editors, Paul Naish, explains: "Within Artificial Intelligence academic communities it is a milestone or a benchmark to aim towards and a lot of research continues to be done in this area."



— Devin Coldewey, NBC News