Many are familiar with the Turing Test, named for computing pioneer Alan Turing, in which a machine attempts to pass as human in a written chat with a person. Despite a few high-profile claims of success, the machines have so far failed — but surprisingly, a few humans have failed to be recognized as such, too. A new paper presents several instances during official Turing Test chats where the "judge" incorrectly identified the chat partner as a machine.
Reading the transcripts, it's easy to see why. The "hidden humans" are by turns guarded, humorless, uninformed, and prone to typos — leading judges to conclude that they are machines attempting to avoid detection. The study, published in the Journal of Experimental and Theoretical Artificial Intelligence, proposes various reasons why judges fell prey to this curious underestimation of their chat partners' abilities, a phenomenon called the "confederate effect." It's an interesting flaw, but work goes on regardless, as one of the journal's editors, Paul Naish, explains: "Within Artificial Intelligence academic communities it is a milestone or a benchmark to aim towards and a lot of research continues to be done in this area."
— Devin Coldewey, NBC News