A project from Facebook's research division is attempting to rethink face recognition by imitating the way our own neurons deal with visual input — with promising results. "DeepFace" nearly matches human accuracy in telling whether two faces are the same or different.
Instead of comparing a given photo with others by surface features (color, size and shape), DeepFace analyzes visual data in a very abstract way, starting with hardly any built-in knowledge of what faces look like or where eyes ought to be.
In your brain, there are groups of neurons in the visual system that respond to, for example, vertical lines but not horizontal ones, or to curves but not straight edges. The DeepFace system simulates neural networks like this, doing millions of simple analyses in a fraction of a second. Where is the darkest point in the picture? Where is the longest unbroken line? How far is it from this local maximum to that one?
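The orientation-selective filters described above can be sketched as a simple convolution. The 3×3 kernel and tiny image patches below are illustrative assumptions, not DeepFace's actual learned filters; they just show how one filter can respond strongly to vertical edges while ignoring horizontal ones.

```python
# A minimal sketch of an orientation-selective filter: sliding a
# vertical-edge kernel across a grayscale image patch. Hypothetical
# example values, not DeepFace's learned filters.

VERTICAL_EDGE = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(image, kernel):
    """Valid-mode 2D cross-correlation of a grayscale image
    (list of rows of numbers) with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# A patch with a sharp vertical boundary (dark left, bright right)...
vertical_boundary = [[0, 0, 1, 1]] * 4
# ...and one with a horizontal boundary (dark top, bright bottom).
horizontal_boundary = [[0] * 4] * 2 + [[1] * 4] * 2

print(convolve(vertical_boundary, VERTICAL_EDGE))    # → [[3, 3], [3, 3]]
print(convolve(horizontal_boundary, VERTICAL_EDGE))  # → [[0, 0], [0, 0]]
```

A deep network like the one described stacks many layers of such filters, and crucially learns the kernel values from data rather than having them hand-designed.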
"Facial regions, such as eyes, tend to fall in the same location," explained a Facebook representative in an email to NBC News. "The deep neural network learns specialized filters for each such region in order to differentiate people. We do not control what patterns to look for."
It's not quite the same as your brain (much of the visual system is still poorly understood), but it's nearly as effective: In determining whether two images showed the same face, the new system was correct 97.25 percent of the time. That's incredibly close to the human rate of 97.53 percent.
Of course, this is just a comparison of two faces, a relatively simple task compared with matching a given face against a database of thousands or millions. Still, the neural-network approach appears viable even at this early stage, and research will no doubt continue.
DeepFace is still a long way from deployment by Facebook itself; traditional systems remain more practical for the social network's day-to-day needs.