Is that a van Gogh?
A mathematical program that began as a lark for an Israeli scientist has become a serious effort to match some of the world’s greatest painters with their masterpieces. If the project pans out, it could help point out poor copies and eventually distinguish forgeries from the real deal.
Daniel Keren, a professor in the Department of Computer Science at the University of Haifa, said he’s been contacted by an Italian collector hoping to validate some of his acquired paintings as well as by aficionados embroiled in a controversy over the legitimacy of artworks allegedly by Dutch master Vincent van Gogh.
“I did it for fun, but now people are interested in it, so I will definitely expand,” Keren said.
Research in the rapidly growing field of computer vision, he said, still has plenty of catching up to do if scientists want computers to approximate our own abilities. One stumbling block has been teaching machines how to spot objects that are simple for people to recognize — another human face, for example.
Art as a mathematical formula
For his project, Keren tackled the problem by essentially breaking visually stunning masterpieces into sets of mathematical formulas. The computer program sought to capture the distinctive styles of different artists by dividing their paintings into discrete blocks and then converting each block into formulas that could be added together and compared.
“Suppose that one painter, he has very many vertical structures,” Keren said. Perhaps the painter favors depicting telephone poles, say, or skyscrapers. Converting blocks from that painting into mathematical symbols similar to the sine and cosine waves familiar to any trigonometry student will yield a distinctive sum of the parts. If another artist paints primarily with horizontal lines — perhaps in the form of logs floating down a river — “in that case, it’s very easy to detect who is painter A and who is painter B.” If a painting includes examples of both styles, the program can color-code each element accordingly to help decide if the whole piece is more A-like or B-like.
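The article doesn't publish Keren's actual feature set, but the vertical-versus-horizontal intuition can be sketched with standard tools. In this toy illustration (my own, not Keren's code), a 2-D fast Fourier transform stands in for the sine-and-cosine decomposition he describes: each block of the image is scored by how much of its frequency energy lies along the vertical versus the horizontal axis.

```python
import numpy as np

def block_orientation_score(image, block=16):
    """Score an image by vertical-vs-horizontal frequency energy per block.

    Positive mean score: vertical structures dominate (painter A's poles);
    negative: horizontal structures dominate (painter B's floating logs)."""
    h, w = image.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            spec = np.abs(np.fft.fft2(patch))
            spec[0, 0] = 0.0  # drop the DC term (average brightness)
            # Energy along the horizontal-frequency axis (row ky = 0)
            # corresponds to vertical edges in the image, and vice versa.
            vert_energy = spec[0, 1:block // 2].sum()
            horiz_energy = spec[1:block // 2, 0].sum()
            scores.append(vert_energy - horiz_energy)
    return float(np.mean(scores))

# Synthetic "paintings": painter A favors vertical stripes, B horizontal.
xs = np.arange(64)
painter_a = np.tile(np.sin(xs * 0.8), (64, 1))  # intensity varies along x
painter_b = painter_a.T                          # same pattern, rotated 90°

score_a = block_orientation_score(painter_a)  # positive: vertical-dominant
score_b = block_orientation_score(painter_b)  # negative: horizontal-dominant
```

Real paintings would of course need many more frequency bands per block, but even this single number already separates the two synthetic "styles" by sign, which is all the A-versus-B color-coding step requires.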
So far, Keren and his team have applied the test to five artists, including van Gogh and Rembrandt, surrealists Salvador Dalí and René Magritte, and Russian abstract painter Wassily Kandinsky.
Altogether, Keren’s group used about 30 artworks from each of the five painters, half for the training sessions and half for testing their mathematical model. In all, the model correctly matched 86 percent of paintings it hadn’t previously “seen,” a solid B in most grading schemes. (If the program had been assigning the paintings randomly, it would have received a score of only 20 percent.)
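A quick back-of-envelope check of those figures, assuming the even half-and-half split described above:

```python
# Reconstructing the arithmetic behind the reported numbers.
n_painters = 5
works_each = 30
test_paintings = n_painters * works_each // 2   # 75 unseen paintings

random_baseline = 1 / n_painters                # 0.20, the "20 percent"
correctly_matched = 0.86 * test_paintings       # ~64.5, i.e. about 64 or 65
```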
The current incarnation might be of use to an art novice, though hardly helpful to an expert, Keren acknowledges. “I am sure it can be improved,” he said. A key to its continued development will be determining exactly how two paintings differ. If the subject matter is dissimilar but the style is the same, the computer likely will be able to identify the right artist, based on its past learning of core elements such as van Gogh’s characteristic use of swirls or Magritte’s preference for straight lines.
A sudden switch in painting techniques by the same artist, on the other hand, could present a far greater challenge, as would trying to distinguish painters with very similar brushstrokes, like some of the 19th century Impressionists.
Keren said he plans to significantly expand his project to include far more artists, including ones who have adopted similar styles. As for trying to identify potential look-alikes, he said his program could begin by classifying paintings according to a general group — Impressionism versus Surrealism, for instance — and then sort within each group according to increasingly fine-tuned physical traits.
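Such a coarse-to-fine scheme might be sketched as a two-stage nearest-centroid lookup. The feature vectors and the Impressionist names below are invented purely for illustration (only Dalí and Magritte appear in the article), and nearest-centroid matching is one simple stand-in for whatever classifier Keren would actually use:

```python
import numpy as np

# Hypothetical 2-D style features: one centroid per movement, then
# per-artist centroids within each movement.
group_centroids = {"Impressionism": np.array([1.0, 0.0]),
                   "Surrealism":    np.array([0.0, 1.0])}
artist_centroids = {
    "Impressionism": {"Monet":    np.array([1.0, 0.1]),
                      "Renoir":   np.array([0.9, -0.1])},
    "Surrealism":    {"Dalí":     np.array([0.1, 1.0]),
                      "Magritte": np.array([-0.1, 0.9])},
}

def nearest(centroids, x):
    """Return the label whose centroid is closest to feature vector x."""
    return min(centroids, key=lambda k: np.linalg.norm(centroids[k] - x))

def classify(x):
    group = nearest(group_centroids, x)                # coarse: movement
    return group, nearest(artist_centroids[group], x)  # fine: artist

print(classify(np.array([0.95, 0.05])))  # → ('Impressionism', 'Monet')
```

The point of the two stages is that the fine-grained comparison only ever has to separate look-alikes within one movement, rather than every artist from every other.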
Keren is “cautiously optimistic” that his mathematical program might eventually be useful in detecting fakes. “It will be good to have a database of 20 van Gogh forgeries,” he said, allowing the program’s formulas to zero in on subtle, but perhaps telling, differences.
Successes in computer vision
Tomaso Poggio, co-director of the Center for Biological and Computational Learning at the Massachusetts Institute of Technology in Cambridge, said he wasn’t surprised that Keren’s computer program performed as well as it did in recognizing the paintings. Among the more recent successes in computer vision, Poggio said his group and others have created models that even replicated the first steps of human vision, as the initial wave of information is captured by the eye’s retina and sent up to the brain.
In Poggio’s test of that “immediate perception,” his computer model matched volunteers’ ability to detect whether an animal was present in a series of quickly flashed landscapes or cityscapes. “I don’t see why computer vision cannot do as well, eventually, as people do,” he said. “Whether it can do better depends on the task.”
Computers are particularly adept at recognizing fingerprints, for example. But forgeries? Poggio said he believes computers could become as reliable as art experts someday, but he doubts that a reliance on simple vision will ever yield perfection.
Beyond the mere act of seeing, humans can look around and describe a scene. Similarly, Poggio said, detecting a forgery would require not only seeing the painting but also analyzing the artist’s brushwork, understanding the historical timeframe in which it was completed and weighing other reasoned considerations. Detection, then, would require the equivalent of human intelligence, which Poggio conceded is still far from being replicated in the laboratory.
Art specialists gathered in Boston earlier this month at the annual conference of the American Association for the Advancement of Science greeted the new research cautiously, noting the high bar that scientific techniques must clear to be accepted in a court of law, much less by other experts.
A high-stakes game
Researchers have struggled for years to sort out true Rembrandts from copies, for example, a task complicated by the artist’s propensity to switch styles and to encourage his best students to imitate him.
Last year, Narayan Khandekar, a senior conservation scientist at Harvard University’s Straus Center for Conservation, and two colleagues concluded that at least three disputed Jackson Pollock paintings likely aren’t authentic. But their report, based on a technical analysis of pigments in the artworks that suggested several weren’t commercially available until decades after Pollock’s death in 1956, only seemed to fuel the controversy.
The stakes may be especially high for artworks that, if authenticated, could earn their owners millions, said Jessica Darraby, a Los Angeles-based attorney, gallery owner and art specialist. “The art market is the hottest market in the world,” she said. “For a Pollock to be a Pollock in 2008 is a very big financial deal.”
Given the consequences, new validation methods will likely receive extreme scrutiny and no one method is likely to emerge as a “be-all, end-all approach,” Darraby said.
Nevertheless, Harvard’s Khandekar said he saw the value of trying to complement existing techniques with one based on computer vision. “I’m sure it will have potential,” he said, though he warned, “it’s in its infancy now.”
Could a computer ever be sophisticated enough to “appreciate” good art?
“Oh, boy,” Keren said, and laughed. Mathematical tools can certainly tell him if a student’s program has been designed well, he said. Then again, there’s a rather big gap between that and, say, quantifying the merits of van Gogh’s famous “Starry Night” or a classic novel such as “Crime and Punishment.”
“Frankly, I think we’re quite far from that,” Keren concluded.