A prestigious cancer institute is correcting dozens of papers and retracting others after a blogger cried foul

The criticism spotlights how artificial intelligence is playing a growing role in catching sloppy or dubious science.  
The Dana-Farber Cancer Institute in Boston on Nov. 11, 2023. Craig F. Walker / Boston Globe via Getty Images file

The Dana-Farber Cancer Institute has requested the retraction of six studies and corrections to another 31 papers after a scathing critique drew attention to alleged errors that a blogger and biologist said range from sloppiness to “really serious concerns.”

The allegations — against top scientists at the prestigious Boston-based institute, which is a teaching affiliate of Harvard Medical School — put the institute at the center of a roiling debate about research misconduct, how to police scientific integrity and whether the organizational structure of academic science incentivizes shortcuts or cheating. 

The criticism also spotlights how artificial intelligence is playing a growing role in catching sloppy or dubious science.  

The allegations, which concern image duplications and manipulations in biomedical research, are similar to concerns aired last year against former Stanford University President Marc Tessier-Lavigne, who stepped down after an investigation.

Sholto David, a biologist and blogger, focused attention on Dana-Farber after highlighting problems in a slew of studies by its top researchers.

In early January, David detailed duplications and potentially misleading image edits across dozens of papers produced primarily by Dana-Farber researchers, writing in a blog post that research from top scientists at the institute “appears to be hopelessly corrupt with errors that are obvious from just a cursory reading.”

After the publication of David’s blog post, Dr. Barrett Rollins, the institute’s research integrity officer, said in a statement emailed Wednesday that Dana-Farber scientists had requested that six manuscripts be retracted, that 31 manuscripts were in the process of being corrected and that one manuscript remained under examination.

Rollins added that some of the papers flagged by David had already come up in “ongoing reviews” conducted previously by the institute.

“The presence of image discrepancies in a paper is not evidence of an author’s intent to deceive,” Rollins said. “That conclusion can only be drawn after a careful, fact-based examination which is an integral part of our response. Our experience is that errors are often unintentional and do not rise to the level of misconduct.”

Ellen Berlin, a communications director at Dana-Farber, wrote in an email that the allegations all concerned pure, or basic, science, as opposed to studies that led to cancer drug approvals. 

“Cancer treatment is not impacted in any way in the review of the Dana-Farber research papers,” Berlin wrote.

David is one of several sleuth scientists who read journal articles to seek out errors or fabrications. He compared his hobby to playing a game like “spot the difference” or completing a crossword.

“It’s a puzzle,” David said in an interview, adding that he enjoys looking at figures that show results of common biology experiments, like those involving cells, mice and western blots, a laboratory method that identifies proteins. 

“Of course, I do care about getting the science right,” he said.

Scientific errors in published work have been a focal point in the scientific community in recent years. Retraction Watch, a website that tracks withdrawn papers, counts more than 46,000 papers in its database, with records stretching back into the 1970s. A 2016 Nature article said more than 1 million biomedical papers are published each year.

The website PubPeer, which allows outside researchers to post critiques of research that has been peer-reviewed and printed in journals, is a popular forum for scientists to flag problems. David said he has written more than 1,000 anonymous critiques on the website. 

David said a trail of questionable science led him to Dana-Farber. In a prior investigation, he had scrutinized the work of a Columbia University surgeon, and flaws he found in the work of the surgeon’s collaborators ultimately drew his attention to the leadership team at Dana-Farber.

David said he went through the leadership page of Dana-Farber’s website, checking the work of its top scientists and leaders. 

He found a slew of image errors. Many could be explained by sloppy copy-paste work or a mix-up, but in others images appeared stretched or rotated, which is more difficult to explain. Some errors had previously been flagged on PubPeer by other users; David combined those earlier concerns with his own findings in a blog post taking aim at the institute. The Harvard Crimson, a student newspaper, was the first to publish a news story about the accusations.

David said images of mice in one paper looked like they had been digitally altered in ways that appeared intentional and could skew takeaways from the paper.  

“I don’t understand how that would come as an accident,” David said. 

Most of the errors are “less serious” and might have been accidents, he said. Still, to David, the rash of mistakes indicates a broken research and review process, since no one caught them before publication.

“When you spot a duplication, that’s a symptom of a problem,” David said. 

Elisabeth Bik, a scientist who investigates image manipulation and research misconduct, said David’s work was credible.

“The allegations he’s raising are exactly the same thing I would raise. They’re spot on,” Bik said. 

Bik, who has been doing this type of sleuthing for about 10 years, said she is often frustrated by the lack of response from academic institutions when she flags errors. She said she was glad to see that Dana-Farber responded and had already taken proactive steps to correct the scientific record. 

“I’m very pleasantly surprised that the institute is taking action. I hope they will follow through with publishers,” Bik said. “I’ve reported many of these cases where nothing happened.”

In scientific communities, image manipulation has been under close watch, particularly since Stanford University’s Tessier-Lavigne stepped down as the institution’s president following criticism of his past work in neuroscience.

Tessier-Lavigne said he was cleared of fraud or falsifying data himself, but a probe found that members of his lab had inappropriately manipulated research data or engaged in “deficient scientific practices,” according to a report from a panel of outside researchers who evaluated the case. 

The report said Tessier-Lavigne’s lab culture rewarded junior scientists whose work produced favorable results and marginalized those whose work did not, a dynamic that could have led young scientists to manipulate results and chase favor.

Outside researchers said that type of culture is not uncommon at top institutions, where ambitious professors can lead sprawling laboratories with dozens of graduate students who are eager to please their superiors and who know publishing a splashy paper could rapidly advance their careers. 

Some scientists have grown increasingly concerned that limited opportunities for young scientists and a problematic system for publishing scientific work have incentivized corner-cutting in the name of careerism.

“There’s lots of incentive to produce mounds of research and publish in these high impact journals to make your name,” said Dr. Ferric Fang, a microbiologist and professor at the University of Washington. “We’re incentivizing this kind of behavior.”

Problems with images published in research are widespread. 

In a 2016 study published in mBio, a journal of the American Society for Microbiology, Bik and Fang evaluated images from more than 20,600 articles in 40 biomedical journals published from 1995 to 2014. They found that about 3.8% of the articles contained “problematic figures” and that at least half of those had elements “suggestive of deliberate manipulation.”

New tools are helping institutions and sleuths alike root out mistakes and potential misconduct. David used a program called ImageTwin to identify some of the questionable figures from Dana-Farber researchers. 

The artificial-intelligence-powered software can ingest a study, analyze its images and, in about 15 seconds, compare them against one another and against roughly 50 million scientific images in its database, according to ImageTwin co-founder Patrick Starke.
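
ImageTwin’s exact method is proprietary, but the core task it describes, flagging figures that reappear within or across papers, can be illustrated with a classic perceptual-hashing approach. The Python sketch below is a minimal, hypothetical example; the function names, the 8x8 hash size and the bit-difference threshold are illustrative assumptions, not ImageTwin’s actual parameters, and a simple hash like this would miss the rotated or stretched duplicates David described.

```python
# Minimal sketch of duplicate-figure detection via average hashing.
# Assumes Pillow is installed; file names below are hypothetical.
from itertools import combinations
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to a tiny grayscale grid, then set one bit
    per pixel: 1 if the pixel is brighter than the mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes disagree."""
    return bin(a ^ b).count("1")

def find_near_duplicates(paths, threshold: int = 5):
    """Return pairs of figures whose hashes differ in only a few
    bits -- candidates for reuse despite rescaling or recompression."""
    hashes = {p: average_hash(p) for p in paths}
    return [(a, b) for a, b in combinations(paths, 2)
            if hamming(hashes[a], hashes[b]) <= threshold]

# Hypothetical usage on figure panels extracted from one paper:
# print(find_near_duplicates(["fig1a.png", "fig1b.png", "fig2c.png"]))
```

Matching against tens of millions of images, as commercial tools do, typically requires indexed search and more robust, rotation-tolerant image features than a hash like this provides.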

The software has been commercially available since 2021. Starke, who is based in Vienna, said a few hundred academic organizations are using the tool to identify problems before publication. 

“It’s great if it’s caught and retracted, and it’s even better if it’s not published,” said Starke, who envisions the program being used in academia as routinely as the plagiarism-checking tools that analyze text.

But Starke said it will be a challenge to stay ahead of those who cut corners or cheat. Studies have already shown that AI programs can generate realistic-looking figures of common experiments such as western blots, he said. His company is developing tools to look for AI-generated patterns in research images.

“If photos of faces can be realistically made by AI, it’s probably happening already in scientific literature,” Bik said. “That’s the next level of cheating. I’m not sure if we’re even ready for that.”