Last year the journal Nature reported an alarming increase in the number of retractions of scientific papers — a tenfold rise in the previous decade, to more than 300 a year across the scientific literature.
Other studies have suggested that most of these retractions resulted from honest errors. But a deeper analysis of retractions, being published this week, challenges that comforting assumption.
In the new study, published in the Proceedings of the National Academy of Sciences, two scientists and a medical communications consultant analyzed 2,047 retracted papers in the biomedical and life sciences.
They found that misconduct was the reason for three-quarters of the retractions for which they could determine the cause.
“We found that the problem was a lot worse than we thought,” said an author of the study, Dr. Arturo Casadevall of Albert Einstein College of Medicine in the Bronx.
Dr. Casadevall and another author, Dr. Ferric C. Fang of the University of Washington, have been outspoken critics of the current culture of science. To them, the rising rate of retractions reflects perverse incentives that drive scientists to make sloppy mistakes or even knowingly publish false data.
“We realized we would really like more hard data for what the reasons were for retractions,” Dr. Fang said.
They began collaborating with R. Grant Steen, a medical communications consultant in Chapel Hill, N.C., who had already published a study on 10 years of retractions. Together they gathered all the retraction notices published before May 2012 by searching PubMed, a database of scientific literature maintained by the National Library of Medicine.
“I guess our O.C.D. kicked in and we started trying to look at every paper we could look at,” Dr. Fang said.
The researchers analyzed the reasons for retractions cited by the scientific journals. But they also looked beyond the journals for the full story.
In the mid-2000s, for example, Boris Cheskis, then a senior scientist at Wyeth Research, and his colleagues published two papers on estrogen. Later, the scientists retracted both papers, explaining that some of the data in them were “unreliable.” In 2010, the Office of Research Integrity at the federal Department of Health and Human Services ruled that Dr. Cheskis had engaged in misconduct, having falsified the figures.
Dr. Cheskis settled with the government. Although he neither accepted nor denied the charges, he agreed not to serve on any advisory boards for the United States Public Health Service and to have any Public Health Service-financed research he conducted supervised for two years.
Neither of the notices for the two retracted papers has been updated to reflect the finding of fraud. Dr. Cheskis could not be reached for comment.
Dr. Fang and his colleagues dug through other reports from the Office of Research Integrity, as well as newspaper articles and the blog Retraction Watch. All told, they reclassified 158 papers as fraudulent based on their extra research.
“We haven’t seen this level of analysis before,” said Dr. Ivan Oransky, a co-author of the blog Retraction Watch and the executive editor at Reuters Health. “It confirms what we suspected.”
Dr. Oransky said he expected the rise to continue in the near future. He and his co-author, Adam Marcus, have been scrambling to keep up with new cases of fraud.
In July, for example, the Japanese Society of Anesthesiologists reported that Dr. Yoshitaka Fujii had falsified data in 172 papers. Most of those papers have yet to be officially retracted. “They’re headed for the fraud pile,” Dr. Oransky said.
Dr. Benjamin G. Druss, a professor of health policy at Emory University, said he found the statistics in the paper to be sound but added that they “need to be kept in perspective.” Only about one in 10,000 papers in PubMed has been officially retracted, he noted. By contrast, 112,908 papers have had published corrections.
Dr. Casadevall disagreed. “It convinces me more that we have a problem in science,” he said.
While the fraudulent papers may be relatively few, he went on, their rapid increase is a sign of a winner-take-all culture in which getting a paper published in a major journal can be the difference between heading a lab and facing unemployment. “Some fraction of people are starting to cheat,” he said.
Better policing techniques, like plagiarism-detecting software, might help slow the rise in misconduct, Dr. Casadevall said, but the most important thing the scientific community can do is change its culture.
“I don’t think this problem is going to go away as long as you have this disproportionate system of rewards,” he said.