Abstract
In preclinical neuroscience, we are increasingly dependent on large-scale methods for evidence synthesis. With several million biomedical papers published every year, no single researcher can stay on top of the literature; instead, we employ tools such as systematic reviews to distill this information. These methods, however, rely on the published information being truthful. What happens when the evidence we collect is not? In a systematic review of the use of chronic unpredictable stress to model depression in rats, we screened 1,035 published papers for inconsistencies in their images (photomicrographs, blots, gels, etc.). The reports were screened manually with the aid of a machine-learning tool. In total, 19% of the papers that presented primary data in the form of images were found to have issues. These problems ranged from simple inconsistencies (possibly honest mistakes made in preparing the report) to outright image manipulation and fabrication. Importantly, a majority of the problems appeared to stem from deliberate attempts to mislead the reader. Some of the studies may never have taken place at all. Reports with problematic images were not cited less or published in lower-impact journals, nor were their authors confined to any specific geographic area. Moreover, studies with problematic images reported larger effect sizes, on average, than studies in which we found no issues. The prevalence of problematic studies greatly undermines evidence synthesis within our research field. There is a pressing need for new methods of identifying and dealing with problematic papers in preclinical systematic reviews.