ScreenIT: Can we use automated screening tools to improve reporting in scientific papers?

Abstract

Suboptimal reporting practices are widespread in preprints and published papers. One major barrier to improving reporting is the lack of an efficient way to provide authors with feedback. Members of the ‘Automated Screening Working Group’ have worked to address this problem by combining many screening tools into a single pipeline, called ScreenIT. ScreenIT includes automated tools that check scientific papers for limitations sections, reporting of participants’ sex, blinding, randomization, power calculations, ethics statements, retracted citations, common data visualization problems, and other factors. The tools use text mining, natural language processing, and computer vision algorithms. During the pandemic, we’ve used ScreenIT to automatically screen more than 17,000 bioRxiv and medRxiv COVID-19 preprints. Public reports are posted via hypothes.is and tweeted via @SciScoreReports. This session will explore the use of automated screening to raise awareness of common reporting problems and help authors improve their manuscripts. We’ll provide an overview of the rationale for automated screening, the strengths and limitations of this approach, and the structure of the ScreenIT pipeline. We’ll also share results and lessons learned, including common reporting issues identified in COVID-19 preprints and author responses to the reports. Finally, we’ll examine how author feedback has informed our efforts to improve ScreenIT, describe tools currently in development, and share plans for future meta-research using ScreenIT. We hope that this session will encourage others to join our working group and contribute to this collaborative effort to develop and deploy automated screening tools that help authors improve their manuscripts.
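
To make the pipeline idea concrete, the sketch below reduces each screening tool to a simple keyword check run over a manuscript's text and assembles the individual results into a plain-text report. This is only an illustration under assumed check names and patterns, not the ScreenIT implementation; the actual tools rely on trained text mining, natural language processing, and computer vision methods rather than keyword matching.

```python
# Illustrative sketch only (not the ScreenIT code): each "tool" is a
# hypothetical regex check, and a report is built from the results.
import re

CHECKS = {
    "limitations section": r"\blimitations?\b",
    "ethics statement": r"\b(ethics|IRB|institutional review board)\b",
    "sex of participants": r"\b(male|female|sex)\b",
    "randomization": r"\brandomi[sz]ed?\b",
    "blinding": r"\bblind(ed|ing)?\b",
    "power calculation": r"\b(power (analysis|calculation)|sample size)\b",
}

def screen(manuscript_text: str) -> dict:
    """Run every check over the manuscript and record whether it was detected."""
    return {
        name: bool(re.search(pattern, manuscript_text, flags=re.IGNORECASE))
        for name, pattern in CHECKS.items()
    }

def build_report(results: dict) -> str:
    """Format per-check results as a plain-text report for the author."""
    lines = ["Automated screening report (illustrative):"]
    for name, detected in results.items():
        status = "detected" if detected else "not detected -- please check"
        lines.append(f"  - {name}: {status}")
    return "\n".join(lines)

if __name__ == "__main__":
    example = "Participants (male and female) were randomized; the IRB approved the protocol."
    print(build_report(screen(example)))
```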