Panelists: Anna Dreber, Fiona Fidler, Sarah Rajtmajer, Eva Vivalt
Moderator: Timothy Errington
Abstract
Can individuals and automated methods anticipate which findings are likely to replicate after reading the original paper, or even after reviewing only a subset of information about the finding and its supporting evidence? Emerging evidence suggests that human judgements and machine learning methods may provide relatively accurate assessments at an order of magnitude (or more) lower resource investment than conducting replication or reproduction studies. How accurate and scalable are these methods? What barriers or challenges stand in the way of broad application of these approaches (e.g., disciplinary coverage, prediction of outcomes of novel research)? And what risks and opportunities come with using these emerging techniques to assess research?