Balázs Aczél, Rotem Botvinik-Nezer, Wilson Cyrus-Lai, Eric Uhlmann
Moderator: Nate Breznau
Abstract
“Many-analyst” research projects, in which many research teams test the same hypothesis using identical data, offer new insights into the reliability of scientific research, the depth and complexity of researcher degrees of freedom, and the potential to learn more about the data-generating model. Such projects have been conducted across scientific fields, including the behavioral sciences, social sciences, and neuroscience, and demonstrate that results vary substantially as a function of analytical choices. In this discussion, we bring together scholars who have organized such projects to weigh the pros and cons of this type of research and to share what they learned from conducting studies that crowdsource researchers. We also consider how this approach might improve the transparency of research and what the future might hold for science given the findings of these studies. In particular, we discuss whether these findings should be taken as a crisis, a great opportunity, or perhaps both.