The reliability of replications: a study in computational reproductions

March 19, 2025

This study investigates researcher variability in computational reproduction, an activity for which it is least expected. Eighty-five independent teams attempted numerical replication of results from an original study of policy preferences and immigration. Reproduction teams were randomly assigned to a ‘transparent group’ receiving the original study and code, or an ‘opaque group’ receiving only a description of the methods and results, with no code. The transparent group mostly verified the original results (95.7% reproduced the same sign and p-value cutoff), while the opaque group was less successful (89.3%). Exact numerical reproductions to the second decimal place were less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many causes of error, including mistakes and procedural variations. Even after curating mistakes, we still find that only the transparent group was reliably successful. Our findings imply a need for transparency, but also for more than that: institutional checks and reduced subjective difficulty for researchers ‘doing reproduction’ would help, implying a need for better training. We also urge increased awareness of the complexity of the research process and of ‘push button’ replications.
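To make the two success criteria concrete, the sketch below compares a reproduced coefficient against an original one on (a) sign and p-value cutoff agreement and (b) exact agreement to the second decimal place. The function names, example values and the 0.05 cutoff are illustrative assumptions, not the study's actual scoring code.

```python
import math

def same_sign_and_significance(orig_est, orig_p, repro_est, repro_p, alpha=0.05):
    """Loose criterion: the reproduced estimate has the same sign as the
    original and falls on the same side of the significance cutoff."""
    same_sign = math.copysign(1, orig_est) == math.copysign(1, repro_est)
    same_cutoff = (orig_p < alpha) == (repro_p < alpha)
    return same_sign and same_cutoff

def exact_to_second_decimal(orig_est, repro_est):
    """Strict criterion: estimates agree when rounded to two decimal places."""
    return round(orig_est, 2) == round(repro_est, 2)

# Hypothetical example: a reproduction that passes the loose criterion
# (same sign, both significant at 0.05) but fails the strict one.
print(same_sign_and_significance(0.231, 0.012, 0.219, 0.020))  # True
print(exact_to_second_decimal(0.231, 0.219))                   # False
```

A reproduction can therefore count as a verification under the looser sign-and-significance criterion while still failing exact numerical reproduction, which is consistent with the gap between the 95.7% and 76.9% figures reported above.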

This landmark study represents one of the largest systematic investigations of computational reproducibility, involving 85 research teams attempting to reproduce the same published results. The findings reveal alarming levels of variability even in supposedly straightforward computational reproduction tasks, highlighting fundamental challenges in scientific reliability. The study’s key insight, that transparency helps but is not sufficient, has profound implications for how we conduct and evaluate scientific research. The identification of distinct sources of error, including mistakes, procedural variations, missing components, interpretational differences and questionable method knowledge, provides a roadmap for improving reproducibility practices across the sciences.