A Siren Song of Open Source Reproducibility

9 Apr 2022  ·  Edward Raff, Andrew L. Farris

As reproducibility becomes a greater concern, conferences have largely converged on a strategy of asking reviewers to indicate whether code was attached to a submission. This is part of a larger trend of taking action based on assumed ideals without studying whether those actions will yield the desired outcome. Our argument is that this focus on code for replication is misguided if we want to improve the state of reproducible research. Indeed, this focus can be harmful: we should not force code to be submitted. There is little evidence that the actions conferences have taken to encourage and reward reproducibility are effective. We argue that venues must take more action to advance reproducible machine learning research today.
