ML Reproducibility Challenge 2022

Welcome to the ML Reproducibility Challenge 2022. This is the sixth edition of the event (v1, v2, v3, v4, v5). We are accepting reproducibility reports on papers published at eleven top ML conferences: NeurIPS 2022, ICML 2022, ICLR 2022, ACL 2022, EMNLP 2022, CVPR 2022, ECCV 2022, AAAI 2022, IJCAI-ECAI 2022, ACM FAccT 2022, and SIGIR 2022, as well as papers published in 2022 in top ML journals, including JMLR, TACL, and TMLR.

The primary goal of this event is to encourage the publishing and sharing of scientific results that are reliable and reproducible. In support of this goal, the objective of the challenge is to investigate the reproducibility of papers accepted for publication at top conferences. We invite members of the community at large to select a paper and verify its empirical results and claims by reproducing the computational experiments, either through a new implementation or using the code, data, or other information provided by the authors.

News

  • [23/01/2023] The call for reviewers for RC2022 is out. Please sign up to be a reviewer for the challenge using this form. Reviewers are also eligible for GCP credit awards thanks to our sponsor Kaggle.
  • [20/12/2022] Kaggle announces $500k worth of awards for the top publications at MLRC2022.
  • [29/11/2022] Accepted reports from MLRC 2021 were featured in in-person and virtual poster sessions at NeurIPS 2022, New Orleans, USA. Check out the announcement from the NeurIPS Journal Chairs for more information.

Key Dates

  • Announcement of the challenge: August 18th, 2022
  • Submission deadline: February 3rd, 2023 (11:59PM AOE), platform: OpenReview.
  • Author notification deadline for ReScience Journal special issue: April 21st, 2023 (extended from April 14th, 2023)
  • Camera Ready OpenReview update deadline: May 19th, 2023
  • Deadline to submit Kaggle notebook for the Kaggle Awards: June 1st, 2023
  • Announcement of Best Paper and Kaggle Awards: June 15th, 2023

Invitation to Participate

The challenge is a great event for community members to participate in shaping scientific practices and findings in our field. We particularly encourage participation from:

  • Instructors of advanced ML, NLP, and CV courses, who can use this challenge as a course assignment or project.
  • Organizers of hackathons.
  • Members of ML developer communities.
  • ML enthusiasts everywhere!

How to participate

  • Check the Registration page for details on the conferences we cover, and then start working on a published paper from the list of conferences.
  • Check the Task Description page for more details on the task.
  • Check the Resources page for available resources.
  • You can find answers to common questions in our Frequently Asked Questions section.
  • Keep an eye out for the important dates and deadlines.
  • Submit your report in our OpenReview Portal.

Participating Courses

If you are an instructor participating in RC2022 with your course, we would love to hear from you and will be happy to list your course here! Please fill out the following form with your course details: https://forms.gle/NsxypsS2MTxNCj8f7.

Contact Information

For general queries regarding the challenge, mail us at reproducibility.challenge@gmail.com.

Organizing Committee

Acknowledgements

  • Melisa Bok, Celeste Martinez Gomez, Mohit Uniyal, Parag Pachpute, Andrew McCallum (OpenReview / University of Massachusetts Amherst)
  • Nicolas Rougier, Konrad Hinsen (ReScience)