ML Reproducibility Challenge 2020 and Spring 2021
Welcome to the ML Reproducibility Challenge 2021 Spring Edition! This is the fifth edition of this
event and the successor to the ML Reproducibility Challenge 2020 (see V1). This year we are excited
to broaden our coverage of conferences and papers to several new top venues.
The primary goal of this event is to encourage the publishing and sharing of scientific results that
are reliable and reproducible. In support of this, the objective of this challenge is to investigate
reproducibility of papers accepted for publication at top conferences by inviting members of the
community at large to select a paper, and verify the empirical results and claims in the paper by
reproducing the computational experiments, either via a new implementation or using code/data or
other information provided by the authors.
All submitted reports will be peer-reviewed via OpenReview and shown alongside the original papers on
Papers with Code. Every year, a small number of these reports, chosen for their clarity, thoroughness,
correctness, and insight, are selected for publication in a special edition of the journal ReScience.
- Deadline extended for RC2021 Spring Edition to July 20th, 2021
- RC2020 Accepted papers now published in ReScience C Journal, Volume 7, Issue 2.
- Announcing a new edition of ML Reproducibility Challenge - Spring 2021! New dates and OpenReview page are updated here.
- Decisions are out for the ML Reproducibility Challenge 2020! 23 papers were accepted for publication in the ReScience C journal edition. Check out the accepted papers here.
- Reviewing has commenced for papers submitted to RC2020. You can view the submitted reports in our OpenReview portal.
- Comments are now enabled on all listed papers. Authors of listed papers can now subscribe to receive notifications about claims and comments on their papers!
Invitation to participate
The challenge is a great event for community members to participate in shaping scientific practices
and findings in our field. We particularly encourage participation from:
- Course instructors of advanced ML, NLP, or CV courses, who can use this challenge as a
course assignment or project.
- Organizers of hackathons.
- Members of ML developer communities.
- ML enthusiasts everywhere!
Key dates for Spring 2021 Challenge
Announcement of the challenge: April 15th, 2021
Challenge goes LIVE: May 1st, 2021
Submission deadline (to be considered for peer review): ~~July 15th, 2021 (11:59PM PDT)~~ July 20th, 2021 (11:59PM PDT)
Author notification deadline for journal special issue: September 30th, 2021
How to participate
Courses Participating in RC2021 Spring Edition
- Participating with your course? Drop us an email to have your course included here.
Top participating universities in RC2020
- Fairness, Accountability, Confidentiality and Transparency in AI, University of Amsterdam, Netherlands. Read about their experience of participating in RC2020 in this blog post.
- CS691 Advanced Machine Learning, Indian Institute of Technology, Gandhinagar, India
- University of Waterloo, Canada
- BITS Pilani, India
- University of Wisconsin Madison, USA
- KTH Royal Institute of Technology, Stockholm, Sweden
- IFT 6268 - Self-supervised Representation Learning, Université de Montréal, Canada
- ... and many more!
Organizers
Koustuv Sinha (McGill University / FAIR)
Joelle Pineau (McGill University / FAIR)
Jessica Forde (Brown University)
Jesse Dodge (Allen Institute for AI)
Sasha Luccioni (Université de Montréal / Mila)
Robert Stojnic (Papers with Code / FAIR)
Parag Pachpute, Melisa Bok, Celeste Martinez Gomez, Mohit Uniyal, Andrew McCallum (OpenReview / University of Massachusetts Amherst)
Nicolas Rougier, Konrad Hinsen (ReScience)
Shoutout to our amazing reviewers for RC2020!
We thank our amazing cohort of reviewers, who provided timely and constructive reviews to make RC2020 a success! We would like to thank each and every one of you for your support, and we hope you will continue supporting us in the future!
- Emergency Reviewers: Koustuv Sinha, Linh Tran, Olga Isupova, Swetha Sirnam, Bharathi Srinivasan, Otasowie Owolafe
- All Reviewers: Marin Misur, Abhinav Agarwalla, Akshita Gupta, Ali Hürriyetoğlu, Andreas Ruttor, Andrew Drozdov, Anis Zahedifard, Arna Ghosh, Azin Shamshirgaran, Chao Qin, Charbel Sakr, David Arbour, Di He, Dmitriy Serdyuk, Donghyeon Cho, Dylan Hadfield-Menell, Emmanuel Bengio, Ernest K. Ryu, Fan Feng, Fernando Martínez-Plumed, Gagana B, Georgios Leontidis, Haitian Sun, Hanna Suominen, Hao He, Heng Fang, Huaibo Huang, Huseyin Coskun, Ishani Vyas, Jiakai Zhang, Jiangwen Sun, Jie Fu, Jitong Chen, John Frederick Wieting, Kanika Madan, Katherine Lee, Kaushy Kularatnam, Leo M Lahti, Leonid Kholkine, Levent Sagun, Li Cheng, Lijun Wu, Lluis Castrejon, Mahzad Khoshlessan, Maneesh Kumar Singh, Maria Maistro, Massimiliano Mancini, Matthew Kyle Schlegel, Matthew Ryan Krause, Maxime Wabartha, Maxwell D Collins, Md Imbesat Hassan Rizvi, Michal Drozdzal, Mingrui Liu, Monjoy Saha, Nikolaos Vasiloglou, Olivier Delalleau, Pablo Robles-Granda, Pascal Lamblin, Patrick Philipp, Paul Tylkin, Peter Henderson, Praveen Narayanan, Radha Chitta, Rajanie Prabha, Sadid A. Hasan, Samira Shaikh, Sandhya Prabhakaran, Sepehr Janghorbani, Shuai Kyle Zheng, Siwei Wang, Steffen Udluft, Sunnie S. Y. Kim, Tammo Rukat, Taniya Seth, Tobias Uelwer, Ujjwal Verma, Vibha Belavadi, Víctor Campos, Wenbin Zhang, Wenhao Yu, Xavier Bouthillier, Xavier Sumba, Xiang Zhang, Xin Guo, Yufei Han, Yuntian Deng, Zhangjie Cao, Chuan Li, Melanie F. Pradier, Marija Stanojevic, Clement Laroche, Fatemeh Koochaki, Mani A, Neal Fultz, Opeyemi Osakuade, Prasad Sudhakara Murthy, Satya Prakash Dash, Seohyun Kim, Xiao Zhang