1 code implementation • 3 Feb 2023 • Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta
Training a model on a balanced corpus results in, on average, 12.34% higher $pass@k$ across all tasks and languages compared to the baseline.
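The $pass@k$ metric is conventionally estimated with the unbiased formula $1 - \binom{n-c}{k}/\binom{n}{k}$, where $n$ samples are generated per task and $c$ of them pass the tests. A minimal sketch (not the paper's own code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    solutions drawn from n generated samples (of which c are correct)
    passes the unit tests."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # samples must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Per-task estimates are then averaged over the benchmark to report a single $pass@k$ figure.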
4 code implementations • 7 May 2021 • Amirali Abdolrashidi, Lisa Wang, Shivani Agrawal, Jonathan Malmaud, Oleg Rybakov, Chas Leichner, Lukasz Lew
In this work, we use ResNet as a case study to systematically investigate the effects of quantization on inference compute cost-quality tradeoff curves.
no code implementations • CoNLL 2020 • Jonathan Malmaud, Roger Levy, Yevgeni Berzak
In this work, we analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question, and whether this signal can be beneficial for machine reading comprehension.
1 code implementation • ACL 2020 • Yevgeni Berzak, Jonathan Malmaud, Roger Levy
We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions.
1 code implementation • 5 Mar 2015 • Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, Kevin Murphy
We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task.
no code implementations • 8 Apr 2013 • Dan Lovell, Jonathan Malmaud, Ryan P. Adams, Vikash K. Mansinghka
Applied to mixture modeling, our approach enables the Dirichlet process to simultaneously learn clusters that describe the data and superclusters that define the granularity of parallelization.