no code implementations • COLING 2022 • Rajiv Movva, Jinhao Lei, Shayne Longpre, Ajay Gupta, Chris DuBois
Our work quantitatively demonstrates that combining compression methods can synergistically reduce model size, and that practitioners should prioritize (1) quantization, (2) knowledge distillation, and (3) pruning to achieve the best accuracy vs. model-size tradeoff.
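As a rough illustration of how two of these methods compose (a minimal sketch, not the paper's pipeline; the toy model, 30% sparsity level, and int8 dtype below are arbitrary choices for the example), one might prune and then dynamically quantize a network in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a Transformer's feed-forward layers.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the mask into the weights

# Quantization: convert Linear weights to int8 for inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```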
no code implementations • 1 Feb 2022 • Shayne Longpre, Julia Reisler, Edward Greg Huang, Yi Lu, Andrew Frank, Nikhil Ramesh, Chris DuBois
Studies of active learning traditionally assume the target and source data stem from a single domain.
1 code implementation • EMNLP 2021 • Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh
To understand how models use these sources together, we formalize the problem of knowledge conflicts, where the contextual information contradicts the learned information.
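A minimal sketch of such a conflict (the QA instance and the simple string substitution below are illustrative assumptions, not the paper's exact construction): swap the answer entity in a retrieved context so the context contradicts what the model likely memorized during pretraining.

```python
# Construct a knowledge-conflict QA instance by replacing the gold answer
# entity in the context with a different, plausible entity.
original = {
    "question": "Where was Barack Obama born?",
    "context": "Barack Obama was born in Honolulu, Hawaii.",
    "answer": "Honolulu",
}

def make_conflict(example, substitute):
    """Substitute the answer span in the context, so the contextual
    information now contradicts the model's learned (parametric) knowledge."""
    conflicted = dict(example)
    conflicted["context"] = example["context"].replace(
        example["answer"], substitute
    )
    conflicted["answer"] = substitute  # the faithful answer follows the context
    return conflicted

print(make_conflict(original, "Chicago"))
```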
no code implementations • WS 2019 • Shayne Longpre, Yi Lu, Zhucheng Tu, Chris DuBois
To produce a domain-agnostic question answering model for the Machine Reading Question Answering (MRQA) 2019 Shared Task, we investigate the relative benefits of large pre-trained language models, various data sampling strategies, and query and context paraphrases generated by back-translation.
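A minimal back-translation sketch, assuming the Hugging Face transformers library and the Helsinki-NLP MarianMT checkpoints (the paper does not specify a particular translation system or pivot language), could generate such paraphrases by round-tripping English text through French:

```python
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    # Load the MarianMT checkpoint and translate a batch of sentences.
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    return [tokenizer.decode(g, skip_special_tokens=True)
            for g in model.generate(**batch)]

def back_translate(texts):
    # English -> French -> English; the round trip yields paraphrases.
    pivot = translate(texts, "Helsinki-NLP/opus-mt-en-fr")
    return translate(pivot, "Helsinki-NLP/opus-mt-fr-en")

print(back_translate(["What year did construction of the bridge finish?"]))
```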