no code implementations • 17 Nov 2023 • Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni
Despite the popularity of the 'pre-train then fine-tune' paradigm in the NLP community, existing work quantifying energy costs and associated carbon emissions has largely focused on language model pre-training.
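The kind of energy and carbon accounting this abstract alludes to can be instrumented with off-the-shelf tooling. Below is a minimal sketch using the codecarbon package to track the emissions of a fine-tuning run; `fine_tune_model` is a hypothetical stand-in, and this is not the paper's actual measurement setup.

```python
# Minimal sketch: tracking energy/carbon for a fine-tuning run.
# Assumes the `codecarbon` package; `fine_tune_model` is a placeholder.
from codecarbon import EmissionsTracker

def fine_tune_model():
    # Placeholder for an actual fine-tuning loop
    # (e.g., a Hugging Face Trainer.train() call).
    pass

tracker = EmissionsTracker(project_name="finetune-energy-demo")
tracker.start()
try:
    fine_tune_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```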
no code implementations • 11 Oct 2023 • Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, Emma Strubell
NLP is in a period of disruptive change that is impacting our methodologies, funding sources, and public perception.
1 code implementation • 13 Feb 2023 • Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell
In this work, we examine this phenomenon through a series of case studies analyzing the effects of model design decisions, framework paradigms, and hardware platforms on total model latency.
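As one concrete illustration of the kind of latency measurement such case studies rely on, here is a minimal sketch of wall-clock inference timing in PyTorch with warm-up iterations and device synchronization. The model and input shapes are hypothetical stand-ins, not the paper's benchmark configuration.

```python
# Minimal latency-benchmark sketch (hypothetical model and inputs; PyTorch assumed).
import time
import torch

def measure_latency(model, example_input, n_warmup=10, n_runs=100):
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):        # warm-up to amortize caching/compilation effects
            model(example_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()     # flush pending GPU work before timing
        start = time.perf_counter()
        for _ in range(n_runs):
            model(example_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / n_runs

model = torch.nn.Linear(768, 768)        # stand-in for a real NLP model
x = torch.randn(1, 768)
print(f"Mean latency: {measure_latency(model, x) * 1e3:.3f} ms")
```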
no code implementations • 25 May 2022 • Clara Na, Sanket Vaibhav Mehta, Emma Strubell
Model compression by way of parameter pruning, quantization, or distillation has recently gained popularity as an approach for reducing the computational requirements of modern deep neural network models for NLP.
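For concreteness, one common pruning variant (L1-magnitude unstructured pruning) can be applied with PyTorch's built-in utilities. This is a generic sketch of the technique on a toy layer, not the compression procedure studied in the paper.

```python
# Sketch: L1-magnitude unstructured pruning on a toy layer (PyTorch assumed).
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(768, 768)        # stand-in for a real NLP model layer

# Zero out the 50% of weights with the smallest absolute values.
prune.l1_unstructured(layer, name="weight", amount=0.5)

sparsity = (layer.weight == 0).float().mean().item()
print(f"Weight sparsity: {sparsity:.2%}")

prune.remove(layer, "weight")            # make the pruning permanent
```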