Search Results for author: Clara Na

Found 4 papers, 1 paper with code

Energy and Carbon Considerations of Fine-Tuning BERT

no code implementations • 17 Nov 2023 • Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni

Despite the popularity of the 'pre-train then fine-tune' paradigm in the NLP community, existing work quantifying energy costs and associated carbon emissions has largely focused on language model pre-training.

Language Modelling

To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing

no code implementations • 11 Oct 2023 • Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, Emma Strubell

NLP is in a period of disruptive change that is impacting our methodologies, funding sources, and public perception.

The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment

1 code implementation • 13 Feb 2023 • Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell

In this work, we examine this phenomenon through a series of case studies analyzing the effects of model design decisions, framework paradigms, and hardware platforms on total model latency.

Computational Efficiency
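
The kind of end-to-end latency measurement this paper's abstract alludes to can be illustrated with a simple timing loop. The sketch below is an assumption for illustration only, not the paper's benchmarking harness: it uses PyTorch, a toy feed-forward model, and a warmup-then-measure pattern that separates steady-state latency from one-time framework overheads.

```python
# Illustrative only: a minimal inference-latency measurement loop in PyTorch.
# The model, batch size, and iteration counts are arbitrary placeholders.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768)).eval()
batch = torch.randn(8, 768)

with torch.no_grad():
    for _ in range(10):          # warmup: exclude one-time setup and allocation costs
        model(batch)
    start = time.perf_counter()
    for _ in range(100):         # steady-state timing over many iterations
        model(batch)
    elapsed = time.perf_counter() - start

print(f"mean latency per forward pass: {1000 * elapsed / 100:.3f} ms")
```

Repeating such a loop across frameworks, model variants, or hardware platforms is the basic ingredient of the comparisons the paper's case studies describe.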

Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models

no code implementations • 25 May 2022 • Clara Na, Sanket Vaibhav Mehta, Emma Strubell

Model compression by way of parameter pruning, quantization, or distillation has recently gained popularity as an approach for reducing the computational requirements of modern deep neural network models for NLP.

Model Compression • Quantization • +3
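
For readers unfamiliar with the compression techniques named in this abstract, the sketch below shows one concrete instance: global magnitude pruning with PyTorch's torch.nn.utils.prune utilities. This is an illustrative assumption, not the paper's method; the paper's contribution is training with sharpness-aware minimization so that the resulting models compress better.

```python
# Illustrative only: generic global magnitude pruning, one example of the
# "parameter pruning" mentioned in the abstract. Model and sparsity level
# are placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))

# Prune the 50% of weights with the smallest magnitude across all linear layers.
parameters_to_prune = [
    (module, "weight") for module in model.modules() if isinstance(module, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune, pruning_method=prune.L1Unstructured, amount=0.5
)

total = sum(m.weight.nelement() for m, _ in parameters_to_prune)
zeros = sum(int((m.weight == 0).sum()) for m, _ in parameters_to_prune)
print(f"global sparsity: {zeros / total:.2%}")
```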
