Search Results for author: Sorelle Friedler

Found 6 papers, 2 papers with code

Energy and Carbon Considerations of Fine-Tuning BERT

no code implementations 17 Nov 2023 Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni

Despite the popularity of the 'pre-train then fine-tune' paradigm in the NLP community, existing work quantifying energy costs and associated carbon emissions has largely focused on language model pre-training.

Language Modelling
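
As a hedged illustration of what instrumenting a fine-tuning run for energy and carbon accounting can look like (this is not the paper's actual tooling; the dummy workload and project name are placeholders), the codecarbon package can wrap a training loop and report estimated CO2-equivalent emissions:

```python
# Minimal sketch of tracking emissions around a training run with codecarbon.
# train() is a stand-in workload, NOT the paper's BERT fine-tuning setup.
import time

from codecarbon import EmissionsTracker

def train():
    # Burn a little CPU for a few seconds so the tracker has something to measure.
    end = time.time() + 5
    while time.time() < end:
        sum(i * i for i in range(10_000))

tracker = EmissionsTracker(project_name="fine-tuning-demo")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```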

Shapley Residuals: Quantifying the limits of the Shapley value for explanations

no code implementations NeurIPS 2021 Indra Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, Sorelle Friedler

Popular feature importance techniques compute additive approximations to nonlinear models by first defining a cooperative game that describes the value of different subsets of the model's features, then calculating the resulting game's Shapley values to attribute credit additively across the features.

Attribute, Feature Importance
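
To make the cooperative-game construction described above concrete, here is a minimal exact Shapley computation over a toy value function (the game and its coefficients are illustrative, not from the paper):

```python
import itertools
import math

def shapley_values(value, n):
    """Exact Shapley values for a cooperative game over n players.

    `value` maps a frozenset of feature indices to the game's payoff,
    e.g. the model's output when only those features are "present".
    Cost is exponential in n, so this is only viable for small games.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(s)) * math.factorial(n - len(s) - 1)
                     / math.factorial(n))
                phi[i] += w * (value(s | {i}) - value(s))
    return phi

# Toy game: linear payoffs plus one non-additive interaction term.
coef = {0: 2.0, 1: -1.0, 2: 0.5}

def value(s):
    v = sum(coef[j] for j in s)
    if {0, 1} <= s:  # interaction the additive attribution must split
        v += 3.0
    return v

print(shapley_values(value, 3))  # ~[3.5, 0.5, 0.5]: the 3.0 interaction splits between 0 and 1
```

The paper's Shapley residuals then quantify how much of such a game this additive attribution cannot capture.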

Fair Meta-Learning: Learning How to Learn Fairly

no code implementations 6 Nov 2019 Dylan Slack, Sorelle Friedler, Emile Givental

Data sets for fairness-relevant tasks can lack examples or be biased with respect to a specific label of a sensitive attribute.

Attribute, Fairness +1

Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data

1 code implementation 24 Aug 2019 Dylan Slack, Sorelle Friedler, Emile Givental

We then illustrate the usefulness of the two algorithms as a combined method for training models from a few data points on new tasks, using Fairness Warnings as interpretable boundary conditions under which the newly trained model may not be fair.

Fairness, Meta-Learning
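
As a rough illustration of the meta-learning loop such an approach builds on, here is a first-order MAML-style sketch on a toy regression task. This is an assumption-laden simplification, not the authors' Fair-MAML: their method additionally builds fairness into the task losses, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy few-shot regression task: y = X @ w for a task-specific w."""
    w = rng.normal(size=3)
    X = rng.normal(size=(10, 3))
    return X, X @ w

def mse_grad(theta, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(3)              # meta-parameters shared across tasks
inner_lr, outer_lr = 0.05, 0.01

for step in range(1000):
    X, y = sample_task()
    # Inner loop: one task-specific adaptation step starting from theta.
    adapted = theta - inner_lr * mse_grad(theta, X, y)
    # Outer loop (first-order MAML): update theta with the gradient
    # evaluated at the adapted parameters.
    theta -= outer_lr * mse_grad(adapted, X, y)
```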
