no code implementations • 17 Nov 2023 • Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni
Despite the popularity of the "pre-train then fine-tune" paradigm in the NLP community, existing work quantifying energy costs and associated carbon emissions has largely focused on language model pre-training.
no code implementations • NeurIPS 2021 • Indra Kumar, Carlos Scheidegger, Suresh Venkatasubramanian, Sorelle Friedler
Popular feature importance techniques compute additive approximations to nonlinear models by first defining a cooperative game describing the value of different subsets of the model's features, then calculating the resulting game's Shapley values to attribute credit additively between the features.
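The game-theoretic recipe described above can be sketched concretely. The following is a minimal illustration, not the authors' implementation: it computes exact Shapley values for a small cooperative game by averaging each feature's marginal contribution over all subsets of the remaining features. The `value_fn` callable and the toy additive game are assumptions introduced purely for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a cooperative game over feature subsets.

    value_fn maps a frozenset of feature indices to the game's value
    (e.g., a model's expected output when only those features are known).
    Exponential in n_features, so only feasible for small toy games.
    """
    players = range(n_features)
    phi = [0.0] * n_features
    for i in players:
        others = [j for j in players if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                s = frozenset(subset)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                          / factorial(n_features))
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Toy additive game: v(S) sums per-feature contributions, so the
# Shapley values recover each contribution (up to float rounding).
contrib = {0: 2.0, 1: 3.0, 2: -1.0}
print(shapley_values(lambda s: sum(contrib[i] for i in s), 3))
```

For an additive game like this one, the Shapley attribution is exactly the per-feature contribution; the interesting (and, as the paper argues, problematic) cases arise when the game is not additive and the Shapley values force an additive summary onto it.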
no code implementations • ICML 2020 • I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, Sorelle Friedler
Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models.
no code implementations • 6 Nov 2019 • Dylan Slack, Sorelle Friedler, Emile Givental
Data sets for fairness-relevant tasks can lack examples or be biased with respect to a specific label in a sensitive attribute.
1 code implementation • 24 Aug 2019 • Dylan Slack, Sorelle Friedler, Emile Givental
Then, we illustrate the usefulness of both algorithms as a combined method for training models from a few data points on new tasks while using Fairness Warnings as interpretable boundary conditions under which the newly trained model may not be fair.
2 code implementations • 11 Dec 2014 • Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger, Suresh Venkatasubramanian
It might not be possible to disclose the process.