no code implementations • 12 Dec 2023 • Marc-Etienne Brunet, Ashton Anderson, Richard Zemel
Large pretrained language models (LLMs) can be rapidly adapted to a wide variety of tasks via a text-to-text approach, where the instruction and input are fed to the model in natural language.
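The text-to-text setup described above can be sketched in a few lines. The prompt template here is an assumption for illustration only, not the paper's actual format:

```python
# Illustrative sketch (assumed template, not the paper's actual format):
# in the text-to-text setting, task adaptation means serializing the
# instruction and the input into a single natural-language string.

def build_prompt(instruction: str, text: str) -> str:
    """Concatenate an instruction and an input into one prompt string."""
    return f"{instruction}\n\nInput: {text}\nOutput:"

prompt = build_prompt(
    "Classify the sentiment of the following review as positive or negative.",
    "The movie was a delight from start to finish.",
)
# The same pretrained model can handle any task framed this way; only the
# natural-language instruction changes, not the model weights.
```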
no code implementations • 25 Mar 2020 • Elnaz Barshan, Marc-Etienne Brunet, Gintare Karolina Dziugaite
In this work, we focus on the use of influence functions to identify relevant training examples that one might hope "explain" the predictions of a machine learning model.
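The influence-function idea can be illustrated with a minimal sketch. This assumes a simple L2-regularized logistic regression and the standard first-order influence approximation (a toy stand-in, not the authors' implementation):

```python
import numpy as np

# Minimal sketch of influence scores for an L2-regularized logistic
# regression (an assumed toy setup, not the paper's implementation).
# The influence of upweighting training point z_i on the loss at a test
# point z is approximated by
#     I(z_i, z) = -grad L(z)^T  H^{-1}  grad L(z_i),
# where H is the Hessian of the regularized training objective.

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def influence_scores(X, y, x_test, y_test, w, lam=0.1):
    n, d = X.shape
    p = sigmoid(X @ w)
    # Hessian of the mean logistic loss plus the L2 regularizer.
    H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)
    g_test = (sigmoid(x_test @ w) - y_test) * x_test
    g_train = (p - y)[:, None] * X           # per-example gradients
    return -g_train @ np.linalg.solve(H, g_test)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = (X @ w_true > 0).astype(float)
w = w_true  # stand-in for a trained model; a real use would fit w first
scores = influence_scores(X, y, X[0], y[0], w)
# The most negative scores mark training points whose upweighting would
# most reduce the test loss -- candidate "explanations" of the prediction.
top = np.argsort(scores)[:5]
```

Note that a training point's influence on its own prediction is always non-positive here, since the Hessian is positive definite.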
2 code implementations • 8 Oct 2018 • Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, Richard Zemel
Given a word embedding trained on a corpus, our method identifies how perturbing the corpus will affect the bias of the resulting embedding.
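The quantity being estimated can be illustrated by brute force on a toy corpus. This sketch uses an assumed PPMI+SVD embedding and a simple cosine-gap bias score, and retrains on the perturbed corpus directly, whereas the paper approximates this effect without retraining:

```python
import numpy as np

# Toy sketch (assumptions: a PPMI+SVD count-based embedding and a simple
# cosine-gap bias score; the paper approximates the effect of a corpus
# perturbation, while here we just retrain on the perturbed corpus).

docs = [
    "he is a doctor", "she is a nurse",
    "he is a nurse", "she is a doctor",
    "she is a nurse",   # perturbation: this document will be removed
]

def cooccurrence(doc_list, window=3):
    vocab = sorted({w for d in doc_list for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for d in doc_list:
        toks = d.split()
        for i, w in enumerate(toks):
            lo, hi = max(0, i - window), min(len(toks), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    C[idx[w], idx[toks[j]]] += 1
    return C, idx

def embed(C, dim=3):
    # PPMI weighting followed by truncated SVD.
    total = C.sum()
    p_w = C.sum(1, keepdims=True) / total
    p_c = C.sum(0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((C / total) / (p_w * p_c))
    ppmi = np.maximum(pmi, 0)
    ppmi[~np.isfinite(ppmi)] = 0
    U, S, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * S[:dim]

def bias(E, idx):
    # Signed association of "nurse" with "she" vs. "he".
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cos(E[idx["nurse"]], E[idx["she"]]) - cos(E[idx["nurse"]], E[idx["he"]])

C, idx = cooccurrence(docs)
b_full = bias(embed(C), idx)
C2, idx2 = cooccurrence(docs[:-1])      # perturbed corpus: last doc removed
b_pert = bias(embed(C2), idx2)
# b_pert - b_full is the "differential bias" attributable to the removed
# document -- the quantity an influence-style method would approximate.
```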