no code implementations • NAACL (PrivateNLP) 2021 • Shlomo Hoory, Amir Feder, Avichai Tendler, Sofia Erell, Alon Peled-Cohen, Itay Laish, Hootan Nakhost, Uri Stemmer, Ayelet Benjamini, Avinatan Hassidim, Yossi Matias
One method to guarantee the privacy of such individuals is to train a differentially-private language model, but this usually comes at the expense of model performance.
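A minimal sketch of the kind of differentially-private training this refers to, assuming PyTorch with the Opacus library; the toy model, noise multiplier, and clipping norm below are illustrative placeholders, not the configuration used in the paper.

```python
# Sketch: differentially-private training (DP-SGD) via Opacus, with illustrative values.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy classifier standing in for a language-model head.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data = TensorDataset(torch.randn(256, 128), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # more noise -> stronger privacy, typically lower utility
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Accumulated privacy budget for a chosen delta.
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```

The noise added to clipped per-sample gradients is exactly the source of the utility loss the abstract mentions.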
no code implementations • 11 Oct 2023 • Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada, Avigail Dabush, Bobbi Aubrey, Harshvardhan Gazula, Amir Feder, Werner K Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Roi Reichart, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Omer Levy, Uri Hasson
Our results reveal a connection between human language processing and DLMs, with the DLM's layer-by-layer accumulation of contextual information mirroring the timing of neural activity in high-order language areas.
no code implementations • 1 Oct 2023 • Yair Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart
We hence present a second approach based on matching, and propose a method that is guided by an LLM at training time and learns a dedicated embedding space.
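As a rough illustration of the matching idea (not the paper's training procedure), one can embed texts and pair each example with its nearest neighbour whose concept label is flipped; the sentence-transformers encoder and the toy "service" concept below are assumptions made for the sketch.

```python
# Sketch: matching-based approximate counterfactuals in an embedding space.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "the food was great but the service was slow",
    "the food was great and the service was quick",
    "terrible food, friendly staff",
]
service_label = np.array([0, 1, 1])  # hypothetical concept: positive service mention

emb = encoder.encode(texts, normalize_embeddings=True)

def match_counterfactual(i: int) -> int:
    """Index of the most similar text whose concept label is flipped."""
    candidates = np.where(service_label != service_label[i])[0]
    sims = emb[candidates] @ emb[i]
    return int(candidates[np.argmax(sims)])

# Treat the matched text as an approximate counterfactual for example 0.
print(texts[match_counterfactual(0)])
```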
1 code implementation • NeurIPS 2023 • Nino Scherrer, Claudia Shi, Amir Feder, David M. Blei
We apply this method to study which moral beliefs are encoded in different LLMs, especially in ambiguous cases where the right choice is not obvious.
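A hedged sketch of how such choices might be probed: present a forced-choice scenario repeatedly and inspect the answer distribution. `query_llm`, the scenario text, and the near-uniform "ambiguity" heuristic are hypothetical stand-ins, not the paper's survey design.

```python
# Sketch: probing an LLM's choice distribution on a forced-choice moral scenario.
from collections import Counter

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever chat/completions client is used.
    raise NotImplementedError("plug in a real LLM client here")

SCENARIO = (
    "You find a wallet with cash and an ID.\n"
    "A) Return it to the owner.  B) Keep the cash.\n"
    "Answer with A or B only."
)

def choice_distribution(n_samples: int = 50) -> Counter:
    answers = Counter()
    for _ in range(n_samples):
        reply = query_llm(SCENARIO).strip().upper()
        if reply and reply[0] in ("A", "B"):
            answers[reply[0]] += 1
    return answers

# A near-uniform split over repeated samples would mark the case as ambiguous for
# that model; a heavy skew suggests a consistently encoded preference.
```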
1 code implementation • 31 May 2023 • Carolina Zheng, Claudia Shi, Keyon Vafa, Amir Feder, David M. Blei
In this paper, we show that the performance of controlled generation may be poor if the distributions of text in response to user prompts differ from the distribution the predictor was trained on.
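To make the concern concrete, a small sketch under assumed toy data: fit a control predictor on one text distribution and compare its accuracy on text generated in response to prompts. The TF-IDF and logistic-regression pipeline and the tiny datasets are placeholders, not the paper's setup.

```python
# Sketch: does a control predictor transfer to LLM-generated text?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy "training distribution" for the attribute predictor (1 = positive sentiment).
train_texts = ["loved this movie", "great acting", "boring plot", "terrible film"]
train_labels = [1, 1, 0, 0]
# Hypothetical text sampled from an LLM in response to user prompts, with gold labels.
prompt_texts = ["an absolute delight of a screenplay", "a tedious, meandering slog"]
prompt_labels = [1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(train_texts), train_labels)

in_dist = accuracy_score(train_labels, clf.predict(vec.transform(train_texts)))
shifted = accuracy_score(prompt_labels, clf.predict(vec.transform(prompt_texts)))
print(f"in-distribution: {in_dist:.2f}, on prompted generations: {shifted:.2f}")
# A large gap indicates the predictor, and hence the controlled generation it
# guides, may behave poorly under the prompt-induced distribution shift.
```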
no code implementations • 25 Oct 2022 • Gal Yona, Amir Feder, Itay Laish
An important component in deploying machine learning (ML) in safety-critical applications is having a reliable measure of confidence in the ML model's predictions.
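For context, a minimal sketch of one common (and not necessarily reliable) confidence measure, max-softmax probability with an abstention threshold; the logits and the 0.7 threshold are illustrative.

```python
# Sketch: max-softmax confidence with a selective-prediction threshold.
import numpy as np

def max_softmax_confidence(logits: np.ndarray) -> np.ndarray:
    """Per-example confidence = highest class probability."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilise the softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

logits = np.array([[2.0, 0.1, -1.0],    # confident example
                   [0.3, 0.2, 0.25]])   # uncertain example
conf = max_softmax_confidence(logits)
abstain = conf < 0.7   # defer low-confidence cases to a human in safety-critical use
print(conf, abstain)
```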
no code implementations • 28 Jul 2022 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg
Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.
no code implementations • 1 Jun 2022 • Amir Feder, Guy Horowitz, Yoav Wald, Roi Reichart, Nir Rosenfeld
Accurately predicting the relevance of items to users is crucial to the success of many social platforms.
1 code implementation • 27 May 2022 • Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP).
1 code implementation • ACL 2022 • Nitay Calderon, Eyal Ben-David, Amir Feder, Roi Reichart
Natural language processing (NLP) algorithms have become very successful, but they still struggle when applied to out-of-distribution examples.
1 code implementation • 2 Sep 2021 • Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, Diyi Yang
A fundamental goal of scientific research is to learn about causal relationships.
no code implementations • ACL 2021 • Daniel Rosenberg, Itai Gat, Amir Feder, Roi Reichart
Deep learning algorithms have shown promising results in visual question answering (VQA) tasks, but a closer look reveals that they often do not truly understand the rich signal they are fed.
no code implementations • NeurIPS 2021 • Yoav Wald, Amir Feder, Daniel Greenfeld, Uri Shalit
In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization.
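A small sketch of how "calibration across multiple domains" can be checked in practice, assuming a simple binned expected-calibration-error estimate; the binning scheme and toy per-domain data are illustrative.

```python
# Sketch: per-domain expected calibration error (ECE) for a binary classifier.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned |accuracy - confidence| gap, weighted by bin mass."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            acc = (labels[mask] == (probs[mask] > 0.5)).mean()
            ece += mask.mean() * abs(acc - probs[mask].mean())
    return ece

# Small, similar ECE in every training domain is the multi-domain calibration
# condition the paper relates to invariant representations and OOD generalization.
domains = {
    "reviews": ([0.9, 0.8, 0.3, 0.6], [1, 1, 0, 1]),
    "tweets":  ([0.7, 0.2, 0.95, 0.4], [1, 0, 1, 0]),
}
for name, (p, y) in domains.items():
    print(name, round(expected_calibration_error(p, y), 3))
```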
1 code implementation • 18 Jan 2021 • Guy Rotman, Amir Feder, Roi Reichart
Recent improvements in the predictive quality of natural language processing systems are often dependent on a substantial increase in the number of model parameters.
1 code implementation • CL (ACL) 2021 • Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance.
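The sketch below gives only the general flavour of an adversarial auxiliary objective over BERT representations; the gradient-reversal trick, head sizes, and loss weighting are illustrative assumptions, not CausaLM's exact pre-training tasks.

```python
# Sketch: adversarially discouraging a BERT representation from encoding a concept.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad   # flip gradients so the encoder is pushed to drop the concept

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
task_head = nn.Linear(768, 2)      # main task head (e.g., sentiment)
concept_head = nn.Linear(768, 2)   # adversarial head for the treated concept

def losses(texts, task_labels, concept_labels):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    cls = encoder(**batch).last_hidden_state[:, 0]   # [CLS] representation
    task_loss = nn.functional.cross_entropy(task_head(cls), task_labels)
    adv_loss = nn.functional.cross_entropy(
        concept_head(GradReverse.apply(cls)), concept_labels)
    return task_loss + adv_loss   # minimise jointly during the auxiliary training stage
```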
2 code implementations • CL (ACL) 2020 • Nadav Oved, Amir Feder, Roi Reichart
We find that our best-performing textual model is most associated with topics that are intuitively related to each prediction task, and that better models correlate more strongly with more informative topics.
no code implementations • 16 Jul 2019 • Yash Goyal, Amir Feder, Uri Shalit, Been Kim
To overcome this problem, we define the Causal Concept Effect (CaCE) as the causal effect of (the presence or absence of) a human-interpretable concept on a deep neural net's predictions.
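Read schematically, that definition is a difference of interventional expectations over the classifier's output; a rendering in our own notation (not quoted from the paper):

```latex
% Schematic form of the Causal Concept Effect, following the definition above:
% the effect on classifier f of intervening to make concept C present vs. absent.
\mathrm{CaCE}(C, f) \;=\;
  \mathbb{E}\bigl[f(X) \mid \mathrm{do}(C = 1)\bigr]
  \;-\;
  \mathbb{E}\bigl[f(X) \mid \mathrm{do}(C = 0)\bigr]
```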