Search Results for author: Amir Feder

Found 17 papers, 8 papers with code

Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals

no code implementations · 1 Oct 2023 · Yair Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart

We hence present a second approach based on matching, and propose a method that is guided by an LLM at training-time and learns a dedicated embedding space.

counterfactual · Language Modelling · +1

Evaluating the Moral Beliefs Encoded in LLMs

1 code implementation · NeurIPS 2023 · Nino Scherrer, Claudia Shi, Amir Feder, David M. Blei

(2) We apply this method to study what moral beliefs are encoded in different LLMs, especially in ambiguous cases where the right choice is not obvious.

Moral Scenarios

An Invariant Learning Characterization of Controlled Text Generation

1 code implementation · 31 May 2023 · Carolina Zheng, Claudia Shi, Keyon Vafa, Amir Feder, David M. Blei

In this paper, we show that the performance of controlled generation may be poor if the distributions of text in response to user prompts differ from the distribution the predictor was trained on.

Language Modelling · Large Language Model · +1

Useful Confidence Measures: Beyond the Max Score

no code implementations · 25 Oct 2022 · Gal Yona, Amir Feder, Itay Laish

An important component in deploying machine learning (ML) in safety-critical applications is having a reliable measure of confidence in the ML model's predictions.
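The title's "beyond the max score" refers to confidence measures other than the maximum class probability. As a generic, hedged illustration (these are standard alternatives from the calibration literature, not necessarily the measures proposed in the paper), two common options are the negative entropy of the predicted distribution and the top-2 margin:

```python
import numpy as np

def max_score_confidence(probs):
    """Baseline: confidence = highest predicted class probability."""
    return np.max(probs, axis=-1)

def entropy_confidence(probs, eps=1e-12):
    """Alternative: negative entropy of the full distribution,
    using all class scores rather than only the maximum."""
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)
    return -entropy  # higher value = more confident

def margin_confidence(probs):
    """Alternative: gap between the top two class probabilities."""
    sorted_p = np.sort(probs, axis=-1)
    return sorted_p[..., -1] - sorted_p[..., -2]

# A peaked distribution ranks as more confident than a flat one
# under all three measures.
peaked = np.array([0.90, 0.05, 0.05])
flat = np.array([0.40, 0.35, 0.25])
assert max_score_confidence(peaked) > max_score_confidence(flat)
assert entropy_confidence(peaked) > entropy_confidence(flat)
assert margin_confidence(peaked) > margin_confidence(flat)
```

The entropy and margin measures can disagree with the max score on distributions where probability mass is spread across runner-up classes, which is exactly where a single max score can be misleading.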

Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions

no code implementations · 28 Jul 2022 · Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg

Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.

In the Eye of the Beholder: Robust Prediction with Causal User Modeling

no code implementations · 1 Jun 2022 · Amir Feder, Guy Horowitz, Yoav Wald, Roi Reichart, Nir Rosenfeld

Accurately predicting the relevance of items to users is crucial to the success of many social platforms.

Recommendation Systems

DoCoGen: Domain Counterfactual Generation for Low Resource Domain Adaptation

1 code implementation · ACL 2022 · Nitay Calderon, Eyal Ben-David, Amir Feder, Roi Reichart

Natural language processing (NLP) algorithms have become very successful, but they still struggle when applied to out-of-distribution examples.

counterfactual · Domain Adaptation

Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions

no code implementations · ACL 2021 · Daniel Rosenberg, Itai Gat, Amir Feder, Roi Reichart

Deep learning algorithms have shown promising results in visual question answering (VQA) tasks, but a more careful look reveals that they often do not understand the rich signal they are being fed with.

Question Answering · Visual Question Answering

On Calibration and Out-of-domain Generalization

no code implementations · NeurIPS 2021 · Yoav Wald, Amir Feder, Daniel Greenfeld, Uri Shalit

In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization.

Domain Generalization
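The abstract links OOD generalization to model calibration. For readers unfamiliar with how calibration is quantified, here is a minimal sketch of the standard expected calibration error (ECE), which bins predictions by confidence and averages the gap between confidence and empirical accuracy. This is the generic single-domain metric, not necessarily the paper's exact multi-domain formulation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction in the bin
    return ece

# Toy example: confidence 0.8 with 4/5 correct is perfectly calibrated.
conf = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
corr = np.array([1, 1, 1, 1, 0])
print(round(expected_calibration_error(conf, corr), 6))  # 0.0
```

A model calibrated on every training domain makes confidence a domain-invariant quantity, which is the sense in which the paper treats calibration as a special case of invariant representation.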

Model Compression for Domain Adaptation through Causal Effect Estimation

1 code implementation · 18 Jan 2021 · Guy Rotman, Amir Feder, Roi Reichart

Recent improvements in the predictive quality of natural language processing systems are often dependent on a substantial increase in the number of model parameters.

Domain Adaptation · Model Compression · +3

CausaLM: Causal Model Explanation Through Counterfactual Language Models

1 code implementation · CL (ACL) 2021 · Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart

Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance.


Predicting In-game Actions from Interviews of NBA Players

2 code implementations · CL (ACL) 2020 · Nadav Oved, Amir Feder, Roi Reichart

We find that our best performing textual model is most associated with topics that are intuitively related to each prediction task and that better models yield higher correlation with more informative topics.

Text Classification

Explaining Classifiers with Causal Concept Effect (CaCE)

no code implementations · 16 Jul 2019 · Yash Goyal, Amir Feder, Uri Shalit, Been Kim

To overcome this problem, we define the Causal Concept Effect (CaCE) as the causal effect of (the presence or absence of) a human-interpretable concept on a deep neural net's predictions.

Causal Inference
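The excerpt defines CaCE as the causal effect of a concept's presence or absence on a network's predictions, i.e. the difference in expected prediction under do(concept=1) versus do(concept=0). A hedged toy sketch of that definition, where the generative process, classifier, and all parameter values are illustrative assumptions rather than the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(concept, n=10_000):
    """Toy generative process we can intervene on: a binary concept
    drives one feature; a second feature is concept-independent."""
    x1 = concept + rng.normal(0.0, 0.1, n)  # feature shifted by the concept
    x2 = rng.normal(0.0, 1.0, n)            # nuisance feature
    return np.stack([x1, x2], axis=1)

def classifier(x):
    """Stand-in for a trained model: sigmoid over a linear score."""
    logits = 2.0 * x[:, 0] + 0.5 * x[:, 1]
    return 1.0 / (1.0 + np.exp(-logits))

# CaCE as defined above: contrast expected predictions under the
# two interventions on the concept.
cace = classifier(generate(1)).mean() - classifier(generate(0)).mean()
print(round(cace, 3))
```

The key point the definition captures is the do(), not the conditioning: sampling under an intervention on the concept breaks any spurious correlation between the concept and other input features, which simple conditional contrasts would not.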
