Search Results for author: Alon Jacovi

Found 13 papers, 6 papers with code

Diagnosing AI Explanation Methods with Folk Concepts of Behavior

no code implementations 27 Jan 2022 Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova

When explaining AI behavior to humans, how is the communicated information being comprehended by the human explainee, and does it match what the explanation attempted to communicate?

Human Interpretation of Saliency-based Explanation Over Text

1 code implementation 27 Jan 2022 Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu

In this work, we focus on this question through a study of saliency-based explanations over textual data.

Contrastive Explanations for Model Interpretability

1 code implementation EMNLP 2021 Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg

Our method is based on projecting model representation to a latent space that captures only the features that are useful (to the model) to differentiate two potential decisions.

Text Classification
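The projection idea above can be sketched minimally: pick the direction that best separates two candidate decisions and keep only that component of the representation. The toy below uses a difference-of-means direction on synthetic data; it illustrates the general notion of a contrastive projection, not the paper's exact latent-space construction.

```python
import numpy as np

# Toy sketch of contrastive projection: keep only the feature that
# differentiates decision A from decision B. Synthetic data; the
# difference-of-means direction is an illustrative stand-in for a
# learned latent space.
rng = np.random.default_rng(0)
reps_a = rng.normal(0.0, 1.0, size=(50, 8))  # representations the model labels A
reps_b = rng.normal(1.0, 1.0, size=(50, 8))  # representations the model labels B

direction = reps_b.mean(axis=0) - reps_a.mean(axis=0)
direction /= np.linalg.norm(direction)

# 1-D contrastive projection of each representation.
proj_a = reps_a @ direction
proj_b = reps_b @ direction
print(proj_b.mean() > proj_a.mean())  # the projection separates the two decisions
```

Everything orthogonal to `direction` is discarded, which is what makes the explanation contrastive: it answers "why A rather than B" instead of "why A".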

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

no code implementations 15 Oct 2020 Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg

We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).

Exposing Shallow Heuristics of Relation Extraction Models with Challenge Data

1 code implementation EMNLP 2020 Shachar Rosenman, Alon Jacovi, Yoav Goldberg

The process of collecting and annotating training data may introduce distribution artifacts that limit the ability of models to learn correct generalization behavior.

Question Answering
Relation Extraction

Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals

no code implementations 1 Jun 2020 Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg

In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
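An "amnesic" intervention of this kind can be sketched as removing a linear direction from the representation and checking that the targeted property is no longer linearly recoverable. The direction and data below are synthetic stand-ins, not the paper's learned projections.

```python
import numpy as np

# Sketch of an amnesic counterfactual: erase one linear direction
# (assumed, for illustration, to encode some property) via a
# nullspace projection, then verify the property is gone.
rng = np.random.default_rng(1)
reps = rng.normal(size=(100, 6))
direction = np.zeros(6)
direction[0] = 1.0  # pretend the property lives along dimension 0

# Projection onto the nullspace of `direction`: P = I - d d^T.
P = np.eye(6) - np.outer(direction, direction)
amnesic = reps @ P

print(np.allclose(amnesic[:, 0], 0.0))  # the property direction is erased
```

Comparing model behavior before and after such a removal is what licenses behavioral conclusions that probing accuracy alone cannot.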

Aligning Faithful Interpretations with their Social Attribution

1 code implementation 1 Jun 2020 Alon Jacovi, Yoav Goldberg

We find that the requirement of model interpretations to be faithful is vague and incomplete.

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?

no code implementations ACL 2020 Alon Jacovi, Yoav Goldberg

With the growing popularity of deep-learning-based NLP models comes a need for interpretable systems.

Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning

1 code implementation EACL 2021 Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama

We consider the situation in which a user has collected a small set of documents on a cohesive topic, and they want to retrieve additional documents on this topic from a large collection.

Information Retrieval
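Positive-unlabeled (PU) training of such a retrieval classifier rests on a PU risk estimate: the risk on negatives is inferred from unlabeled data using an assumed class prior, and the estimate is clipped at zero (the non-negative correction). The sketch below uses a sigmoid loss and made-up scores; `nn_pu_risk` and its signature are illustrative, not the paper's code.

```python
import numpy as np

def nn_pu_risk(scores_p, scores_u, prior):
    """Non-negative PU risk estimate (sketch, sigmoid-loss surrogate).

    scores_p: classifier scores on labeled positive documents
    scores_u: classifier scores on unlabeled documents
    prior:    assumed class prior pi = P(y = +1)
    """
    loss = lambda z: 1.0 / (1.0 + np.exp(z))  # sigmoid loss l(z)
    risk_p_pos = prior * loss(scores_p).mean()            # pi * R_p^+
    # Negative risk, estimated from unlabeled minus positive terms,
    # clipped at zero so the overall estimate stays non-negative.
    risk_n = loss(-scores_u).mean() - prior * loss(-scores_p).mean()
    return risk_p_pos + max(0.0, risk_n)

print(round(nn_pu_risk(np.array([2.0, 1.5]),
                       np.array([-1.0, 0.5, -2.0]), 0.3), 3))
```

Without the clipping, the negative-risk term can go below zero on finite samples, which is the overfitting failure mode the non-negative variant guards against.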

Neural network gradient-based learning of black-box function interfaces

no code implementations ICLR 2019 Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Jonathan Berant

We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.
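The training trick can be sketched as: fit a differentiable estimator to mimic the black box, route gradients through the estimator during training, and call the real black box at inference. Everything below (round-as-black-box, a linear estimator, a one-parameter "network") is a toy stand-in, not the paper's architecture.

```python
import numpy as np

def black_box(z):
    # Pretend we cannot differentiate through this call.
    return np.round(z)

# Fit a differentiable (linear) estimator e(z) ~ black_box(z).
zs = np.linspace(-3, 3, 121)
w, b = np.polyfit(zs, black_box(zs), 1)  # least-squares line

# Base "network": one parameter theta, output h = theta * x.
theta, x, target, lr = 0.1, 2.0, 4.0, 0.05
for _ in range(200):
    h = theta * x
    y = w * h + b                      # estimator stands in for the black box
    grad = 2 * (y - target) * w * x    # d/dtheta of (y - target)^2
    theta -= lr * grad

# At inference, the real black box replaces the estimator.
print(black_box(theta * x))
```

The estimator only exists to supply gradients; once the base network is trained, it is discarded in favor of the actual black-box function.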

Estimate and Replace: A Novel Approach to Integrating Deep Neural Networks with Existing Applications

no code implementations 24 Apr 2018 Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Alon Jacovi

At inference time, we replace each estimator with its existing application counterpart and let the base network solve the task by interacting with the existing application.
