Search Results for author: Oana-Maria Camburu

Found 24 papers, 13 papers with code

Identifying Linear Relational Concepts in Large Language Models

no code implementations · 15 Nov 2023 · David Chanin, Anthony Hunter, Oana-Maria Camburu

Transformer language models (LMs) have been shown to represent concepts as directions in the latent space of hidden activations.
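The "concepts as directions" view can be illustrated with a minimal sketch (a generic illustration, not the paper's method; the function name and toy vectors are hypothetical): a concept is scored by projecting a hidden activation onto a unit-normalised direction.

```python
from math import sqrt

def concept_score(hidden, direction):
    """Score a concept by projecting a hidden activation onto a unit direction.

    Hypothetical illustration: in practice the direction would be learned
    from model activations, not hand-written.
    """
    norm = sqrt(sum(d * d for d in direction))  # normalise the concept direction
    return sum(h * d / norm for h, d in zip(hidden, direction))

# Toy example: activations aligned with the direction score higher.
direction = [1.0, 0.0, 0.0]
h_pos = [2.0, 0.1, -0.3]   # mostly aligned with the direction
h_neg = [-1.5, 0.2, 0.4]   # anti-aligned
assert concept_score(h_pos, direction) > concept_score(h_neg, direction)
```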

Using Natural Language Explanations to Improve Robustness of In-context Learning for Natural Language Inference

no code implementations · 13 Nov 2023 · Xuanli He, Yuxiang Wu, Oana-Maria Camburu, Pasquale Minervini, Pontus Stenetorp

Moreover, we introduce a new approach to X-ICL by prompting an LLM (ChatGPT in our case) with a few human-generated NLEs to produce further NLEs (we call it ChatGPT few-shot), which we show to be superior to both ChatGPT zero-shot and human-generated NLEs alone.

Natural Language Inference
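In-context learning with explanations, as described in the abstract above, amounts to assembling a prompt in which each demonstration carries a natural language explanation alongside its label. A minimal sketch follows; the template and function name are illustrative assumptions, not the paper's exact prompt format.

```python
def build_xicl_prompt(demos, premise, hypothesis):
    """Assemble a few-shot NLI prompt where each demonstration includes an NLE.

    `demos` is a list of (premise, hypothesis, label, explanation) tuples.
    The template here is a hypothetical stand-in for the paper's prompt.
    """
    parts = []
    for p, h, label, nle in demos:
        parts.append(
            f"Premise: {p}\nHypothesis: {h}\n"
            f"Label: {label}\nExplanation: {nle}\n"
        )
    # The query instance ends at "Label:" so the model completes it.
    parts.append(f"Premise: {premise}\nHypothesis: {hypothesis}\nLabel:")
    return "\n".join(parts)

demos = [("A man plays guitar.", "A person makes music.",
          "entailment", "Playing guitar is a way of making music.")]
prompt = build_xicl_prompt(demos, "A dog runs.", "An animal is moving.")
```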

KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations

no code implementations · 5 Jun 2023 · Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu

While recent work has considerably improved the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research on detecting and alleviating inconsistencies among generated NLEs.

Adversarial Attack

Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms

no code implementations · 22 May 2023 · Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei

We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and BERT baseline.

Logical Reasoning · Natural Language Inference +1

Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns

no code implementations · 11 Feb 2023 · Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu

Bias-measuring datasets play a critical role in detecting biased behavior of language models and in evaluating progress of bias mitigation methods.

coreference-resolution · Data Augmentation

Rationalizing Predictions by Adversarial Information Calibration

no code implementations · 15 Jan 2023 · Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz

One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that leads the model to give its prediction on that instance.

Language Modelling · Sentiment Analysis +2
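The notion of an extractive rationale described above can be sketched schematically: a binary mask that keeps a subset of input tokens. The selection rule below (top-k by a given score) is a trivial stand-in for the learned selector in the paper; all names are hypothetical.

```python
def extract_rationale(tokens, scores, k=2):
    """Keep the k highest-scoring tokens as the rationale (a subset of the input).

    In the paper the token scores come from a learned selector model;
    here they are supplied directly for illustration.
    """
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    mask = [1 if i in top else 0 for i in range(len(tokens))]  # binary token mask
    rationale = [t for t, m in zip(tokens, mask) if m]         # tokens kept, in order
    return rationale, mask

tokens = ["the", "film", "was", "utterly", "brilliant"]
scores = [0.1, 0.3, 0.1, 0.8, 0.9]
rationale, mask = extract_rationale(tokens, scores, k=2)
assert rationale == ["utterly", "brilliant"]
```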

Explaining Chest X-ray Pathologies in Natural Language

1 code implementation · 9 Jul 2022 · Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, Thomas Lukasiewicz

Most deep learning algorithms lack explanations for their predictions, which limits their deployment in clinical practice.

Explainable Models

Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations

no code implementations · 25 Jun 2021 · Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley

Our framework improves over previous methods by: (i) reaching SOTA task performance while also providing explanations, (ii) providing two types of explanations, while existing models usually provide only one type, and (iii) beating the previous SOTA by a large margin in terms of the quality of both types of explanations.

Decision Making

e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks

2 code implementations · ICCV 2021 · Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz

e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks.

Language Modelling · Text Generation

Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration

no code implementations · 16 Dec 2020 · Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz

We use an adversarial-based technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.

Language Modelling · Sentiment Analysis

The Gap on GAP: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets

1 code implementation · 3 Nov 2020 · Vid Kocijan, Oana-Maria Camburu, Thomas Lukasiewicz

For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies.

coreference-resolution · Test
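The distance confound described in the abstract above is easy to make concrete: compute the mean token distance between the pronoun and the correct candidate in each subset, and compare. This is a toy sketch with hypothetical indices, not the paper's measurement code.

```python
def avg_pronoun_distance(examples):
    """Mean token distance between pronoun and correct candidate in a subset.

    Each example is (tokens, pronoun_index, candidate_index). A confound like
    a longer average distance can make one subset harder for models that
    struggle with long-term dependencies, independently of any actual bias.
    """
    dists = [abs(p - c) for _, p, c in examples]
    return sum(dists) / len(dists)

# Toy subsets with hypothetical indices (token lists elided).
feminine = [(["..."], 10, 2), (["..."], 12, 1)]
masculine = [(["..."], 5, 2), (["..."], 6, 3)]
# Here the feminine subset has the longer average distance:
assert avg_pronoun_distance(feminine) > avg_pronoun_distance(masculine)
```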

Explaining Deep Neural Networks

no code implementations · 4 Oct 2020 · Oana-Maria Camburu

The first direction consists of feature-based post-hoc explanatory methods, that is, methods that aim to explain an already trained and fixed model (post-hoc), and that provide explanations in terms of input features, such as tokens for text and superpixels for images (feature-based).

Decision Making · speech-recognition +2

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

1 code implementation · 23 Sep 2020 · Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions.

Decision Making · Fairness

e-SNLI-VE: Corrected Visual-Textual Entailment with Natural Language Explanations

3 code implementations · 7 Apr 2020 · Virginie Do, Oana-Maria Camburu, Zeynep Akata, Thomas Lukasiewicz

The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning.

Natural Language Inference

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

1 code implementation · ACL 2020 · Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom

To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.

Decision Making · Natural Language Inference

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods

2 code implementations · 4 Oct 2019 · Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.

feature selection

A Surprisingly Robust Trick for the Winograd Schema Challenge

no code implementations · ACL 2019 · Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.

Language Modelling · Natural Language Understanding +1

A Surprisingly Robust Trick for Winograd Schema Challenge

2 code implementations · 15 May 2019 · Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.

Common Sense Reasoning · Language Modelling +2

e-SNLI: Natural Language Inference with Natural Language Explanations

2 code implementations · NeurIPS 2018 · Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom

In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time.

Natural Language Inference · Test
