no code implementations • 16 Oct 2024 • Maxime Kayser, Bayar Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, Bartlomiej W. Papiez, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
no code implementations • 12 Sep 2024 • Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser
iLLuMinaTE navigates three main stages - causal connection, explanation selection, and explanation presentation - with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation).
no code implementations • 4 Apr 2024 • Noah Y. Siegel, Oana-Maria Camburu, Nicolas Heess, Maria Perez-Ortiz
In this work, we introduce Correlational Explanatory Faithfulness (CEF), a metric that can be used in faithfulness tests based on input interventions.
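The exact definition of CEF is given in the paper; purely as an illustrative sketch, a correlational faithfulness score could be computed as the correlation between how much each input intervention changes the model's prediction and how much it changes the explanation. The function name, variable names, and toy numbers below are assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a correlational faithfulness score computed as the
# Pearson correlation between (a) how much an input intervention changes the
# model's prediction and (b) how much it changes the explanation.
# Names and numbers are assumptions, not the paper's CEF.
import numpy as np

def correlational_faithfulness(prediction_impacts, explanation_impacts):
    """Pearson correlation between per-intervention prediction and explanation impacts."""
    prediction_impacts = np.asarray(prediction_impacts, dtype=float)
    explanation_impacts = np.asarray(explanation_impacts, dtype=float)
    return float(np.corrcoef(prediction_impacts, explanation_impacts)[0, 1])

# Toy example: impacts measured over five input interventions.
pred_deltas = [0.40, 0.05, 0.30, 0.01, 0.22]   # |change in model confidence|
expl_deltas = [0.90, 0.10, 0.70, 0.00, 0.55]   # how much the explanation changed
print(correlational_faithfulness(pred_deltas, expl_deltas))
```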
2 code implementations • 15 Nov 2023 • David Chanin, Anthony Hunter, Oana-Maria Camburu
Transformer language models (LMs) have been shown to represent concepts as directions in the latent space of hidden activations.
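As a minimal sketch of the "concepts as directions" idea (not the paper's method), one can estimate a concept direction as the difference between the mean hidden activations of sentences that express a concept and sentences that do not; the model choice, layer, and example sentences below are arbitrary placeholders.

```python
# Minimal sketch (not the paper's method): estimate a "concept direction" in a
# Transformer LM's hidden space as the difference between mean activations of
# sentences that express a concept and sentences that do not.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

def mean_hidden(texts, layer=-1):
    """Average the final-token hidden state of the given layer over a list of texts."""
    states = []
    for text in texts:
        inputs = tok(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
        states.append(hidden[0, -1])                       # last-token representation
    return torch.stack(states).mean(dim=0)

concept = ["The movie was wonderful.", "I loved every minute of it."]
non_concept = ["The movie was terrible.", "I hated every minute of it."]
direction = mean_hidden(concept) - mean_hidden(non_concept)
direction = direction / direction.norm()  # unit-length concept direction
```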
1 code implementation • 13 Nov 2023 • Xuanli He, Yuxiang Wu, Oana-Maria Camburu, Pasquale Minervini, Pontus Stenetorp
Recent studies demonstrated that large language models (LLMs) can excel in many tasks via in-context learning (ICL).
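For readers unfamiliar with ICL, the sketch below illustrates the basic mechanism: a few labelled demonstrations are prepended to the test input and the model completes the pattern. The task, prompt format, and examples are placeholders, not the paper's setup.

```python
# Minimal illustration of in-context learning: labelled demonstrations are
# prepended to the test input and the model completes the pattern.
# The task, format, and examples are placeholders, not the paper's setup.
demonstrations = [
    ("The food was amazing.", "positive"),
    ("I will never come back here.", "negative"),
    ("Great service and friendly staff.", "positive"),
]
query = "The waiter ignored us all evening."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in demonstrations)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # pass this prompt to an LLM; its completion is the ICL prediction
```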
1 code implementation • 5 Jun 2023 • Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu
While recent works have considerably improved the quality of the natural language explanations (NLEs) that a model generates to justify its predictions, there has been very limited research on detecting and alleviating inconsistencies among generated NLEs.
1 code implementation • 29 May 2023 • Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein
Explanations of neural models aim to reveal a model's decision-making process for its predictions.
1 code implementation • 22 May 2023 • Jesus Solano, Mardhiyah Sanni, Oana-Maria Camburu, Pasquale Minervini
Models that generate natural language explanations (NLEs) for their predictions have recently attracted increasing interest.
1 code implementation • 22 May 2023 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei
With recent advances, neural models can achieve human-level performance on various natural language tasks.
no code implementations • 11 Feb 2023 • Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
Bias-measuring datasets play a critical role in detecting biased behavior of language models and in evaluating the progress of bias-mitigation methods.
no code implementations • 15 Jan 2023 • Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that leads the model to its prediction on that instance.
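As a toy illustration of an extractive rationale (not the selection mechanism studied in the paper), one can keep the top-k input tokens according to some per-token importance score; the scores below are made up.

```python
# Toy illustration of an extractive rationale: keep the top-k input tokens
# according to some per-token importance score. The scores are made up;
# this is not the selection mechanism studied in the paper.
def extract_rationale(tokens, scores, k=2):
    """Return the k highest-scoring tokens, preserving their original order."""
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

tokens = ["the", "film", "was", "utterly", "boring"]
scores = [0.01, 0.10, 0.02, 0.55, 0.80]  # hypothetical importance scores
print(extract_rationale(tokens, scores))  # ['utterly', 'boring']
```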
1 code implementation • 9 Jul 2022 • Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, Thomas Lukasiewicz
Most deep learning algorithms lack explanations for their predictions, which limits their deployment in clinical practice.
1 code implementation • 12 Dec 2021 • Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
A potential solution is the few-shot out-of-domain transfer of NLEs from a parent task with many NLEs to a child task.
no code implementations • 25 Jun 2021 • Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley
Our framework improves over previous methods by: (i) reaching SOTA task performance while also providing explanations, (ii) providing two types of explanations, while existing models usually provide only one type, and (iii) surpassing the previous SOTA by a large margin in the quality of both types of explanations.
2 code implementations • ICCV 2021 • Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz
e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks.
no code implementations • 16 Dec 2020 • Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
We use an adversarial-based technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
1 code implementation • 3 Nov 2020 • Vid Kocijan, Oana-Maria Camburu, Thomas Lukasiewicz
For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies.
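A simple way to surface the confounder described above is to compare the average pronoun-candidate token distance across dataset subsets; the records and field names in the sketch below are hypothetical.

```python
# Illustrative check for the confounder described above: compare the average
# token distance between the pronoun and the correct candidate across subsets.
# The example records and field names are hypothetical.
from statistics import mean

examples = [
    {"subset": "feminine",  "pronoun_idx": 14, "candidate_idx": 2},
    {"subset": "feminine",  "pronoun_idx": 20, "candidate_idx": 3},
    {"subset": "masculine", "pronoun_idx": 9,  "candidate_idx": 4},
    {"subset": "masculine", "pronoun_idx": 11, "candidate_idx": 6},
]

for subset in ("feminine", "masculine"):
    distances = [abs(e["pronoun_idx"] - e["candidate_idx"])
                 for e in examples if e["subset"] == subset]
    print(subset, mean(distances))
```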
1 code implementation • EMNLP 2020 • Yordan Yordanov, Oana-Maria Camburu, Vid Kocijan, Thomas Lukasiewicz
Overall, four categories of training and evaluation objectives have been introduced.
no code implementations • 4 Oct 2020 • Oana-Maria Camburu
The first direction consists of feature-based post-hoc explanatory methods, that is, methods that aim to explain an already trained and fixed model (post-hoc), and that provide explanations in terms of input features, such as tokens for text and superpixels for images (feature-based).
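One common instance of a feature-based post-hoc method is token occlusion: remove each token in turn and measure the drop in the model's confidence. The sketch below is a generic illustration with a placeholder model, not one of the thesis's specific methods.

```python
# Generic sketch of a feature-based post-hoc method (token occlusion): delete
# each token in turn and record the drop in the model's confidence for its
# original prediction. `model_confidence` is a placeholder for any trained model.
def occlusion_attribution(tokens, model_confidence):
    """Importance of each token = confidence drop when that token is removed."""
    base = model_confidence(tokens)
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        scores.append(base - model_confidence(occluded))
    return scores

# Toy stand-in model: confidence grows with the number of negative cue words.
def toy_confidence(tokens):
    cues = {"boring", "awful", "bad"}
    return min(1.0, 0.2 + 0.4 * sum(t in cues for t in tokens))

print(occlusion_attribution(["the", "plot", "was", "boring", "and", "awful"], toy_confidence))
```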
1 code implementation • 23 Sep 2020 • Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom
For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions.
3 code implementations • 7 Apr 2020 • Virginie Do, Oana-Maria Camburu, Zeynep Akata, Thomas Lukasiewicz
The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning.
1 code implementation • ACL 2020 • Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.
2 code implementations • 4 Oct 2019 • Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom
We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.
1 code implementation • IJCNLP 2019 • Vid Kocijan, Oana-Maria Camburu, Ana-Maria Cretu, Yordan Yordanov, Phil Blunsom, Thomas Lukasiewicz
We use a language-model-based approach for pronoun resolution in combination with our WikiCREM dataset.
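The general idea of language-model-based pronoun resolution is to substitute each candidate for the pronoun and pick the substitution the LM scores as more probable. The sketch below is a generic illustration with GPT-2, not the exact model, training data, or scoring procedure used in the paper.

```python
# Minimal sketch of language-model-based pronoun resolution: substitute each
# candidate for the pronoun and pick the substitution the LM scores as more
# probable. Generic illustration, not the paper's exact model or scoring.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence):
    """Approximate total log-probability of the sentence under the language model."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        loss = lm(**inputs, labels=inputs["input_ids"]).loss  # mean NLL per token
    return -loss.item() * inputs["input_ids"].shape[1]

sentence = "The trophy didn't fit in the suitcase because [PRONOUN] was too big."
candidates = ["the trophy", "the suitcase"]
scores = {c: sentence_log_prob(sentence.replace("[PRONOUN]", c)) for c in candidates}
print(max(scores, key=scores.get))
```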
no code implementations • ACL 2019 • Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.
2 code implementations • 15 May 2019 • Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.
Ranked #13 on Natural Language Inference on WNLI
2 code implementations • NeurIPS 2018 • Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time.
Ranked #1 on Natural Language Inference on e-SNLI