Search Results for author: Nils Feldhus

Found 13 papers, 9 papers with code

Combining Open Domain Question Answering with a Task-Oriented Dialog System

no code implementations • ACL (dialdoc) 2021 • Jan Nehring, Nils Feldhus, Harleen Kaur, Akhyar Ahmed

The aim of this system is to combine the strengths of an open-domain question answering system with the conversational power of task-oriented dialog systems.

Open-Domain Question Answering

LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools

1 code implementation • 23 Jan 2024 • Qianli Wang, Tatiana Anikina, Nils Feldhus, Josef van Genabith, Leonhard Hennig, Sebastian Möller

Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users' understanding, as one-off explanations may occasionally fall short in providing sufficient information to the user.

Counterfactual • Fact Checking • +4

InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations

1 code implementation • 9 Oct 2023 • Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller

While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface.

Dialogue Act Classification • Hate Speech Detection • +1

Inseq: An Interpretability Toolkit for Sequence Generation Models

2 code implementations • 27 Feb 2023 • Gabriele Sarti, Nils Feldhus, Ludwig Sickert, Oskar van der Wal, Malvina Nissim, Arianna Bisazza

Past work in natural language processing interpretability focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools.

Feature Importance • Machine Translation • +2

Mediators: Conversational Agents Explaining NLP Model Behavior

no code implementations • 13 Jun 2022 • Nils Feldhus, Ajay Madhavan Ravichandran, Sebastian Möller

The human-centric explainable artificial intelligence (HCXAI) community has raised the need for framing the explanation process as a conversation between human and machine.

Explainable Artificial Intelligence • Position • +1

Thermostat: A Large Collection of NLP Model Explanations and Analysis Tools

2 code implementations • EMNLP (ACL) 2021 • Nils Feldhus, Robert Schwarzenberg, Sebastian Möller

To facilitate research, we present Thermostat, which consists of a large collection of model explanations and accompanying analysis tools.

Efficient Explanations from Empirical Explainers

2 code implementations • EMNLP (BlackboxNLP) 2021 • Robert Schwarzenberg, Nils Feldhus, Sebastian Möller

Amid a discussion about Green AI in which we see explainability neglected, we explore the possibility of efficiently approximating computationally expensive explainers.
