Search Results for author: Nils Feldhus

Found 16 papers, 10 papers with code

Combining Open Domain Question Answering with a Task-Oriented Dialog System

no code implementations · ACL (dialdoc) 2021 · Jan Nehring, Nils Feldhus, Harleen Kaur, Akhyar Ahmed

The aim of this system is to combine the strength of an open-domain question answering system with the conversational power of task-oriented dialog systems.

Open-Domain Question Answering

Free-text Rationale Generation under Readability Level Control

no code implementations · 1 Jul 2024 · Yi-Sheng Hsu, Nils Feldhus, Sherzod Hakimov

Free-text rationales justify model decisions in natural language, which makes them a likable and accessible form of explanation across many tasks.

Hallucination · Text Generation

CoXQL: A Dataset for Parsing Explanation Requests in Conversational XAI Systems

1 code implementation · 12 Jun 2024 · Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian Möller

Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered significant interest from the research community in natural language processing (NLP) and human-computer interaction (HCI).

Decision Making · Explainable Artificial Intelligence +1

LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations

1 code implementation · 23 Jan 2024 · Qianli Wang, Tatiana Anikina, Nils Feldhus, Josef van Genabith, Leonhard Hennig, Sebastian Möller

Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users' understanding (Slack et al., 2023; Shen et al., 2023), as one-off explanations may fall short in providing sufficient information to the user.

Counterfactual · Fact Checking +4

InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations

1 code implementation · 9 Oct 2023 · Nils Feldhus, Qianli Wang, Tatiana Anikina, Sahil Chopra, Cennet Oguz, Sebastian Möller

While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface.

Dialogue Act Classification · Hate Speech Detection +1

Inseq: An Interpretability Toolkit for Sequence Generation Models

2 code implementations · 27 Feb 2023 · Gabriele Sarti, Nils Feldhus, Ludwig Sickert, Oskar van der Wal, Malvina Nissim, Arianna Bisazza

Past work on natural language processing interpretability focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools.

Decoder · Feature Importance +3

Mediators: Conversational Agents Explaining NLP Model Behavior

no code implementations · 13 Jun 2022 · Nils Feldhus, Ajay Madhavan Ravichandran, Sebastian Möller

The human-centric explainable artificial intelligence (HCXAI) community has raised the need for framing the explanation process as a conversation between human and machine.

Explainable Artificial Intelligence · Position +1

Thermostat: A Large Collection of NLP Model Explanations and Analysis Tools

2 code implementations · EMNLP (ACL) 2021 · Nils Feldhus, Robert Schwarzenberg, Sebastian Möller

To facilitate research, we present Thermostat, which consists of a large collection of model explanations and accompanying analysis tools.

Efficient Explanations from Empirical Explainers

2 code implementations · EMNLP (BlackboxNLP) 2021 · Robert Schwarzenberg, Nils Feldhus, Sebastian Möller

Amid a discussion about Green AI in which we see explainability neglected, we explore the possibility of efficiently approximating computationally expensive explainers.
