Search Results for author: Mennatallah El-Assady

Found 33 papers, 4 papers with code

Deconstructing Human-AI Collaboration: Agency, Interaction, and Adaptation

no code implementations18 Apr 2024 Steffen Holter, Mennatallah El-Assady

As full AI-based automation remains out of reach in most real-world applications, the focus has instead shifted to leveraging the strengths of both human and AI agents, creating effective collaborative systems.

SyntaxShap: Syntax-aware Explainability Method for Text Generation

no code implementations14 Feb 2024 Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady

We adopt a model-based evaluation to compare SyntaxShap and its weighted form to state-of-the-art explainability methods adapted to text generation tasks, using diverse metrics including faithfulness, complexity, coherency, and semantic alignment of the explanations to the model.

Text Generation
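The listing above describes SyntaxShap only at a high level. As context, its starting point is classic Shapley-value attribution over input tokens; the sketch below computes exact (unconstrained) Shapley values for a tiny token set with a toy value function. The syntax-aware coalition weighting that gives SyntaxShap its name is deliberately omitted here, and `value_fn` is a hypothetical stand-in for a model scoring call.

```python
from itertools import combinations
from math import factorial

def shapley_values(tokens, value_fn):
    """Exact Shapley attribution over a small token set.

    value_fn maps a tuple of kept-token indices to a scalar score.
    SyntaxShap additionally constrains coalitions using the dependency
    tree of the input; this plain sketch enumerates all coalitions.
    """
    n = len(tokens)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for subset in combinations(others, r):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (value_fn(subset + (i,)) - value_fn(subset))
    return phi

# Toy value function: each kept token contributes 1 to the score,
# so every token should receive an attribution of exactly 1.0.
toks = ["the", "cat", "sat"]
vals = shapley_values(toks, lambda s: float(len(s)))
```

By the efficiency axiom, the attributions sum to `value_fn(all tokens) - value_fn(empty)`, which is a useful sanity check for any Shapley implementation.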

RELIC: Investigating Large Language Model Responses using Self-Consistency

no code implementations28 Nov 2023 Furui Cheng, Vilém Zouhar, Simran Arora, Mrinmaya Sachan, Hendrik Strobelt, Mennatallah El-Assady

To address this challenge, we propose an interactive system that helps users gain insight into the reliability of the generated text.

Language Modelling · Large Language Model
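The self-consistency idea behind RELIC can be illustrated with a minimal sketch: sample several generations for the same prompt and measure how strongly they agree. This is only a crude majority-vote proxy for the system's interactive analysis, and the sample strings are hypothetical.

```python
from collections import Counter

def self_consistency_score(responses):
    """Return the majority answer and the fraction of samples agreeing with it.

    Higher agreement across independent samples (drawn at non-zero
    temperature) is a rough signal that the model is consistent about a claim.
    """
    counts = Counter(responses)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / len(responses)

# Hypothetical answers extracted from five sampled generations.
samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
answer, score = self_consistency_score(samples)
```

In practice the hard part is deciding when two free-form generations "agree", which is exactly where an interactive tool like RELIC adds value over a bare vote.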

A Diachronic Perspective on User Trust in AI under Uncertainty

1 code implementation20 Oct 2023 Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, Mrinmaya Sachan

In a human-AI collaboration, users build a mental model of the AI system based on its reliability and how it presents its decisions, e.g., its display of system confidence and its explanations of the output.

GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations

no code implementations28 Sep 2023 Kenza Amara, Mennatallah El-Assady, Rex Ying

Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.

RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback

no code implementations8 Aug 2023 Yannick Metz, David Lindner, Raphaël Baur, Daniel Keim, Mennatallah El-Assady

To use reinforcement learning from human feedback (RLHF) in practical applications, it is crucial to learn reward models from diverse sources of human feedback and to consider human factors involved in providing feedback of different types.
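Pairwise comparisons are one of the feedback types a system like RLHF-Blender can collect; a common way to turn them into a reward-model training signal is the Bradley-Terry loss. The sketch below shows only that single-pair loss under the assumption that reward scores are already available; it is not the paper's interface or training pipeline.

```python
import math

def bradley_terry_loss(r_preferred, r_rejected):
    """Negative log-likelihood that the preferred trajectory wins,
    under the Bradley-Terry model P(a > b) = sigmoid(r_a - r_b).

    Minimizing this loss pushes the reward model to score preferred
    trajectories above rejected ones.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_rejected))))

# A tie in predicted rewards gives loss log(2); a correctly ordered
# pair gives a smaller loss.
tie_loss = bradley_terry_loss(0.0, 0.0)
good_loss = bradley_terry_loss(2.0, 0.0)
```

Other feedback types the abstract alludes to (ratings, corrections, demonstrations) need different likelihood models, which is part of what makes learning from diverse feedback sources nontrivial.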

Visual Explanations with Attributions and Counterfactuals on Time Series Classification

no code implementations14 Jul 2023 Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady

To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels.

Decision Making · Explainable Artificial Intelligence +3

Which Spurious Correlations Impact Reasoning in NLI Models? A Visual Interactive Diagnosis through Data-Constrained Counterfactuals

no code implementations21 Jun 2023 Robin Chan, Afra Amini, Mennatallah El-Assady

We present a human-in-the-loop dashboard tailored to diagnosing potential spurious features that NLI models rely on for predictions.

Logical Fallacies

Automatic Generation of Socratic Subquestions for Teaching Math Word Problems

1 code implementation23 Nov 2022 Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, Mrinmaya Sachan

On both automatic and human quality evaluations, we find that LMs constrained with desirable question properties generate superior questions and improve the overall performance of a math word problem solver.

Math · Math Word Problem Solving +2

Visual Comparison of Language Model Adaptation

no code implementations17 Aug 2022 Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady

The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing the embedding spaces they produce.

Language Modelling

Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning

no code implementations27 Jun 2022 David Lindner, Mennatallah El-Assady

Reinforcement learning (RL) commonly assumes access to well-specified reward functions, which many practical applications do not provide.

Reinforcement Learning (RL)
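The Boltzmann distribution named in the title refers to the standard softmax model of human choice, P(a) ∝ exp(β·Q(a)), which the paper argues real human feedback often violates. A minimal sketch of that baseline model:

```python
import math

def boltzmann_policy(q_values, beta):
    """Boltzmann-rational choice model: P(a) proportional to exp(beta * Q(a)).

    beta -> 0 yields uniform random choice; beta -> infinity yields
    a deterministic argmax. Human feedback in practice rarely fits a
    single fixed beta, which is the paper's point of departure.
    """
    exps = [math.exp(beta * q) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

uniform = boltzmann_policy([1.0, 2.0], beta=0.0)   # indifferent chooser
greedy = boltzmann_policy([1.0, 2.0], beta=10.0)   # near-argmax chooser
```

The `beta` (rationality) parameter is what most inverse-RL pipelines fit to human data; the abstract's challenge is that no single value captures real annotators.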

CorpusVis: Visual Analysis of Digital Sheet Music Collections

no code implementations23 Mar 2022 Matthias Miller, Julius Rauscher, Daniel A. Keim, Mennatallah El-Assady

Manually investigating sheet music collections is challenging for music analysts due to the magnitude and complexity of underlying features, structures, and contextual information.

Explaining Contextualization in Language Models using Visual Analytics

no code implementations ACL 2021 Rita Sevastjanova, Aikaterini-Lida Kalouli, Christin Beck, Hanna Schäfer, Mennatallah El-Assady

Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn.

XplaiNLI: Explainable Natural Language Inference through Visual Analytics

no code implementations COLING 2020 Aikaterini-Lida Kalouli, Rita Sevastjanova, Valeria de Paiva, Richard Crouch, Mennatallah El-Assady

Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is.

Natural Language Inference

A Comparative Analysis of Industry Human-AI Interaction Guidelines

no code implementations22 Oct 2020 Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng Chau

We then used this framework to compare the surveyed companies and identify differences in their areas of emphasis.

Human-Computer Interaction

Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections

no code implementations1 Aug 2019 Mennatallah El-Assady, Rebecca Kehlbeck, Christopher Collins, Daniel Keim, Oliver Deussen

We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic.

Decision Making

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning

1 code implementation29 Jul 2019 Thilo Spinner, Udo Schlegel, Hanna Schäfer, Mennatallah El-Assady

We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models.

BIG-bench Machine Learning · Explainable Artificial Intelligence (XAI)

VIANA: Visual Interactive Annotation of Argumentation

no code implementations29 Jul 2019 Fabian Sperrle, Rita Sevastjanova, Rebecca Kehlbeck, Mennatallah El-Assady

The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.

Language Modelling
