Search Results for author: Mennatallah El-Assady

Found 43 papers, 7 papers with code

Reward Learning from Multiple Feedback Types

1 code implementation • 28 Feb 2025 • Yannick Metz, András Geiszl, Raphaël Baur, Mennatallah El-Assady

Such diverse feedback can better support the goals of a human annotator, and the simultaneous use of multiple sources may be mutually informative for the learning process or introduce type-dependent biases into reward learning.
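The exact objectives used in the paper are not shown in this snippet; as a rough, hedged illustration of the general idea, the sketch below jointly trains one reward model on two common feedback types, pairwise preferences (Bradley-Terry loss) and scalar ratings (regression). The model architecture, loss weighting, and toy data are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: jointly training one reward model on two
# feedback types -- pairwise preferences and scalar ratings.
# A generic construction, not the method from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):            # x: (batch, obs_dim)
        return self.net(x).squeeze(-1)

model = RewardModel(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy feedback data (random stand-ins for real annotations).
pref_a, pref_b = torch.randn(32, 8), torch.randn(32, 8)  # a preferred over b
rated, ratings = torch.randn(32, 8), torch.rand(32)      # scalar ratings in [0, 1]

for step in range(200):
    # Bradley-Terry loss for pairwise preferences: P(a > b) = sigmoid(r_a - r_b).
    pref_loss = -F.logsigmoid(model(pref_a) - model(pref_b)).mean()
    # Regression loss for scalar ratings, squashed to [0, 1] for comparability.
    rating_loss = F.mse_loss(torch.sigmoid(model(rated)), ratings)
    # The 0.5 weighting of feedback types is an assumption, not a published value.
    loss = pref_loss + 0.5 * rating_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```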

Mapping out the Space of Human Feedback for Reinforcement Learning: A Conceptual Framework

no code implementations • 18 Nov 2024 • Yannick Metz, David Lindner, Raphaël Baur, Mennatallah El-Assady

Based on the feedback taxonomy and quality criteria, we derive requirements and design choices for systems learning from human feedback.

Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities

no code implementations • 2 Oct 2024 • Kenza Amara, Lukas Klein, Carsten Lüth, Paul Jäger, Hendrik Strobelt, Mennatallah El-Assady

Our work investigates how the integration of information from image and text modalities influences the performance and behavior of VLMs in visual question answering (VQA) and reasoning tasks.

Question Answering · Visual Question Answering

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics

1 code implementation • 25 Sep 2024 • Lukas Klein, Carsten T. Lüth, Udo Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger

Further, we comprehensively evaluate various XAI methods to assist practitioners in selecting methods that align with their needs.

Selection bias

iNNspector: Visual, Interactive Deep Model Debugging

no code implementations • 25 Jul 2024 • Thilo Spinner, Daniel Fürst, Mennatallah El-Assady

To operationalize our framework in a ready-to-use application, we present the iNNspector system.

Deep Learning model

MelodyVis: Visual Analytics for Melodic Patterns in Sheet Music

no code implementations • 7 Jul 2024 • Matthias Miller, Daniel Fürst, Maximilian T. Fischer, Hanna Hauptmann, Daniel Keim, Mennatallah El-Assady

Our study also confirms the usefulness of MelodyVis in supporting common analytical tasks in melodic analysis, with participants reporting improved pattern identification and interpretation.

On Affine Homotopy between Language Encoders

no code implementations • 4 Jun 2024 • Robin SM Chan, Reda Boumasmoud, Anej Svete, Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Mennatallah El-Assady, Ryan Cotterell

In this spirit, we study the properties of affine alignment of language encoders and its implications for extrinsic similarity.
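The paper's formal notion of affine homotopy and its similarity measure are not shown in this snippet; as a hedged illustration of the general idea, affine alignment between two embedding spaces can be probed with a least-squares fit. All data and dimensions below are synthetic stand-ins.

```python
# Illustrative sketch: fit an affine map (W, b) from encoder A's embedding
# space to encoder B's via least squares, then measure alignment error.
# A generic construction, not the paper's formal definition.
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 256))               # stand-in for encoder A outputs
true_W = rng.normal(size=(256, 256)) / 16
emb_b = emb_a @ true_W + 0.1 + rng.normal(scale=0.01, size=(1000, 256))

# Augment with a ones column so the bias is fitted jointly with W.
A = np.hstack([emb_a, np.ones((len(emb_a), 1))])
coef, *_ = np.linalg.lstsq(A, emb_b, rcond=None)
W, b = coef[:-1], coef[-1]

residual = np.linalg.norm(emb_a @ W + b - emb_b) / np.linalg.norm(emb_b)
print(f"relative alignment error: {residual:.4f}")  # small => nearly affinely alignable
```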

Challenges and Opportunities in Text Generation Explainability

no code implementations • 14 May 2024 • Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady

The NLP community has begun to take a keen interest in gaining a deeper understanding of text generation, leading to the development of model-agnostic explainable artificial intelligence (xAI) methods tailored to this task.

Explainable artificial intelligence · Explainable Artificial Intelligence (XAI) +1

Interactive Analysis of LLMs using Meaningful Counterfactuals

no code implementations • 23 Apr 2024 • Furui Cheng, Vilém Zouhar, Robin Shing Moon Chan, Daniel Fürst, Hendrik Strobelt, Mennatallah El-Assady

First, the generated textual counterfactuals should be meaningful and readable to users, so that they can be mentally compared to draw conclusions.

counterfactual

Deconstructing Human-AI Collaboration: Agency, Interaction, and Adaptation

no code implementations • 18 Apr 2024 • Steffen Holter, Mennatallah El-Assady

As full AI-based automation remains out of reach in most real-world applications, the focus has instead shifted to leveraging the strengths of both human and AI agents, creating effective collaborative systems.

SyntaxShap: Syntax-aware Explainability Method for Text Generation

1 code implementation • 14 Feb 2024 • Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady

We adopt a model-based evaluation to compare SyntaxShap and its weighted form to state-of-the-art explainability methods adapted to text generation tasks, using diverse metrics including faithfulness, coherency, and semantic alignment of the explanations to the model.

Text Generation
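SyntaxShap itself and the paper's exact metric definitions are not shown in this snippet. As a hedged sketch of one of the named metrics, faithfulness for text generation is commonly measured by masking the highest-attributed input tokens and checking how much the model's probability for its original prediction drops. The toy model and all names below are illustrative assumptions.

```python
# Generic faithfulness check for a text-generation explanation: mask the
# top-k attributed input tokens and measure the drop in the probability
# the model assigns to the original target token. Stand-in model; not
# the paper's metric definitions.
import numpy as np

def faithfulness(predict_proba, tokens, attributions, target_id, k=2, mask="[MASK]"):
    base = predict_proba(tokens)[target_id]
    top_k = set(np.argsort(attributions)[::-1][:k])    # highest-attributed positions
    masked = [mask if i in top_k else t for i, t in enumerate(tokens)]
    return base - predict_proba(masked)[target_id]     # larger drop => more faithful

# Toy stand-in for a next-token distribution over a 5-word vocabulary.
def toy_model(tokens):
    logits = np.array([tokens.count(w) for w in ["a", "b", "c", "d", "[MASK]"]], float)
    e = np.exp(logits - logits.max())
    return e / e.sum()

print(faithfulness(toy_model, ["a", "a", "b", "c"], np.array([0.9, 0.8, 0.1, 0.0]), target_id=0))
```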

PowerGraph: A power grid benchmark dataset for graph neural networks

no code implementations • 5 Feb 2024 • Anna Varbella, Kenza Amara, Blazhe Gjorgiev, Mennatallah El-Assady, Giovanni Sansavini

However, there is a lack of publicly available graph datasets for training and benchmarking ML models in electrical power grid applications.

Benchmarking · Binary Classification +1

RELIC: Investigating Large Language Model Responses using Self-Consistency

no code implementations • 28 Nov 2023 • Furui Cheng, Vilém Zouhar, Simran Arora, Mrinmaya Sachan, Hendrik Strobelt, Mennatallah El-Assady

To address this challenge, we propose an interactive system that helps users gain insight into the reliability of the generated text.

Language Modeling · Language Modelling +1
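RELIC's actual concept-level verification is more involved than this snippet reveals; as a hedged sketch of the underlying self-consistency idea only, one can sample several answers to the same prompt and treat agreement as a rough reliability signal. The `sample` function below is a hypothetical stand-in for a stochastic LLM call.

```python
# Minimal self-consistency sketch: sample several answers to one prompt
# and use agreement as a rough reliability signal. `sample` is a
# hypothetical stand-in for an LLM call; RELIC's analysis operates on
# finer-grained claims within generated text.
from collections import Counter
import random

def sample(prompt: str) -> str:
    # Stand-in for a stochastic LLM call.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def self_consistency(prompt: str, n: int = 20):
    answers = Counter(sample(prompt) for _ in range(n))
    top, count = answers.most_common(1)[0]
    return top, count / n          # majority answer and its agreement rate

answer, agreement = self_consistency("What is the capital of France?")
print(f"{answer} (agreement: {agreement:.0%})")
```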

A Diachronic Perspective on User Trust in AI under Uncertainty

1 code implementation • 20 Oct 2023 • Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, Mrinmaya Sachan

In a human-AI collaboration, users build a mental model of the AI system based on its reliability and how it presents its decision, e.g., its presentation of system confidence and an explanation of the output.
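As tangential background to confidence presentation (not the paper's own trust measure or study design), expected calibration error (ECE) is a standard way to quantify whether a system's stated confidence matches its accuracy; a hedged sketch with hypothetical data:

```python
# Tangential illustration: expected calibration error (ECE), a standard
# measure of whether stated confidence matches accuracy. Background for
# confidence presentation, not the paper's trust measure.
import numpy as np

def ece(confidences, correct, n_bins=10):
    confidences, correct = np.asarray(confidences), np.asarray(correct, float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.mean() * gap     # weight bins by their share of samples
    return total

# A system that is 90% confident but only 60% correct is miscalibrated.
print(ece([0.9] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]))   # -> 0.3
```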

GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations

no code implementations • 28 Sep 2023 • Kenza Amara, Mennatallah El-Assady, Rex Ying

Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.

Graph Neural Network
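GInX-Eval's own protocol is not shown in this snippet; a common baseline it responds to is fidelity-style evaluation, where the edges an explainer highlights are removed and the change in model output is measured, a hard intervention that can push inputs out of distribution. The sketch below shows only that naive baseline, with a stand-in model and toy graph.

```python
# Naive fidelity-style check for a GNN explanation: drop the edges the
# explainer marks as important and measure the change in model output.
# Such hard removal can create out-of-distribution inputs, which is the
# kind of issue in-distribution evaluation targets. Stand-in model.
import numpy as np

def toy_gnn(adj: np.ndarray) -> float:
    # Stand-in "model": score grows with graph connectivity (spectral norm).
    return float(np.linalg.norm(adj, ord=2))

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], float)
explained_edges = [(0, 1), (1, 2)]         # edges an explainer flagged

pruned = adj.copy()
for i, j in explained_edges:
    pruned[i, j] = pruned[j, i] = 0.0      # hard-remove explained edges

fidelity = toy_gnn(adj) - toy_gnn(pruned)  # large drop => edges mattered
print(f"fidelity+ (score drop): {fidelity:.3f}")
```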

RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback

no code implementations • 8 Aug 2023 • Yannick Metz, David Lindner, Raphaël Baur, Daniel Keim, Mennatallah El-Assady

To use reinforcement learning from human feedback (RLHF) in practical applications, it is crucial to learn reward models from diverse sources of human feedback and to consider human factors involved in providing feedback of different types.

Visual Explanations with Attributions and Counterfactuals on Time Series Classification

no code implementations • 14 Jul 2023 • Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady

To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels.

Decision Making · Explainable artificial intelligence +3
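The system's specific attribution methods are not named in this snippet; as a hedged sketch of one simple attribution such what-if analyses can build on, occlusion slides a mask over the series and records how the prediction changes at each step. The toy classifier below is an assumption for illustration.

```python
# Simple occlusion attribution for a time series: slide a window that
# replaces values with the series mean and record how much the model's
# score changes. A generic baseline, not the system's specific methods.
import numpy as np

def occlusion_attribution(predict, series, window=5):
    base = predict(series)
    scores = np.zeros_like(series)
    for start in range(len(series) - window + 1):
        occluded = series.copy()
        occluded[start:start + window] = series.mean()   # neutral fill value
        scores[start:start + window] += base - predict(occluded)
    return scores

def predict(s):
    # Toy classifier score: sensitive to a spike around t = 40.
    return float(s[35:45].max())

series = np.sin(np.linspace(0, 6, 100))
series[40] += 2.0                                        # inject a spike
attr = occlusion_attribution(predict, series)
print(f"most important step: {attr.argmax()}")           # near the injected spike
```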

Which Spurious Correlations Impact Reasoning in NLI Models? A Visual Interactive Diagnosis through Data-Constrained Counterfactuals

no code implementations • 21 Jun 2023 • Robin Chan, Afra Amini, Mennatallah El-Assady

We present a human-in-the-loop dashboard tailored to diagnosing potential spurious features that NLI models rely on for predictions.

Logical Fallacies

Automatic Generation of Socratic Subquestions for Teaching Math Word Problems

1 code implementation • 23 Nov 2022 • Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, Mrinmaya Sachan

On both automatic and human quality evaluations, we find that LMs constrained with desirable question properties generate superior questions and improve the overall performance of a math word problem solver.

Math · Math Word Problem Solving +2

Visual Comparison of Language Model Adaptation

no code implementations • 17 Aug 2022 • Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady

The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing the embedding spaces they produce.

Language Modeling · Language Modelling +1

Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning

no code implementations • 27 Jun 2022 • David Lindner, Mennatallah El-Assady

Reinforcement learning (RL) commonly assumes access to well-specified reward functions, which many practical applications do not provide.

Reinforcement Learning (RL)
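The Boltzmann model referenced in the title is the standard assumption that a human picks option i with probability proportional to exp(beta * reward_i). The sketch below makes that assumption concrete, i.e., it shows the model the paper argues against, not the paper's proposed alternative.

```python
# The standard Boltzmann-rational model of human feedback: choice
# probabilities are a softmax over rewards with inverse temperature
# beta. The paper argues real humans deviate from this; the sketch
# only makes the assumption concrete.
import numpy as np

def boltzmann_choice_probs(rewards, beta=1.0):
    logits = beta * np.asarray(rewards, float)
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

rewards = [1.0, 0.5, 0.0]
for beta in (0.0, 1.0, 10.0):                # 0 = uniformly random, large = near-optimal
    print(beta, boltzmann_choice_probs(rewards, beta).round(3))
```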

CorpusVis: Visual Analysis of Digital Sheet Music Collections

no code implementations • 23 Mar 2022 • Matthias Miller, Julius Rauscher, Daniel A. Keim, Mennatallah El-Assady

Manually investigating sheet music collections is challenging for music analysts due to the magnitude and complexity of underlying features, structures, and contextual information.

Rhythm

Explaining Contextualization in Language Models using Visual Analytics

no code implementations • ACL 2021 • Rita Sevastjanova, Aikaterini-Lida Kalouli, Christin Beck, Hanna Schäfer, Mennatallah El-Assady

Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn.

XplaiNLI: Explainable Natural Language Inference through Visual Analytics

no code implementations • COLING 2020 • Aikaterini-Lida Kalouli, Rita Sevastjanova, Valeria de Paiva, Richard Crouch, Mennatallah El-Assady

Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is.

Natural Language Inference

A Comparative Analysis of Industry Human-AI Interaction Guidelines

no code implementations • 22 Oct 2020 • Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng Chau

We then used this framework to compare the surveyed companies and identify differences in their areas of emphasis.

Human-Computer Interaction

Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections

no code implementations • 1 Aug 2019 • Mennatallah El-Assady, Rebecca Kehlbeck, Christopher Collins, Daniel Keim, Oliver Deussen

We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic.

Decision Making
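The framework's actual projection and refinement operations are richer than this snippet shows; as a loudly hedged toy illustration of the embedding-based ingredient, one can reassign topic keywords to the user-defined concept whose embedding they are closest to. The vectors below are random stand-ins; real word embeddings would cluster semantically related words.

```python
# Toy illustration: steer topic keywords with word embeddings by
# assigning each keyword to the nearest user-defined concept vector
# (cosine similarity). Random stand-in vectors, not the framework's
# actual projection technique.
import numpy as np

rng = np.random.default_rng(1)
vocab = {w: rng.normal(size=50) for w in ["tax", "budget", "goal", "league", "vote"]}
# Concepts built from seed words; with real embeddings, seeds would be chosen by the user.
concepts = {"politics": vocab["vote"] + 0.1, "sports": vocab["league"] + 0.1}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for word, vec in vocab.items():
    best = max(concepts, key=lambda c: cosine(vec, concepts[c]))
    print(f"{word} -> {best}")
```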

VIANA: Visual Interactive Annotation of Argumentation

no code implementations • 29 Jul 2019 • Fabian Sperrle, Rita Sevastjanova, Rebecca Kehlbeck, Mennatallah El-Assady

The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.

Language Modeling · Language Modelling

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning

1 code implementation • 29 Jul 2019 • Thilo Spinner, Udo Schlegel, Hanna Schäfer, Mennatallah El-Assady

We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models.

BIG-bench Machine Learning · Explainable Artificial Intelligence (XAI)
