no code implementations • COLING (LAW) 2020 • Christin Beck, Hannah Booth, Mennatallah El-Assady, Miriam Butt
The development of linguistic corpora is fraught with various problems of annotation and representation.
1 code implementation • 28 Feb 2025 • Yannick Metz, András Geiszl, Raphaël Baur, Mennatallah El-Assady
Such diverse feedback can better support the goals of a human annotator, and the simultaneous use of multiple sources may be mutually informative for the reward-learning process or introduce type-dependent biases into it.
no code implementations • 18 Nov 2024 • Yannick Metz, David Lindner, Raphaël Baur, Mennatallah El-Assady
Based on the feedback taxonomy and quality criteria, we derive requirements and design choices for systems learning from human feedback.
no code implementations • 2 Oct 2024 • Kenza Amara, Lukas Klein, Carsten Lüth, Paul Jäger, Hendrik Strobelt, Mennatallah El-Assady
Our work investigates how the integration of information from image and text modalities influences the performance and behavior of VLMs in visual question answering (VQA) and reasoning tasks.
1 code implementation • 25 Sep 2024 • Lukas Klein, Carsten T. Lüth, Udo Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
Further, we comprehensively evaluate various XAI methods to assist practitioners in selecting appropriate methods aligning with their needs.
no code implementations • 25 Jul 2024 • Thilo Spinner, Daniel Fürst, Mennatallah El-Assady
To operationalize our framework in a ready-to-use application, we (2) present the iNNspector system.
no code implementations • 7 Jul 2024 • Matthias Miller, Daniel Fürst, Maximilian T. Fischer, Hanna Hauptmann, Daniel Keim, Mennatallah El-Assady
Our study also confirms the usefulness of MelodyVis in supporting common analytical tasks in melodic analysis, with participants reporting improved pattern identification and interpretation.
no code implementations • 4 Jun 2024 • Robin SM Chan, Reda Boumasmoud, Anej Svete, Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Mennatallah El-Assady, Ryan Cotterell
In this spirit, we study the properties of \emph{affine} alignment of language encoders and its implications on extrinsic similarity.
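To make the notion of affine alignment between encoder embedding spaces concrete, here is a minimal generic sketch (synthetic embeddings and hypothetical dimensions, not the paper's implementation): fit a least-squares affine map from one embedding space to another and measure the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for embeddings of the same tokens from two encoders.
X = rng.normal(size=(100, 8))        # encoder A: 100 tokens, dim 8
true_W = rng.normal(size=(8, 8))
Y = X @ true_W + 0.5                 # encoder B: an affine image of A

# Fit an affine map Y ≈ X W + b via least squares on an augmented design matrix.
X_aug = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
W, b = coef[:-1], coef[-1]

# Near-zero residual indicates the two spaces are affinely alignable.
residual = float(np.linalg.norm(X @ W + b - Y))
print(residual)
```

If the second space is truly an affine transform of the first, the residual vanishes; for real encoders, its size quantifies how far the two spaces are from affine equivalence.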
no code implementations • 14 May 2024 • Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
The NLP community has begun to take a keen interest in gaining a deeper understanding of text generation, leading to the development of model-agnostic explainable artificial intelligence (xAI) methods tailored to this task.
Explainable Artificial Intelligence (XAI)
no code implementations • 23 Apr 2024 • Furui Cheng, Vilém Zouhar, Robin Shing Moon Chan, Daniel Fürst, Hendrik Strobelt, Mennatallah El-Assady
First, the generated textual counterfactuals should be meaningful and readable, so that users can mentally compare them to draw conclusions.
no code implementations • 18 Apr 2024 • Steffen Holter, Mennatallah El-Assady
As full AI-based automation remains out of reach in most real-world applications, the focus has instead shifted to leveraging the strengths of both human and AI agents, creating effective collaborative systems.
no code implementations • 12 Mar 2024 • Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady
Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation.
1 code implementation • 14 Feb 2024 • Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
We adopt a model-based evaluation to compare SyntaxShap and its weighted form to state-of-the-art explainability methods adapted to text generation tasks, using diverse metrics including faithfulness, coherency, and semantic alignment of the explanations to the model.
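Shapley-value attribution, the game-theoretic idea that SyntaxShap builds on, can be illustrated with a toy exact computation (hypothetical coalition values over three "token" features; not the paper's model-based value function):

```python
from itertools import combinations
from math import factorial

# Hypothetical value of each coalition of 3 "token" features.
players = [0, 1, 2]
values = {frozenset(): 0.0, frozenset({0}): 1.0, frozenset({1}): 2.0,
          frozenset({2}): 0.5, frozenset({0, 1}): 4.0, frozenset({0, 2}): 1.5,
          frozenset({1, 2}): 2.5, frozenset({0, 1, 2}): 5.0}

def shapley(i):
    """Exact Shapley value of player i: weighted average marginal contribution."""
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for r in range(n):
        for s in combinations(others, r):
            S = frozenset(s)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (values[S | {i}] - values[S])
    return total

phi = [shapley(i) for i in players]
print([round(v, 4) for v in phi])
```

By the efficiency axiom, the attributions sum to the value of the full coalition; methods like SyntaxShap adapt which coalitions are considered, e.g., by respecting syntactic structure.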
no code implementations • 5 Feb 2024 • Anna Varbella, Kenza Amara, Blazhe Gjorgiev, Mennatallah El-Assady, Giovanni Sansavini
However, there is a lack of publicly available graph datasets for training and benchmarking ML models in electrical power grid applications.
no code implementations • 28 Nov 2023 • Furui Cheng, Vilém Zouhar, Simran Arora, Mrinmaya Sachan, Hendrik Strobelt, Mennatallah El-Assady
To address this challenge, we propose an interactive system that helps users gain insight into the reliability of the generated text.
1 code implementation • 20 Oct 2023 • Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, Mrinmaya Sachan
In a human-AI collaboration, users build a mental model of the AI system based on its reliability and on how it presents its decisions, e.g., its display of system confidence and its explanation of the output.
no code implementations • 17 Oct 2023 • Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Andreas Spitz, Mennatallah El-Assady
We quantitatively show the value of exposing the beam search tree and present five detailed analysis scenarios addressing the identified challenges.
no code implementations • 28 Sep 2023 • Kenza Amara, Mennatallah El-Assady, Rex Ying
Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.
no code implementations • 8 Aug 2023 • Yannick Metz, David Lindner, Raphaël Baur, Daniel Keim, Mennatallah El-Assady
To use reinforcement learning from human feedback (RLHF) in practical applications, it is crucial to learn reward models from diverse sources of human feedback and to consider human factors involved in providing feedback of different types.
no code implementations • 14 Jul 2023 • Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady
To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels.
no code implementations • 21 Jun 2023 • Robin Chan, Afra Amini, Mennatallah El-Assady
We present a human-in-the-loop dashboard tailored to diagnosing potential spurious features that NLI models rely on for predictions.
1 code implementation • 15 Jun 2023 • Lukas Klein, João B. S. Carvalho, Mennatallah El-Assady, Paolo Penna, Joachim M. Buhmann, Paul F. Jaeger
We propose a framework that utilizes interpretable disentangled representations for downstream-task prediction.
1 code implementation • 23 Nov 2022 • Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, Mrinmaya Sachan
On both automatic and human quality evaluations, we find that LMs constrained with desirable question properties generate superior questions and improve the overall performance of a math word problem solver.
no code implementations • 7 Oct 2022 • Eugene Bykovets, Yannick Metz, Mennatallah El-Assady, Daniel A. Keim, Joachim M. Buhmann
To overcome this, we formulate a Pareto optimization problem in which we simultaneously optimize for reward and OOD detection performance.
Deep Reinforcement Learning
Out of Distribution (OOD) Detection
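The Pareto trade-off described above can be sketched generically: over hypothetical (reward, OOD-detection-score) pairs (toy values, not the authors' objectives), non-dominated filtering recovers the Pareto front that such an optimization targets.

```python
# Toy candidate policies scored on two objectives, both to be maximized:
# (reward, ood_score). Values are illustrative only.
candidates = [(0.9, 0.2), (0.7, 0.7), (0.4, 0.9), (0.6, 0.5), (0.2, 0.3)]

def pareto_front(points):
    """Keep points not dominated by any other point (maximize both coordinates)."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

print(pareto_front(candidates))
```

Each point on the front represents a different reward/OOD-detection trade-off; no single point improves one objective without sacrificing the other.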
no code implementations • 22 Sep 2022 • Pantea Haghighatkhah, Mennatallah El-Assady, Jean-Daniel Fekete, Narges Mahyar, Carita Paradis, Vasiliki Simaki, Bettina Speckmann
Current visual text analysis approaches rely on sophisticated processing pipelines.
no code implementations • 22 Aug 2022 • Eugene Bykovets, Yannick Metz, Mennatallah El-Assady, Daniel A. Keim, Joachim M. Buhmann
Robustness to adversarial perturbations has been explored in many areas of computer vision.
no code implementations • 17 Aug 2022 • Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady
The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing the embedding spaces they produce.
no code implementations • 14 Jul 2022 • Rita Sevastjanova, Mennatallah El-Assady
Language models learn and represent language differently than humans; they learn the form and not the meaning.
no code implementations • 11 Jul 2022 • Lukas Klein, Mennatallah El-Assady, Paul F. Jäger
Based on these similarities, we present a formalization of IML along the lines of a statistical process.
BIG-bench Machine Learning
Explainable Artificial Intelligence (XAI)
no code implementations • 27 Jun 2022 • David Lindner, Mennatallah El-Assady
Reinforcement learning (RL) commonly assumes access to well-specified reward functions, which many practical applications do not provide.
no code implementations • 23 Mar 2022 • Matthias Miller, Julius Rauscher, Daniel A. Keim, Mennatallah El-Assady
Manually investigating sheet music collections is challenging for music analysts due to the magnitude and complexity of underlying features, structures, and contextual information.
no code implementations • ACL 2021 • Rita Sevastjanova, Aikaterini-Lida Kalouli, Christin Beck, Hanna Schäfer, Mennatallah El-Assady
Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn.
no code implementations • ICLR Workshop Rethinking_ML_Papers 2021 • Beatrice Gobbo, Mennatallah El-Assady
The need to innovate scientific publications is felt across various research fields.
no code implementations • 8 Dec 2020 • Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques.
BIG-bench Machine Learning
Explainable Artificial Intelligence (XAI)
no code implementations • COLING 2020 • Aikaterini-Lida Kalouli, Rita Sevastjanova, Valeria de Paiva, Richard Crouch, Mennatallah El-Assady
Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is.
no code implementations • 22 Oct 2020 • Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng Chau
We then used this framework to compare the surveyed companies and identify differences in their areas of emphasis.
Human-Computer Interaction
no code implementations • 14 Sep 2020 • Fabian Sperrle, Mennatallah El-Assady, Grace Guo, Duen Horng Chau, Alex Endert, Daniel Keim
This paper systematically derives design dimensions for the structured evaluation of explainable artificial intelligence (XAI) approaches.
no code implementations • 16 Sep 2019 • Udo Schlegel, Hiba Arnout, Mennatallah El-Assady, Daniela Oelke, Daniel A. Keim
In this work, we apply XAI methods previously used in the image and text domains to time series.
Explainable Artificial Intelligence (XAI)
no code implementations • 1 Aug 2019 • Mennatallah El-Assady, Rebecca Kehlbeck, Christopher Collins, Daniel Keim, Oliver Deussen
We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic.
no code implementations • 29 Jul 2019 • Fabian Sperrle, Rita Sevastjanova, Rebecca Kehlbeck, Mennatallah El-Assady
The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
1 code implementation • 29 Jul 2019 • Thilo Spinner, Udo Schlegel, Hanna Schäfer, Mennatallah El-Assady
We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models.
BIG-bench Machine Learning
Explainable Artificial Intelligence (XAI)
no code implementations • ACL 2019 • Mennatallah El-Assady, Wolfgang Jentner, Fabian Sperrle, Rita Sevastjanova, Annette Hautli-Janisz, Miriam Butt, Daniel Keim
We present a modular framework for the rapid-prototyping of linguistic, web-based, visual analytics applications.