Search Results for author: Heike Adel

Found 50 papers, 17 papers with code

A Study on Entity Linking Across Domains: Which Data is Best for Fine-Tuning?

no code implementations RepL4NLP (ACL) 2022 Hassan Soliman, Heike Adel, Mohamed H. Gad-Elrab, Dragan Milchevski, Jannik Strötgen

In particular, we represent the entities of different KGs in a joint vector space and address the questions of which data is best suited for creating and fine-tuning that space, and whether fine-tuning harms performance on the general domain.

Entity Linking

Better Call SAUL: Fluent and Consistent Language Model Editing with Generation Regularization

no code implementations 3 Oct 2024 Mingyang Wang, Lukas Lange, Heike Adel, Jannik Strötgen, Hinrich Schütze

Evaluations on three model editing benchmarks show that SAUL is a practical and reliable solution for model editing, outperforming state-of-the-art methods while maintaining generation quality and reducing computational overhead.

Language Modelling Model Editing +1

Learning Rules from KGs Guided by Language Models

1 code implementation 12 Sep 2024 Zihang Peng, Daria Stepanova, Vinh Thinh Ho, Heike Adel, Alessandra Russo, Simon Ott

In this work, our goal is to verify to what extent the exploitation of LMs is helpful for improving the quality of rule learning systems.

Knowledge Graphs

Learn it or Leave it: Module Composition and Pruning for Continual Learning

no code implementations 26 Jun 2024 Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze

In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned.

Continual Learning Transfer Learning

FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering

2 code implementations 29 Apr 2024 Wei Zhou, Mohsen Mesgar, Heike Adel, Annemarie Friedrich

To investigate these aspects, we create and publish a novel TQA evaluation benchmark in English.

Question Answering

Explaining Pre-Trained Language Models with Attribution Scores: An Analysis in Low-Resource Settings

no code implementations 8 Mar 2024 Wei Zhou, Heike Adel, Hendrik Schuff, Ngoc Thang Vu

Attribution scores indicate the importance of different input parts and can thus explain model behaviour.

Decoder

GradSim: Gradient-Based Language Grouping for Effective Multilingual Training

no code implementations 23 Oct 2023 Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze

However, not all languages positively influence each other, and it is an open research question how to select the most suitable set of languages for multilingual training and how to avoid negative interference among languages whose characteristics or data distributions are not compatible.

Sentiment Analysis

Neighboring Words Affect Human Interpretation of Saliency Explanations

1 code implementation 4 May 2023 Alon Jacovi, Hendrik Schuff, Heike Adel, Ngoc Thang Vu, Yoav Goldberg

Word-level saliency explanations ("heat maps over words") are often used to communicate feature-attribution in text-based models.

NLNDE at SemEval-2023 Task 12: Adaptive Pretraining and Source Language Selection for Low-Resource Multilingual Sentiment Analysis

no code implementations 28 Apr 2023 Mingyang Wang, Heike Adel, Lukas Lange, Jannik Strötgen, Hinrich Schütze

In this work, we propose to leverage language-adaptive and task-adaptive pretraining on African texts and study transfer learning with source language selection on top of an African language-centric pretrained language model.

Language Modelling Sentiment Analysis +1

SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains

1 code implementation 14 Feb 2023 Koustava Goswami, Lukas Lange, Jun Araki, Heike Adel

Prompting pre-trained language models leads to promising results across natural language processing tasks but is less effective when applied in low-resource domains, due to the domain gap between the pre-training data and the downstream task.

Language Modelling text-classification +1

Challenges in Explanation Quality Evaluation

no code implementations 13 Oct 2022 Hendrik Schuff, Heike Adel, Peng Qi, Ngoc Thang Vu

This approach assumes that explanations which reach higher proxy scores will also provide a greater benefit to human users.

Question Answering

Multilingual Normalization of Temporal Expressions with Masked Language Models

1 code implementation 20 May 2022 Lukas Lange, Jannik Strötgen, Heike Adel, Dietrich Klakow

The detection and normalization of temporal expressions is an important task and preprocessing step for many applications.

Language Modelling Masked Language Modeling

Human Interpretation of Saliency-based Explanation Over Text

1 code implementation 27 Jan 2022 Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu

In this work, we focus on this question through a study of saliency-based explanations over textual data.

CLIN-X: pre-trained language models and a study on cross-task transfer for concept extraction in the clinical domain

1 code implementation 16 Dec 2021 Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow

The field of natural language processing (NLP) has recently seen a major shift towards using pre-trained language models for solving almost any task.

Clinical Concept Extraction Sentence +1

Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings

1 code implementation EMNLP (BlackboxNLP) 2021 Hendrik Schuff, Hsiu-Yu Yang, Heike Adel, Ngoc Thang Vu

For this, we investigate different sources of external knowledge and evaluate the performance of our models on in-domain data as well as on special transfer datasets that are designed to assess fine-grained reasoning capabilities.

Natural Language Inference

Thought Flow Nets: From Single Predictions to Trains of Model Thought

no code implementations 26 Jul 2021 Hendrik Schuff, Heike Adel, Ngoc Thang Vu

In addition, we conduct a qualitative analysis of thought flow correction patterns and explore how thought flow predictions affect human users within a crowdsourcing study.

Question Answering

Enriched Attention for Robust Relation Extraction

no code implementations 22 Apr 2021 Heike Adel, Jannik Strötgen

The performance of relation extraction models has increased considerably with the rise of neural networks.

Relation Relation Extraction +1

To Share or not to Share: Predicting Sets of Sources for Model Transfer Learning

1 code implementation EMNLP 2021 Lukas Lange, Jannik Strötgen, Heike Adel, Dietrich Klakow

For this, we study the effects of model transfer on sequence labeling across various domains and tasks and show that our methods based on model similarity and support vector machines are able to predict promising sources, resulting in performance increases of up to 24 F1 points.

text similarity Transfer Learning

NLNDE at CANTEMIST: Neural Sequence Labeling and Parsing Approaches for Clinical Concept Extraction

no code implementations 23 Oct 2020 Lukas Lange, Xiang Dai, Heike Adel, Jannik Strötgen

The recognition and normalization of clinical information, such as tumor morphology mentions, is an important, but complex process consisting of multiple subtasks.

Clinical Concept Extraction

FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations

1 code implementation EMNLP 2021 Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow

Combining several embeddings typically improves performance in downstream tasks as different embeddings encode different information.

NER POS +4

An Analysis of Simple Data Augmentation for Named Entity Recognition

3 code implementations COLING 2020 Xiang Dai, Heike Adel

Simple yet effective data augmentation techniques have been proposed for sentence-level and sentence-pair natural language processing tasks.

Data Augmentation named-entity-recognition +3

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

1 code implementation EMNLP 2020 Hendrik Schuff, Heike Adel, Ngoc Thang Vu

The user study shows that our models increase the ability of the users to judge the correctness of the system and that scores like F1 are not enough to estimate the usefulness of a model in a practical setting with human users.

Model Selection Question Answering

NLNDE: The Neither-Language-Nor-Domain-Experts' Way of Spanish Medical Document De-Identification

no code implementations 2 Jul 2020 Lukas Lange, Heike Adel, Jannik Strötgen

Natural language processing has huge potential in the medical domain, which has recently led to a lot of research in this field.

De-identification

On the Choice of Auxiliary Languages for Improved Sequence Tagging

no code implementations WS 2020 Lukas Lange, Heike Adel, Jannik Strötgen

Recent work showed that embeddings from related languages can improve the performance of sequence tagging, even for monolingual models.

Part-Of-Speech Tagging

Closing the Gap: Joint De-Identification and Concept Extraction in the Clinical Domain

1 code implementation ACL 2020 Lukas Lange, Heike Adel, Jannik Strötgen

Exploiting natural language processing in the clinical domain requires de-identification, i.e., anonymization of personal information in texts.

De-identification

Type-aware Convolutional Neural Networks for Slot Filling

no code implementations 1 Oct 2019 Heike Adel, Hinrich Schütze

In particular, we explore different ways of integrating the named entity types of the relation arguments into a neural network for relation classification, including a joint training and a structured prediction approach.

coreference-resolution General Classification +6

Adversarial Training for Satire Detection: Controlling for Confounding Variables

no code implementations NAACL 2019 Robert McHardy, Heike Adel, Roman Klinger

We therefore propose a novel model for satire detection with an adversarial component to control for the confounding variable of publication source.

General Classification Knowledge Base Population +1

Neural Semi-Markov Conditional Random Fields for Robust Character-Based Part-of-Speech Tagging

no code implementations NAACL 2019 Apostolos Kemos, Heike Adel, Hinrich Schütze

Character-level models of tokens have been shown to be effective at dealing with within-token noise and out-of-vocabulary words.

Part-Of-Speech Tagging

Impact of Coreference Resolution on Slot Filling

no code implementations 26 Oct 2017 Heike Adel, Hinrich Schütze

In this paper, we demonstrate the importance of coreference resolution for natural language processing using the example of the TAC Slot Filling shared task.

coreference-resolution slot-filling +1

Syntactic and Semantic Features For Code-Switching Factored Language Models

no code implementations 4 Oct 2017 Heike Adel, Ngoc Thang Vu, Katrin Kirchhoff, Dominic Telaar, Tanja Schultz

The experimental results reveal that Brown word clusters, part-of-speech tags and open-class words are the most effective at reducing the perplexity of factored language models on the Mandarin-English Code-Switching corpus SEAME.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Corpus-level Fine-grained Entity Typing

no code implementations 7 Aug 2017 Yadollah Yaghoobzadeh, Heike Adel, Hinrich Schütze

This paper addresses the problem of corpus-level entity typing, i.e., inferring from a large corpus that an entity is a member of a class such as "food" or "artist".

Entity Typing Knowledge Base Completion

Noise Mitigation for Neural Entity Typing and Relation Extraction

no code implementations EACL 2017 Yadollah Yaghoobzadeh, Heike Adel, Hinrich Schütze

For the second noise type, we propose ways to improve the integration of noisy entity type predictions into relation extraction.

Entity Typing Multi-Label Learning +3

Exploring Different Dimensions of Attention for Uncertainty Detection

no code implementations EACL 2017 Heike Adel, Hinrich Schütze

Neural networks with attention have proven effective for many natural language processing tasks.

Nonsymbolic Text Representation

no code implementations 3 Oct 2016 Hinrich Schuetze, Heike Adel, Ehsaneddin Asgari

We introduce the first generic text representation model that is completely nonsymbolic, i.e., it does not require the availability of a segmentation or tokenization method that attempts to identify words or other symbolic units in text.

Denoising

Combining Recurrent and Convolutional Neural Networks for Relation Classification

no code implementations NAACL 2016 Ngoc Thang Vu, Heike Adel, Pankaj Gupta, Hinrich Schütze

This paper investigates two different neural architectures for the task of relation classification: convolutional neural networks and recurrent neural networks.

Classification General Classification +2

Comparing Convolutional Neural Networks to Traditional Models for Slot Filling

no code implementations NAACL 2016 Heike Adel, Benjamin Roth, Hinrich Schütze

We address relation classification in the context of slot filling, the task of finding and evaluating fillers like "Steve Jobs" for the slot X in "X founded Apple".

Classification General Classification +5
