Search Results for author: Iryna Gurevych

Found 329 papers, 174 papers with code

Lexical-semantic resources: yet powerful resources for automatic personality classification

no code implementations GWC 2018 Xuan-Son Vu, Lucie Flekova, Lili Jiang, Iryna Gurevych

In this paper, we aim to reveal the impact of lexical-semantic resources, used in particular for word sense disambiguation and sense-level semantic categorization, on the automatic personality classification task.

Classification General Classification +1

Argumentation Mining in User-Generated Web Discourse

no code implementations CL 2017 Ivan Habernal, Iryna Gurevych

The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation.

Parsing Argumentation Structures in Persuasive Essays

no code implementations CL 2017 Christian Stab, Iryna Gurevych

In this article, we present a novel approach for parsing argumentation structures.

Large-scale Multi-label Text Classification - Revisiting Neural Networks

no code implementations19 Dec 2013 Jinseok Nam, Jungi Kim, Eneldo Loza Mencía, Iryna Gurevych, Johannes Fürnkranz

Neural networks have recently been proposed for multi-label classification because they are able to capture and model label dependencies in the output layer.

General Classification Multi-Label Classification +3

Corpus-Driven Thematic Hierarchy Induction

no code implementations CONLL 2018 Ilia Kuznetsov, Iryna Gurevych

Thematic role hierarchy is a widely used linguistic tool to describe interactions between semantic roles and their syntactic realizations.

Machine Translation Question Answering +1

Event Time Extraction with a Decision Tree of Neural Classifiers

no code implementations TACL 2018 Nils Reimers, Nazanin Dehghani, Iryna Gurevych

We use this tree to incrementally infer, in a stepwise manner, at which time frame an event happened.

Generating Training Data for Semantic Role Labeling based on Label Transfer from Linked Lexical Resources

no code implementations TACL 2016 Silvana Hartmann, Judith Eckle-Kohler, Iryna Gurevych

We present a new approach for generating role-labeled training data using Linked Lexical Resources, i.e., integrated lexical resources that combine several resources (e.g., WordNet, FrameNet, Wiktionary) by linking them on the sense or on the role level.

General Classification Machine Translation +5

Integrating Deep Linguistic Features in Factuality Prediction over Unified Datasets

no code implementations ACL 2017 Gabriel Stanovsky, Judith Eckle-Kohler, Yevgeniy Puzikov, Ido Dagan, Iryna Gurevych

Previous models for the assessment of commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, subsequently limiting the generality of their results.

Knowledge Base Population Question Answering +1

Out-of-domain FrameNet Semantic Role Labeling

no code implementations EACL 2017 Silvana Hartmann, Ilia Kuznetsov, Teresa Martin, Iryna Gurevych

We create a novel test set for FrameNet SRL based on user-generated web text and find that the major bottleneck for out-of-domain FrameNet SRL is the frame identification step.

Semantic Role Labeling

Metaheuristic Approaches to Lexical Substitution and Simplification

no code implementations EACL 2017 Sallam Abualhaija, Tristan Miller, Judith Eckle-Kohler, Iryna Gurevych, Karl-Heinz Zimmermann

In this paper, we propose using metaheuristics, in particular simulated annealing and the new D-Bees algorithm, to solve word sense disambiguation as an optimization problem within a knowledge-based lexical substitution system.

Lexical Simplification Machine Translation +4
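
The snippet above frames word sense disambiguation as an optimization problem solved with metaheuristics. The following toy sketch illustrates the general idea with simulated annealing only; the two-word sense inventory and the coherence objective are invented placeholders, not the paper's actual objective, sense resources, or the D-Bees algorithm.

import math
import random
# Hypothetical sense inventory (placeholder, not the paper's resources).
senses = {
    "bank":  ["bank#finance", "bank#river"],
    "check": ["check#payment", "check#verify"],
}
words = list(senses)
def coherence(assignment):
    # Placeholder objective: reward sense pairs from the same (financial) domain.
    labels = list(assignment.values())
    financial = {"finance", "payment"}
    score = 0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i].split("#")[1] in financial and labels[j].split("#")[1] in financial:
                score += 1
    return score
def simulated_annealing(steps=200, temp=2.0, cooling=0.98):
    # Start from a random sense assignment and locally perturb it.
    current = {w: random.choice(s) for w, s in senses.items()}
    for _ in range(steps):
        candidate = dict(current)
        w = random.choice(words)
        candidate[w] = random.choice(senses[w])
        delta = coherence(candidate) - coherence(current)
        # Accept improvements always, and worse moves with a temperature-dependent probability.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling
    return current
print(simulated_annealing())  # e.g. {'bank': 'bank#finance', 'check': 'check#payment'}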

A tool for extracting sense-disambiguated example sentences through user feedback

no code implementations EACL 2017 Beto Boullosa, Richard Eckart de Castilho, Alexander Geyken, Lothar Lemnitzer, Iryna Gurevych

This paper describes an application system aimed to help lexicographers in the extraction of example sentences for a given headword based on its different senses.

Clustering General Classification

Objective Function Learning to Match Human Judgements for Optimization-Based Summarization

no code implementations NAACL 2018 Maxime Peyrard, Iryna Gurevych

Supervised summarization systems usually rely on supervision at the sentence or n-gram level provided by automatic metrics like ROUGE, which act as noisy proxies for human judgments.

Sentence
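
The snippet above mentions n-gram-level supervision from automatic metrics like ROUGE. As a rough illustration of what such a signal looks like, here is a minimal ROUGE-N-style recall computed from bigram overlap; the example texts are made up, and the sketch omits most of the real ROUGE implementation (stemming, multiple references, F-scores).

from collections import Counter
def ngrams(tokens, n=2):
    # Multiset of n-grams in a token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
def rouge_n_recall(system, reference, n=2):
    # Fraction of reference n-grams that also occur in the system summary.
    sys_ngrams = ngrams(system.lower().split(), n)
    ref_ngrams = ngrams(reference.lower().split(), n)
    overlap = sum(min(count, sys_ngrams[g]) for g, count in ref_ngrams.items())
    total = sum(ref_ngrams.values())
    return overlap / total if total else 0.0
reference = "the cat sat on the mat"
system = "the cat lay on the mat"
print(rouge_n_recall(system, reference))  # 0.6: three of five reference bigrams matched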

A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning

no code implementations SEMEVAL 2018 Hatem Mousselly-Sergieh, Teresa Botschen, Iryna Gurevych, Stefan Roth

Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities.

Graph Representation Learning Information Retrieval +3

SemEval-2017 Task 7: Detection and Interpretation of English Puns

no code implementations SEMEVAL 2017 Tristan Miller, Christian Hempelmann, Iryna Gurevych

A pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another word, for an intended humorous or rhetorical effect.

Word Sense Disambiguation

GraphDocExplore: A Framework for the Experimental Comparison of Graph-based Document Exploration Techniques

no code implementations EMNLP 2017 Tobias Falke, Iryna Gurevych

Many techniques to automatically extract different types of graphs, showing for example entities or concepts and different relationships between them, have been suggested.

Navigate

BinLin: A Simple Method of Dependency Tree Linearization

no code implementations WS 2018 Yevgeniy Puzikov, Iryna Gurevych

Surface Realization Shared Task 2018 is a workshop on generating sentences from lemmatized sets of dependency triples.

Text Generation

One Size Fits All? A simple LSTM for non-literal token and construction-level classification

no code implementations COLING 2018 Erik-Lân Do Dinh, Steffen Eger, Iryna Gurevych

In this paper, we tackle four different tasks of non-literal language classification: token and construction level metaphor detection, classification of idiomatic use of infinitive-verb compounds, and classification of non-literal particle verbs.

Classification General Classification +1

Prediction of Frame-to-Frame Relations in the FrameNet Hierarchy with Frame Embeddings

no code implementations WS 2017 Teresa Botschen, Hatem Mousselly-Sergieh, Iryna Gurevych

Automatic completion of frame-to-frame (F2F) relations in the FrameNet (FN) hierarchy has received little attention, although they incorporate meta-level commonsense knowledge and are used in downstream approaches.

Natural Language Inference Representation Learning +1

Modeling Extractive Sentence Intersection via Subtree Entailment

no code implementations COLING 2016 Omer Levy, Ido Dagan, Gabriel Stanovsky, Judith Eckle-Kohler, Iryna Gurevych

Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity.

Abstractive Text Summarization Natural Language Inference +2

Real-Time News Summarization with Adaptation to Media Attention

no code implementations RANLP 2017 Andreas Rücklé, Iryna Gurevych

In particular, at times with high media attention, our approach exploits the redundancy in content to produce a more precise summary and avoid emitting redundant information.

Decision Making News Summarization

Predicting the Difficulty of Language Proficiency Tests

no code implementations TACL 2014 Lisa Beinborn, Torsten Zesch, Iryna Gurevych

Language proficiency tests are used to evaluate and compare the progress of language learners.

WordNet–Wikipedia–Wiktionary: Construction of a Three-way Alignment

no code implementations LREC 2014 Tristan Miller, Iryna Gurevych

The coverage and quality of conceptual information contained in lexical semantic resources is crucial for many tasks in natural language processing.

Machine Translation Question Answering +1

Pitfalls in the Evaluation of Sentence Embeddings

no code implementations WS 2019 Steffen Eger, Andreas Rücklé, Iryna Gurevych

Our motivation is to challenge the current evaluation of sentence embeddings and to provide an easy-to-access reference for future research.

Sentence Sentence Embeddings

Combining Semantic Annotation of Word Sense & Semantic Roles: A Novel Annotation Scheme for VerbNet Roles on German Language Data

no code implementations LREC 2016 Éva Mújdricza-Maydt, Silvana Hartmann, Iryna Gurevych, Anette Frank

We present a VerbNet-based annotation scheme for semantic roles that we explore in an annotation study on German language data that combines word sense and semantic role annotation.

FAMULUS: Interactive Annotation and Feedback Generation for Teaching Diagnostic Reasoning

no code implementations IJCNLP 2019 Jonas Pfeiffer, Christian M. Meyer, Claudia Schulz, Jan Kiesewetter, Jan Zottmann, Michael Sailer, Elisabeth Bauer, Frank Fischer, Martin R. Fischer, Iryna Gurevych

Our proposed system FAMULUS helps students learn to diagnose based on automatic feedback in virtual patient simulations, and it supports instructors in labeling training data.

Multiple-choice

What do Deep Networks Like to Read?

no code implementations10 Sep 2019 Jonas Pfeiffer, Aishwarya Kamath, Iryna Gurevych, Sebastian Ruder

Recent research towards understanding neural networks probes models in a top-down manner, but is only able to identify model tendencies that are known a priori.

Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings

no code implementations14 Sep 2019 Shweta Mahajan, Teresa Botschen, Iryna Gurevych, Stefan Roth

One of the key challenges in learning joint embeddings of multiple modalities, e.g., of images and text, is to ensure coherent cross-modal semantics that generalize across datasets.

Cross-Modal Retrieval Retrieval

Improving Generalization by Incorporating Coverage in Natural Language Inference

no code implementations19 Sep 2019 Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych

Finally, we show that using the coverage information is not only beneficial for improving the performance across different datasets of the same task.

Natural Language Inference Relation

When is ACL's Deadline? A Scientific Conversational Agent

no code implementations23 Nov 2019 Mohsen Mesgar, Paul Youssef, Lin Li, Dominik Bierwirth, Yihao Li, Christian M. Meyer, Iryna Gurevych

Our conversational agent UKP-ATHENA assists NLP researchers in finding and exploring scientific literature, identifying relevant authors, planning or post-processing conference visits, and preparing paper submissions using a unified interface based on natural language inputs and responses.

Revisiting the Binary Linearization Technique for Surface Realization

no code implementations WS 2019 Yevgeniy Puzikov, Claire Gardent, Ido Dagan, Iryna Gurevych

End-to-end neural approaches have achieved state-of-the-art performance in many natural language processing (NLP) tasks.

Decision Making

Two Birds with One Stone: Investigating Invertible Neural Networks for Inverse Problems in Morphology

no code implementations11 Dec 2019 Gözde Gül Şahin, Iryna Gurevych

We show that they are able to recover the morphological input parameters, i.e., predicting the lemma (e.g., cat) or the morphological tags (e.g., Plural) when run in the reverse direction, without any significant performance drop in the forward direction, i.e., predicting the surface form (e.g., cats).

LEMMA

Analyzing Structures in the Semantic Vector Space: A Framework for Decomposing Word Embeddings

1 code implementation17 Dec 2019 Andreas Hanselowski, Iryna Gurevych

Word embeddings are rich word representations, which in combination with deep neural networks, lead to large performance gains for many NLP tasks.

Word Embeddings

Metaphoric Paraphrase Generation

no code implementations28 Feb 2020 Kevin Stowe, Leonardo Ribeiro, Iryna Gurevych

This work describes the task of metaphoric paraphrase generation, in which we are given a literal sentence and are charged with generating a metaphoric paraphrase.

Paraphrase Generation Sentence

PuzzLing Machines: A Challenge on Learning From Small Data

no code implementations ACL 2020 Gözde Gül Şahin, Yova Kementchedjhieva, Phillip Rust, Iryna Gurevych

To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students.

Small Data Image Classification

A Matter of Framing: The Impact of Linguistic Formalism on Probing Results

no code implementations EMNLP 2020 Ilia Kuznetsov, Iryna Gurevych

Deep pre-trained contextualized encoders like BERT (Devlin et al., 2019) demonstrate remarkable performance on a range of downstream tasks.

Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields

no code implementations1 May 2020 Jonas Pfeiffer, Edwin Simpson, Iryna Gurevych

We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.

Multi-Task Learning Sentence

Improving Factual Consistency Between a Response and Persona Facts

no code implementations EACL 2021 Mohsen Mesgar, Edwin Simpson, Iryna Gurevych

Neural models for response generation produce responses that are semantically plausible but not necessarily factually consistent with facts describing the speaker's persona.

reinforcement-learning Reinforcement Learning (RL) +1

Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures

no code implementations23 Oct 2020 Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie Utama, Iryna Gurevych

Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective so that models learn less from biased examples.

Data Augmentation Sentence

Ranking Creative Language Characteristics in Small Data Scenarios

no code implementations23 Oct 2020 Julia Siekiera, Marius Köppel, Edwin Simpson, Kevin Stowe, Iryna Gurevych, Stefan Kramer

We therefore adapt the DirectRanker to provide a new deep model for ranking creative language with small data.

The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes

no code implementations ACL 2021 Nils Reimers, Iryna Gurevych

Information Retrieval using dense low-dimensional representations recently became popular and has outperformed traditional sparse representations like BM25.

Information Retrieval Retrieval
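
To make the contrast in the snippet above concrete, here is a minimal sketch of dense retrieval: queries and documents are embedded into a shared low-dimensional space and ranked by cosine similarity, instead of by sparse term-matching scores like BM25. The random vectors below stand in for the output of a trained text encoder; the index size and dimensionality are arbitrary choices for illustration.

import numpy as np
rng = np.random.default_rng(0)
dim, index_size = 64, 1000
# Placeholder document embeddings (in practice produced by a trained encoder).
doc_vectors = rng.normal(size=(index_size, dim))
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)
# Placeholder query embedding in the same vector space.
query_vector = rng.normal(size=dim)
query_vector /= np.linalg.norm(query_vector)
scores = doc_vectors @ query_vector        # cosine similarity, since vectors are unit-normalized
top_k = np.argsort(-scores)[:5]            # indices of the highest-scoring documents
print(top_k, scores[top_k])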

Empirical Evaluation of Supervision Signals for Style Transfer Models

no code implementations15 Jan 2021 Yevgeniy Puzikov, Simoes Stanley, Iryna Gurevych, Immanuel Schweizer

In this work we empirically compare the dominant optimization paradigms which provide supervision signals during training: backtranslation, adversarial training and reinforcement learning.

Machine Translation reinforcement-learning +4

Focusing Knowledge-based Graph Argument Mining via Topic Modeling

no code implementations3 Feb 2021 Patrick Abels, Zahra Ahmadi, Sophie Burkhardt, Benjamin Schiller, Iryna Gurevych, Stefan Kramer

We use a topic model to extract topic- and sentence-specific evidence from the structured knowledge base Wikidata, building a graph based on the cosine similarity between the entity word vectors of Wikidata and the vector of the given sentence.

Argument Mining Decision Making +3
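
The snippet above describes building an evidence graph by comparing Wikidata entity vectors with a sentence vector. The sketch below shows only that similarity-and-threshold step; the embeddings, entity names, and threshold are invented placeholders rather than the paper's actual setup.

import numpy as np
def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
# Hypothetical pre-computed vectors; in practice these would be Wikidata
# entity word vectors and a sentence vector from the same embedding space.
entity_vectors = {
    "nuclear_energy": np.array([0.9, 0.1, 0.3]),
    "solar_power":    np.array([0.2, 0.8, 0.1]),
}
sentence_vector = np.array([0.8, 0.2, 0.4])
# Add an edge (sentence, entity) whenever the similarity exceeds a threshold.
threshold = 0.7
edges = [(entity, cosine(vec, sentence_vector))
         for entity, vec in entity_vectors.items()
         if cosine(vec, sentence_vector) > threshold]
print(edges)  # [('nuclear_energy', 0.98...)]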

Scientia Potentia Est -- On the Role of Knowledge in Computational Argumentation

no code implementations1 Jul 2021 Anne Lauscher, Henning Wachsmuth, Iryna Gurevych, Goran Glavaš

Despite extensive research efforts in recent years, computational argumentation (CA) remains one of the most challenging areas of natural language processing.

Common Sense Reasoning Natural Language Understanding

Assisting Decision Making in Scholarly Peer Review: A Preference Learning Perspective

no code implementations2 Sep 2021 Nils Dycke, Edwin Simpson, Ilia Kuznetsov, Iryna Gurevych

Peer review is the primary means of quality control in academia; as an outcome of a peer review process, program and area chairs make acceptance decisions for each paper based on the review reports and scores they received.

Decision Making Fairness

TxT: Crossmodal End-to-End Learning with Transformers

no code implementations9 Sep 2021 Jan-Martin O. Steitz, Jonas Pfeiffer, Iryna Gurevych, Stefan Roth

Reasoning over multiple modalities, e.g., in Visual Question Answering (VQA), requires an alignment of semantic concepts across domains.

Multimodal Reasoning Question Answering +1

Diversity Over Size: On the Effect of Sample and Topic Sizes for Argument Mining Datasets

no code implementations23 May 2022 Benjamin Schiller, Johannes Daxenberger, Iryna Gurevych

The task of Argument Mining, that is, extracting argumentative sentences for a specific topic from large document sources, is an inherently difficult task for machine learning models and humans alike, as large Argument Mining datasets are rare and recognition of argumentative sentences requires expert knowledge.

Argument Mining Benchmarking +1

Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art

1 code implementation COLING (CRAC) 2022 Haixia Chai, Nafise Sadat Moosavi, Iryna Gurevych, Michael Strube

The results of our extrinsic evaluation show that while there is a significant difference between the performance of the rule-based system vs. state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference on their impact on downstream models.

Answer Selection coreference-resolution +1

One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks

no code implementations12 Oct 2022 Gregor Geigle, Chen Cecilia Liu, Jonas Pfeiffer, Iryna Gurevych

While many VEs -- of different architectures, trained on different data and objectives -- are publicly available, they are not designed for the downstream V+L tasks.

The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification

no code implementations12 Oct 2022 Mohsen Mesgar, Thy Thy Tran, Goran Glavaš, Iryna Gurevych

First, the unexplored combination of the cross-encoder architecture (with parameterized similarity scoring function) and episodic meta-learning consistently yields the best FSIC performance.

intent-classification Intent Classification +1

An Inclusive Notion of Text

no code implementations10 Nov 2022 Ilia Kuznetsov, Iryna Gurevych

Natural language processing (NLP) researchers develop models of grammar, meaning and communication based on written text.

GDPR Compliant Collection of Therapist-Patient-Dialogues

no code implementations22 Nov 2022 Tobias Mayer, Neha Warikoo, Oliver Grimm, Andreas Reif, Iryna Gurevych

While these conversations are part of the daily routine of clinicians, gathering them is usually hindered by various ethical (purpose of data usage), legal (data privacy) and technical (data formatting) limitations.

NLP meets psychotherapy: Using predicted client emotions and self-reported client emotions to measure emotional coherence

no code implementations22 Nov 2022 Neha Warikoo, Tobias Mayer, Dana Atzil-Slonim, Amir Eliassaf, Shira Haimovitz, Iryna Gurevych

No study has examined emotional coherence (EC) between the subjective experience of emotions and emotion expression in therapy, or whether this coherence is associated with clients' well-being.

Emotion Recognition

FUN with Fisher: Improving Generalization of Adapter-Based Cross-lingual Transfer with Scheduled Unfreezing

1 code implementation13 Jan 2023 Chen Cecilia Liu, Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych

Our experiments reveal that scheduled unfreezing induces different learning dynamics compared to standard fine-tuning, and provide evidence that the dynamics of Fisher Information during training correlate with cross-lingual generalization performance.

Cross-Lingual Transfer Transfer Learning

Romanization-based Large-scale Adaptation of Multilingual Language Models

no code implementations18 Apr 2023 Sukannya Purkayastha, Sebastian Ruder, Jonas Pfeiffer, Iryna Gurevych, Ivan Vulić

In order to boost the capacity of mPLMs to deal with low-resource and unseen languages, we explore the potential of leveraging transliteration on a massive scale.

Cross-Lingual Transfer Transliteration

A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why?

no code implementations22 May 2023 Aniket Pramanick, Yufang Hou, Saif M. Mohammad, Iryna Gurevych

In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.

Causal Discovery

Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research

no code implementations29 Jun 2023 Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge

Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters.

Analyzing Dataset Annotation Quality Management in the Wild

2 code implementations16 Jul 2023 Jan-Christoph Klie, Richard Eckart de Castilho, Iryna Gurevych

A majority of the annotated publications apply good or excellent quality management.

Management

Model Merging by Uncertainty-Based Gradient Matching

no code implementations19 Oct 2023 Nico Daheim, Thomas Möllenhoff, Edoardo Maria Ponti, Iryna Gurevych, Mohammad Emtiyaz Khan

Models trained on different datasets can be merged by a weighted-averaging of their parameters, but why does it work and when can it fail?
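
As a point of reference for the question posed above, here is a minimal sketch of the plain weighted-averaging baseline (not the paper's uncertainty-based gradient-matching method): the parameters of several same-architecture models are combined with fixed scalar weights. The tiny probe module and the uniform weights are illustrative assumptions.

import torch.nn as nn
def merge_by_weighted_average(models, weights):
    # Weighted average of the state dicts of same-architecture models.
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should sum to 1"
    merged = models[0].__class__()          # fresh instance of the shared architecture
    merged_state = merged.state_dict()
    states = [m.state_dict() for m in models]
    for name in merged_state:
        merged_state[name] = sum(w * s[name] for w, s in zip(weights, states))
    merged.load_state_dict(merged_state)
    return merged
class Probe(nn.Module):
    # Toy stand-in for two models fine-tuned on different datasets.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)
    def forward(self, x):
        return self.fc(x)
model_a, model_b = Probe(), Probe()
merged = merge_by_weighted_average([model_a, model_b], weights=[0.5, 0.5])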

A Survey of Confidence Estimation and Calibration in Large Language Models

no code implementations14 Nov 2023 Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, Iryna Gurevych

Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations.

Language Modelling

Document Structure in Long Document Transformers

no code implementations31 Jan 2024 Jan Buchmann, Max Eichler, Jan-Micha Bodensohn, Ilia Kuznetsov, Iryna Gurevych

Long documents often exhibit structure with hierarchically organized elements of different functions, such as section headers and paragraphs.

Dive into the Chasm: Probing the Gap between In- and Cross-Topic Generalization

1 code implementation2 Feb 2024 Andreas Waldis, Yufang Hou, Iryna Gurevych

Pre-trained language models (LMs) perform well in In-Topic setups, where training and testing data come from the same topics.

Zero-shot Sentiment Analysis in Low-Resource Languages Using a Multilingual Sentiment Lexicon

no code implementations3 Feb 2024 Fajri Koto, Tilman Beck, Zeerak Talat, Iryna Gurevych, Timothy Baldwin

Improving multilingual language models' capabilities in low-resource languages is generally difficult due to the scarcity of large-scale data in those languages.

Sentence Sentiment Analysis

Socratic Reasoning Improves Positive Text Rewriting

no code implementations5 Mar 2024 Anmol Goel, Nico Daheim, Iryna Gurevych

In this work, we address this gap by augmenting open-source datasets for positive text rewriting with synthetically-generated Socratic rationales using a novel framework called SocraticReframe.

Language Modelling Large Language Model

Multimodal Large Language Models to Support Real-World Fact-Checking

no code implementations6 Mar 2024 Jiahui Geng, Yova Kementchedjhieva, Preslav Nakov, Iryna Gurevych

To the best of our knowledge, we are the first to evaluate MLLMs for real-world fact-checking.

Fact Checking

Early Period of Training Impacts Out-of-Distribution Generalization

no code implementations22 Mar 2024 Chen Cecilia Liu, Iryna Gurevych

Prior research has found that differences in the early period of neural network training significantly impact the performance of in-distribution (ID) tasks.

Out-of-Distribution Generalization

Constrained C-Test Generation via Mixed-Integer Programming

1 code implementation12 Apr 2024 Ji-Ung Lee, Marc E. Pfetsch, Iryna Gurevych

This work proposes a novel method to generate C-Tests: a deviated form of cloze tests (a gap-filling exercise) where only the last part of a word is turned into a gap.
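
The gap format described above is easy to illustrate: in a C-Test, the trailing half of selected words is blanked out and the learner must restore it. The sketch below gaps every second sufficiently long word and removes its second half; both choices are simplifications for illustration, whereas the paper selects the gaps with a mixed-integer program.

def make_c_test(sentence, gap_every=2):
    # Turn the second half of every `gap_every`-th longer word into a gap.
    tokens = sentence.split()
    gapped = []
    for i, tok in enumerate(tokens):
        if i % gap_every == 1 and len(tok) > 3:
            keep = (len(tok) + 1) // 2               # keep the first half of the word
            gapped.append(tok[:keep] + "_" * (len(tok) - keep))
        else:
            gapped.append(tok)
    return " ".join(gapped)
print(make_c_test("Language proficiency tests evaluate the progress of learners"))
# -> Language profic_____ tests eval____ the prog____ of lear____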

Enabling Natural Zero-Shot Prompting on Encoder Models via Statement-Tuning

no code implementations19 Apr 2024 Ahmed Elshabrawy, Yongxin Huang, Iryna Gurevych, Alham Fikri Aji

While Large Language Models (LLMs) exhibit remarkable capabilities in zero-shot and few-shot scenarios, they often require computationally prohibitive sizes.

Zero-shot Generalization

Holmes: Benchmark the Linguistic Competence of Language Models

no code implementations29 Apr 2024 Andreas Waldis, Yotam Perlitz, Leshem Choshen, Yufang Hou, Iryna Gurevych

We introduce Holmes, a benchmark to assess the linguistic competence of language models (LMs) - their ability to grasp linguistic phenomena.

Multimodal Grounding for Language Processing

1 code implementation COLING 2018 Lisa Beinborn, Teresa Botschen, Iryna Gurevych

This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language.

Fast Concept Mention Grouping for Concept Map-based Multi-Document Summarization

1 code implementation NAACL 2019 Tobias Falke, Iryna Gurevych

Concept map-based multi-document summarization has recently been proposed as a variant of the traditional summarization task with graph-structured summaries.

Clustering Document Summarization +1

Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning

1 code implementation3 Aug 2020 Tristan Miller, Erik-Lân Do Dinh, Edwin Simpson, Iryna Gurevych

Most humour processing systems to date make at best discrete, coarse-grained distinctions between the comical and the conventional, yet such notions are better conceptualized as a broad spectrum.

Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models

2 code implementations13 May 2022 Dominic Petrak, Nafise Sadat Moosavi, Iryna Gurevych

In this paper, we propose a new extended pretraining approach called Arithmetic-Based Pretraining that jointly addresses both in one extended pretraining step without requiring architectural changes or pretraining from scratch.

Contrastive Learning Reading Comprehension +1

Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5

1 code implementation31 Oct 2022 Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, Aline Villavicencio, Iryna Gurevych

We compare sequential fine-tuning with a model for multi-task learning in the context where we are interested in boosting performance on two tasks, one of which depends on the other.

Multi-Task Learning Natural Language Inference

Learning from Emotions, Demographic Information and Implicit User Feedback in Task-Oriented Document-Grounded Dialogues

1 code implementation17 Jan 2024 Dominic Petrak, Thy Thy Tran, Iryna Gurevych

The success of task-oriented and document-grounded dialogue systems depends on users accepting and enjoying using them.

Challenges in the Automatic Analysis of Students' Diagnostic Reasoning

1 code implementation26 Nov 2018 Claudia Schulz, Christian M. Meyer, Michael Sailer, Jan Kiesewetter, Elisabeth Bauer, Frank Fischer, Martin R. Fischer, Iryna Gurevych

We aim to enable the large-scale adoption of diagnostic reasoning analysis and feedback by automating the epistemic activity identification.

Semi-automatic Detection of Cross-lingual Marketing Blunders based on Pragmatic Label Propagation in Wiktionary

1 code implementation COLING 2016 Christian M. Meyer, Judith Eckle-Kohler, Iryna Gurevych

We introduce the task of detecting cross-lingual marketing blunders, which occur if a trade name resembles an inappropriate or negatively connotated word in a target language.

Marketing

Preference-based Interactive Multi-Document Summarisation

1 code implementation7 Jun 2019 Yang Gao, Christian M. Meyer, Iryna Gurevych

Interactive NLP is a promising paradigm to close the gap between automatic NLP systems and the human upper bound.

Active Learning reinforcement-learning +1

Investigating label suggestions for opinion mining in German Covid-19 social media

1 code implementation ACL 2021 Tilman Beck, Ji-Ung Lee, Christina Viehmann, Marcus Maurer, Oliver Quiring, Iryna Gurevych

This work investigates the use of interactively updated label suggestions to improve upon the efficiency of gathering annotations on the task of opinion mining in German Covid-19 social media data.

Opinion Mining Transfer Learning

Yes-Yes-Yes: Proactive Data Collection for ACL Rolling Review and Beyond

1 code implementation27 Jan 2022 Nils Dycke, Ilia Kuznetsov, Iryna Gurevych

The shift towards publicly available text sources has enabled language processing at unprecedented scale, yet leaves under-serviced the domains where public and openly licensed data is scarce.

Delving Deeper into Cross-lingual Visual Question Answering

1 code implementation15 Feb 2022 Chen Liu, Jonas Pfeiffer, Anna Korhonen, Ivan Vulić, Iryna Gurevych

We analyze cross-lingual VQA across different question types of varying complexity for different multilingual multimodal Transformers, and identify question types that are the most difficult to improve on.

Inductive Bias Question Answering +1

Learning From Free-Text Human Feedback -- Collect New Datasets Or Extend Existing Ones?

1 code implementation24 Oct 2023 Dominic Petrak, Nafise Sadat Moosavi, Ye Tian, Nikolai Rozanov, Iryna Gurevych

Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI.

Chatbot Response Generation +1

Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals

1 code implementation7 Nov 2023 Sukannya Purkayastha, Anne Lauscher, Iryna Gurevych

In this work, we are the first to explore Jiu-Jitsu argumentation for peer review by proposing the novel task of attitude and theme-guided rebuttal generation.

Sentence

Dior-CVAE: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation

1 code implementation24 May 2023 Tianyu Yang, Thy Thy Tran, Iryna Gurevych

These models also suffer from posterior collapse, i.e., the decoder tends to ignore latent variables and directly access information captured in the encoder through the cross-attention mechanism.

Decoder Open-Domain Dialog +1

Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting

1 code implementation13 Sep 2023 Tilman Beck, Hendrik Schuff, Anne Lauscher, Iryna Gurevych

However, the available NLP literature disagrees on the efficacy of this technique - it remains unclear for which tasks and scenarios it can help, and the role of the individual factors in sociodemographic prompting is still unexplored.

Hate Speech Detection Zero-Shot Learning

A Streamlined Method for Sourcing Discourse-level Argumentation Annotations from the Crowd

1 code implementation NAACL 2019 Tristan Miller, Maria Sukhareva, Iryna Gurevych

The study of argumentation and the development of argument mining tools depends on the availability of annotated data, which is challenging to obtain in sufficient quantity and quality.

Argument Mining

Interactive Text Ranking with Bayesian Optimisation: A Case Study on Community QA and Summarisation

1 code implementation22 Nov 2019 Edwin Simpson, Yang Gao, Iryna Gurevych

For many NLP applications, such as question answering and summarisation, the goal is to select the best solution from a large space of candidates to meet a particular user's needs.

Bayesian Optimisation Community Question Answering +1

UNKs Everywhere: Adapting Multilingual Language Models to New Scripts

2 code implementations EMNLP 2021 Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder

The ultimate challenge is dealing with under-resourced languages not covered at all by the models and written in scripts unseen during pretraining.

Cross-Lingual Transfer

Annotation Curricula to Implicitly Train Non-Expert Annotators

1 code implementation CL (ACL) 2022 Ji-Ung Lee, Jan-Christoph Klie, Iryna Gurevych

Annotation studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain.

Sentence

TexPrax: A Messaging Application for Ethical, Real-time Data Collection and Annotation

1 code implementation16 Aug 2022 Lorenz Stangier, Ji-Ung Lee, Yuxi Wang, Marvin Müller, Nicholas Frick, Joachim Metternich, Iryna Gurevych

We evaluate TexPrax in a user-study with German factory employees who ask their colleagues for solutions on problems that arise during their daily work.

Chatbot Sentence

CiteBench: A benchmark for Scientific Citation Text Generation

1 code implementation19 Dec 2022 Martin Funkquist, Ilia Kuznetsov, Yufang Hou, Iryna Gurevych

To address this challenge, we propose CiteBench: a benchmark for citation text generation that unifies multiple diverse datasets and enables standardized evaluation of citation text generation models across task designs and domains.

Text Generation

Are Multilingual LLMs Culturally-Diverse Reasoners? An Investigation into Multicultural Proverbs and Sayings

1 code implementation15 Sep 2023 Chen Cecilia Liu, Fajri Koto, Timothy Baldwin, Iryna Gurevych

Large language models (LLMs) are highly adept at question answering and reasoning tasks, but when reasoning in a situational context, human expectations vary depending on the relevant cultural common ground.

Question Answering

Measuring Pointwise $\mathcal{V}$-Usable Information In-Context-ly

1 code implementation18 Oct 2023 Sheng Lu, Shan Chen, Yingya Li, Danielle Bitterman, Guergana Savova, Iryna Gurevych

In-context learning (ICL) is a new learning paradigm that has gained popularity along with the development of large language models.

In-Context Learning

Bridging the gap between extractive and abstractive summaries: Creation and evaluation of coherent extracts from heterogeneous sources

1 code implementation COLING 2016 Darina Benikova, Margot Mieskes, Christian M. Meyer, Iryna Gurevych

Coherent extracts are a novel type of summary combining the advantages of manually created abstractive summaries, which are fluent but difficult to evaluate, and low-quality automatically created extractive summaries, which lack coherence and structure.

Document Summarization Multi-Document Summarization
