no code implementations • 25 Nov 2023 • Chia-Chien Hung, Wiem Ben Rim, Lindsay Frost, Lars Bruckner, Carolin Lawrence
High-risk domains pose unique challenges that require language models to provide accurate and safe responses.
1 code implementation • 23 Oct 2023 • Gorjan Radevski, Kiril Gashteovski, Chia-Chien Hung, Carolin Lawrence, Goran Glavaš
Open Information Extraction (OIE) methods extract facts from natural language text in the form of ("subject"; "relation"; "object") triples.
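As an illustration of that output format, a toy extractor for a single known relation phrase can be sketched as follows (purely illustrative; real OIE systems induce relation phrases automatically and handle far more linguistic variation):

```python
def extract_triple(sentence, relation_phrase):
    """Toy OIE sketch: split a sentence around a known relation phrase.

    Real OIE systems discover relations themselves; here the relation
    phrase is given, only to illustrate the ("subject"; "relation";
    "object") output format."""
    before, _, after = sentence.partition(relation_phrase)
    if not after:
        return None
    return (before.strip().rstrip(","),
            relation_phrase.strip(),
            after.strip().rstrip("."))

triple = extract_triple("Michael Jordan was born in Brooklyn.", " was born in ")
print(triple)  # ('Michael Jordan', 'was born in', 'Brooklyn')
```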
1 code implementation • 2 Jul 2023 • Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, Graham Neubig
In this paper, we ask whether a large language model can amplify an expert's guidance to enable query-efficient, few-shot semi-supervised text clustering.
no code implementations • 3 Apr 2023 • Zhao Xu, Carolin Lawrence, Ammar Shaker, Raman Siarheyeu
To address these issues, we propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework, and models predictive uncertainty of node classification with Bayesian confidence of predictive probability and uncertainty of messages.
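The generic Bayesian recipe behind such predictive uncertainty — take the entropy of the predictive distribution averaged over posterior samples — can be sketched in a few lines (this shows the general idea only, not the paper's specific BUP message-uncertainty model):

```python
import math

def predictive_entropy(sampled_probs):
    """Entropy of the mean predictive distribution over Monte Carlo samples.

    sampled_probs: one class-probability list per draw from the
    (approximate) posterior. High entropy = the samples disagree or are
    individually unsure, i.e. high predictive uncertainty."""
    n_classes = len(sampled_probs[0])
    mean = [sum(p[c] for p in sampled_probs) / len(sampled_probs)
            for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

confident = [[0.95, 0.05], [0.96, 0.04]]
uncertain = [[0.9, 0.1], [0.1, 0.9]]   # samples disagree -> high entropy
assert predictive_entropy(uncertain) > predictive_entropy(confident)
```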
no code implementations • 10 Dec 2022 • Cheng Wang, Carolin Lawrence, Mathias Niepert
We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications.
1 code implementation • 1 Dec 2022 • Ammar Shaker, Carolin Lawrence
With the rise of machine learning, survival analysis can be modeled as learning a function that maps studied patients to their survival times.
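For reference, the classical non-parametric baseline that such learned survival functions generalize is the Kaplan-Meier estimator; a minimal implementation (standard textbook estimator, not the paper's method) looks like this:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).

    times:  observed times (event or censoring time per patient)
    events: 1 if the event occurred at that time, 0 if censored
    Returns (time, S(t)) steps at the distinct event times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve, i = 1.0, [], 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve

curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```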
no code implementations • 23 Aug 2022 • Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, PengFei Liu, Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig
Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples.
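A minimal in-memory version of this triple storage, indexed for the common (head, predicate) → tail lookup, can be sketched as follows (an illustrative data structure only; names and schema are made up for the example):

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy store for (head, predicate, tail) triples with an index
    for the lookup used in link prediction: tails of (head, predicate)."""
    def __init__(self):
        self.by_head_pred = defaultdict(set)

    def add(self, head, predicate, tail):
        self.by_head_pred[(head, predicate)].add(tail)

    def tails(self, head, predicate):
        return self.by_head_pred[(head, predicate)]

kg = KnowledgeGraph()
kg.add("Berlin", "capital_of", "Germany")
kg.add("Berlin", "located_in", "Europe")
print(kg.tails("Berlin", "capital_of"))  # {'Germany'}
```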
no code implementations • 10 Jul 2022 • Bhushan Kotnis, Kiril Gashteovski, Julia Gastinger, Giuseppe Serra, Francesco Alesiani, Timo Sztyler, Ammar Shaker, Na Gong, Carolin Lawrence, Zhao Xu
With Human-Centric Research (HCR) we can steer research activities so that the research outcome is beneficial for human stakeholders, such as end users.
no code implementations • 25 May 2022 • Sascha Saralajew, Ammar Shaker, Zhao Xu, Kiril Gashteovski, Bhushan Kotnis, Wiem Ben Rim, Jürgen Quittek, Carolin Lawrence
Inspired by the Turing test, we introduce a human-centric assessment framework where a leading domain expert accepts or rejects the solutions of an AI system and of another domain expert.

no code implementations • ACL 2022 • Bhushan Kotnis, Kiril Gashteovski, Daniel Oñoro Rubio, Vanesa Rodriguez-Tembras, Ammar Shaker, Makoto Takamoto, Mathias Niepert, Carolin Lawrence
In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract easy slots, followed by the difficult ones by conditioning on the easy slots, and therefore achieve a better overall extraction.
1 code implementation • ACL 2022 • Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš
Open Information Extraction (OIE) is the task of extracting facts from sentences in the form of relations and their corresponding arguments in a schema-free manner.
1 code implementation • ACL 2022 • Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, Goran Glavaš
In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese, and German.
Ranked #1 on Open Information Extraction on BenchIE
no code implementations • 25 Jun 2021 • Jun Cheng, Carolin Lawrence, Mathias Niepert
In contrast, we propose VEGN, which models variant effect prediction using a graph neural network (GNN) that operates on a heterogeneous graph with genes and variants.
no code implementations • AKBC 2021 • Wiem Ben Rim, Carolin Lawrence, Kiril Gashteovski, Mathias Niepert, Naoaki Okazaki
With an extensive set of experiments, we perform and analyze these tests for several KGE models.
no code implementations • ICLR 2021 • Cheng Wang, Carolin Lawrence, Mathias Niepert
Uncertainty quantification is crucial for building reliable and trustable machine learning systems.
no code implementations • ACL (spnlp) 2021 • Julia Kreutzer, Stefan Riezler, Carolin Lawrence
Large volumes of interaction logs can be collected from NLP systems that are deployed in the real world.
1 code implementation • 12 Oct 2020 • Carolin Lawrence, Timo Sztyler, Mathias Niepert
Moreover, we show theoretically that the difference between gradient rollback's influence approximation and the true influence on a model's behavior is smaller than known bounds on the stability of stochastic gradient descent.
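The core mechanics of gradient rollback — record each training example's parameter update during SGD, then "roll it back" to approximate that example's influence on a prediction — can be sketched on a toy 1-D regression model (a hedged sketch of the idea; the paper works in the knowledge-graph-embedding setting, not this toy setup):

```python
def sgd_with_rollback(data, lr=0.1, w0=0.0):
    """One SGD pass minimizing squared error on (x, y) pairs, recording
    each example's parameter update so it can be rolled back later."""
    w, updates = w0, []
    for x, y in data:
        grad = 2 * (w * x - y) * x      # d/dw of (w*x - y)^2
        delta = -lr * grad
        w += delta
        updates.append(delta)
    return w, updates

data = [(1.0, 2.0), (2.0, 4.0), (1.0, -5.0)]  # third point is an outlier
w, updates = sgd_with_rollback(data)

# Influence of example i on the prediction at x=1: compare the final
# model with the model after rolling back example i's stored update.
influence = [w * 1.0 - (w - d) * 1.0 for d in updates]
```

Rolling back the outlier's update reveals it pulled the prediction down the most, which is exactly the kind of explanation the influence estimate provides.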
no code implementations • 6 Apr 2020 • Bhushan Kotnis, Carolin Lawrence, Mathias Niepert
Representation learning for knowledge graphs (KGs) has focused on the problem of answering simple link prediction queries.
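A canonical example of such a simple link-prediction scorer is TransE, which models a triple as a translation in embedding space (shown here as background; it is not necessarily the scoring function used in this paper):

```python
def transe_score(h, r, t):
    """TransE link-prediction score: negative L2 distance ||h + r - t||.
    Higher (closer to zero) means the triple is more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy embeddings chosen so that berlin + capital_of ≈ germany.
berlin, capital_of = [1.0, 0.0], [0.0, 1.0]
germany, france = [1.0, 1.0], [3.0, 1.0]
assert transe_score(berlin, capital_of, germany) > transe_score(berlin, capital_of, france)
```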
1 code implementation • IJCNLP 2019 • Carolin Lawrence, Bhushan Kotnis, Mathias Niepert
Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token.
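The contrast with standard left-to-right decoding comes down to the attention mask: placeholder positions may attend everywhere, while ordinary decoding positions see only the past. A toy mask builder (an illustrative sketch of the idea, not the paper's architecture):

```python
def attention_mask(n, causal_positions):
    """mask[i][j] is True iff position i may attend to position j.
    Positions in `causal_positions` see only the past (standard decoding);
    all other positions act as placeholder nodes in a fully connected
    graph and see every token, past and future."""
    return [[(j <= i) if i in causal_positions else True for j in range(n)]
            for i in range(n)]

mask = attention_mask(4, causal_positions={2})
assert mask[2] == [True, True, True, False]  # causal: no future tokens
assert all(mask[1])                          # placeholder: past and future
```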
1 code implementation • TACL 2019 • Laura Jehl, Carolin Lawrence, Stefan Riezler
We show that bipolar ramp loss objectives outperform other non-bipolar ramp loss objectives and minimum risk training (MRT) on both weakly supervised tasks, as well as on a supervised machine translation task.
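The bipolar idea is to select both a "hope" output (high model score and high reward) and a "fear" output (high model score but low reward) and push their scores apart. A generic sketch of that objective over a candidate list (the paper's exact selection and weighting differ):

```python
def bipolar_ramp_loss(candidates):
    """Bipolar ramp loss over candidates given as (model_score, reward)
    pairs: minimized when the 'hope' candidate outscores the 'fear' one."""
    hope = max(candidates, key=lambda c: c[0] + c[1])  # high score AND reward
    fear = max(candidates, key=lambda c: c[0] - c[1])  # high score, low reward
    return fear[0] - hope[0]

# Model currently prefers a low-reward output -> positive loss to minimize.
loss = bipolar_ramp_loss([(0.9, 0.1), (0.7, 0.95), (0.2, 0.8)])
```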
1 code implementation • 29 Nov 2018 • Carolin Lawrence, Stefan Riezler
In semantic parsing for question-answering, it is often too expensive to collect gold parses or even gold answers as supervision signals.
1 code implementation • ACL 2018 • Carolin Lawrence, Stefan Riezler
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system.
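The standard estimator underlying this setting is inverse propensity scoring (IPS), which reweights logged rewards by how likely the target system is to produce the logged output relative to the historic system. A minimal sketch (the generic estimator; the paper specifically addresses complications such as deterministic logging, where these propensities are degenerate):

```python
def ips_estimate(logs):
    """IPS estimate of a target system's expected reward from logged
    bandit feedback. Each log entry: (reward, propensity of the historic
    system for the shown output, target system's probability of it)."""
    return sum(r * (p_target / p_log) for r, p_log, p_target in logs) / len(logs)

logs = [(1.0, 0.5, 0.4),    # liked output the target would often produce
        (0.0, 0.5, 0.1),    # disliked output, low target probability
        (1.0, 0.25, 0.2)]
est = ips_estimate(logs)
```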
no code implementations • 23 Nov 2017 • Carolin Lawrence, Pratik Gajane, Stefan Riezler
Counterfactual learning is a natural scenario to improve web-based machine translation services by offline learning from feedback logged during user interactions.
no code implementations • EMNLP 2017 • Carolin Lawrence, Artem Sokolov, Stefan Riezler
The goal of counterfactual learning for statistical machine translation (SMT) is to optimize a target SMT system from logged data that consist of user feedback to translations that were predicted by another, historic SMT system.
no code implementations • COLING 2016 • Carolin Lawrence, Stefan Riezler
We present a Natural Language Interface (nlmaps.cl.uni-heidelberg.de) to query OpenStreetMap.