1 code implementation • 24 Apr 2023 • Julian Senoner, Bernhard Kratzwald, Milan Kuzmanovic, Torbjørn H. Netland, Stefan Feuerriegel
We empirically validate our proposed approach using real-world data from a job shop production that supplies large metal components to an oil platform construction yard.
1 code implementation • 19 Oct 2022 • Zhenrui Yue, Huimin Zeng, Bernhard Kratzwald, Stefan Feuerriegel, Dong Wang
Unlike existing approaches, we generate pseudo labels and propose to train the model via a novel attention-based contrastive adaptation method.
1 code implementation • EMNLP 2021 • Zhenrui Yue, Bernhard Kratzwald, Stefan Feuerriegel
Here, we train a QA system on both source data and generated data from the target domain with a contrastive adaptation loss that is incorporated in the training objective.
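The idea of a contrastive adaptation term added to the supervised training objective can be sketched as follows. This is a minimal, illustrative stand-in in pure Python, not the paper's exact formulation: pairs of representations with the same (pseudo) label are pulled together, pairs with different labels are pushed at least a margin apart, and the term is added to the QA loss with a hypothetical weight `lam`.

```python
import math

def contrastive_adaptation_loss(feats, labels, margin=1.0):
    """Toy contrastive term (illustrative only): same-label pairs are
    pulled together, different-label pairs pushed at least `margin` apart."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    loss, pairs = 0.0, 0
    n = len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d = dist(feats[i], feats[j])
            if labels[i] == labels[j]:
                loss += d ** 2                     # pull same-label pairs together
            else:
                loss += max(0.0, margin - d) ** 2  # push different-label pairs apart
            pairs += 1
    return loss / max(pairs, 1)

def training_objective(qa_loss, feats, labels, lam=0.1):
    # Supervised QA loss on source + generated target data,
    # plus the weighted contrastive adaptation term.
    return qa_loss + lam * contrastive_adaptation_loss(feats, labels)
```

In practice the features would come from a neural QA encoder and the distances would be computed on minibatches; the quadratic pair loop here is only for clarity.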
no code implementations • 1 Jan 2021 • Malte Ebner, Bernhard Kratzwald, Stefan Feuerriegel
As this approach can incorporate any active learning agent into its ensemble, it makes it possible to improve the performance of every active learning agent by learning how to combine it with others.
1 code implementation • COLING 2020 • Bernhard Kratzwald, Guo Kunpeng, Stefan Feuerriegel, Dennis Diefenbach
(ii) Our system is designed such that it continuously learns during the KB completion task and, therefore, significantly improves its performance over time on relations that initially have zero or few training examples.
1 code implementation • EMNLP 2020 • Bernhard Kratzwald, Stefan Feuerriegel, Huan Sun
State-of-the-art question answering (QA) relies upon large amounts of training data for which labeling is time-consuming and thus expensive.
no code implementations • 6 Mar 2020 • Bernhard Kratzwald, Xiang Yue, Huan Sun, Stefan Feuerriegel
Here, remarkably, annotating a stratified subset with only 1.2% of the original training set achieves 97.7% of the performance obtained with the fully annotated dataset.
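The stratified-subset idea can be illustrated with a short sketch. This is a hypothetical helper (the function name, the stratification key, and the fraction default are assumptions, not the paper's implementation): examples are grouped by a stratum key such as question type, and roughly the requested fraction is sampled from each stratum so that every stratum keeps at least one example.

```python
import random
from collections import defaultdict

def stratified_subset(examples, strata_fn, fraction=0.012, seed=0):
    """Sample roughly `fraction` of the examples, stratified by the key
    returned by `strata_fn` (e.g. question type). Illustrative sketch of
    selecting a small subset for annotation; not the paper's exact method."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for ex in examples:
        by_stratum[strata_fn(ex)].append(ex)
    subset = []
    for items in by_stratum.values():
        k = max(1, round(len(items) * fraction))  # keep >= 1 per stratum
        subset.extend(rng.sample(items, k))
    return subset
```

With `fraction=0.012`, a training set of 100,000 examples would yield a subset of roughly 1,200 examples to annotate.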
1 code implementation • ACL 2019 • Bernhard Kratzwald, Anna Eigenmann, Stefan Feuerriegel
The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer.
no code implementations • 29 Jan 2019 • Nil-Jana Akpinar, Bernhard Kratzwald, Stefan Feuerriegel
As our primary contribution, this is the first work that upper bounds the sample complexity for learning real-valued RNNs.
1 code implementation • EMNLP 2018 • Bernhard Kratzwald, Stefan Feuerriegel
State-of-the-art systems in deep question answering proceed as follows: (1) an initial document retrieval selects relevant documents, which (2) are then processed by a neural network in order to extract the final answer.
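The two-stage pipeline described above can be sketched in a few lines. This is a toy sketch only: term overlap stands in for a real retriever such as TF-IDF or BM25, and a sentence-overlap heuristic stands in for the neural reader, which in the actual systems extracts an answer span.

```python
def retrieve(query, documents, k=2):
    """Stage 1: score documents by query-term overlap (a toy stand-in
    for TF-IDF/BM25 retrieval) and keep the top-k documents."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def read(query, passages):
    """Stage 2: placeholder 'reader' returning the sentence with the
    highest query-term overlap; a neural reader would extract a span."""
    q_terms = set(query.lower().split())
    best, best_score = "", -1
    for passage in passages:
        for sent in passage.split("."):
            score = len(q_terms & set(sent.lower().split()))
            if score > best_score:
                best, best_score = sent.strip(), score
    return best
```

The split into two stages matters because errors compound: if stage (1) misses the relevant document, stage (2) cannot recover the answer no matter how strong the reader is.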
no code implementations • 19 Apr 2018 • Bernhard Kratzwald, Stefan Feuerriegel
Traditional information retrieval (such as that offered by web search engines) burdens users with information overload: result pages are extensive, and the desired information must be located within them manually.
no code implementations • 16 Mar 2018 • Bernhard Kratzwald, Suzana Ilic, Mathias Kraus, Stefan Feuerriegel, Helmut Prendinger
Emotions widely affect human decision-making.
no code implementations • 4 Dec 2017 • Zhiwu Huang, Bernhard Kratzwald, Danda Pani Paudel, Jiqing Wu, Luc van Gool
This paper presents a new problem of unpaired face translation between images and videos, which can be applied to facial video prediction and enhancement.
1 code implementation • 30 Nov 2017 • Bernhard Kratzwald, Zhiwu Huang, Danda Pani Paudel, Acharya Dinesh, Luc van Gool
In this paper, we aim to improve the state-of-the-art video generative adversarial networks (GANs) with a view towards multi-functional applications.