1 code implementation • 7 Mar 2024 • Aneta Koleva, Martin Ringsquandl, Ahmed Hatem, Thomas Runkler, Volker Tresp
Finally, we propose a prompting framework for evaluating the newly developed large language models (LLMs) on this novel TI task.
no code implementations • 16 Jun 2023 • Phillip Swazinna, Steffen Udluft, Thomas Runkler
Recently, offline RL algorithms have been proposed that remain adaptive at runtime.
no code implementations • 23 Dec 2022 • Anna Himmelhuber, Dominik Dold, Stephan Grimm, Sonja Zillner, Thomas Runkler
Machine learning (ML) on graph-structured data has recently received increased interest in the context of intrusion detection in the cybersecurity domain.
Tasks: Decision Making, Explainable Artificial Intelligence (XAI), +3
1 code implementation • 15 Nov 2022 • Dickson Odhiambo Owuor, Thomas Runkler, Anne Laurent
In addition, we present a systematic study of several meta-heuristic optimization techniques as efficient solutions to the problem of finding gradual patterns using our search space.
1 code implementation • 31 Aug 2022 • Dickson Odhiambo Owuor, Thomas Runkler, Anne Laurent, Joseph Orero, Edmond Menya
Gradual pattern extraction is a field in Knowledge Discovery in Databases (KDD) that maps correlations between attributes of a data set as gradual dependencies.
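A gradual dependency such as "the higher the age, the higher the salary" is commonly scored by the fraction of object pairs that can be ordered consistently with every attribute constraint. The sketch below illustrates that support measure on toy data; the attribute names and the exact support definition are illustrative assumptions, not necessarily the variant used in the paper.

```python
from itertools import combinations

def gradual_support(rows, pattern):
    """Fraction of object pairs that respect every (attribute, direction)
    constraint in `pattern`, e.g. [("age", "+"), ("salary", "-")]."""
    pairs = list(combinations(rows, 2))
    if not pairs:
        return 0.0

    def concordant(a, b):
        # b must be strictly greater (for "+") or smaller (for "-") on every attribute
        return all(
            (b[attr] > a[attr]) if sign == "+" else (b[attr] < a[attr])
            for attr, sign in pattern
        )

    hits = sum(1 for a, b in pairs if concordant(a, b) or concordant(b, a))
    return hits / len(pairs)

data = [  # hypothetical records
    {"age": 25, "salary": 30},
    {"age": 30, "salary": 45},
    {"age": 40, "salary": 60},
    {"age": 50, "salary": 55},
]
print(gradual_support(data, [("age", "+"), ("salary", "+")]))  # 5 of 6 pairs agree
```

Meta-heuristic approaches like those studied in the paper then search the space of such patterns for those whose support exceeds a threshold.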
1 code implementation • 4 Aug 2022 • Dominik Dold, Josep Soler Garrido, Victor Caceres Chian, Marcel Hildebrandt, Thomas Runkler
Knowledge graphs are an expressive and widely used data structure due to their ability to integrate data from different domains in a sensible and machine-readable way.
1 code implementation • 21 May 2022 • Phillip Swazinna, Steffen Udluft, Thomas Runkler
At the same time, offline RL algorithms are not able to tune their most important hyperparameter - the proximity of the learned policy to the original policy.
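The proximity hyperparameter mentioned above can be pictured as a penalty weight trading off estimated return against distance to the behavior policy. This is a minimal 1-D sketch of that trade-off, assuming a generic penalized objective; the value function and action grid are illustrative, not the paper's formulation.

```python
import numpy as np

def regularized_objective(action, q_value, behavior_action, lam):
    """Estimated return minus a proximity penalty; `lam` is the trade-off
    hyperparameter: larger values keep the learned policy closer to the
    original (behavior) policy."""
    return q_value(action) - lam * (action - behavior_action) ** 2

def best_action(q_value, behavior_action, lam):
    # Exhaustive search over a 1-D action grid, purely for illustration.
    grid = np.linspace(-1.0, 3.0, 401)
    scores = [regularized_objective(a, q_value, behavior_action, lam) for a in grid]
    return grid[int(np.argmax(scores))]

q = lambda a: -(a - 2.0) ** 2   # hypothetical value estimate, maximal at a = 2
print(best_action(q, 0.0, lam=0.0))    # unconstrained optimum: 2.0
print(best_action(q, 0.0, lam=10.0))   # pulled toward the behavior action 0
```

Tuning `lam` online is exactly what is hard in the offline setting, since no environment interaction is available to compare the resulting policies.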
1 code implementation • 18 Feb 2022 • Haoyu Ren, Darko Anicic, Thomas Runkler
Tiny machine learning (TinyML) has gained widespread popularity, democratizing machine learning (ML) on ubiquitous microcontrollers that process sensor data everywhere in real time.
1 code implementation • 14 Jan 2022 • Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler
Offline reinforcement learning (RL) algorithms are often designed with environments such as MuJoCo in mind, in which the planning horizon is extremely long and no noise exists.
no code implementations • 3 Dec 2021 • Anna Himmelhuber, Stephan Grimm, Sonja Zillner, Mitchell Joblin, Martin Ringsquandl, Thomas Runkler
Similarly to other connectionist models, Graph Neural Networks (GNNs) lack transparency in their decision-making.
no code implementations • 26 Nov 2021 • Phillip Swazinna, Steffen Udluft, Thomas Runkler
Recently developed offline reinforcement learning algorithms have made it possible to learn policies directly from pre-collected datasets, giving rise to a new dilemma for practitioners: since the performance an algorithm can deliver depends greatly on the dataset presented to it, practitioners need to pick the right dataset among the available ones.
no code implementations • 25 Nov 2021 • Anna Himmelhuber, Mitchell Joblin, Martin Ringsquandl, Thomas Runkler
Graph neural networks (GNNs) are quickly becoming the standard approach for learning on graph structured data across several domains, but they lack transparency in their decision-making.
no code implementations • 25 Nov 2021 • Anna Himmelhuber, Stephan Grimm, Thomas Runkler, Sonja Zillner
The increasing importance of resource-efficient production entails that manufacturing companies have to create a more dynamic production environment, with flexible manufacturing machines and processes.
1 code implementation • 9 Oct 2021 • Ahmed Frikha, Haokun Chen, Denis Krompaß, Thomas Runkler, Volker Tresp
In particular, we address the question: How can knowledge contained in models trained on different source domains be merged into a single model that generalizes well to unseen target domains, in the absence of source and target domain data?
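One simple, data-free baseline for the merging question posed above is to uniformly average the parameters of the source-domain models. The sketch below shows only that baseline; the paper's actual merging method may be more elaborate, and the parameter names are illustrative.

```python
import numpy as np

def average_models(state_dicts):
    """Uniformly average the parameters of several models that share one
    architecture -- a data-free merging baseline, not necessarily the
    paper's method."""
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0)
            for k in state_dicts[0]}

m1 = {"w": np.array([1.0, 3.0]), "b": np.array([0.0])}   # trained on domain A
m2 = {"w": np.array([3.0, 1.0]), "b": np.array([2.0])}   # trained on domain B
merged = average_models([m1, m2])
print(merged["w"], merged["b"])
```

Note that plain averaging assumes the models live in compatible regions of weight space; handling that assumption is part of what makes the data-free setting hard.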
1 code implementation • 21 Sep 2021 • Victor Caceres Chian, Marcel Hildebrandt, Thomas Runkler, Dominik Dold
In recent years, a multitude of different graph neural network architectures have demonstrated ground-breaking performance in many learning tasks.
1 code implementation • 12 Jul 2021 • Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler
In offline reinforcement learning, a policy needs to be learned from a single pre-collected dataset.
no code implementations • 4 May 2021 • Haoyu Ren, Darko Anicic, Thomas Runkler
Focusing on comprehensive networking, big data, and artificial intelligence, the Industrial Internet-of-Things (IIoT) facilitates efficiency and robustness in factory operations.
no code implementations • 15 Mar 2021 • Haoyu Ren, Darko Anicic, Thomas Runkler
The neural network is first trained using a large amount of pre-collected data on a powerful machine and then flashed to MCUs.
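Before flashing, the trained network is typically shrunk to fit MCU memory, most commonly via post-training quantization. This is a generic sketch of symmetric int8 quantization of one weight tensor, assuming nothing about the paper's actual tooling:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map float32 weights to int8
    plus one scale factor -- the kind of size reduction applied before
    flashing a network to a microcontroller (generic sketch)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.27, 0.02], dtype=np.float32)
q, s = quantize_int8(w)
print(q, np.max(np.abs(q * s - w)))  # 4x smaller weights, tiny reconstruction error
```

In practice frameworks also quantize activations and fuse operations, but the storage saving already comes from replacing each 32-bit float with an 8-bit integer.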
no code implementations • 1 Jan 2021 • Hiba Arnout, Johanna Bronner, Thomas Runkler
We show that our model outperforms state-of-the-art generative models, leading to a significant and consistent improvement in the quality of the generated time series while preserving the classes and the variation of the original dataset.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Yatin Chaudhary, Pankaj Gupta, Khushbu Saxena, Vivek Kulkarni, Thomas Runkler, Hinrich Schütze
Our work thus focuses on optimizing the computational cost of fine-tuning for document classification.
no code implementations • 12 Aug 2020 • Phillip Swazinna, Steffen Udluft, Thomas Runkler
State-of-the-art reinforcement learning algorithms mostly rely on being allowed to directly interact with their environment to collect millions of observations.
1 code implementation • ICML 2020 • Pankaj Gupta, Yatin Chaudhary, Thomas Runkler, Hinrich Schütze
To address this problem, we propose a lifelong learning framework for neural topic modeling that can continuously process streams of document collections, accumulate topics, and guide future topic modeling tasks by transferring knowledge from several sources to better deal with sparse data.
no code implementations • 23 Dec 2019 • Hiba Arnout, Johannes Kehrer, Johanna Bronner, Thomas Runkler
This is particularly true when parts of the training data have been artificially generated to overcome common training problems such as lack of data or imbalanced datasets.
no code implementations • 29 Sep 2019 • Yatin Chaudhary, Pankaj Gupta, Thomas Runkler
in topic modeling, (2) a novel lifelong learning mechanism within the neural topic modeling framework to demonstrate continuous learning on sequential document collections while minimizing catastrophic forgetting.
no code implementations • WS 2019 • Pankaj Gupta, Khushbu Saxena, Usama Yaseen, Thomas Runkler, Hinrich Schütze
To address the tasks of sentence (SLC) and fragment level (FLC) propaganda detection, we explore different neural architectures (e.g., CNN, LSTM-CRF and BERT) and extract linguistic (e.g., part-of-speech, named entity, readability, sentiment, emotion, etc.)
no code implementations • 10 Jul 2019 • Markus Kaiser, Clemens Otte, Thomas Runkler, Carl Henrik Ek
In this paper, we present a Bayesian view on model-based reinforcement learning.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning, +2
no code implementations • 16 Oct 2018 • Markus Kaiser, Clemens Otte, Thomas Runkler, Carl Henrik Ek
The data association problem is concerned with separating data coming from different generating processes, for example when data come from different data sources, contain significant noise, or exhibit multimodality.
1 code implementation • 11 Oct 2018 • Pankaj Gupta, Subburam Rajaram, Hinrich Schütze, Bernt Andrassy, Thomas Runkler
iDepNN models the shortest and augmented dependency paths via recurrent and recursive neural networks to extract relationships within (intra-) and across (inter-) sentence boundaries.
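The shortest dependency path between two entities is the backbone of this kind of relation extraction: a BFS over the (undirected) dependency tree between the two entity heads. A minimal stdlib sketch, with a hypothetical parse (the sentence, token indices, and arcs are made up for illustration):

```python
from collections import deque

def shortest_path(edges, src, dst):
    """BFS over an undirected dependency graph; returns the token-index
    path between two entity heads (the shortest dependency path)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                      # walk back through predecessors
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical parse of "Siemens, based in Munich, develops software."
tokens = ["Siemens", ",", "based", "in", "Munich", ",", "develops", "software"]
edges = [(6, 0), (0, 2), (2, 3), (3, 4), (6, 7)]  # (head, dependent) arcs
print([tokens[i] for i in shortest_path(edges, 0, 4)])
```

iDepNN then runs recurrent/recursive networks along such paths (and their augmented variants), rather than over the raw token sequence, to reach across sentence boundaries.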
Ranked #1 on Relation Extraction on MUC6
no code implementations • 10 Dec 2017 • Stefan Depeweg, José Miguel Hernández-Lobato, Steffen Udluft, Thomas Runkler
We derive a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty.
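The standard decomposition behind such an analysis splits an ensemble's predictive variance into an aleatoric part (mean of the per-model noise variances) and an epistemic part (variance of the per-model means); sensitivities are then taken with respect to each input. The sketch below illustrates that idea with central differences on a toy ensemble; it is not the paper's derivation, and all model names are assumptions.

```python
import numpy as np

def decompose(models, x):
    """Aleatoric (mean of noise variances) and epistemic (variance of
    means) uncertainty of an ensemble of Gaussian predictors."""
    means = np.array([m(x)[0] for m in models])
    noise = np.array([m(x)[1] for m in models])
    return noise.mean(), means.var()

def sensitivity(models, x, i, eps=1e-4):
    """Central-difference sensitivity of both uncertainty components
    with respect to input dimension i."""
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    (ap, ep), (am, em) = decompose(models, xp), decompose(models, xm)
    return (ap - am) / (2 * eps), (ep - em) / (2 * eps)

# Toy ensemble: each member returns (mean, noise variance); the mean
# depends on x[0], the noise level on x[1].
models = [lambda x, w=w: (w * x[0], x[1] ** 2) for w in (1.0, 2.0)]
x = np.array([1.0, 1.0])
print(sensitivity(models, x, 0))   # epistemic uncertainty reacts to x[0]
print(sensitivity(models, x, 1))   # aleatoric uncertainty reacts to x[1]
```

Separating the two sensitivities tells an engineer whether changing an input would reduce model disagreement (epistemic) or merely move into a noisier operating region (aleatoric).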
no code implementations • NeurIPS 2018 • Markus Kaiser, Clemens Otte, Thomas Runkler, Carl Henrik Ek
We apply the method to the real-world problem of finding common structure in the sensor data of wind turbines introduced by the underlying latent and turbulent wind field.
no code implementations • 19 Oct 2016 • Daniel Hein, Alexander Hentschel, Thomas Runkler, Steffen Udluft
To the best of our knowledge, this approach is the first to relate self-organizing fuzzy controllers to model-based batch RL.