no code implementations • 18 Jul 2024 • Elie Aljalbout, Nikolaos Sotirakis, Patrick van der Smagt, Maximilian Karl, Nutan Chen
Our results highlight the benefits of using language-driven task representations for world models and a clear advantage of model-based multi-task learning over the more common model-free paradigm.
no code implementations • 4 Apr 2024 • Yin Li, Qi Chen, Kai Wang, Meige Li, Liping Si, Yingwei Guo, Yu Xiong, Qixing Wang, Yang Qin, Ling Xu, Patrick van der Smagt, Jun Tang, Nutan Chen
Multi-modality magnetic resonance imaging data with various sequences facilitate the early diagnosis, tumor segmentation, and disease staging in the management of nasopharyngeal carcinoma (NPC).
no code implementations • 22 Mar 2024 • Nutan Chen, Elie Aljalbout, Botond Cseke, Patrick van der Smagt
This integration facilitates rapid adaptation to new tasks and optimizes the utilization of accumulated expertise by allowing robots to learn and generalize from demonstrated trajectories.
1 code implementation • 21 Mar 2024 • Xudong Sun, Carla Feistner, Alexej Gossmann, George Schwarz, Rao Muhammad Umer, Lisa Beer, Patrick Rockenschaub, Rahul Babu Shrestha, Armin Gruber, Nutan Chen, Sayedali Shetab Boushehri, Florian Buettner, Carsten Marr
DomainLab is a modular Python package for training user specified neural networks with composable regularization loss terms.
1 code implementation • 20 Mar 2024 • Xudong Sun, Nutan Chen, Alexej Gossmann, Yu Xing, Carla Feistner, Emilio Dorigatt, Felix Drost, Daniele Scarcella, Lisa Beer, Carsten Marr
We address the online combinatorial choice of weight multipliers for multi-objective optimization of many loss terms parameterized by neural networks, via a probabilistic graphical model (PGM) of the joint model-parameter and multiplier evolution process, with a hypervolume-based likelihood that promotes multi-objective descent.
1 code implementation • 21 Jan 2024 • Yin Li, Yu Xiong, Wenxin Fan, Kai Wang, Qingqing Yu, Liping Si, Patrick van der Smagt, Jun Tang, Nutan Chen
Enhancing patient adherence to maximize the benefit of allergen immunotherapy (AIT) plays a crucial role in the management of AIT.
no code implementations • 20 Sep 2022 • Wolfgang Kerzendorf, Nutan Chen, Jack O'Brien, Johannes Buchner, Patrick van der Smagt
Supernova spectral time series can be used to reconstruct a spatially resolved explosion model known as supernova tomography.
no code implementations • 13 Jun 2022 • Nutan Chen, Patrick van der Smagt, Botond Cseke
Auto-encoder models that preserve similarities in the data are a popular tool in representation learning.
no code implementations • 23 Feb 2022 • Nutan Chen, Djalel Benbouzid, Francesco Ferroni, Mathis Nitschke, Luciano Pinna, Patrick van der Smagt
We therefore consider a music-generating algorithm as a counterpart to a human musician, in a setting where reciprocal interplay is to lead to new experiences, both for the musician and the audience.
no code implementations • ICML 2020 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
The Euclidean metric is prevalent, but it has the drawback of ignoring information about the similarity of data stored in the decoder, as captured by the framework of Riemannian geometry.
no code implementations • 25 Sep 2019 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space.
no code implementations • 9 Sep 2019 • Nutan Chen, Göran Westling, Benoni B. Edin, Patrick van der Smagt
In addition, whereas previous work estimated the force of a single finger in an experimental environment, we extend the approach to multi-finger force estimation, which can be used for applications such as human grasping analysis.
no code implementations • 23 Aug 2019 • Alexej Klushyn, Nutan Chen, Botond Cseke, Justin Bayer, Patrick van der Smagt
We address the problem of one-to-many mappings in supervised learning, where a single instance has many different solutions of possibly equal cost.
no code implementations • NeurIPS 2019 • Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, Patrick van der Smagt
We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution.
no code implementations • 19 Dec 2018 • Nutan Chen, Francesco Ferroni, Alexej Klushyn, Alexandros Paraschos, Justin Bayer, Patrick van der Smagt
The length of the geodesic between two data points along a Riemannian manifold, induced by a deep generative model, yields a principled measure of similarity.
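The idea behind this similarity measure can be sketched numerically: the Riemannian length of a latent-space curve under the metric pulled back through the decoder equals the length of the decoded curve in observation space, so discretizing a latent path and summing decoded-point distances gives a simple approximation. The `decode` function below is a toy stand-in, not the paper's model.

```python
import numpy as np

# Hypothetical decoder: any smooth map from latent space to data space
# stands in for a trained deep generative model's decoder.
def decode(z):
    return np.array([np.sin(z[0]), np.cos(z[0]) * z[1], z[1] ** 2])

def curve_length(z_start, z_end, n_steps=200):
    """Approximate the Riemannian length of the straight latent-space
    path from z_start to z_end under the pullback metric: discretize
    the path and sum Euclidean distances between decoded points."""
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    xs = [decode((1 - t) * z_start + t * z_end) for t in ts]
    return sum(np.linalg.norm(b - a) for a, b in zip(xs[:-1], xs[1:]))

length = curve_length(np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```

A full geodesic computation would additionally optimize the path itself (the straight latent line is generally not the shortest decoded curve); this sketch only evaluates the length functional.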
no code implementations • 6 Aug 2018 • Nutan Chen, Alexej Klushyn, Alexandros Paraschos, Djalel Benbouzid, Patrick van der Smagt
It relies on the Jacobian of the likelihood to detect non-smooth transitions in the latent space, i.e., transitions that lead to abrupt changes in the movement of the robot.
no code implementations • 3 Nov 2017 • Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt
Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source---the latent space---to samples from a more complex distribution represented by a dataset.
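This pushforward view of neural samplers can be illustrated with a minimal sketch: draw samples from a simple random source and transform them with a deterministic map into samples from a more complex distribution. The `generate` function here is a hand-crafted toy (a noisy ring), not a trained VAE or GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generator: maps simple latent noise to samples from a
# more complex distribution (points scattered around a unit circle).
def generate(z):
    angle = 2 * np.pi * z[..., 0]          # first latent dim -> angle
    radius = 1.0 + 0.1 * z[..., 1]         # second latent dim -> radial noise
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=-1)

z = rng.standard_normal((1000, 2))  # the simple random source: the latent space
x = generate(z)                     # pushforward samples in data space
```

In a trained VAE or GAN the generator is a neural network fitted to a dataset, but the sampling mechanics are exactly these two lines: sample the latent prior, then decode.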
1 code implementation • 4 Nov 2013 • Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen, Sebastian Urban, Patrick van der Smagt
Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data.