Search Results for author: Gabriel Kalweit

Found 18 papers, 4 papers with code

CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous Cell Populations

no code implementations • 1 Dec 2023 • Mehdi Naouar, Gabriel Kalweit, Anusha Klett, Yannick Vogt, Paula Silvestrini, Diana Laura Infante Ramirez, Roland Mertelsmann, Joschka Boedecker, Maria Kalweit

In recent years, several unsupervised cell segmentation methods have been presented that aim to remove the need for laborious pixel-level annotations when training a cell segmentation model.

Cell Segmentation · Instance Segmentation +2

Stable Online and Offline Reinforcement Learning for Antibody CDRH3 Design

no code implementations • 29 Nov 2023 • Yannick Vogt, Mehdi Naouar, Maria Kalweit, Christoph Cornelius Miething, Justus Duyster, Roland Mertelsmann, Gabriel Kalweit, Joschka Boedecker

The field of antibody-based therapeutics has grown significantly in recent years, with targeted antibodies emerging as a potentially effective approach to personalized therapies.

reinforcement-learning

Multi-intention Inverse Q-learning for Interpretable Behavior Representation

no code implementations • 23 Nov 2023 • Hao Zhu, Brice De La Crompe, Gabriel Kalweit, Artur Schneider, Maria Kalweit, Ilka Diester, Joschka Boedecker

In advancing the understanding of decision-making processes, Inverse Reinforcement Learning (IRL) has proven instrumental in reconstructing animals' multiple intentions amidst complex behaviors.

Decision Making · Q-Learning

Robust Tumor Detection from Coarse Annotations via Multi-Magnification Ensembles

no code implementations • 29 Mar 2023 • Mehdi Naouar, Gabriel Kalweit, Ignacio Mastroleo, Philipp Poxleitner, Marc Metzger, Joschka Boedecker, Maria Kalweit

In this work, we put the focus back on tumor localization in the form of a patch-level classification task and take up the setting of so-called coarse annotations, which provide greater training supervision while remaining feasible from a clinical standpoint.

Multiple Instance Learning · whole slide images

Latent Plans for Task-Agnostic Offline Reinforcement Learning

1 code implementation • 19 Sep 2022 • Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard

Concretely, we combine a low-level policy that learns latent skills via imitation learning and a high-level policy learned from offline reinforcement learning for skill-chaining the latent behavior priors.

Imitation Learning · reinforcement-learning +1
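The two-level structure described in the snippet — a high-level policy that outputs a latent plan and a low-level policy that decodes it into actions — can be sketched in miniature. All function names, shapes, and the toy computations below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def high_level_policy(state):
    """Stand-in for the offline-RL policy: pick a latent skill vector z."""
    return rng.standard_normal(4)

def low_level_policy(state, z):
    """Stand-in for the imitation-learned decoder: map (state, z) to an action."""
    return np.tanh(z[:2] + 0.1 * state)

state = np.zeros(2)
action = low_level_policy(state, high_level_policy(state))  # 2-D action in [-1, 1]
```

Chaining several such latent plans in sequence is what the snippet calls skill-chaining of the latent behavior priors.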

NeuRL: Closed-form Inverse Reinforcement Learning for Neural Decoding

no code implementations • 10 Apr 2022 • Gabriel Kalweit, Maria Kalweit, Mansour Alyahyay, Zoe Jaeckel, Florian Steenbergen, Stefanie Hardung, Thomas Brox, Ilka Diester, Joschka Boedecker

However, since generally there is a strong connection between learning of subjects and their expectations on long-term rewards, we propose NeuRL, an inverse reinforcement learning approach that (1) extracts an intrinsic reward function from collected trajectories of a subject in closed form, (2) maps neural signals to this intrinsic reward to account for long-term dependencies in the behavior and (3) predicts the simulated behavior for unseen neural signals by extracting Q-values and the corresponding Boltzmann policy based on the intrinsic reward values for these unseen neural signals.

reinforcement-learning · Reinforcement Learning (RL)
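Step (3) of the snippet — turning Q-values into a Boltzmann policy — can be illustrated with a minimal sketch. The temperature `beta` and the example Q-values are assumptions for illustration, not values from the paper:

```python
import numpy as np

def boltzmann_policy(q_values, beta=1.0):
    """Softmax over Q-values: pi(a|s) proportional to exp(beta * Q(s, a)).

    Subtracting the max before exponentiating keeps the computation
    numerically stable without changing the resulting distribution.
    """
    z = beta * (q_values - np.max(q_values))
    p = np.exp(z)
    return p / p.sum()

probs = boltzmann_policy(np.array([1.0, 2.0, 0.5]))  # action with highest Q gets highest probability
```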

Affordance Learning from Play for Sample-Efficient Policy Learning

1 code implementation • 1 Mar 2022 • Jessica Borja-Diaz, Oier Mees, Gabriel Kalweit, Lukas Hermann, Joschka Boedecker, Wolfram Burgard

Robots operating in human-centered environments should have the ability to understand how objects function: what can be done with each object, where this interaction may occur, and how the object is used to achieve a goal.

Motion Planning · Object +1

Robust and Data-efficient Q-learning by Composite Value-estimation

no code implementations • 29 Sep 2021 • Gabriel Kalweit, Maria Kalweit, Joschka Boedecker

In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control.

Q-Learning

Amortized Q-learning with Model-based Action Proposals for Autonomous Driving on Highways

no code implementations • 6 Dec 2020 • Branka Mirchevska, Maria Hügle, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

Well-established optimization-based methods can guarantee an optimal trajectory for a short optimization horizon, typically no longer than a few seconds.

Autonomous Driving · Decision Making +2

Deep Surrogate Q-Learning for Autonomous Driving

no code implementations • 21 Oct 2020 • Maria Kalweit, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

Challenging problems for the application of deep reinforcement learning systems on real systems are their adaptivity to changing environments and their efficiency w.r.t.

Autonomous Driving · Q-Learning

A Dynamic Deep Neural Network For Multimodal Clinical Data Analysis

no code implementations • 14 Aug 2020 • Maria Hügle, Gabriel Kalweit, Thomas Huegle, Joschka Boedecker

Clinical data from electronic medical records, registries or trials provide a large source of information to apply machine learning methods in order to foster precision medicine, e.g. by finding new disease phenotypes or performing individual disease prediction.

BIG-bench Machine Learning · Disease Prediction +1

Deep Inverse Q-learning with Constraints

2 code implementations • NeurIPS 2020 • Gabriel Kalweit, Maria Huegle, Moritz Werling, Joschka Boedecker

In this work, we introduce a novel class of algorithms that only needs to solve the MDP underlying the demonstrated behavior once to recover the expert policy.

Q-Learning
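One identity that makes such closed-form inversion tractable can be checked directly: under a Boltzmann expert (temperature 1 assumed here), differences of Q-values are recovered exactly from observed action probabilities. This toy example illustrates the underlying identity only, not the paper's algorithm:

```python
import numpy as np

# Hypothetical expert Q-values for a single state.
q = np.array([1.0, 2.0, 0.5])
pi = np.exp(q) / np.exp(q).sum()  # Boltzmann expert policy

# From the observed action probabilities alone, Q-value *differences*
# follow in closed form: log pi(a) - log pi(b) = Q(a) - Q(b).
recovered = np.log(pi[0]) - np.log(pi[1])
```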

Deep Constrained Q-learning

no code implementations • 20 Mar 2020 • Gabriel Kalweit, Maria Huegle, Moritz Werling, Joschka Boedecker

We analyze the advantages of Constrained Q-learning in the tabular case and compare Constrained DQN to reward shaping and Lagrangian methods in the application of high-level decision making in autonomous driving, considering constraints for safety, keeping right and comfort.

Autonomous Driving · Decision Making +3
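The core mechanism the snippet compares against reward shaping and Lagrangian methods — restricting the Q-learning maximization to constraint-satisfying actions — can be sketched in the tabular case. The state/action sizes, mask, and hyperparameters below are illustrative assumptions:

```python
import numpy as np

def constrained_q_update(Q, s, a, r, s_next, safe_mask, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step that bootstraps only over safe actions.

    safe_mask[i] is True if action i satisfies the constraints in s_next;
    unsafe actions are masked out of the max with -inf.
    """
    safe_q = np.where(safe_mask, Q[s_next], -np.inf)
    target = r + gamma * np.max(safe_q)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((2, 3))
Q[1] = [5.0, 1.0, 2.0]                # action 0 looks best in s' = 1 ...
mask = np.array([False, True, True])  # ... but violates a constraint
Q = constrained_q_update(Q, s=0, a=0, r=1.0, s_next=1, safe_mask=mask)
```

Masking the bootstrap target this way bakes the constraints into the learned Q-function itself, rather than penalizing violations through the reward.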

Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video

1 code implementation • 21 Oct 2019 • Oier Mees, Markus Merklinger, Gabriel Kalweit, Wolfram Burgard

Our method learns a general skill embedding independently from the task context by using an adversarial loss.

Continuous Control · Metric Learning +4

Dynamic Interaction-Aware Scene Understanding for Reinforcement Learning in Autonomous Driving

no code implementations • 30 Sep 2019 • Maria Huegle, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

The common pipeline in autonomous driving systems is highly modular and includes a perception component which extracts lists of surrounding objects and passes these lists to a high-level decision component.

Autonomous Driving · Decision Making +3

Composite Q-learning: Multi-scale Q-function Decomposition and Separable Optimization

no code implementations • 30 Sep 2019 • Gabriel Kalweit, Maria Huegle, Joschka Boedecker

We prove that the combination of these short- and long-term predictions is a representation of the full return, leading to the Composite Q-learning algorithm.

Q-Learning
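The decomposition claimed in the snippet — short-horizon predictions combined with a discounted long-term tail reconstruct the full return — can be checked numerically. The rewards, discount, and split horizon below are made-up illustration values:

```python
def discounted_return(rewards, gamma):
    """Full discounted return: sum of gamma^k * r_k."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

def composite_return(rewards, gamma, n):
    """Split the return at horizon n: n-step short-term part plus discounted tail."""
    short = discounted_return(rewards[:n], gamma)
    tail = discounted_return(rewards[n:], gamma)
    return short + gamma ** n * tail

rewards = [1.0, 0.5, 0.0, 2.0, 1.0]
full = discounted_return(rewards, 0.9)
split = composite_return(rewards, 0.9, 2)  # identical to the full return
```

In the actual algorithm the two parts are estimated by separate Q-functions at different time scales; this sketch only verifies that the split itself is exact.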

Off-policy Multi-step Q-learning

no code implementations • 25 Sep 2019 • Gabriel Kalweit, Maria Huegle, Joschka Boedecker

In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control.

Q-Learning

Dynamic Input for Deep Reinforcement Learning in Autonomous Driving

no code implementations • 25 Jul 2019 • Maria Huegle, Gabriel Kalweit, Branka Mirchevska, Moritz Werling, Joschka Boedecker

In many real-world decision making problems, reaching an optimal decision requires taking into account a variable number of objects around the agent.

Autonomous Driving · Decision Making +2
