Search Results for author: Joschka Boedecker

Found 43 papers, 13 papers with code

Hierarchical Insights: Exploiting Structural Similarities for Reliable 3D Semantic Segmentation

no code implementations • 9 Apr 2024 • Mariella Dreissig, Florian Piewak, Joschka Boedecker

Safety-critical applications like autonomous driving call for robust 3D environment perception algorithms which can withstand highly diverse and ambiguous surroundings.

3D Semantic Segmentation · Autonomous Driving · +2

CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous Cell Populations

no code implementations • 1 Dec 2023 • Mehdi Naouar, Gabriel Kalweit, Anusha Klett, Yannick Vogt, Paula Silvestrini, Diana Laura Infante Ramirez, Roland Mertelsmann, Joschka Boedecker, Maria Kalweit

In recent years, several unsupervised cell segmentation methods have been presented that aim to remove the need for laborious pixel-level annotations when training a cell segmentation model.

Cell Segmentation · Instance Segmentation · +2

Stable Online and Offline Reinforcement Learning for Antibody CDRH3 Design

no code implementations • 29 Nov 2023 • Yannick Vogt, Mehdi Naouar, Maria Kalweit, Christoph Cornelius Miething, Justus Duyster, Roland Mertelsmann, Gabriel Kalweit, Joschka Boedecker

The field of antibody-based therapeutics has grown significantly in recent years, with targeted antibodies emerging as a potentially effective approach to personalized therapies.

reinforcement-learning

Multi-intention Inverse Q-learning for Interpretable Behavior Representation

no code implementations • 23 Nov 2023 • Hao Zhu, Brice De La Crompe, Gabriel Kalweit, Artur Schneider, Maria Kalweit, Ilka Diester, Joschka Boedecker

In advancing the understanding of decision-making processes, Inverse Reinforcement Learning (IRL) has proven instrumental in reconstructing animals' multiple intentions amidst complex behaviors.

Decision Making · Q-Learning

On the Calibration of Uncertainty Estimation in LiDAR-based Semantic Segmentation

no code implementations • 4 Aug 2023 • Mariella Dreissig, Florian Piewak, Joschka Boedecker

We propose a metric to measure the confidence calibration quality of a semantic segmentation model with respect to individual classes.

Autonomous Driving · Segmentation · +1
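The snippet above does not spell the metric out. As a hedged illustration only — a class-wise variant of the standard expected calibration error (ECE), not necessarily the paper's exact metric — per-class calibration could be measured like this:

```python
import numpy as np

def classwise_ece(confidences, predictions, labels, class_id, n_bins=10):
    """Expected calibration error restricted to pixels predicted as `class_id`.

    confidences: (N,) max softmax scores; predictions, labels: (N,) class ids.
    Bins confidences, then averages |accuracy - mean confidence| per bin,
    weighted by the bin's share of pixels.
    """
    mask = predictions == class_id
    conf, correct = confidences[mask], (predictions[mask] == labels[mask])
    if conf.size == 0:
        return 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```

A perfectly calibrated class scores 0; a class predicted with confidence 1.0 but only 50% accuracy scores 0.5.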

The Treachery of Images: Bayesian Scene Keypoints for Deep Policy Learning in Robotic Manipulation

1 code implementation • 8 May 2023 • Jan Ole von Hartz, Eugenio Chisari, Tim Welschehold, Wolfram Burgard, Joschka Boedecker, Abhinav Valada

We employ our method to learn challenging multi-object robot manipulation tasks from wrist camera observations and demonstrate superior utility for policy learning compared to other representation learning techniques.

Representation Learning · Robot Manipulation

Survey on LiDAR Perception in Adverse Weather Conditions

no code implementations • 13 Apr 2023 • Mariella Dreissig, Dominik Scheuble, Florian Piewak, Joschka Boedecker

The active LiDAR sensor is able to create an accurate 3D representation of a scene, making it a valuable addition to environment perception for autonomous vehicles.

Autonomous Vehicles · Denoising · +1

Robust Tumor Detection from Coarse Annotations via Multi-Magnification Ensembles

no code implementations • 29 Mar 2023 • Mehdi Naouar, Gabriel Kalweit, Ignacio Mastroleo, Philipp Poxleitner, Marc Metzger, Joschka Boedecker, Maria Kalweit

In this work, we put the focus back on tumor localization in form of a patch-level classification task and take up the setting of so-called coarse annotations, which provide greater training supervision while remaining feasible from a clinical standpoint.

Multiple Instance Learning · whole slide images

Latent Plans for Task-Agnostic Offline Reinforcement Learning

1 code implementation • 19 Sep 2022 • Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard

Concretely, we combine a low-level policy that learns latent skills via imitation learning and a high-level policy learned from offline reinforcement learning for skill-chaining the latent behavior priors.

Imitation Learning · reinforcement-learning · +1

Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization

1 code implementation • 5 Jul 2022 • Yuan Zhang, Jianhong Wang, Joschka Boedecker

To deal with unknown uncertainty sets, we further propose a novel adversarial approach to generate them based on the value function.

Continuous Control · reinforcement-learning · +1

NeuRL: Closed-form Inverse Reinforcement Learning for Neural Decoding

no code implementations • 10 Apr 2022 • Gabriel Kalweit, Maria Kalweit, Mansour Alyahyay, Zoe Jaeckel, Florian Steenbergen, Stefanie Hardung, Thomas Brox, Ilka Diester, Joschka Boedecker

However, since generally there is a strong connection between learning of subjects and their expectations on long-term rewards, we propose NeuRL, an inverse reinforcement learning approach that (1) extracts an intrinsic reward function from collected trajectories of a subject in closed form, (2) maps neural signals to this intrinsic reward to account for long-term dependencies in the behavior and (3) predicts the simulated behavior for unseen neural signals by extracting Q-values and the corresponding Boltzmann policy based on the intrinsic reward values for these unseen neural signals.

reinforcement-learning · Reinforcement Learning (RL)
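Step (3) of the abstract turns Q-values into a Boltzmann policy. A minimal sketch of that standard construction (the temperature parameter is an illustrative assumption, not a detail taken from the paper):

```python
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """Softmax (Boltzmann) distribution over actions given Q-values.

    Higher-valued actions receive exponentially more probability mass;
    the temperature controls how close the policy is to greedy.
    """
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()            # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()
```

For equal Q-values the policy is uniform; as the temperature approaches zero it concentrates on the argmax action.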

Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning

no code implementations • 21 Mar 2022 • Branka Mirchevska, Moritz Werling, Joschka Boedecker

Implementing an autonomous vehicle that is able to output feasible, smooth and efficient trajectories is a long-standing challenge.

Autonomous Driving · Offline RL · +2

Affordance Learning from Play for Sample-Efficient Policy Learning

1 code implementation • 1 Mar 2022 • Jessica Borja-Diaz, Oier Mees, Gabriel Kalweit, Lukas Hermann, Joschka Boedecker, Wolfram Burgard

Robots operating in human-centered environments should have the ability to understand how objects function: what can be done with each object, where this interaction may occur, and how the object is used to achieve a goal.

Motion Planning · Object · +1

Robust and Data-efficient Q-learning by Composite Value-estimation

no code implementations • 29 Sep 2021 • Gabriel Kalweit, Maria Kalweit, Joschka Boedecker

In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control.

Q-Learning

Residual Feedback Learning for Contact-Rich Manipulation Tasks with Uncertainty

no code implementations • 8 Jun 2021 • Alireza Ranjbar, Ngo Anh Vien, Hanna Ziesche, Joschka Boedecker, Gerhard Neumann

We propose a new formulation that addresses these limitations by also modifying the feedback signals to the controller with an RL policy and show superior performance of our approach on a contact-rich peg-insertion task under position and orientation uncertainty.

Position · Reinforcement Learning (RL)

Amortized Q-learning with Model-based Action Proposals for Autonomous Driving on Highways

no code implementations • 6 Dec 2020 • Branka Mirchevska, Maria Hügle, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

Well-established optimization-based methods can guarantee an optimal trajectory for a short optimization horizon, typically no longer than a few seconds.

Autonomous Driving · Decision Making · +2

Deep Surrogate Q-Learning for Autonomous Driving

no code implementations • 21 Oct 2020 • Maria Kalweit, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

Key challenges for deep reinforcement learning systems applied to real systems are their adaptivity to changing environments and their efficiency w.r.t.

Autonomous Driving · Q-Learning

A Dynamic Deep Neural Network For Multimodal Clinical Data Analysis

no code implementations • 14 Aug 2020 • Maria Hügle, Gabriel Kalweit, Thomas Huegle, Joschka Boedecker

Clinical data from electronic medical records, registries or trials provide a large source of information to apply machine learning methods in order to foster precision medicine, e.g. by finding new disease phenotypes or performing individual disease prediction.

BIG-bench Machine Learning · Disease Prediction · +1

Deep Inverse Q-learning with Constraints

2 code implementations • NeurIPS 2020 • Gabriel Kalweit, Maria Huegle, Moritz Werling, Joschka Boedecker

In this work, we introduce a novel class of algorithms that only needs to solve the MDP underlying the demonstrated behavior once to recover the expert policy.

Q-Learning

Deep Constrained Q-learning

no code implementations • 20 Mar 2020 • Gabriel Kalweit, Maria Huegle, Moritz Werling, Joschka Boedecker

We analyze the advantages of Constrained Q-learning in the tabular case and compare Constrained DQN to reward shaping and Lagrangian methods in the application of high-level decision making in autonomous driving, considering constraints for safety, keeping right and comfort.

Autonomous Driving · Decision Making · +3
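As a hedged illustration of the constrained-action idea only — a minimal action-masking sketch assuming a safe action set is given, not the paper's full Constrained Q-learning algorithm:

```python
import numpy as np

def constrained_greedy_action(q_values, safe_mask):
    """Greedy action selection restricted to a set of admissible actions.

    q_values:  (n_actions,) estimated action values.
    safe_mask: (n_actions,) booleans marking actions that satisfy the
               constraints (e.g. safety, keep-right, comfort).
    Unsafe actions are excluded by assigning them -inf before the argmax.
    """
    q = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(q))
```

Unlike reward shaping or Lagrangian penalties, masking excludes constraint-violating actions outright rather than merely discouraging them.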

Machine-Learning-Based Diagnostics of EEG Pathology

1 code implementation • 11 Feb 2020 • Lukas Alexander Wilhelm Gemein, Robin Tibor Schirrmeister, Patryk Chrabąszcz, Daniel Wilson, Joschka Boedecker, Andreas Schulze-Bonhage, Frank Hutter, Tonio Ball

The results demonstrate that the proposed feature-based decoding framework can achieve accuracies on the same level as state-of-the-art deep neural networks.

BIG-bench Machine Learning · EEG

Composite Q-learning: Multi-scale Q-function Decomposition and Separable Optimization

no code implementations • 30 Sep 2019 • Gabriel Kalweit, Maria Huegle, Joschka Boedecker

We prove that the combination of these short- and long-term predictions is a representation of the full return, leading to the Composite Q-learning algorithm.

Q-Learning
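The quoted claim — that short- and long-term predictions together represent the full return — can be illustrated with a generic n-step decomposition (a sketch under that standard identity, not the paper's exact multi-scale estimator):

```python
import numpy as np

def composite_target(rewards, gamma, tail_value):
    """Combine a short-horizon reward prediction with a long-term
    bootstrap value into one full-return target:

        G = sum_{t=0}^{n-1} gamma^t * r_t  +  gamma^n * tail_value

    rewards:    list of the first n predicted rewards (short-term part).
    tail_value: estimate of the discounted return after step n (long-term part).
    """
    n = len(rewards)
    discounts = gamma ** np.arange(n)
    return float(np.dot(discounts, rewards) + gamma ** n * tail_value)
```

With rewards [1, 1], gamma 0.5 and a tail estimate of 4, the composite target is 1 + 0.5 + 0.25 * 4 = 2.5.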

Dynamic Interaction-Aware Scene Understanding for Reinforcement Learning in Autonomous Driving

no code implementations • 30 Sep 2019 • Maria Huegle, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

The common pipeline in autonomous driving systems is highly modular and includes a perception component which extracts lists of surrounding objects and passes these lists to a high-level decision component.

Autonomous Driving · Decision Making · +3

Off-policy Multi-step Q-learning

no code implementations • 25 Sep 2019 • Gabriel Kalweit, Maria Huegle, Joschka Boedecker

In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control.

Q-Learning

Dynamic Input for Deep Reinforcement Learning in Autonomous Driving

no code implementations • 25 Jul 2019 • Maria Huegle, Gabriel Kalweit, Branka Mirchevska, Moritz Werling, Joschka Boedecker

In many real-world decision making problems, reaching an optimal decision requires taking into account a variable number of objects around the agent.

Autonomous Driving · Decision Making · +2

Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning

1 code implementation • 27 Jun 2019 • Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause

We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints.

Model Predictive Control · reinforcement-learning · +2

Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration

no code implementations • 18 Mar 2019 • Jingwei Zhang, Niklas Wetzel, Nicolai Dorka, Joschka Boedecker, Wolfram Burgard

Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration.

Compact representations and pruning in residual networks

no code implementations • 20 Oct 2018 • Fereshteh Lagzi, Tonio Ball, Joschka Boedecker

This criterion is based on the convergence of the neural dynamics in the last two successive layers of the residual block.

Early Seizure Detection with an Energy-Efficient Convolutional Neural Network on an Implantable Microcontroller

no code implementations • 12 Jun 2018 • Maria Hügle, Simon Heller, Manuel Watter, Manuel Blum, Farrokh Manzouri, Matthias Dümpelmann, Andreas Schulze-Bonhage, Peter Woias, Joschka Boedecker

Most approaches for early seizure detection in the literature are, however, not optimized for implementation on ultra-low power microcontrollers required for long-term implantation.

EEG · Seizure Detection

VR-Goggles for Robots: Real-to-sim Domain Adaptation for Visual Control

no code implementations • 1 Feb 2018 • Jingwei Zhang, Lei Tai, Peng Yun, Yufeng Xiong, Ming Liu, Joschka Boedecker, Wolfram Burgard

In this paper, we deal with the reality gap from a novel perspective, targeting transferring Deep Reinforcement Learning (DRL) policies learned in simulated environments to the real-world domain for visual control tasks.

Domain Adaptation · Style Transfer

Neural SLAM: Learning to Explore with External Memory

1 code implementation • 29 Jun 2017 • Jingwei Zhang, Lei Tai, Ming Liu, Joschka Boedecker, Wolfram Burgard

We present an approach for agents to learn representations of a global map from sensor data, to aid their exploration in new environments.

Reinforcement Learning (RL) · Simultaneous Localization and Mapping

A Survey of Deep Network Solutions for Learning Control in Robotics: From Reinforcement to Imitation

1 code implementation • 21 Dec 2016 • Lei Tai, Jingwei Zhang, Ming Liu, Joschka Boedecker, Wolfram Burgard

We carry out our discussions on the two main paradigms for learning control with deep networks: deep reinforcement learning and imitation learning.

Imitation Learning · reinforcement-learning · +1

Deep Reinforcement Learning with Successor Features for Navigation across Similar Environments

no code implementations • 16 Dec 2016 • Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, Wolfram Burgard

We propose a successor feature based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances.

reinforcement-learning · Reinforcement Learning (RL) · +1
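Successor-feature transfer of the kind the snippet describes rests on the standard decomposition Q(s, a) = ψ(s, a) · w, where only the reward weights w change across tasks. A minimal sketch (the shapes and names are illustrative, not taken from the paper):

```python
import numpy as np

def q_from_successor_features(psi, w):
    """Q-values via the successor-feature decomposition Q(s, a) = psi(s, a) . w.

    psi: (n_actions, d) expected discounted feature occupancies for one state.
    w:   (d,) reward weights of the current task.
    Transferring to a new navigation task reuses psi and only swaps in
    new weights w, instead of relearning Q from scratch.
    """
    return np.asarray(psi) @ np.asarray(w)
```

Fitting a new w (e.g. by regressing observed rewards onto the features) is typically much cheaper than relearning the full Q-function.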

Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images

1 code implementation NeurIPS 2015 Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller

We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images.

Guided Self-Organization of Input-Driven Recurrent Neural Networks

no code implementations • 6 Sep 2013 • Oliver Obst, Joschka Boedecker

We review attempts that have been made towards understanding the computational properties and mechanisms of input-driven dynamical systems like RNNs, and reservoir computing networks in particular.
