Search Results for author: Wolfram Burgard

Found 106 papers, 46 papers with code

BEVCar: Camera-Radar Fusion for BEV Map and Object Segmentation

no code implementations18 Mar 2024 Jonas Schramm, Niclas Vödisch, Kürsat Petek, B Ravi Kiran, Senthil Yogamani, Wolfram Burgard, Abhinav Valada

Semantic scene segmentation from a bird's-eye-view (BEV) perspective plays a crucial role in facilitating planning and decision-making for mobile robots.

Decision Making Scene Segmentation +1

Single-Agent Actor Critic for Decentralized Cooperative Driving

no code implementations18 Mar 2024 Shengchao Yan, Lukas König, Wolfram Burgard

To bridge this gap and advance the field of active traffic management towards greater decentralization, we introduce a novel asymmetric actor-critic model aimed at learning decentralized cooperative driving policies for autonomous vehicles using single-agent reinforcement learning.

Autonomous Vehicles Management

CenterGrasp: Object-Aware Implicit Representation Learning for Simultaneous Shape Reconstruction and 6-DoF Grasp Estimation

1 code implementation13 Dec 2023 Eugenio Chisari, Nick Heppert, Tim Welschehold, Wolfram Burgard, Abhinav Valada

It consists of an RGB-D image encoder that leverages recent advances to detect objects and infer their pose and latent code, and a decoder to predict shape and grasps for each object in the scene.

Object Pose Estimation +2

Robot Skill Generalization via Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models

no code implementations23 Oct 2023 Iman Nematollahi, Kirill Yankov, Wolfram Burgard, Tim Welschehold

A long-standing challenge for a robotic manipulation system operating in real-world scenarios is adapting and generalizing its acquired motor skills to unseen environments.

Skill Generalization Zero-shot Generalization

Few-Shot Panoptic Segmentation With Foundation Models

1 code implementation19 Sep 2023 Markus Käppeler, Kürsat Petek, Niclas Vödisch, Wolfram Burgard, Abhinav Valada

Concurrently, recent breakthroughs in visual representation learning have sparked a paradigm shift leading to the advent of large foundation models that can be trained with completely unlabeled images.

Panoptic Segmentation Representation Learning +1

A Smart Robotic System for Industrial Plant Supervision

no code implementations10 Aug 2023 D. Adriana Gómez-Rosal, Max Bergau, Georg K. J. Fischer, Andreas Wachaja, Johannes Gräter, Matthias Odenweller, Uwe Piechottka, Fabian Hoeflinger, Nikhil Gosala, Niklas Wetzel, Daniel Büscher, Abhinav Valada, Wolfram Burgard

In today's chemical plants, human field operators perform frequent integrity checks to guarantee high safety standards, and thus are possibly the first to encounter dangerous operating conditions.

Navigate

AutoGraph: Predicting Lane Graphs from Traffic Observations

1 code implementation27 Jun 2023 Jannik Zürn, Ingmar Posner, Wolfram Burgard

To overcome this limitation, we propose to use the motion patterns of traffic participants as lane graph annotations.

Autonomous Driving

End-to-end 2D-3D Registration between Image and LiDAR Point Cloud for Vehicle Localization

no code implementations20 Jun 2023 Guangming Wang, Yu Zheng, Yanfeng Guo, Zhe Liu, Yixiang Zhu, Wolfram Burgard, Hesheng Wang

A popular approach to robot localization is based on image-to-point cloud registration, which combines illumination-invariant LiDAR-based mapping with economical image-based localization.

Image-Based Localization Image to Point Cloud Registration

The Treachery of Images: Bayesian Scene Keypoints for Deep Policy Learning in Robotic Manipulation

1 code implementation8 May 2023 Jan Ole von Hartz, Eugenio Chisari, Tim Welschehold, Wolfram Burgard, Joschka Boedecker, Abhinav Valada

We employ our method to learn challenging multi-object robot manipulation tasks from wrist camera observations and demonstrate superior utility for policy learning compared to other representation learning techniques.

Representation Learning Robot Manipulation

Improving Deep Dynamics Models for Autonomous Vehicles with Multimodal Latent Mapping of Surfaces

no code implementations21 Mar 2023 Johan Vertens, Nicolai Dorka, Tim Welschehold, Michael Thompson, Wolfram Burgard

By training everything end-to-end with the loss of the dynamics model, we enforce the latent mapper to learn an update rule for the latent map that is useful for the subsequent dynamics model.

Autonomous Vehicles

Dynamic Update-to-Data Ratio: Minimizing World Model Overfitting

1 code implementation17 Mar 2023 Nicolai Dorka, Tim Welschehold, Wolfram Burgard

Early stopping based on the validation set performance is a popular approach to find the right balance between under- and overfitting in the context of supervised learning.

Model-based Reinforcement Learning reinforcement-learning +1
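
The abstract refers to validation-based early stopping from supervised learning; below is a minimal sketch of that generic mechanism (assuming a PyTorch-style model and user-supplied `train_one_epoch` / `evaluate` callables), not the paper's dynamic update-to-data-ratio method.

```python
def train_with_early_stopping(model, train_one_epoch, evaluate, patience=5, max_epochs=200):
    """Stop training once validation performance stops improving for `patience` epochs."""
    best_score = float("-inf")
    best_state = None
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)
        score = evaluate(model)  # e.g. validation accuracy (higher is better)
        if score > best_score:
            best_score = score
            # keep a copy of the best weights seen so far
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # stop before the model overfits the training data

    if best_state is not None:
        model.load_state_dict(best_state)
    return best_score
```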

Audio Visual Language Maps for Robot Navigation

no code implementations13 Mar 2023 Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

While interacting in the world is a multi-sensory experience, many robots continue to predominantly rely on visual perception to map and navigate in their environments.

Navigate Robot Navigation

EvCenterNet: Uncertainty Estimation for Object Detection using Evidential Learning

no code implementations6 Mar 2023 Monish R. Nallapareddy, Kshitij Sirohi, Paulo L. J. Drews-Jr, Wolfram Burgard, Chih-Hong Cheng, Abhinav Valada

In this work, we propose EvCenterNet, a novel uncertainty-aware 2D object detection framework using evidential learning to directly estimate both classification and regression uncertainties.

Decision Making object-detection +2

Learning and Aggregating Lane Graphs for Urban Automated Driving

no code implementations CVPR 2023 Martin Büchner, Jannik Zürn, Ion-George Todoran, Abhinav Valada, Wolfram Burgard

To overcome these challenges, we propose a novel bottom-up approach to lane graph estimation from aerial imagery that aggregates multiple overlapping graphs into a single consistent graph.

SkyEye: Self-Supervised Bird's-Eye-View Semantic Mapping Using Monocular Frontal View Images

no code implementations CVPR 2023 Nikhil Gosala, Kürsat Petek, Paulo L. J. Drews-Jr, Wolfram Burgard, Abhinav Valada

Implicit supervision trains the model by enforcing spatial consistency of the scene over time based on FV semantic sequences, while explicit supervision exploits BEV pseudolabels generated from FV semantic annotations and self-supervised depth estimates.

Decision Making

Visual Language Maps for Robot Navigation

1 code implementation11 Oct 2022 Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

Grounding language to the visual observations of a navigating agent can be performed using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image captions).

3D Reconstruction Image Captioning +1
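
As a rough illustration of grounding language in such a map, the sketch below scores a text query against precomputed per-cell visual-language embeddings via cosine similarity; the array names and shapes are assumptions for illustration only, not the released VLMaps pipeline.

```python
import numpy as np

def best_matching_cell(text_embedding, map_embeddings):
    """text_embedding: (D,) query feature; map_embeddings: (H, W, D) per-cell image features."""
    feats = map_embeddings / np.linalg.norm(map_embeddings, axis=-1, keepdims=True)
    query = text_embedding / np.linalg.norm(text_embedding)
    similarity = feats @ query  # cosine similarity per map cell, shape (H, W)
    return np.unravel_index(np.argmax(similarity), similarity.shape)
```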

Uncertainty-aware LiDAR Panoptic Segmentation

1 code implementation10 Oct 2022 Kshitij Sirohi, Sajad Marvi, Daniel Büscher, Wolfram Burgard

Current learning-based methods typically try to achieve maximum performance for this task, while neglecting a proper estimation of the associated uncertainties.

Autonomous Driving Panoptic Segmentation +2

Grounding Language with Visual Affordances over Unstructured Data

1 code implementation4 Oct 2022 Oier Mees, Jessica Borja-Diaz, Wolfram Burgard

Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills.

Avg. sequence length Success Rate (5 task-horizon)

Latent Plans for Task-Agnostic Offline Reinforcement Learning

1 code implementation19 Sep 2022 Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard

Concretely, we combine a low-level policy that learns latent skills via imitation learning and a high-level policy learned from offline reinforcement learning for skill-chaining the latent behavior priors.

Imitation Learning reinforcement-learning +1
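
A minimal sketch of the hierarchical inference described above, with a high-level policy selecting a latent plan and a low-level policy decoding it into actions; the function names and interfaces are hypothetical, not the authors' released code.

```python
def act(observation, goal, high_level_policy, low_level_policy):
    # High level: choose a latent skill / behavior prior conditioned on the goal.
    latent_plan = high_level_policy(observation, goal)
    # Low level: decode the latent skill into a motor command for the current state.
    return low_level_policy(observation, latent_plan)
```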

T3VIP: Transformation-based 3D Video Prediction

1 code implementation19 Sep 2022 Iman Nematollahi, Erick Rosete-Beas, Seyed Mahdi B. Azad, Raghu Rajan, Frank Hutter, Wolfram Burgard

To the best of our knowledge, our model is the first generative model that provides an RGB-D video prediction of the future for a static camera.

Hyperparameter Optimization Video Prediction

TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories

no code implementations12 Sep 2022 Jannik Zürn, Sebastian Weber, Wolfram Burgard

Robustly classifying ground infrastructure such as roads and street crossings is an essential task for mobile robots operating alongside pedestrians.

Autonomous Vehicles Semantic Segmentation

USegScene: Unsupervised Learning of Depth, Optical Flow and Ego-Motion with Semantic Guidance and Coupled Networks

no code implementations15 Jul 2022 Johan Vertens, Wolfram Burgard

In this paper we propose USegScene, a framework for semantically guided unsupervised learning of depth, optical flow and ego-motion estimation for stereo camera images using convolutional neural networks.

Motion Estimation Optical Flow Estimation

Uncertainty-aware Panoptic Segmentation

1 code implementation29 Jun 2022 Kshitij Sirohi, Sajad Marvi, Daniel Büscher, Wolfram Burgard

In this work, we introduce the novel task of uncertainty-aware panoptic segmentation, which aims to predict per-pixel semantic and instance segmentations, together with per-pixel uncertainty estimates.

Panoptic Segmentation Scene Understanding +1

What Matters in Language Conditioned Robotic Imitation Learning over Unstructured Data

2 code implementations13 Apr 2022 Oier Mees, Lukas Hermann, Wolfram Burgard

We have open-sourced our implementation to facilitate future research in learning to perform many complex manipulation skills in a row specified with natural language.

Imitation Learning Robot Manipulation

Affordance Learning from Play for Sample-Efficient Policy Learning

1 code implementation1 Mar 2022 Jessica Borja-Diaz, Oier Mees, Gabriel Kalweit, Lukas Hermann, Joschka Boedecker, Wolfram Burgard

Robots operating in human-centered environments should have the ability to understand how objects function: what can be done with each object, where this interaction may occur, and how the object is used to achieve a goal.

Motion Planning Object +1

Self-Supervised Moving Vehicle Detection from Audio-Visual Cues

no code implementations30 Jan 2022 Jannik Zürn, Wolfram Burgard

In extensive experiments carried out with a real-world dataset, we demonstrate that our approach provides accurate detections of moving vehicles and does not require manual annotations.

Contrastive Learning

CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks

1 code implementation6 Dec 2021 Oier Mees, Lukas Hermann, Erick Rosete-Beas, Wolfram Burgard

We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting that there is significant room for developing innovative agents that learn to relate human language to their world models with this benchmark.

Continuous Control Imitation Learning +3

Robot Skill Adaptation via Soft Actor-Critic Gaussian Mixture Models

no code implementations25 Nov 2021 Iman Nematollahi, Erick Rosete-Beas, Adrian Röfer, Tim Welschehold, Abhinav Valada, Wolfram Burgard

A core challenge for an autonomous agent acting in the real world is to adapt its repertoire of skills to cope with its noisy perception and dynamics.

Robust Monocular Localization in Sparse HD Maps Leveraging Multi-Task Uncertainty Estimation

no code implementations20 Oct 2021 Kürsat Petek, Kshitij Sirohi, Daniel Büscher, Wolfram Burgard

Robust localization in dense urban scenarios using a low-cost sensor setup and sparse HD maps is highly relevant for the current advances in autonomous driving, but remains a challenging topic in research.

Autonomous Driving Semantic Segmentation

Courteous Behavior of Automated Vehicles at Unsignalized Intersections via Reinforcement Learning

no code implementations11 Jun 2021 Shengchao Yan, Tim Welschehold, Daniel Büscher, Wolfram Burgard

Our reinforcement learning agent learns a policy for a centralized controller to let connected autonomous vehicles at unsignalized intersections give up their right of way and yield to other vehicles to optimize traffic flow.

Autonomous Vehicles Collision Avoidance +3

Lane Graph Estimation for Scene Understanding in Urban Driving

1 code implementation1 May 2021 Jannik Zürn, Johan Vertens, Wolfram Burgard

Lane-level scene annotations provide invaluable data in autonomous vehicles for trajectory planning in complex environments such as urban areas and cities.

Autonomous Driving Lane Detection +2

Pre-training of Deep RL Agents for Improved Learning under Domain Randomization

no code implementations29 Apr 2021 Artemij Amiranashvili, Max Argus, Lukas Hermann, Wolfram Burgard, Thomas Brox

Visual domain randomization in simulated environments is a widely used method to transfer policies trained in simulation to real robots.

reinforcement-learning Reinforcement Learning (RL)

Learning to Track with Object Permanence

1 code implementation ICCV 2021 Pavel Tokmakov, Jie Li, Wolfram Burgard, Adrien Gaidon

In this work, we introduce an end-to-end trainable approach for joint object detection and tracking that is capable of such reasoning.

Multi-Object Tracking Object +3

Composing Pick-and-Place Tasks By Grounding Language

2 code implementations16 Feb 2021 Oier Mees, Wolfram Burgard

Controlling robots to perform tasks via natural language is one of the most challenging topics in human-robot interaction.

Natural Language Visual Grounding Robotic Grasping +1

EfficientLPS: Efficient LiDAR Panoptic Segmentation

no code implementations16 Feb 2021 Kshitij Sirohi, Rohit Mohan, Daniel Büscher, Wolfram Burgard, Abhinav Valada

Panoptic segmentation of point clouds is a crucial task that enables autonomous vehicles to comprehend their vicinity using their highly accurate and reliable LiDAR sensors.

Autonomous Vehicles Instance Segmentation +2

Modality-Buffet for Real-Time Object Detection

no code implementations17 Nov 2020 Nicolai Dorka, Johannes Meyer, Wolfram Burgard

Real-time object detection in videos using lightweight hardware is a crucial component of many robotic tasks.

Decision Making Object +3

An Efficient Real-Time NMPC for Quadrotor Position Control under Communication Time-Delay

1 code implementation21 Oct 2020 Barbara Barros Carlos, Tommaso Sartor, Andrea Zanelli, Gianluca Frison, Wolfram Burgard, Moritz Diehl, Giuseppe Oriolo

The advances in computer processor technology have enabled the application of nonlinear model predictive control (NMPC) to agile systems, such as quadrotors.

Robotics Systems and Control Optimization and Control

Holistic Filter Pruning for Efficient Deep Neural Networks

no code implementations17 Sep 2020 Lukas Enderich, Fabian Timm, Wolfram Burgard

Deep neural networks (DNNs) are usually over-parameterized to increase the likelihood of getting adequate initial weights by random initialization.
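
For reference, a generic magnitude-based filter-pruning sketch is shown below (rank convolutional filters by L1 norm and zero out the weakest fraction); this illustrates filter pruning in general under assumed PyTorch modules, not the holistic criterion proposed in the paper.

```python
import torch
import torch.nn as nn

def prune_filters_by_l1(conv: nn.Conv2d, prune_ratio: float = 0.3) -> None:
    """Zero out the `prune_ratio` fraction of output filters with the smallest L1 norm."""
    with torch.no_grad():
        scores = conv.weight.abs().sum(dim=(1, 2, 3))  # one score per output filter
        num_prune = int(prune_ratio * scores.numel())
        if num_prune == 0:
            return
        prune_idx = torch.argsort(scores)[:num_prune]  # indices of the weakest filters
        conv.weight[prune_idx] = 0.0
        if conv.bias is not None:
            conv.bias[prune_idx] = 0.0
```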

Driving Through Ghosts: Behavioral Cloning with False Positives

no code implementations29 Aug 2020 Andreas Bühler, Adrien Gaidon, Andrei Cramariuc, Rares Ambrus, Guy Rosman, Wolfram Burgard

In this work, we propose a behavioral cloning approach that can safely leverage imperfect perception without being conservative.

Autonomous Driving

Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion

1 code implementation15 Aug 2020 Igor Vasiljevic, Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Wolfram Burgard, Greg Shakhnarovich, Adrien Gaidon

Self-supervised learning has emerged as a powerful tool for depth and ego-motion estimation, leading to state-of-the-art results on benchmark datasets.

Depth Estimation Motion Estimation +2

PillarFlow: End-to-end Birds-eye-view Flow Estimation for Autonomous Driving

no code implementations3 Aug 2020 Kuan-Hui Lee, Matthew Kliemann, Adrien Gaidon, Jie Li, Chao Fang, Sudeep Pillai, Wolfram Burgard

In autonomous driving, accurately estimating the state of surrounding obstacles is critical for safe and robust path planning.

Autonomous Driving

Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction

no code implementations2 Aug 2020 Iman Nematollahi, Oier Mees, Lukas Hermann, Wolfram Burgard

A key challenge for an agent learning to interact with the world is to reason about physical properties of objects and to foresee their dynamics under the effect of applied forces.

Object Optical Flow Estimation +1

Beyond Single Stage Encoder-Decoder Networks: Deep Decoders for Semantic Image Segmentation

no code implementations19 Jul 2020 Gabriel L. Oliveira, Senthil Yogamani, Wolfram Burgard, Thomas Brox

In order to further improve the architecture we introduce a weight function which aims to re-balance classes to increase the attention of the networks to under-represented objects.

Image Segmentation Optical Flow Estimation +2

Scaling Imitation Learning in Minecraft

1 code implementation6 Jul 2020 Artemij Amiranashvili, Nicolai Dorka, Wolfram Burgard, Vladlen Koltun, Thomas Brox

Imitation learning is a powerful family of techniques for learning sensorimotor coordination in immersive environments.

Data Augmentation Imitation Learning

Self-supervised Transfer Learning for Instance Segmentation through Physical Interaction

1 code implementation19 May 2020 Andreas Eitel, Nico Hauff, Wolfram Burgard

To achieve this, we fine-tune an existing DeepMask network for instance segmentation on the self-labeled training data acquired by the robot.

Instance Segmentation Optical Flow Estimation +3

MOPT: Multi-Object Panoptic Tracking

no code implementations17 Apr 2020 Juana Valeria Hurtado, Rohit Mohan, Wolfram Burgard, Abhinav Valada

In this paper, we introduce a novel perception task denoted as multi-object panoptic tracking (MOPT), which unifies the conventionally disjoint tasks of semantic segmentation, instance segmentation, and multi-object tracking.

Instance Segmentation Multi-Object Tracking +4

HeatNet: Bridging the Day-Night Domain Gap in Semantic Segmentation with Thermal Images

no code implementations10 Mar 2020 Johan Vertens, Jannik Zürn, Wolfram Burgard

We avoid the expensive annotation of nighttime images by leveraging an existing daytime RGB-dataset and propose a teacher-student training approach that transfers the dataset's knowledge to the nighttime domain.

Autonomous Driving Camera Calibration +3

Efficiency and Equity are Both Essential: A Generalized Traffic Signal Controller with Deep Reinforcement Learning

no code implementations9 Mar 2020 Shengchao Yan, Jingwei Zhang, Daniel Büscher, Wolfram Burgard

In this paper we present an approach to learning policies for signal controllers using deep reinforcement learning aiming for optimized traffic flow.

SYMOG: learning symmetric mixture of Gaussian modes for improved fixed-point quantization

no code implementations19 Feb 2020 Lukas Enderich, Fabian Timm, Wolfram Burgard

We propose SYMOG (symmetric mixture of Gaussian modes), which significantly decreases the complexity of DNNs through low-bit fixed-point quantization.

Quantization
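
As background for low-bit fixed-point quantization, a plain symmetric uniform quantizer is sketched below under assumed PyTorch tensors; SYMOG's mixture-of-Gaussian training procedure is not reproduced here.

```python
import torch

def quantize_fixed_point(w: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Round weights onto a symmetric signed fixed-point grid and dequantize."""
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 7 levels on either side of zero for 4 bits
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_int = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return w_int * scale  # weights constrained to the fixed-point grid
```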

Learning Object Placements For Relational Instructions by Hallucinating Scene Representations

2 code implementations23 Jan 2020 Oier Mees, Alp Emek, Johan Vertens, Wolfram Burgard

One particular requirement for such robots is that they are able to understand spatial relations and can place objects in accordance with the spatial relations expressed by their user.

Auxiliary Learning Robotic Grasping +2

Self-Supervised Visual Terrain Classification from Unsupervised Acoustic Feature Learning

no code implementations6 Dec 2019 Jannik Zürn, Wolfram Burgard, Abhinav Valada

In this work, we propose a novel terrain classification framework leveraging an unsupervised proprioceptive classifier that learns from vehicle-terrain interaction sounds to self-supervise an exteroceptive classifier for pixel-wise semantic segmentation of images.

Classification General Classification +2

Closed-Form Full Map Posteriors for Robot Localization with Lidar Sensors

no code implementations23 Oct 2019 Lukas Luft, Alexander Schaefer, Tobias Schubert, Wolfram Burgard

A popular class of lidar-based grid mapping algorithms computes for each map cell the probability that it reflects an incident laser beam.
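
A minimal sketch of such a reflection (hit/miss) grid map is given below: each cell's reflection probability is estimated from beam endpoints (hits) and beam traversals (misses) under a uniform Beta prior. The closed-form full posteriors derived in the paper go beyond this point estimate.

```python
import numpy as np

def reflection_map(hits: np.ndarray, misses: np.ndarray) -> np.ndarray:
    """hits, misses: integer count grids of identical shape; returns per-cell reflection probability."""
    alpha, beta = 1.0, 1.0  # Beta(1, 1) prior over cell reflectivity
    return (hits + alpha) / (hits + misses + alpha + beta)
```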

Long-Term Urban Vehicle Localization Using Pole Landmarks Extracted from 3-D Lidar Scans

1 code implementation23 Oct 2019 Alexander Schaefer, Daniel Büscher, Johan Vertens, Lukas Luft, Wolfram Burgard

Due to their ubiquity and long-term stability, pole-like objects are well suited to serve as landmarks for vehicle localization in urban environments.

A Maximum Likelihood Approach to Extract Finite Planes from 3-D Laser Scans

1 code implementation23 Oct 2019 Alexander Schaefer, Johan Vertens, Daniel Büscher, Wolfram Burgard

Whether it is object detection, model reconstruction, laser odometry, or point cloud registration: Plane extraction is a vital component of many robotic systems.

Clustering object-detection +2

DCT Maps: Compact Differentiable Lidar Maps Based on the Cosine Transform

no code implementations23 Oct 2019 Alexander Schaefer, Lukas Luft, Wolfram Burgard

Most robot mapping techniques for lidar sensors tessellate the environment into pixels or voxels and assume uniformity of the environment within them.

Position

An Analytical Lidar Sensor Model Based on Ray Path Information

no code implementations23 Oct 2019 Alexander Schaefer, Lukas Luft, Wolfram Burgard

However, many common lidar models perform poorly in unstructured, unpredictable environments, they lack a consistent physical model for both mapping and localization, and they do not exploit all the information the sensor provides, e.g., out-of-range measurements.

Adversarial Skill Networks: Unsupervised Robot Skill Learning from Video

1 code implementation21 Oct 2019 Oier Mees, Markus Merklinger, Gabriel Kalweit, Wolfram Burgard

Our method learns a general skill embedding independently from the task context by using an adversarial loss.

Continuous Control Metric Learning +4

Adaptive Curriculum Generation from Demonstrations for Sim-to-Real Visuomotor Control

1 code implementation17 Oct 2019 Lukas Hermann, Max Argus, Andreas Eitel, Artemij Amiranashvili, Wolfram Burgard, Thomas Brox

We propose Adaptive Curriculum Generation from Demonstrations (ACGD) for reinforcement learning in the presence of sparse rewards.

Reinforcement Learning (RL)

Learning User Preferences for Trajectories from Brain Signals

no code implementations3 Sep 2019 Henrich Kolkhorst, Wolfram Burgard, Michael Tangermann

Robot motions in the presence of humans should not only be feasible and safe, but also conform to human preferences.

Learning Multimodal Fixed-Point Weights using Gradient Descent

no code implementations16 Jul 2019 Lukas Enderich, Fabian Timm, Lars Rosenbaum, Wolfram Burgard

Due to their high computational complexity, deep neural networks are still limited to powerful processing units.

Quantization

CMRNet: Camera to LiDAR-Map Registration

2 code implementations24 Jun 2019 Daniele Cattaneo, Matteo Vaghi, Augusto Luis Ballardini, Simone Fontana, Domenico Giorgio Sorrenti, Wolfram Burgard

In this paper we present CMRNet, a realtime approach based on a Convolutional Neural Network to localize an RGB image of a scene in a map built from LiDAR data.

Camera Localization

DeepTemporalSeg: Temporally Consistent Semantic Segmentation of 3D LiDAR Scans

1 code implementation17 Jun 2019 Ayush Dewan, Wolfram Burgard

To make the predictions from the DCNN temporally consistent, we propose a Bayes filter based method.

Semantic Segmentation
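
A hedged sketch of the kind of discrete Bayes filter such temporal fusion typically uses is shown below: per-point class distributions from successive scans are fused multiplicatively and renormalized. The actual data association and filter design in the paper may differ.

```python
import numpy as np

def bayes_update(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """prior, likelihood: (N, C) per-point class distributions; returns the fused posterior."""
    posterior = prior * likelihood
    return posterior / (posterior.sum(axis=-1, keepdims=True) + 1e-12)
```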

Vision-Based Autonomous UAV Navigation and Landing for Urban Search and Rescue

no code implementations4 Jun 2019 Mayank Mittal, Rohit Mohan, Wolfram Burgard, Abhinav Valada

This problem is extremely challenging as pre-existing maps cannot be leveraged for navigation due to structural changes that may have occurred.

Navigate

Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration

no code implementations18 Mar 2019 Jingwei Zhang, Niklas Wetzel, Nicolai Dorka, Joschka Boedecker, Wolfram Burgard

Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration.

Learning a Local Feature Descriptor for 3D LiDAR Scans

no code implementations20 Sep 2018 Ayush Dewan, Tim Caselitz, Wolfram Burgard

Our proposed architecture consists of a Siamese network for learning a feature descriptor and a metric learning network for matching the descriptors.

Metric Learning

Vision-based Autonomous Landing in Catastrophe-Struck Environments

no code implementations15 Sep 2018 Mayank Mittal, Abhinav Valada, Wolfram Burgard

However, these UAVs have to be able to autonomously land on debris piles in order to accurately locate the survivors.

Robotics

Multimodal Interaction-aware Motion Prediction for Autonomous Street Crossing

no code implementations21 Aug 2018 Noha Radwan, Wolfram Burgard, Abhinav Valada

Learned representations from the traffic light recognition stream are fused with the estimated trajectories from the motion prediction stream to learn the crossing decision.

motion prediction

Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

1 code implementation11 Aug 2018 Abhinav Valada, Rohit Mohan, Wolfram Burgard

To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner.

Scene Recognition Semantic Segmentation

Intracranial Error Detection via Deep Learning

no code implementations4 May 2018 Martin Völker, Jiří Hammer, Robin T. Schirrmeister, Joos Behncke, Lukas D. J. Fiederer, Andreas Schulze-Bonhage, Petr Marusič, Wolfram Burgard, Tonio Ball

Deep learning techniques have revolutionized the field of machine learning and were recently successfully applied to various classification problems in noninvasive electroencephalography (EEG).

EEG

VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry

no code implementations23 Apr 2018 Noha Radwan, Abhinav Valada, Wolfram Burgard

Semantic understanding and localization are fundamental enablers of robot autonomy that have for the most part been tackled as disjoint problems.

Outdoor Localization Scene Understanding +1

The Limits and Potentials of Deep Learning for Robotics

no code implementations18 Apr 2018 Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, Peter Corke

In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning.

Robotics

Deep Spatiotemporal Models for Robust Proprioceptive Terrain Classification

no code implementations2 Apr 2018 Abhinav Valada, Wolfram Burgard

Terrain classification is a critical component of any autonomous mobile robot system operating in unknown real-world environments.

Classification General Classification

Deep Auxiliary Learning for Visual Localization and Odometry

1 code implementation9 Mar 2018 Abhinav Valada, Noha Radwan, Wolfram Burgard

We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation.

Auxiliary Learning Visual Localization +1

3D Human Pose Estimation in RGBD Images for Robotic Task Learning

1 code implementation7 Mar 2018 Christian Zimmermann, Tim Welschehold, Christian Dornhege, Wolfram Burgard, Thomas Brox

We propose an approach to estimate 3D human pose in real-world units from a single RGBD image and show that it exceeds the performance of monocular 3D pose estimation approaches from color as well as of pose estimation exclusively from depth.

3D Human Pose Estimation 3D Pose Estimation

VR-Goggles for Robots: Real-to-sim Domain Adaptation for Visual Control

no code implementations1 Feb 2018 Jingwei Zhang, Lei Tai, Peng Yun, Yufeng Xiong, Ming Liu, Joschka Boedecker, Wolfram Burgard

In this paper, we deal with the reality gap from a novel perspective, targeting transferring Deep Reinforcement Learning (DRL) policies learned in simulated environments to the real-world domain for visual control tasks.

Domain Adaptation Style Transfer

The signature of robot action success in EEG signals of a human observer: Decoding and visualization using deep convolutional neural networks

no code implementations16 Nov 2017 Joos Behncke, Robin Tibor Schirrmeister, Wolfram Burgard, Tonio Ball

Analysis of brain signals from a human interacting with a robot may help identify robot errors, but the accuracies of such analyses still leave substantial room for improvement.

EEG Eeg Decoding +1

Deep Transfer Learning for Error Decoding from Non-Invasive EEG

no code implementations25 Oct 2017 Martin Völker, Robin T. Schirrmeister, Lukas D. J. Fiederer, Wolfram Burgard, Tonio Ball

We recorded high-density EEG in a flanker task experiment (31 subjects) and an online BCI control paradigm (4 subjects).

EEG Transfer Learning

Socially Compliant Navigation through Raw Depth Inputs with Generative Adversarial Imitation Learning

1 code implementation6 Oct 2017 Lei Tai, Jingwei Zhang, Ming Liu, Wolfram Burgard

Experiments show that our GAIL-based approach greatly improves the safety and efficiency of the behavior of mobile robots compared to pure behavior cloning.

Autonomous Vehicles Imitation Learning +1

From Plants to Landmarks: Time-invariant Plant Localization that uses Deep Pose Regression in Agricultural Fields

no code implementations14 Sep 2017 Florian Kraemer, Alexander Schaefer, Andreas Eitel, Johan Vertens, Wolfram Burgard

Agricultural robots are expected to increase yields in a sustainable way and automate precision tasks, such as weeding and plant monitoring.

regression

Brain Responses During Robot-Error Observation

no code implementations4 Aug 2017 Dominik Welke, Joos Behncke, Marina Hader, Robin Tibor Schirrmeister, Andreas Schönau, Boris Eßmann, Oliver Müller, Wolfram Burgard, Tonio Ball

Our findings suggest that non-invasive recordings of brain responses elicited when observing robots indeed contain decodable information about the correctness of the robot's action and the type of observed robot.

EEG

Deep Detection of People and their Mobility Aids for a Hospital Robot

no code implementations2 Aug 2017 Andres Vasquez, Marina Kollmitz, Andreas Eitel, Wolfram Burgard

In this paper, we propose a depth-based perception pipeline that estimates the position and velocity of people in the environment and categorizes them according to the mobility aids they use: pedestrian, person in wheelchair, person in a wheelchair with a person pushing them, person with crutches and person using a walker.

object-detection Object Detection +2

Learning to Singulate Objects using a Push Proposal Network

no code implementations25 Jul 2017 Andreas Eitel, Nico Hauff, Wolfram Burgard

We present a novel neural network-based approach that separates unknown objects in clutter by selecting favourable push actions.

Choosing Smartly: Adaptive Multimodal Fusion for Object Detection in Changing Environments

1 code implementation18 Jul 2017 Oier Mees, Andreas Eitel, Wolfram Burgard

Object detection is an essential task for autonomous robots operating in dynamic and changing environments.

object-detection Object Detection

Optimization Beyond the Convolution: Generalizing Spatial Relations with End-to-End Metric Learning

1 code implementation4 Jul 2017 Philipp Jund, Andreas Eitel, Nichola Abdo, Wolfram Burgard

To operate intelligently in domestic environments, robots require the ability to understand arbitrary spatial relations between objects and to generalize them to objects of varying sizes and shapes.

Metric Learning

Neural SLAM: Learning to Explore with External Memory

1 code implementation29 Jun 2017 Jingwei Zhang, Lei Tai, Ming Liu, Joschka Boedecker, Wolfram Burgard

We present an approach for agents to learn representations of a global map from sensor data, to aid their exploration in new environments.

Reinforcement Learning (RL) Simultaneous Localization and Mapping

Topometric Localization with Deep Learning

no code implementations27 Jun 2017 Gabriel L. Oliveira, Noha Radwan, Wolfram Burgard, Thomas Brox

Compared to LiDAR-based localization methods, which provide high accuracy but rely on expensive sensors, visual localization approaches only require a camera and are thus more cost-effective, although their accuracy and reliability are typically inferior to those of LiDAR-based methods.

Visual Localization Visual Odometry

Deep Semantic Classification for 3D LiDAR Data

no code implementations26 Jun 2017 Ayush Dewan, Gabriel L. Oliveira, Wolfram Burgard

To learn the distinction between movable and non-movable points in the environment, we introduce an approach based on a deep neural network, and to detect the dynamic points we estimate pointwise motion.

Classification General Classification

Deep learning with convolutional neural networks for EEG decoding and visualization

5 code implementations15 Mar 2017 Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, Tonio Ball

PLEASE READ AND CITE THE REVISED VERSION at Human Brain Mapping: http://onlinelibrary.wiley.com/doi/10.1002/hbm.23730/full Code available here: https://github.com/robintibor/braindecode

EEG Eeg Decoding

Metric Learning for Generalizing Spatial Relations to New Objects

1 code implementation6 Mar 2017 Oier Mees, Nichola Abdo, Mladen Mazuran, Wolfram Burgard

Human-centered environments are rich with a wide variety of spatial relations between everyday objects.

Metric Learning

A Survey of Deep Network Solutions for Learning Control in Robotics: From Reinforcement to Imitation

1 code implementation21 Dec 2016 Lei Tai, Jingwei Zhang, Ming Liu, Joschka Boedecker, Wolfram Burgard

We carry out our discussions on the two main paradigms for learning control with deep networks: deep reinforcement learning and imitation learning.

Imitation Learning reinforcement-learning +1

Deep Reinforcement Learning with Successor Features for Navigation across Similar Environments

no code implementations16 Dec 2016 Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, Wolfram Burgard

We propose a successor feature based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances.

reinforcement-learning Reinforcement Learning (RL) +1

The Freiburg Groceries Dataset

2 code implementations17 Nov 2016 Philipp Jund, Nichola Abdo, Andreas Eitel, Wolfram Burgard

In this paper, we address this issue and present a dataset consisting of 5,000 images covering 25 different classes of groceries, with at least 97 images per class.

Benchmarking BIG-bench Machine Learning +1

Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics

no code implementations13 Apr 2016 Michael Herman, Tobias Gindele, Jörg Wagner, Felix Schmitt, Wolfram Burgard

Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent.

reinforcement-learning Reinforcement Learning (RL) +1
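
For reference, a standard maximum-entropy IRL objective over a parameterized reward is shown below; the cited paper additionally estimates the MDP's transition dynamics, which this formulation does not capture.

```latex
\theta^{*} = \arg\max_{\theta} \sum_{\tau \in \mathcal{D}} \log P(\tau \mid \theta),
\qquad
P(\tau \mid \theta) \propto \exp\!\Big(\sum_{t} r_{\theta}(s_t, a_t)\Big)
```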

Monte Carlo Localization in Hand-Drawn Maps

no code implementations2 Apr 2015 Bahram Behzadian, Pratik Agarwal, Wolfram Burgard, Gian Diego Tipaldi

In this paper, we address the localization problem when the map of the environment is not present beforehand, and the robot relies on a hand-drawn map from a non-expert user.

Metric Localization using Google Street View

no code implementations14 Mar 2015 Pratik Agarwal, Wolfram Burgard, Luciano Spinello

In this paper, we present a novel approach that instead uses geotagged panoramas from the Google Street View as a source of global positioning.

Fast and Robust Feature Matching for RGB-D Based Localization

no code implementations2 Feb 2015 Miguel Heredia, Felix Endres, Wolfram Burgard, Rafael Sanz

In this paper we present a novel approach to global localization using an RGB-D camera in maps of visual features.

Visual Localization
