no code implementations • 9 Apr 2024 • Mariella Dreissig, Florian Piewak, Joschka Boedecker
Safety-critical applications like autonomous driving call for robust 3D environment perception algorithms which can withstand highly diverse and ambiguous surroundings.
no code implementations • 4 Aug 2023 • Mariella Dreissig, Florian Piewak, Joschka Boedecker
We propose a metric to measure the confidence calibration quality of a semantic segmentation model with respect to individual classes.
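The entry does not spell out the metric itself; a standard starting point for per-class calibration assessment is a class-wise Expected Calibration Error (ECE), restricted to the pixels predicted as a given class. The sketch below is an assumption-laden illustration of that idea, not the paper's actual metric:

```python
import numpy as np

def classwise_ece(probs, labels, class_id, n_bins=10):
    """Expected Calibration Error restricted to samples predicted as `class_id`.

    probs:  (N, C) softmax outputs per sample/pixel
    labels: (N,)   ground-truth class indices
    """
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    mask = pred == class_id
    if not mask.any():
        return 0.0
    conf, correct = conf[mask], (pred[mask] == labels[mask])
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # weight each bin's |accuracy - confidence| gap by its occupancy
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```

A well-calibrated model yields a value near zero for every class; rare classes often show larger gaps than the dataset-wide ECE suggests.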
no code implementations • 13 Apr 2023 • Mariella Dreissig, Dominik Scheuble, Florian Piewak, Joschka Boedecker
The active LiDAR sensor is able to create an accurate 3D representation of a scene, making it a valuable addition to the environment perception of autonomous vehicles.
no code implementations • 13 Oct 2022 • Mariella Dreissig, Florian Piewak, Joschka Boedecker
The calibration of deep learning-based perception models plays a crucial role in their reliability.
2 code implementations • 9 Dec 2019 • Robin Heinzler, Florian Piewak, Philipp Schindler, Wilhelm Stork
Lidar sensors are frequently used in environment perception for autonomous vehicles and mobile robotics to complement camera, radar, and ultrasonic sensors.
no code implementations • 3 Jul 2019 • Florian Piewak, Peter Pinggera, Marius Zöllner
In this paper, we propose a new CNN architecture for the point-wise semantic labeling of LiDAR data that achieves state-of-the-art results while increasing portability across sensor types.
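Point-wise CNN labeling of LiDAR data typically operates on a 2D spherical projection of the point cloud (a range image). The sketch below shows that common preprocessing step; the resolution and field-of-view values are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def lidar_to_range_image(points, h=64, w=512,
                         fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Spherical projection of a LiDAR point cloud (N, 3) onto an (h, w)
    range image, a common input representation for point-wise CNNs."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth in (-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    # map azimuth to columns and elevation to rows
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)
    img = np.zeros((h, w))
    img[v, u] = r                                           # store range per pixel
    return img
```

The per-pixel labels predicted on this image can then be projected back onto the original points.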
no code implementations • 24 Sep 2018 • Florian Piewak, Peter Pinggera, Markus Enzweiler, David Pfeiffer, Marius Zöllner
Our results indicate that the proposed mid-level fusion of LiDAR and camera data significantly improves both the geometric and semantic accuracy of the Stixel model, while reducing both the computational overhead and the amount of generated data compared to using a single modality on its own.
no code implementations • 26 Apr 2018 • Florian Piewak, Peter Pinggera, Manuel Schäfer, David Peter, Beate Schwarz, Nick Schneider, David Pfeiffer, Markus Enzweiler, Marius Zöllner
The effectiveness of the proposed network architecture as well as the automated data generation process is demonstrated on a manually annotated ground truth dataset.
no code implementations • 10 Sep 2017 • Florian Piewak
Orientation extraction based on the Convolutional Neural Network outperforms orientation extraction based directly on the velocity information of the dynamic occupancy grid map.
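The velocity-based baseline mentioned here can be sketched directly: each grid cell's orientation is the angle of its estimated velocity vector, which becomes unreliable for slow-moving cells. The speed threshold below is an assumption for illustration:

```python
import numpy as np

def baseline_orientation(velocity_grid, speed_thresh=0.5):
    """Per-cell orientation taken directly from the velocity estimate of a
    dynamic occupancy grid map (the baseline a learned approach is compared
    against). Cells below the speed threshold are masked as NaN, since the
    direction of a near-zero velocity vector is dominated by noise.

    velocity_grid: (H, W, 2) per-cell velocity (vx, vy) in m/s
    """
    vx, vy = velocity_grid[..., 0], velocity_grid[..., 1]
    theta = np.arctan2(vy, vx)            # radians in (-pi, pi]
    speed = np.hypot(vx, vy)
    theta[speed <= speed_thresh] = np.nan
    return theta
```

The noise sensitivity of this direct readout is one motivation for learning orientation from the grid's appearance instead.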
no code implementations • 10 Sep 2017 • Florian Piewak, Timo Rehfeld, Michael Weber, J. Marius Zöllner
Grid maps are widely used in robotics to represent obstacles in the environment, and differentiating dynamic objects from static infrastructure is essential for many practical applications.
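As a minimal illustration of the dynamic/static distinction, a per-cell velocity estimate can be thresholded on speed; real systems (including the learned approach this entry describes) go well beyond this, but the sketch shows the quantity being classified. The threshold value is an assumption:

```python
import numpy as np

def split_dynamic_static(velocity_grid, speed_thresh=0.5):
    """Label each grid cell as dynamic (True) or static (False) by
    thresholding the magnitude of its estimated velocity.

    velocity_grid: (H, W, 2) per-cell velocity (vx, vy) in m/s
    """
    speed = np.linalg.norm(velocity_grid, axis=-1)
    return speed > speed_thresh
```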
no code implementations • 11 Jul 2017 • Nick Schneider, Florian Piewak, Christoph Stiller, Uwe Franke
In this paper, we present RegNet, the first deep convolutional neural network (CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between multimodal sensors, exemplified using a scanning LiDAR and a monocular camera.
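The 6-DOF extrinsic calibration RegNet regresses is the rigid transform (R, t) that maps LiDAR points into the camera frame; once known, it lets LiDAR points be projected into the image. The sketch below shows that standard pinhole projection, not RegNet itself:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project 3D LiDAR points (N, 3) into the image plane using an
    extrinsic calibration (R, t) and camera intrinsics K (3, 3).
    Returns pixel coordinates (N, 2) and a mask of points in front of
    the camera."""
    cam = points @ R.T + t            # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide
    return uv, in_front
```

A miscalibrated (R, t) misaligns projected LiDAR points with image features, which is exactly the signal a calibration network can learn to correct.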