Search Results for author: Florian Piewak

Found 11 papers, 1 paper with code

Hierarchical Insights: Exploiting Structural Similarities for Reliable 3D Semantic Segmentation

no code implementations • 9 Apr 2024 • Mariella Dreissig, Florian Piewak, Joschka Boedecker

Safety-critical applications like autonomous driving call for robust 3D environment perception algorithms which can withstand highly diverse and ambiguous surroundings.

3D Semantic Segmentation • Autonomous Driving +2

On the Calibration of Uncertainty Estimation in LiDAR-based Semantic Segmentation

no code implementations • 4 Aug 2023 • Mariella Dreissig, Florian Piewak, Joschka Boedecker

We propose a metric to measure the confidence calibration quality of a semantic segmentation model with respect to individual classes.

Autonomous Driving • Segmentation +1
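
As an illustration of what a class-wise calibration measure can look like (the paper's own metric is not reproduced here), the sketch below computes a per-class expected calibration error; the function name, argument layout, and binning scheme are assumptions made for the example:

```python
import numpy as np

def per_class_ece(confidences, predictions, labels, class_id, n_bins=10):
    """Expected calibration error over the points predicted as `class_id`.

    Illustrative sketch only; not the metric proposed in the paper.
    confidences: (N,) max softmax scores; predictions/labels: (N,) class ids.
    """
    mask = predictions == class_id
    if not mask.any():
        return np.nan  # class never predicted
    conf = confidences[mask]
    correct = (labels[mask] == class_id).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # bin weight * |mean confidence - empirical accuracy|
            ece += in_bin.mean() * abs(conf[in_bin].mean() - correct[in_bin].mean())
    return ece
```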

Survey on LiDAR Perception in Adverse Weather Conditions

no code implementations • 13 Apr 2023 • Mariella Dreissig, Dominik Scheuble, Florian Piewak, Joschka Boedecker

The active LiDAR sensor is able to create an accurate 3D representation of a scene, making it a valuable addition to environment perception for autonomous vehicles.

Autonomous Vehicles • Denoising +1

CNN-based Lidar Point Cloud De-Noising in Adverse Weather

2 code implementations • 9 Dec 2019 • Robin Heinzler, Florian Piewak, Philipp Schindler, Wilhelm Stork

Lidar sensors are frequently used in environment perception for autonomous vehicles and mobile robotics to complement camera, radar, and ultrasonic sensors.

Autonomous Vehicles • Scene Understanding

Analyzing the Cross-Sensor Portability of Neural Network Architectures for LiDAR-based Semantic Labeling

no code implementations • 3 Jul 2019 • Florian Piewak, Peter Pinggera, Marius Zöllner

In this paper we propose a new CNN architecture for the point-wise semantic labeling of LiDAR data which achieves state-of-the-art results while increasing portability across sensor types.

Improved Semantic Stixels via Multimodal Sensor Fusion

no code implementations • 24 Sep 2018 • Florian Piewak, Peter Pinggera, Markus Enzweiler, David Pfeiffer, Marius Zöllner

Our results indicate that the proposed mid-level fusion of LiDAR and camera data improves both the geometric and semantic accuracy of the Stixel model significantly while reducing the computational overhead as well as the amount of generated data in comparison to using a single modality on its own.

Sensor Fusion
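
The sketch below illustrates the general idea of mid-level fusion, combining per-point class scores from a LiDAR branch with camera scores sampled at each point's projected pixel. It is a minimal illustration under assumed inputs, not the paper's Stixel pipeline, and all names are hypothetical:

```python
import numpy as np

def fuse_class_scores(lidar_scores, camera_scores, w_lidar=0.5):
    """Weighted mid-level fusion of per-point class scores.

    lidar_scores, camera_scores: (N, C) arrays of class scores for N
    LiDAR points; camera scores are assumed to have been sampled at each
    point's projected pixel beforehand. Not the paper's actual method.
    """
    fused = w_lidar * lidar_scores + (1.0 - w_lidar) * camera_scores
    return fused.argmax(axis=-1)  # fused per-point semantic labels
```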

Boosting LiDAR-based Semantic Labeling by Cross-Modal Training Data Generation

no code implementations • 26 Apr 2018 • Florian Piewak, Peter Pinggera, Manuel Schäfer, David Peter, Beate Schwarz, Nick Schneider, David Pfeiffer, Markus Enzweiler, Marius Zöllner

The effectiveness of the proposed network architecture as well as the automated data generation process is demonstrated on a manually annotated ground truth dataset.

Autonomous Vehicles
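
A rough sketch of the cross-modal idea: semantic labels from an image segmentation are transferred to LiDAR points via their projected pixel coordinates, yielding point-wise training labels without manual annotation. The helper below and its arguments are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def transfer_labels(num_points, uv, image_labels):
    """Auto-label LiDAR points with the semantic class of the pixel each
    point projects to. `uv` holds precomputed (N, 2) pixel coordinates
    and is an assumed input; `image_labels` is an (H, W) class-id map."""
    h, w = image_labels.shape
    u = np.round(uv[:, 0]).astype(int)  # column index
    v = np.round(uv[:, 1]).astype(int)  # row index
    labels = np.full(num_points, -1, dtype=int)  # -1 = outside image / unlabeled
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[valid] = image_labels[v[valid], u[valid]]
    return labels
```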

Fully Convolutional Neural Networks for Dynamic Object Detection in Grid Maps (Masters Thesis)

no code implementations • 10 Sep 2017 • Florian Piewak

The orientation extraction based on the Convolutional Neural Network outperforms an orientation extraction based directly on the velocity information of the dynamic occupancy grid map.

Autonomous Vehicles • object-detection +1
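
The velocity-based baseline mentioned above amounts to reading the orientation directly off each cell's velocity vector; a minimal sketch of that baseline (not the CNN-based extraction):

```python
import numpy as np

def orientation_from_velocity(vx, vy):
    """Baseline: take each grid cell's orientation as the angle of its
    velocity vector from the dynamic occupancy grid map. Returns angles
    in radians in (-pi, pi]; degenerate for near-zero velocities."""
    return np.arctan2(vy, vx)
```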

Fully Convolutional Neural Networks for Dynamic Object Detection in Grid Maps

no code implementations • 10 Sep 2017 • Florian Piewak, Timo Rehfeld, Michael Weber, J. Marius Zöllner

Grid maps are widely used in robotics to represent obstacles in the environment and differentiating dynamic objects from static infrastructure is essential for many practical applications.

object-detection • Object Detection

RegNet: Multimodal Sensor Registration Using Deep Neural Networks

no code implementations • 11 Jul 2017 • Nick Schneider, Florian Piewak, Christoph Stiller, Uwe Franke

In this paper, we present RegNet, the first deep convolutional neural network (CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between multimodal sensors, exemplified using a scanning LiDAR and a monocular camera.

Translation
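
For context, applying such a 6-DOF extrinsic calibration is a rigid transform of the LiDAR points into the camera coordinate frame. The sketch below shows only that transform, i.e. what the estimated calibration is used for, not RegNet's network itself; the names are hypothetical:

```python
import numpy as np

def lidar_to_camera(points_lidar, R, t):
    """Map (N, 3) LiDAR points into the camera frame using a 6-DOF
    extrinsic calibration: rotation R (3x3) and translation t (3,).
    Row-wise form of p_cam = R @ p_lidar + t."""
    return points_lidar @ R.T + t
```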
