Our formulation captures global context in a video and is therefore robust to temporal content changes.
In this paper, we address the problem of adaptive path planning for accurate semantic segmentation of terrain using unmanned aerial vehicles (UAVs).
In this paper, we propose 4D panoptic LiDAR segmentation to assign a semantic class and a temporally-consistent instance ID to a sequence of 3D points.
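To make the task definition concrete, here is a minimal sketch of what a 4D panoptic label looks like: each 3D point carries a semantic class and an instance ID that stays fixed for the same object across scans. The packing scheme below mirrors the common SemanticKITTI-style encoding (semantic class in the lower 16 bits, instance ID in the upper 16 bits); the function names and class IDs are illustrative, not from the paper.

```python
import numpy as np

def pack_panoptic(semantic, instance):
    """Pack per-point labels: lower 16 bits = semantic class,
    upper 16 bits = temporally consistent instance ID."""
    return (instance.astype(np.uint32) << 16) | semantic.astype(np.uint32)

def unpack_panoptic(labels):
    semantic = labels & 0xFFFF
    instance = labels >> 16
    return semantic, instance

# Two points on the same car (instance 7) plus one road point (stuff,
# instance 0); the car keeps ID 7 in every later scan of the sequence.
sem = np.array([10, 10, 40], dtype=np.uint32)   # e.g., 10 = car, 40 = road
ins = np.array([7, 7, 0], dtype=np.uint32)
labels = pack_panoptic(sem, ins)
sem2, ins2 = unpack_panoptic(labels)
```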
Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles, which are usually equipped with an embedded platform and have limited computational resources.
We integrate both into stereo estimation and visual odometry systems, and show clear benefits on typical disparity and direct image registration tasks when using our proposed metric.
Panoptic segmentation is a recently introduced task that tackles semantic segmentation and instance segmentation jointly.
Perception in autonomous vehicles is often carried out through a suite of different sensing modalities.
For localization and mapping, we employ an efficient direct tracking on the truncated signed distance function (TSDF) and leverage color information encoded in the TSDF to estimate the pose of the sensor.
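The core of direct TSDF tracking is that the signed distance field itself supplies the alignment residuals: evaluating the TSDF at the transformed sensor points yields their distance to the surface, which is driven to zero over the pose. The toy below illustrates only this principle with an analytic 2D SDF (a circle) and a translation-only Gauss-Newton solver; the actual system optimizes a full 6-DoF pose on a voxelized TSDF and additionally uses the stored color, none of which is shown here.

```python
import numpy as np

def sdf(p, radius=1.0):
    """Analytic signed distance to a circle centered at the origin."""
    return np.linalg.norm(p, axis=-1) - radius

def sdf_grad(p):
    """Gradient of the circle SDF w.r.t. the query point."""
    return p / np.linalg.norm(p, axis=-1, keepdims=True)

# Sensor points sampled on the surface, offset by an unknown translation.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
true_t = np.array([0.3, -0.2])
points = np.stack([np.cos(theta), np.sin(theta)], axis=-1) - true_t

t = np.zeros(2)  # initial translation estimate
for _ in range(20):
    q = points + t
    r = sdf(q)        # residuals: signed distance of each point to the surface
    J = sdf_grad(q)   # Jacobian of the residuals w.r.t. the translation
    t -= np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton update
```

Because the residual is read directly from the distance field, no explicit data association (point-to-point matching) is needed, which is what makes this style of tracking efficient.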
Despite the relevance of semantic scene understanding for this application, large datasets for this task based on an automotive LiDAR have been lacking.
Exploiting the crop-arrangement information observable from the image sequences enables our system to robustly estimate a pixel-wise labeling of the images into crop and weed, i.e., a semantic segmentation.
It outputs the stem location for weeds, which allows for mechanical treatments, and the covered area of the weed for selective spraying.
Precision farming robots, which aim to reduce the amount of herbicides that must be applied in the fields, must be able to identify crops and weeds in real time to trigger weeding actions.
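The two outputs named above can be sketched as a small post-processing step on a per-pixel segmentation: take the argmax over class scores, then derive an approximate stem location and the weed-covered area from the weed mask. Everything here is illustrative, assuming a soil/crop/weed class set and a known ground resolution per pixel; it is a simplification, not the paper's pipeline (which localizes a stem per plant, whereas this toy uses the centroid of the whole weed mask as a proxy).

```python
import numpy as np

CLASSES = ("soil", "crop", "weed")  # assumed class set

def segment(scores):
    """scores: (H, W, 3) per-class scores -> (H, W) label map."""
    return scores.argmax(axis=-1)

def weed_outputs(labels, pixel_area_cm2=0.25):
    """Return (approximate stem location, weed-covered area in cm^2)."""
    weed = labels == CLASSES.index("weed")
    area = weed.sum() * pixel_area_cm2          # covered area for selective spraying
    ys, xs = np.nonzero(weed)
    stem = (ys.mean(), xs.mean()) if xs.size else None  # centroid as a stem proxy
    return stem, area

# Tiny example: a 4x4 image with one 2x2 weed blob in the top-left corner.
scores = np.zeros((4, 4, 3))
scores[..., 0] = 1.0          # soil everywhere by default
scores[0:2, 0:2, 2] = 2.0     # weed outscores soil in the blob
labels = segment(scores)
stem, area = weed_outputs(labels)
```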
Our approach exploits the different cues in a natural and consistent way, and the registration can be performed at frame rate for a typical range or imaging sensor.