Bird's-Eye View Semantic Segmentation
6 papers with code • 1 benchmark • 2 datasets
Most implemented papers
CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers
Extensive experiments on the V2V perception dataset OPV2V demonstrate that CoBEVT achieves state-of-the-art performance in cooperative BEV semantic segmentation.
LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery
… sq. km with resolution 50 cm per pixel and 176.76 sq. km …
FIERY: Future Instance Prediction in Bird's-Eye View from Surround Monocular Cameras
We present FIERY: a probabilistic future prediction model in bird's-eye view from monocular cameras.
Cross-view Transformers for real-time Map-view Semantic Segmentation
The architecture consists of a convolutional image encoder for each view and cross-view transformer layers to infer a map-view semantic segmentation.
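The cross-view idea can be illustrated with a minimal sketch: learned map-view (BEV) query embeddings attend over convolutional features gathered from every camera view. This is an assumption-laden toy in NumPy, not the paper's implementation — the function names, shapes, and single-head attention are all hypothetical simplifications.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(map_queries, view_features):
    """One cross-view attention step (illustrative, single head).

    map_queries:   (Q, d) array - one embedding per BEV grid cell
    view_features: list of (N_i, d) arrays - encoder features per camera
    Returns a (Q, d) map-view feature map aggregated across all views.
    """
    # concatenate features from all camera views into one key/value set
    keys = np.concatenate(view_features, axis=0)        # (sum N_i, d)
    d = map_queries.shape[-1]
    # scaled dot-product attention from BEV queries to image features
    attn = softmax(map_queries @ keys.T / np.sqrt(d))   # (Q, sum N_i)
    return attn @ keys                                  # (Q, d)

# toy setup: 4 cameras, a 16x16-cell BEV grid, 64-dim features
rng = np.random.default_rng(0)
queries = rng.standard_normal((256, 64))
views = [rng.standard_normal((100, 64)) for _ in range(4)]
bev = cross_view_attention(queries, views)
print(bev.shape)  # (256, 64)
```

In the actual model the attention is camera-aware (queries carry geometric/positional embeddings per view) and stacked over multiple layers before a segmentation head decodes the map-view output; the sketch only shows the query-to-multi-view aggregation pattern.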
LaRa: Latents and Rays for Multi-Camera Bird's-Eye-View Semantic Segmentation
Recent works in autonomous driving have widely adopted the bird's-eye-view (BEV) semantic map as an intermediate representation of the world.
Model-Based Imitation Learning for Urban Driving
Our approach is the first camera-only method that models the static scene, the dynamic scene, and ego-behaviour in an urban driving environment.