Search Results for author: Reza Mahjourian

Found 10 papers, 2 papers with code

Instance Segmentation with Cross-Modal Consistency

no code implementations • 14 Oct 2022 • Alex Zihao Zhu, Vincent Casser, Reza Mahjourian, Henrik Kretzschmar, Sören Pirk

We demonstrate that this formulation encourages the models to learn embeddings that are invariant to viewpoint variations and consistent across sensor modalities.

Autonomous Driving • Contrastive Learning • +4
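The snippet above describes the cross-modal formulation only at a high level. As an illustrative sketch (not the paper's actual loss; the pairing scheme, function names, and temperature are assumptions), a symmetric contrastive consistency term between camera and lidar embeddings of the same instances might look like:

```python
import numpy as np

def cross_modal_consistency_loss(camera_emb, lidar_emb, temperature=0.1):
    """Toy InfoNCE-style consistency loss (hypothetical sketch): rows of
    the two embedding matrices are paired by index, so matched pairs are
    pulled together and mismatched pairs pushed apart."""
    # L2-normalize so the dot product is cosine similarity.
    cam = camera_emb / np.linalg.norm(camera_emb, axis=1, keepdims=True)
    lid = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)
    logits = cam @ lid.T / temperature  # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal (matched pairs) as targets.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing such a term drives embeddings of the same instance toward agreement across sensor modalities, which is the invariance property the abstract claims.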

StopNet: Scalable Trajectory and Occupancy Prediction for Urban Autonomous Driving

no code implementations • 2 Jun 2022 • Jinkyu Kim, Reza Mahjourian, Scott Ettinger, Mayank Bansal, Brandyn White, Ben Sapp, Dragomir Anguelov

A whole-scene sparse input representation allows StopNet to scale to predicting trajectories for hundreds of road agents with reliable latency.

Motion Forecasting
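The scaling claim above rests on the input representation: a whole-scene sparse encoding grows linearly with the number of agents, unlike per-agent dense rasters. A minimal sketch of that idea (the point-sampling scheme and all names here are hypothetical, not StopNet's actual encoding):

```python
import numpy as np

def scene_to_sparse_points(agent_boxes, points_per_agent=8, rng=None):
    """Hypothetical whole-scene sparse input: every agent contributes a
    handful of points sampled from its bounding box, so a scene with
    hundreds of agents stays a single, compact point set.

    agent_boxes: (N, 4) array of [cx, cy, length, width] per agent.
    Returns an (N * points_per_agent, 3) array of [x, y, agent_id].
    """
    if rng is None:
        rng = np.random.default_rng(0)
    points = []
    for agent_id, (cx, cy, length, width) in enumerate(agent_boxes):
        # Sample points uniformly inside the (axis-aligned) box.
        xs = rng.uniform(cx - length / 2, cx + length / 2, points_per_agent)
        ys = rng.uniform(cy - width / 2, cy + width / 2, points_per_agent)
        ids = np.full(points_per_agent, agent_id)
        points.append(np.stack([xs, ys, ids], axis=1))
    return np.concatenate(points, axis=0)
```

Because the whole scene is encoded once, per-agent cost is a few points rather than a full raster, which is one plausible reading of how latency stays reliable as agent count grows.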

Revisiting Multi-Scale Feature Fusion for Semantic Segmentation

no code implementations • 23 Mar 2022 • Tianjian Meng, Golnaz Ghiasi, Reza Mahjourian, Quoc V. Le, Mingxing Tan

It is commonly believed that high internal resolution combined with expensive operations (e.g., atrous convolutions) is necessary for accurate semantic segmentation, resulting in slow speed and large memory usage.

Segmentation • Semantic Segmentation

Occupancy Flow Fields for Motion Forecasting in Autonomous Driving

no code implementations • 8 Mar 2022 • Reza Mahjourian, Jinkyu Kim, Yuning Chai, Mingxing Tan, Ben Sapp, Dragomir Anguelov

We propose Occupancy Flow Fields, a new representation for motion forecasting of multiple agents, an important task in autonomous driving.

Motion Estimation • Motion Forecasting
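To make the representation named above concrete: an occupancy flow field can be pictured as a grid that stores, per cell, an occupancy probability plus a 2D motion vector. The sketch below is an assumption-laden illustration, not the paper's model (the grid shapes, flow direction convention, and nearest-cell warp are all simplifications):

```python
import numpy as np

# Hypothetical occupancy flow field over an H x W grid: each cell holds
# an occupancy probability and a per-cell (dy, dx) motion vector.
H, W = 4, 4
occupancy = np.zeros((H, W))   # P(cell is occupied) at time t
flow = np.zeros((H, W, 2))     # per-cell motion in grid units

occupancy[1, 1] = 1.0          # one occupied cell...
flow[1, 1] = (1.0, 2.0)        # ...moving down 1 cell, right 2

def warp_occupancy(occupancy, flow):
    """Advect occupancy one step along the flow field. Nearest-cell and
    illustrative only; real models would use differentiable warping."""
    out = np.zeros_like(occupancy)
    h, w = occupancy.shape
    for y in range(h):
        for x in range(w):
            ny = int(round(y + flow[y, x, 0]))
            nx = int(round(x + flow[y, x, 1]))
            if 0 <= ny < h and 0 <= nx < w:
                out[ny, nx] += occupancy[y, x]
    return out

next_occ = warp_occupancy(occupancy, flow)
```

Coupling occupancy with flow is what lets one representation serve both motion estimation and motion forecasting, the two tasks tagged on this entry.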

Future Segmentation Using 3D Structure

no code implementations • 28 Nov 2018 • Suhani Vora, Reza Mahjourian, Soeren Pirk, Anelia Angelova

Predicting the future to anticipate the outcome of events and actions is a critical attribute of autonomous agents, particularly for agents that must rely heavily on real-time visual data for decision making.

Attribute • Decision Making • +2
