Search Results for author: Scott Ettinger

Found 8 papers, 2 papers with code

StopNet: Scalable Trajectory and Occupancy Prediction for Urban Autonomous Driving

no code implementations • 2 Jun 2022 • Jinkyu Kim, Reza Mahjourian, Scott Ettinger, Mayank Bansal, Brandyn White, Ben Sapp, Dragomir Anguelov

A whole-scene sparse input representation allows StopNet to scale to predicting trajectories for hundreds of road agents with reliable latency.

Motion Forecasting

CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships

1 code implementation • 7 Jul 2022 • Rebecca Roelofs, Liting Sun, Ben Caine, Khaled S. Refaat, Ben Sapp, Scott Ettinger, Wei Chai

Finally, we release the causal agent labels (at https://github.com/google-research/causal-agents) as an additional attribute to WOMD and the robustness benchmarks to aid the community in building more reliable and safe deep-learning models for motion forecasting.

Attribute · Autonomous Vehicles · +1

Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving

no code implementations • 14 Oct 2022 • Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov

Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories.

Autonomous Driving · Trajectory Prediction

WOMD-LiDAR: Raw Sensor Dataset Benchmark for Motion Forecasting

no code implementations • 7 Apr 2023 • Kan Chen, Runzhou Ge, Hang Qiu, Rami Al-Rfou, Charles R. Qi, Xuanyu Zhou, Zoey Yang, Scott Ettinger, Pei Sun, Zhaoqi Leng, Mustafa Baniodeh, Ivan Bogun, Weiyue Wang, Mingxing Tan, Dragomir Anguelov

To study the effect of these modular approaches, design new paradigms that mitigate these limitations, and accelerate the development of end-to-end motion forecasting models, we augment the Waymo Open Motion Dataset (WOMD) with large-scale, high-quality, diverse LiDAR data for the motion forecasting task.

Motion Forecasting

Unsupervised 3D Perception with 2D Vision-Language Distillation for Autonomous Driving

no code implementations • ICCV 2023 • Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov

Closed-set 3D perception models trained on only a pre-defined set of object categories can be inadequate for safety-critical applications such as autonomous driving, where new object types can be encountered after deployment.

Autonomous Driving · Knowledge Distillation