no code implementations • ICLR 2019 • Mayank Bansal, Alex Krizhevsky, Abhijit Ogale
Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle.
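Imitation learning here means fitting a driving policy to logged expert demonstrations. As a minimal, purely illustrative sketch (not the paper's model — the features, noise level, and linear policy are all assumptions for demonstration), behavior cloning can be reduced to regressing expert controls from observed features:

```python
import numpy as np

# Minimal behavior-cloning sketch (illustrative only, not the paper's model):
# learn a policy mapping observed features to expert steering commands by
# regressing on logged demonstrations. Feature names are hypothetical.

rng = np.random.default_rng(0)

# Synthetic "expert" demonstrations: features -> steering angle.
true_w = np.array([0.5, -0.2, 0.1])            # hidden expert behavior
features = rng.normal(size=(500, 3))           # e.g. lane offset, heading error, curvature
steering = features @ true_w + 0.01 * rng.normal(size=500)

# Fit the imitation policy by least squares -- the simplest behavior cloner.
w_hat, *_ = np.linalg.lstsq(features, steering, rcond=None)

def policy(obs):
    """Imitation policy: predict a steering command from observed features."""
    return obs @ w_hat

# The cloned policy closely recovers the expert's mapping on this toy data.
print(np.allclose(w_hat, true_w, atol=0.05))
```

Real systems replace the linear map with a deep network and augment the demonstrations to handle distribution shift, but the supervised structure — states in, expert actions out — is the same.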
no code implementations • 29 Jan 2024 • Ketul Shah, Robert Crandall, Jie Xu, Peng Zhou, Marian George, Mayank Bansal, Rama Chellappa
We report state-of-the-art results on the NTU-60, NTU-120 and ETRI datasets, as well as in the transfer learning setting on NUCLA, PKU-MMD-II and ROCOG-v2 datasets, demonstrating the robustness of our approach.
no code implementations • 2 Jun 2022 • Jinkyu Kim, Reza Mahjourian, Scott Ettinger, Mayank Bansal, Brandyn White, Ben Sapp, Dragomir Anguelov
A whole-scene sparse input representation allows StopNet to scale to predicting trajectories for hundreds of road agents with reliable latency.
no code implementations • 8 May 2020 • Jinkyu Kim, Mayank Bansal
Deep neural networks are a key component of behavior prediction and motion generation for self-driving cars.
1 code implementation • 12 Oct 2019 • Yuning Chai, Benjamin Sapp, Mayank Bansal, Dragomir Anguelov
Predicting human behavior is a difficult and crucial task required for motion planning.
Ranked #2 on Trajectory Prediction on PAID
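Anchor-based trajectory predictors of this kind score a fixed set of candidate future trajectories and output a probability per mode. A hedged sketch of that output structure, with synthetic anchors and a stand-in for the learned scoring head (all names and values are assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

K, T = 3, 5  # K candidate modes, T future timesteps of (x, y) waypoints

# Hand-built anchor trajectories: straight, drift left, drift right.
anchors = np.stack([
    np.column_stack([np.arange(1, T + 1), np.zeros(T)]),
    np.column_stack([np.arange(1, T + 1), 0.2 * np.arange(1, T + 1)]),
    np.column_stack([np.arange(1, T + 1), -0.2 * np.arange(1, T + 1)]),
])

logits = np.array([2.0, 0.5, -1.0])    # stand-in for a learned scoring head
probs = softmax(logits)                # per-mode probabilities, sums to 1
best = anchors[int(np.argmax(probs))]  # most likely future trajectory
```

Keeping every mode's probability, rather than a single point estimate, is what lets a downstream planner reason about several plausible futures for each agent.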
4 code implementations • 7 Dec 2018 • Mayank Bansal, Alex Krizhevsky, Abhijit Ogale
Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle.
no code implementations • CVPR 2014 • Mayank Bansal, Kostas Daniilidis
We propose a purely geometric correspondence-free approach to urban geo-localization using 3D point-ray features extracted from the Digital Elevation Map of an urban environment.
no code implementations • 24 May 2014 • Mayank Bansal, Kostas Daniilidis
In this paper, we address the problem of finding correspondences in the absence of unary or pairwise constraints, as arises in problems where unary appearance similarity, such as SIFT matching, is unavailable.
no code implementations • CVPR 2013 • Mayank Bansal, Kostas Daniilidis
We address the problem of matching images with disparate appearance arising from factors such as dramatic illumination changes (day vs. night), age (historic vs. new), and differences in rendering style.