Search Results for author: Jingwei Ji

Found 14 papers, 2 papers with code

Unsupervised 3D Perception with 2D Vision-Language Distillation for Autonomous Driving

no code implementations • ICCV 2023 • Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov

Closed-set 3D perception models trained on only a pre-defined set of object categories can be inadequate for safety-critical applications such as autonomous driving, where new object types can be encountered after deployment.

Autonomous Driving • Knowledge Distillation

3D Human Keypoints Estimation From Point Clouds in the Wild Without Human Labels

no code implementations • CVPR 2023 • Zhenzhen Weng, Alexander S. Gorban, Jingwei Ji, Mahyar Najibi, Yin Zhou, Dragomir Anguelov

We show that by training on a large training set from the Waymo Open Dataset without any human-annotated keypoints, we achieve reasonable performance compared to the fully supervised approach.

Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving

no code implementations • 14 Oct 2022 • Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov

Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories.

Autonomous Driving • Trajectory Prediction

Risk-Aware Linear Bandits: Theory and Applications in Smart Order Routing

no code implementations • 4 Aug 2022 • Jingwei Ji, Renyuan Xu, Ruihao Zhu

Then, we rigorously analyze their near-optimal regret upper bounds to show that, by leveraging the linear structure, our algorithms can dramatically reduce regret compared to existing methods.

Decision Making

Action Genome: Actions As Compositions of Spatio-Temporal Scene Graphs

no code implementations • CVPR 2020 • Jingwei Ji, Ranjay Krishna, Li Fei-Fei, Juan Carlos Niebles

Next, by decomposing and learning the temporal changes in visual relationships that result in an action, we demonstrate the utility of a hierarchical event decomposition by enabling few-shot action recognition, achieving 42.7% mAP using as few as 10 examples.

Few-Shot Action Recognition +1

Action Genome: Actions as Composition of Spatio-temporal Scene Graphs

1 code implementation • 15 Dec 2019 • Jingwei Ji, Ranjay Krishna, Li Fei-Fei, Juan Carlos Niebles

Next, by decomposing and learning the temporal changes in visual relationships that result in an action, we demonstrate the utility of a hierarchical event decomposition by enabling few-shot action recognition, achieving 42.7% mAP using as few as 10 examples.

Few-Shot Action Recognition +1

Learning Temporal Action Proposals With Fewer Labels

no code implementations • ICCV 2019 • Jingwei Ji, Kaidi Cao, Juan Carlos Niebles

Most current methods for training action proposal modules rely on fully supervised approaches that require large amounts of annotated temporal action intervals in long video sequences.

Action Detection • Semi-Supervised Action Detection

DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

no code implementations • 11 Aug 2017 • Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak, Christopher Choy, Silvio Savarese

We evaluate our approach on the ShapeNet dataset and show that: (a) the Free-Form Deformation layer is a powerful new building block for deep learning models that manipulate 3D data; (b) DeformNet uses this FFD layer combined with shape retrieval for smooth and detail-preserving 3D reconstruction of qualitatively plausible point clouds with respect to a single query image; and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins.

3D Reconstruction • 3D Shape Reconstruction +1
