Search Results for author: Jingwei Ji

Found 17 papers, 2 papers with code

EMMA: End-to-End Multimodal Model for Autonomous Driving

no code implementations30 Oct 2024 Jyh-Jing Hwang, Runsheng Xu, Hubert Lin, Wei-Chih Hung, Jingwei Ji, Kristy Choi, Di Huang, Tong He, Paul Covington, Benjamin Sapp, Yin Zhou, James Guo, Dragomir Anguelov, Mingxing Tan

We show that co-training EMMA with planner trajectories, object detection, and road graph tasks yields improvements across all three domains, highlighting EMMA's potential as a generalist model for autonomous driving applications.

3D Object Detection Autonomous Driving +4
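One way to read the co-training claim: a single shared backbone with light task heads, where every task's loss updates the shared weights. The sketch below is illustrative only; the heads, output shapes, and loss weights are assumptions, not EMMA's actual architecture.

```python
# Illustrative multi-task co-training over a shared backbone; the heads,
# output shapes, and loss weights are assumptions, not EMMA's code.
import torch.nn as nn

TASK_WEIGHTS = {"planning": 1.0, "detection": 0.5, "road_graph": 0.5}  # assumed

class MultiTaskDriver(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 256):
        super().__init__()
        self.backbone = backbone  # shared multimodal encoder
        self.heads = nn.ModuleDict({
            "planning": nn.Linear(feat_dim, 10 * 2),    # e.g. 10 future (x, y) waypoints
            "detection": nn.Linear(feat_dim, 7),        # e.g. a 3D box (x, y, z, l, w, h, yaw)
            "road_graph": nn.Linear(feat_dim, 20 * 2),  # e.g. a 20-point lane polyline
        })

    def forward(self, inputs):
        feats = self.backbone(inputs)
        return {task: head(feats) for task, head in self.heads.items()}

def co_training_loss(outputs, targets, loss_fn=nn.SmoothL1Loss()):
    # Weighted sum of per-task losses: co-training means every task's
    # gradient flows back into the shared backbone.
    return sum(w * loss_fn(outputs[t], targets[t]) for t, w in TASK_WEIGHTS.items())
```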

Multi-Task Dynamic Pricing in Credit Market with Contextual Information

no code implementations18 Oct 2024 Adel Javanmard, Jingwei Ji, Renyuan Xu

We show that our policy achieves lower regret than both the policy that treats each security individually and the policy that treats all securities as identical.
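The two baselines bracket a familiar tradeoff. As a hedged illustration only (the paper's actual rates and assumptions may differ), with N securities over horizon T:

```latex
% Illustrative regret scalings, not the paper's theorem.
% Learning each of the N securities separately ignores shared structure:
\[ R_{\mathrm{individual}}(T) = O\big(N\sqrt{T}\big) \]
% Pooling all securities shares data but pays a bias term for
% cross-security heterogeneity \Delta:
\[ R_{\mathrm{pooled}}(T) = O\big(\sqrt{NT} + \Delta\, T\big) \]
```

A multi-task policy can share statistical strength across similar securities without paying the full pooling bias, which is how it can improve on both extremes.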

MoST: Multi-modality Scene Tokenization for Motion Prediction

no code implementations CVPR 2024 Norman Mu, Jingwei Ji, Zhenpei Yang, Nate Harada, Haotian Tang, Kan Chen, Charles R. Qi, Runzhou Ge, Kratarth Goel, Zoey Yang, Scott Ettinger, Rami Al-Rfou, Dragomir Anguelov, Yin Zhou

This symbolic representation is a high-level abstraction of the real world, which may render the motion prediction model vulnerable to perception errors (e.g., failures in detecting open-vocabulary obstacles) while missing salient information from the scene context (e.g., poor road conditions).

General Knowledge Motion Prediction

Unsupervised 3D Perception with 2D Vision-Language Distillation for Autonomous Driving

no code implementations ICCV 2023 Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov

Closed-set 3D perception models trained on only a pre-defined set of object categories can be inadequate for safety-critical applications such as autonomous driving, where new object types can be encountered after deployment.

Autonomous Driving Knowledge Distillation
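The title names the mechanism: distill features from a frozen 2D vision-language model into a trainable 3D point encoder via camera projection. A minimal sketch of that general recipe (the projection interface, feature shapes, and loss are assumptions, not the paper's exact design):

```python
# Sketch of 2D->3D vision-language feature distillation (illustrative only).
# Assumes a frozen 2D VLM image encoder, a trainable 3D point encoder, and
# known camera calibration to project LiDAR points into the image.
import torch.nn.functional as F

def distillation_loss(point_feats, image_feats, pixel_uv):
    """
    point_feats: (N, D) features from the 3D backbone, one per LiDAR point.
    image_feats: (D, H, W) frozen 2D vision-language features for the image.
    pixel_uv:    (N, 2) image coordinates of each projected LiDAR point,
                 normalized to [-1, 1] for grid_sample.
    """
    # Sample the 2D teacher feature at each point's projected pixel.
    grid = pixel_uv.view(1, -1, 1, 2)                      # (1, N, 1, 2)
    teacher = F.grid_sample(image_feats.unsqueeze(0), grid,
                            align_corners=False)           # (1, D, N, 1)
    teacher = teacher.squeeze(0).squeeze(-1).t()           # (N, D)
    # Pull the 3D student features toward the 2D teacher features.
    return 1.0 - F.cosine_similarity(point_feats, teacher, dim=-1).mean()
```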

3D Human Keypoints Estimation From Point Clouds in the Wild Without Human Labels

no code implementations CVPR 2023 Zhenzhen Weng, Alexander S. Gorban, Jingwei Ji, Mahyar Najibi, Yin Zhou, Dragomir Anguelov

We show that by training on a large training set from the Waymo Open Dataset without any human-annotated keypoints, we achieve reasonable performance compared to the fully supervised approach.

Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving

no code implementations14 Oct 2022 Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov

Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories.

Autonomous Driving Trajectory Prediction

Risk-Aware Linear Bandits: Theory and Applications in Smart Order Routing

no code implementations4 Aug 2022 Jingwei Ji, Renyuan Xu, Ruihao Zhu

Then, we rigorously analyze their near-optimal regret upper bounds to show that, by leveraging the linear structure, our algorithms can dramatically reduce the regret when compared to existing methods.

Decision Making
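For context on how the linear structure is exploited, here is a generic LinUCB-style sketch with a simple risk penalty attached (an illustrative construction only; the quadratic penalty and the risk_lambda weight are assumptions, not the paper's algorithm):

```python
# Illustrative risk-aware linear bandit (generic LinUCB-style sketch, NOT
# the algorithm from the paper). Arm features x live in R^d; the expected
# reward is modeled as <theta, x>, and a penalty discourages arms whose
# reward estimate is still highly uncertain.
import numpy as np

class RiskAwareLinUCB:
    def __init__(self, d, alpha=1.0, risk_lambda=0.5):
        self.A = np.eye(d)              # regularized design matrix
        self.b = np.zeros(d)
        self.alpha = alpha              # exploration bonus scale
        self.risk_lambda = risk_lambda  # assumed risk-aversion weight

    def select(self, arms):
        """arms: (K, d) feature matrix; returns the index of the chosen arm."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b          # ridge estimate of the reward vector
        scores = []
        for x in arms:
            mean = theta @ x
            width = np.sqrt(x @ A_inv @ x)  # confidence width at this arm
            # Optimism for exploration, minus a simple penalty for risk.
            scores.append(mean + self.alpha * width - self.risk_lambda * width**2)
        return int(np.argmax(scores))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```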

Action Genome: Actions As Compositions of Spatio-Temporal Scene Graphs

no code implementations CVPR 2020 Jingwei Ji, Ranjay Krishna, Li Fei-Fei, Juan Carlos Niebles

Next, by decomposing and learning the temporal changes in visual relationships that result in an action, we demonstrate the utility of a hierarchical event decomposition by enabling few-shot action recognition, achieving 42.7% mAP using as few as 10 examples.

Few-Shot Action Recognition +1

Action Genome: Actions as Compositions of Spatio-temporal Scene Graphs

2 code implementations15 Dec 2019 Jingwei Ji, Ranjay Krishna, Li Fei-Fei, Juan Carlos Niebles

Next, by decomposing and learning the temporal changes in visual relationships that result in an action, we demonstrate the utility of a hierarchical event decomposition by enabling few-shot action recognition, achieving 42.7% mAP using as few as 10 examples.

Few-Shot Action Recognition +1
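The underlying representation is an action decomposed into a sequence of frame-level scene graphs. A minimal sketch of that structure (field names are illustrative, not the dataset's actual annotation schema):

```python
# Minimal sketch of the spatio-temporal scene graph representation that
# Action Genome decomposes actions into. Field names are illustrative and
# do not match the dataset's actual schema.
from dataclasses import dataclass, field

@dataclass
class Relationship:
    subject: str      # e.g. "person"
    predicate: str    # e.g. "holding", "looking_at"
    obj: str          # e.g. "cup"

@dataclass
class FrameSceneGraph:
    timestamp: float
    relationships: list[Relationship] = field(default_factory=list)

@dataclass
class Action:
    label: str                          # e.g. "drinking from cup"
    graphs: list[FrameSceneGraph] = field(default_factory=list)

    def predicate_sequence(self):
        """The temporal evolution of relationships that composes the action."""
        return [{(r.subject, r.predicate, r.obj) for r in g.relationships}
                for g in self.graphs]
```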

Learning Temporal Action Proposals With Fewer Labels

no code implementations ICCV 2019 Jingwei Ji, Kaidi Cao, Juan Carlos Niebles

Most current methods for training action proposal modules rely on fully supervised approaches that require large amounts of annotated temporal action intervals in long video sequences.

Action Detection Semi-Supervised Action Detection

DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

no code implementations11 Aug 2017 Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak, Christopher Choy, Silvio Savarese

We evaluate our approach on the ShapeNet dataset and show that (a) the Free-Form Deformation layer is a powerful new building block for deep learning models that manipulate 3D data; (b) DeformNet uses this FFD layer combined with shape retrieval for smooth, detail-preserving 3D reconstruction of qualitatively plausible point clouds with respect to a single query image; and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins.

3D Reconstruction 3D Shape Reconstruction +1
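Point (a) centers on the Free-Form Deformation layer. A hedged sketch of FFD in its simplest trilinear form, displacing points by interpolating offsets from a control-point lattice (DeformNet's actual layer may use a higher-order basis and differs in details):

```python
# Sketch of a Free-Form Deformation (FFD) step: displace points by
# trilinearly interpolating offsets from a lattice of control points.
# A standard FFD formulation, not DeformNet's exact layer.
import numpy as np

def ffd(points, control_offsets):
    """
    points:          (N, 3) coordinates, assumed normalized to [0, 1]^3.
    control_offsets: (G, G, G, 3) learned displacement of each control point
                     on a G x G x G lattice over the unit cube.
    Returns deformed points of shape (N, 3).
    """
    G = control_offsets.shape[0]
    # Continuous lattice coordinates of each point.
    u = np.clip(points, 0.0, 1.0) * (G - 1)
    i0 = np.floor(u).astype(int)
    i0 = np.minimum(i0, G - 2)      # keep the +1 neighbor in range
    t = u - i0                      # fractional position in the cell, (N, 3)

    disp = np.zeros_like(points)
    # Accumulate contributions from the 8 corners of each point's cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[:, 0] if dx else 1 - t[:, 0]) *
                     (t[:, 1] if dy else 1 - t[:, 1]) *
                     (t[:, 2] if dz else 1 - t[:, 2]))
                corner = control_offsets[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
                disp += w[:, None] * corner
    return points + disp
```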
