no code implementations • 16 Oct 2024 • Zhimin Chen, Liang Yang, Yingwei Li, Longlong Jing, Bing Li
Foundation models have significantly enhanced 2D task performance, and recent works such as Bridge3D have successfully applied these models to improve 3D scene understanding through knowledge distillation, marking considerable progress.
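For illustration only, a minimal sketch of the kind of 2D-to-3D knowledge distillation described above, assuming precomputed point-to-pixel correspondences; the function and tensor names are hypothetical, not taken from Bridge3D or this paper:

```python
import torch.nn.functional as F

def distill_2d_to_3d(feats_3d, feats_2d, point_to_pixel):
    """Align 3D student features with frozen 2D foundation-model features
    at corresponding pixels (hypothetical interface).
    feats_3d: (N, D) per-point features; feats_2d: (P, D) per-pixel
    features; point_to_pixel: (N,) index of each point's pixel."""
    teacher = F.normalize(feats_2d[point_to_pixel].detach(), dim=-1)  # no gradient to teacher
    student = F.normalize(feats_3d, dim=-1)
    return (1 - (student * teacher).sum(-1)).mean()  # 1 - cosine similarity
```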
no code implementations • 30 Apr 2024 • Longlong Jing, Ruichi Yu, Xu Chen, Zhengli Zhao, Shiwei Sheng, Colin Graber, Qi Chen, Qinru Li, Shangxuan Wu, Han Deng, Sangjin Lee, Chris Sweeney, Qiurui He, Wei-Chih Hung, Tong He, Xingyi Zhou, Farshid Moussavi, Zijian Guo, Yin Zhou, Mingxing Tan, Weilong Yang, CongCong Li
In this paper, we propose STT, a Stateful Tracking model built with Transformers, which consistently tracks objects in a scene while also predicting their states accurately.
no code implementations • 4 Jan 2024 • Zihao Xiao, Longlong Jing, Shangxuan Wu, Alex Zihao Zhu, Jingwei Ji, Chiyu Max Jiang, Wei-Chih Hung, Thomas Funkhouser, Weicheng Kuo, Anelia Angelova, Yin Zhou, Shiwei Sheng
3D panoptic segmentation is a challenging perception task, especially in autonomous driving.
1 code implementation • 17 Nov 2023 • Zhimin Chen, Yingwei Li, Longlong Jing, Liang Yang, Bing Li
However, a notable limitation of these approaches is that they do not fully utilize the multi-view attributes inherent in 3D point clouds, which are crucial for a deeper understanding of 3D structures.
1 code implementation • NeurIPS 2023 • Zhimin Chen, Longlong Jing, Yingwei Li, Bing Li
Foundation models have achieved remarkable results in 2D and language tasks like image segmentation, object detection, and visual-language understanding.
no code implementations • 7 Dec 2022 • Siwei Yang, Longlong Jing, Junfei Xiao, Hang Zhao, Alan Yuille, Yingwei Li
Through systematic analysis, we found that the commonly used pairwise affinity loss has two limitations: (1) it works well with color affinity but leads to inferior performance with other modalities such as depth gradient, and (2) rather than preventing trivial predictions as intended, the affinity loss actually accelerates them, because the loss term is symmetric.
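As a rough sketch of the symmetry issue, here is a common form of the pairwise affinity loss (BoxInst-style); the exact formulation in the paper may differ:

```python
import torch

def pairwise_affinity_loss(prob_pairs, affinity):
    """prob_pairs: (E, 2) foreground probabilities of the two pixels on
    each edge; affinity: (E,) 1 where the pair is similar (e.g. in color).
    P(same label) = p_i * p_j + (1 - p_i) * (1 - p_j)."""
    p_i, p_j = prob_pairs[:, 0], prob_pairs[:, 1]
    p_same = p_i * p_j + (1 - p_i) * (1 - p_j)
    # Symmetry at work: an all-background prediction (p_i = p_j = 0)
    # also drives p_same -> 1 and the loss -> 0, i.e. the trivial
    # solution the analysis above says the loss fails to prevent.
    return -(affinity * torch.log(p_same.clamp_min(1e-6))).mean()
```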
2 code implementations • 18 Oct 2022 • Zhimin Chen, Longlong Jing, Liang Yang, Yingwei Li, Bing Li
First, a dynamic thresholding strategy is proposed to utilize more unlabeled data, especially for classes with a low learning status.
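A minimal sketch of what such per-class dynamic thresholding can look like (FlexMatch-style normalization; variable names are illustrative, not the paper's):

```python
import torch

def dynamic_thresholds(base_tau, class_learning_status):
    """Scale the confidence threshold per class so that classes with a
    low learning status (e.g. few confident pseudo-labels so far) get a
    lower bar and contribute more unlabeled samples."""
    status = class_learning_status.float()
    return base_tau * status / status.max().clamp_min(1.0)

# Usage: keep a pseudo-label only if its confidence beats the threshold
# of its predicted class.
# keep = confidence > dynamic_thresholds(0.95, counts)[predicted_class]
```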
no code implementations • ICLR 2022 • Yingwei Li, Tiffany Chen, Maya Kabkab, Ruichi Yu, Longlong Jing, Yurong You, Hang Zhao
An edge in the graph encodes the relative distance between a pair of target and reference objects.
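A sketch of one way such edge features could be built, encoding each target-reference offset and its Euclidean norm (all names are assumptions, not the paper's API):

```python
import torch

def edge_features(target_xyz, reference_xyz):
    """target_xyz: (T, 3) target-object centers; reference_xyz: (R, 3)
    reference-object centers. Returns (T, R, 4) per-edge features."""
    offset = target_xyz[:, None, :] - reference_xyz[None, :, :]  # (T, R, 3) offsets
    dist = offset.norm(dim=-1, keepdim=True)                     # (T, R, 1) distances
    return torch.cat([offset, dist], dim=-1)
```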
no code implementations • 8 Jun 2022 • Longlong Jing, Ruichi Yu, Henrik Kretzschmar, Kang Li, Charles R. Qi, Hang Zhao, Alper Ayvaci, Xu Chen, Dillon Cower, Yingwei Li, Yurong You, Han Deng, CongCong Li, Dragomir Anguelov
Monocular image-based 3D perception has become an active research area in recent years owing to its applications in autonomous driving.
1 code implementation • 29 Mar 2022 • Ziyue Feng, Liang Yang, Longlong Jing, HaiYan Wang, YingLi Tian, Bing Li
Conventional self-supervised monocular depth prediction methods are based on a static environment assumption, which leads to accuracy degradation in dynamic scenes due to the mismatch and occlusion problems introduced by object motions.
1 code implementation • CVPR 2022 • Junfei Xiao, Longlong Jing, Lin Zhang, Ju He, Qi She, Zongwei Zhou, Alan Yuille, Yingwei Li
Our method achieves state-of-the-art performance on three video action recognition benchmarks (i.e., Kinetics-400, UCF-101, and HMDB-51) under several typical semi-supervised settings (i.e., different ratios of labeled data).
1 code implementation • 22 Oct 2021 • Zhimin Chen, Longlong Jing, Yang Liang, YingLi Tian, Bing Li
This paper explores how the coherence of different modalities of 3D data (e.g., point cloud, image, and mesh) can be used to improve data efficiency for both 3D classification and retrieval tasks.
no code implementations • 29 Sep 2021 • Longlong Jing, Zhimin Chen, Bing Li, YingLi Tian
We propose a novel self-supervised model that learns two distinct types of features: modality-invariant features and modality-specific features.
2 code implementations • 20 Sep 2021 • Ziyue Feng, Longlong Jing, Peng Yin, YingLi Tian, Bing Li
Unlike existing methods, which mainly use sparse LiDAR in time-consuming iterative post-processing, our model fuses monocular image features and sparse LiDAR features to predict initial depth maps.
Ranked #1 on Depth Completion on KITTI
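A minimal sketch of the single-pass fusion the entry above describes: encode the RGB image and the sparse depth separately, concatenate, and regress an initial dense depth map, with no iterative post-processing. The architecture shown is a stand-in, not the paper's network:

```python
import torch
import torch.nn as nn

class ImageLidarFusion(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.rgb_enc = nn.Conv2d(3, c, 3, padding=1)
        self.lidar_enc = nn.Conv2d(1, c, 3, padding=1)  # sparse depth as a 1-channel image
        self.head = nn.Sequential(nn.ReLU(), nn.Conv2d(2 * c, 1, 3, padding=1))

    def forward(self, rgb, sparse_depth):
        fused = torch.cat([self.rgb_enc(rgb), self.lidar_enc(sparse_depth)], dim=1)
        return self.head(fused)  # initial dense depth prediction
```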
no code implementations • CVPR 2021 • Longlong Jing, Elahe Vahdani, Jiaxing Tan, YingLi Tian
Cross-modal retrieval aims to learn discriminative and modal-invariant features for data from different modalities.
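One common recipe for features that are both discriminative and modal-invariant is a shared cross-modal center loss: every modality's feature for a class is pulled toward one learnable center for that class. A simplified sketch, not necessarily the paper's exact objective:

```python
import torch

def cross_modal_center_loss(feats_a, feats_b, labels, centers):
    """feats_a, feats_b: (B, D) features of the same samples from two
    modalities; centers: (num_classes, D) learnable class centers shared
    across modalities (e.g. an nn.Parameter)."""
    c = centers[labels]  # (B, D) center of each sample's class
    return ((feats_a - c).pow(2).sum(-1) + (feats_b - c).pow(2).sum(-1)).mean()
```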
no code implementations • 28 May 2020 • Longlong Jing, Yu-cheng Chen, Ling Zhang, Mingyi He, YingLi Tian
By exploring the inherent multi-modality attributes of 3D objects, this paper proposes to jointly learn modal-invariant and view-invariant features from different modalities, including image, point cloud, and mesh, using heterogeneous networks for 3D data.
no code implementations • LREC 2020 • Saad Hassan, Larwan Berke, Elahe Vahdani, Longlong Jing, YingLi Tian, Matt Huenerfauth
We have collected a new dataset consisting of color and depth videos, captured with a Kinect v2 sensor, of fluent American Sign Language (ASL) signers performing sequences of 100 ASL signs.
no code implementations • 1 May 2020 • Elahe Vahdani, Longlong Jing, YingLi Tian, Matt Huenerfauth
Our system is able to recognize grammatical elements on ASL-HW-RGBD from manual gestures, facial expressions, and head movements and successfully detect 8 ASL grammatical mistakes.
no code implementations • 13 Apr 2020 • Longlong Jing, Yu-cheng Chen, Ling Zhang, Mingyi He, YingLi Tian
Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolution neural network.
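A sketch of the two branches just described, using a torchvision ResNet for the rendered views and a hand-rolled k-NN graph convolution for the point cloud (layer sizes and names are illustrative):

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class TwoBranchExtractor(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.cnn = tvm.resnet18(weights=None)
        self.cnn.fc = nn.Linear(512, d)       # per-view 2D embedding
        self.point_mlp = nn.Linear(3, d)
        self.graph_mlp = nn.Linear(2 * d, d)  # combines a point with its neighborhood

    def forward(self, views, points, knn_idx):
        # views: (V, 3, H, W) rendered images; points: (N, 3); knn_idx: (N, K)
        img_feat = self.cnn(views).mean(0)    # view-pooled 2D feature
        h = self.point_mlp(points)            # (N, d) per-point features
        neighbors = h[knn_idx].mean(1)        # aggregate K-nearest neighbors
        h = torch.relu(self.graph_mlp(torch.cat([h, neighbors], dim=-1)))
        return img_feat, h.max(0).values      # global point-cloud feature
```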
no code implementations • 29 Feb 2020 • Longlong Jing, Toufiq Parag, Zhe Wu, YingLi Tian, Hongcheng Wang
To minimize the dependence on a large annotated dataset, our proposed semi-supervised method trains from a small number of labeled examples and exploits two regulatory signals from unlabeled data.
no code implementations • 7 Jun 2019 • Longlong Jing, Elahe Vahdani, Matt Huenerfauth, YingLi Tian
In this paper, we propose a 3D Convolutional Neural Network (3DCNN) based multi-stream framework to recognize American Sign Language (ASL) manual signs (consisting of hand movements and, in some cases, non-manual face movements) in real time from RGB-D videos, by fusing multimodal features including hand gestures, facial expressions, and body poses from multiple channels (RGB, depth, motion, and skeleton joints).
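A minimal sketch of multi-stream fusion under the assumption of simple late fusion by score averaging (the paper's actual fusion strategy may differ):

```python
import torch
import torch.nn as nn

class MultiStreamFusion(nn.Module):
    """One 3D CNN per input channel (e.g. RGB, depth, motion, skeleton),
    fused by averaging their class scores."""
    def __init__(self, streams):
        super().__init__()
        self.streams = nn.ModuleList(streams)  # each maps (B, C, T, H, W) -> (B, num_classes)

    def forward(self, inputs):  # list of per-channel clips, one per stream
        scores = [net(x) for net, x in zip(self.streams, inputs)]
        return torch.stack(scores).mean(0)     # late fusion
```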
no code implementations • 16 Feb 2019 • Longlong Jing, YingLi Tian
This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos.
Self-Supervised Image Classification • Self-Supervised Learning
1 code implementation • 11 Jan 2019 • Jiaxing Tan, Longlong Jing, Yumei Huo, YingLi Tian, Oguz Akin
Lung segmentation in computed tomography (CT) images is an important step in the diagnosis of various lung diseases.
no code implementations • 28 Dec 2018 • Longlong Jing, Yu-cheng Chen, YingLi Tian
The enhanced coarse mask is fed to a fully convolutional neural network to be recursively refined.
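A sketch of what such recursive refinement can look like: the same FCN is applied repeatedly, each pass taking the image plus the current mask and emitting a sharper mask (the loop structure is an assumption):

```python
import torch

def recursive_refine(fcn, image, coarse_mask, steps=3):
    """fcn: a fully convolutional network whose input channels equal
    image channels + 1 (the mask); refines the mask over several passes."""
    mask = coarse_mask
    for _ in range(steps):
        mask = torch.sigmoid(fcn(torch.cat([image, mask], dim=1)))
    return mask
```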
no code implementations • 28 Nov 2018 • Longlong Jing, Xiaodong Yang, Jingen Liu, YingLi Tian
The success of deep neural networks generally requires a vast amount of labeled training data, which is expensive and infeasible at scale, especially for video collections.
Ranked #42 on Self-Supervised Action Recognition on HMDB51
Self-Supervised Action Recognition • Temporal Action Localization +1