no code implementations • 19 May 2018 • Qi Qian, Shenghuo Zhu, Jiasheng Tang, Rong Jin, Baigui Sun, Hao Li
Hence, we propose to learn the model and the adversarial distribution simultaneously with a stochastic algorithm for efficiency.
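The abstract does not specify the update rule, but learning a model jointly with an adversarial distribution is commonly done as a stochastic min-max game: gradient descent on the model interleaved with exponentiated-gradient ascent on a distribution over examples, which keeps the distribution on the probability simplex. The sketch below is an illustrative assumption, not the paper's algorithm; the toy regression task and all step sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: linear regression over n examples.
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

w = np.zeros(d)            # model parameters (minimized)
p = np.full(n, 1.0 / n)    # adversarial distribution over examples (maximized)
eta_w, eta_p = 0.05, 0.5

for _ in range(200):
    residual = X @ w - y
    losses = 0.5 * residual ** 2
    # Descent step on the model under the current adversarial weighting p.
    grad_w = X.T @ (p * residual)
    w -= eta_w * grad_w
    # Exponentiated-gradient ascent shifts mass toward high-loss examples
    # while keeping p nonnegative and summing to one.
    p *= np.exp(eta_p * losses)
    p /= p.sum()
```

Both players update from the same stochastic loss evaluations, which is what makes the simultaneous scheme cheaper than solving the inner maximization to completion at every model step.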
no code implementations • CVPR 2018 • Qi Qian, Jiasheng Tang, Hao Li, Shenghuo Zhu, Rong Jin
Furthermore, we show that although the metric is learned from latent examples only, it preserves the large-margin property even for the original data.
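The entry does not describe the training objective, but large-margin metric learning is classically formulated as a Mahalanobis metric trained with a triplet hinge loss, with a projection back onto the positive semi-definite cone after each update. The following is a minimal sketch of that standard formulation, assuming a single hand-picked triplet; it is not the paper's method.

```python
import numpy as np

M = np.eye(2)  # Mahalanobis metric, kept positive semi-definite

def mahalanobis(M, a, b):
    diff = a - b
    return diff @ M @ diff

# Hypothetical triplet: anchor, same-class positive, other-class negative.
anchor = np.array([1.0, 0.0])
positive = np.array([1.0, 0.4])
negative = np.array([0.5, 0.0])

margin, lr = 1.0, 0.5
for _ in range(100):
    viol = margin + mahalanobis(M, anchor, positive) - mahalanobis(M, anchor, negative)
    if viol <= 0:
        break  # large-margin constraint satisfied
    dp = np.outer(anchor - positive, anchor - positive)
    dn = np.outer(anchor - negative, anchor - negative)
    M -= lr * (dp - dn)
    # Project back to the PSD cone so M remains a valid metric.
    vals, vecs = np.linalg.eigh(M)
    M = (vecs * np.clip(vals, 0, None)) @ vecs.T
```

After convergence the metric separates the negative from the positive by at least the margin, which is the property the entry says carries over from latent examples to the original data.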
1 code implementation • 20 Jan 2021 • Fei Du, Bo Xu, Jiasheng Tang, Yuqi Zhang, Fan Wang, Hao Li
We extend the classical tracking-by-detection paradigm to this tracking-any-object task.
Ranked #7 on Multi-Object Tracking on TAO (using extra training data)
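In the tracking-by-detection paradigm the entry extends, each frame's detections are associated with existing tracks, typically by overlap between predicted and detected boxes. The sketch below shows a greedy IoU association step under that classical scheme; the function names, the threshold, and the greedy (rather than optimal) matching are illustrative choices, not details from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedy IoU matching: returns (track_idx, det_idx) pairs, best first."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold or ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti); used_d.add(di)
    return matches
```

For example, `associate([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 11, 11), (100, 100, 110, 110)])` matches only the overlapping pair; unmatched detections would spawn new tracks and unmatched tracks would age out.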
1 code implementation • 14 May 2021 • Chong Liu, Yuqi Zhang, Hao Luo, Jiasheng Tang, Weihua Chen, Xianzhe Xu, Fan Wang, Hao Li, Yi-Dong Shen
Multi-Target Multi-Camera Tracking has a wide range of applications and is the basis for many advanced inferences and predictions.
1 code implementation • CVPR 2023 • Jiefeng Li, Siyuan Bian, Qi Liu, Jiasheng Tang, Fan Wang, Cewu Lu
In this work, we present NIKI (Neural Inverse Kinematics with Invertible Neural Network), which models bi-directional errors to improve the robustness to occlusions and obtain pixel-aligned accuracy.
Ranked #1 on 3D Human Pose Estimation on AGORA
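NIKI's invertible neural network builds on coupling architectures, where each block is invertible by construction regardless of the internal sub-network. The additive coupling block below illustrates that principle only; the dimensions, the sub-network, and the class name are invented for the example and do not reflect NIKI's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class AdditiveCoupling:
    """One additive coupling block: exactly invertible by construction."""
    def __init__(self, dim, hidden=16):
        self.half = dim // 2
        self.W1 = rng.normal(scale=0.1, size=(self.half, hidden))
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim - self.half))

    def _shift(self, x1):
        # Arbitrary (non-invertible) sub-network; the block's
        # invertibility does not depend on it.
        return np.tanh(x1 @ self.W1) @ self.W2

    def forward(self, x):
        x1, x2 = x[: self.half], x[self.half:]
        return np.concatenate([x1, x2 + self._shift(x1)])

    def inverse(self, y):
        y1, y2 = y[: self.half], y[self.half:]
        return np.concatenate([y1, y2 - self._shift(y1)])

block = AdditiveCoupling(6)
x = rng.normal(size=6)
x_rec = block.inverse(block.forward(x))
```

Because the first half passes through unchanged and the second half receives a shift computed only from the first, the inverse subtracts the same shift and recovers the input exactly, which is what lets such networks model errors in both directions.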
no code implementations • 15 Feb 2024 • Zhiwei Tang, Jiasheng Tang, Hao Luo, Fan Wang, Tsung-Hui Chang
Our experiments demonstrate that ParaTAA can reduce the inference steps required by common sequential sampling algorithms such as DDIM and DDPM by a factor of 4-14.
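For context on what is being accelerated: sequential DDIM sampling iterates a deterministic update in which each step depends on the previous one, and it is this step-to-step dependency that parallel solvers like ParaTAA relax. The sketch below shows the standard deterministic DDIM update (eta = 0) on a toy noise schedule, not ParaTAA itself; the schedule, stride, and the zero noise predictor standing in for a trained network are all assumptions for illustration.

```python
import numpy as np

# Toy noise schedule: cumulative alpha_bar decreasing over T steps.
T = 100
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

def ddim_step(x_t, eps_pred, t, t_prev):
    """One deterministic DDIM update from step t to t_prev (eta = 0)."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps_pred) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps_pred

x = np.random.default_rng(0).normal(size=4)
x_init = x.copy()
steps = list(range(T - 1, 0, -10))  # strided subset of timesteps
for t, t_prev in zip(steps[:-1], steps[1:]):
    eps = np.zeros_like(x)  # stand-in for a trained noise predictor
    x = ddim_step(x, eps, t, t_prev)
```

Each iteration consumes the output of the previous one, so the sequential cost scales with the number of steps visited; reducing that count by 4-14x is the speedup the entry reports.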
no code implementations • 2 Mar 2024 • Siyuan Bian, Jiefeng Li, Jiasheng Tang, Cewu Lu
Accurate human shape recovery from a monocular RGB image is a challenging task because humans vary widely in shape and size and wear diverse clothing.
1 code implementation • 18 Mar 2024 • Wangbo Zhao, Jiasheng Tang, Yizeng Han, Yibing Song, Kai Wang, Gao Huang, Fan Wang, Yang You
Existing parameter-efficient fine-tuning (PEFT) methods have achieved significant success in adapting vision transformers (ViTs) by improving parameter efficiency.
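As background on the PEFT family this entry refers to: a representative method is LoRA, which freezes the pretrained weight and trains only a low-rank update. The sketch below illustrates that general idea in plain NumPy; it is not this paper's method, and the shapes, rank, and scaling are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-init: adapter starts as a no-op
alpha = 4.0

def lora_forward(x):
    """Frozen base projection plus a trainable low-rank update."""
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.normal(size=(3, d_in))
base = x @ W.T  # output of the frozen layer alone
```

Only A and B (2 * r * d values) receive gradients instead of the full d_out * d_in matrix, which is where the parameter efficiency comes from; because B starts at zero, fine-tuning begins exactly at the pretrained model's behavior.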