no code implementations • 9 Jul 2023 • Boxiang Zhang, Zunran Wang, Yonggen Ling, Yuanyuan Guan, Shenghao Zhang, Wenhui Li
Existing methods of cross-modal domain adaptation for 3D semantic segmentation predict results only via 2D-3D complementarity that is obtained by cross-modal feature matching.
no code implementations • 6 Mar 2023 • Kaspar Althoefer, Yonggen Ling, Wanlin Li, Xinyuan Qian, Wang Wei Lee, Peng Qi
The human tactile system is composed of various types of mechanoreceptors, each able to perceive and process distinct information such as force, pressure, texture, etc.
no code implementations • 19 Sep 2022 • Haoxian Zhang, Yonggen Ling
Second, we jointly learn the homography and visibility that link camera-object relative motions with occlusions.
no code implementations • 15 Nov 2020 • Zidong Guo, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang
In domain adaptation, we design an embedding representation with prediction consistency to ensure that the linear relationship between gaze directions in different domains remains consistent in both the gaze space and the embedding space.
no code implementations • 16 Jul 2020 • Ziyang Song, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang
In recognition-based action interaction, robots' responses to human actions are often pre-designed according to recognized categories, which makes them stiff.
no code implementations • 2 Jul 2020 • Ziyang Song, Ziyi Yin, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang
Despite the notable progress made in action recognition tasks, not much work has been done in action recognition specifically for human-robot interaction.
1 code implementation • 25 Oct 2019 • Yajing Chen, Fanzi Wu, Zeyu Wang, Yibing Song, Yonggen Ling, Linchao Bao
The displacement map and the coarse model are used to render a final detailed face, which again can be compared with the original input image to serve as a photometric loss for the second stage.
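The photometric loss mentioned above compares the rendered detailed face against the original input image pixel by pixel. As a minimal sketch (not the paper's implementation, which operates on differentiably rendered faces), such a loss can be written as a masked mean squared difference:

```python
import numpy as np

def photometric_loss(rendered, target, mask=None):
    """Mean per-pixel squared difference between a rendered image and the
    input image; `mask` (hypothetical here) restricts the loss to face pixels."""
    diff = (rendered.astype(np.float64) - target.astype(np.float64)) ** 2
    if mask is not None:
        diff = diff[mask]
    return diff.mean()

# Example: identical images give zero loss.
img = np.random.rand(8, 8, 3)
print(photometric_loss(img, img))  # 0.0
```

In the two-stage setup described in the excerpt, minimizing this quantity drives the displacement map to explain the fine detail the coarse model misses.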
1 code implementation • CVPR 2019 • Fanzi Wu, Linchao Bao, Yajing Chen, Yonggen Ling, Yibing Song, Songnan Li, King Ngi Ngan, Wei Liu
The main ingredient of the view alignment loss is a differentiable dense optical flow estimator that backpropagates the alignment errors between an input view and a synthetic rendering generated from another input view; the rendering is projected into the target view through the 3D shape being inferred.
no code implementations • 26 Mar 2019 • Yonggen Ling, Kaixuan Wang, Shaojie Shen
This paper presents a probabilistic approach for online dense reconstruction using a single monocular camera moving through the environment.
no code implementations • ECCV 2018 • Yonggen Ling, Linchao Bao, Zequn Jie, Fengming Zhu, Ziyang Li, Shanmin Tang, Yongsheng Liu, Wei Liu, Tong Zhang
Our approach is able to handle the rolling-shutter effects and imperfect sensor synchronization in a unified way.
no code implementations • CVPR 2018 • Zequn Jie, Pengfei Wang, Yonggen Ling, Bo Zhao, Yunchao Wei, Jiashi Feng, Wei Liu
Left-right consistency check is an effective way to enhance the disparity estimation by referring to the information from the opposite view.
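The left-right consistency check described here can be illustrated with a minimal NumPy sketch (an assumed generic formulation, not the paper's network-based version): each left-view disparity is compared against the disparity found at its corresponding right-view pixel, and pixels with large disagreement are flagged as occluded or unreliable.

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, threshold=1.0):
    """Return a boolean mask that is True where the left and right
    disparity maps agree within `threshold` pixels.

    A left pixel (y, x) with disparity d corresponds to right pixel
    (y, x - d); the check compares d against the right map's value there.
    """
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Corresponding column in the right view for each left pixel.
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    diff = np.abs(disp_left - disp_right[ys, x_right])
    return diff <= threshold

# Example: a consistent constant-disparity pair passes everywhere.
dl = np.full((4, 8), 2.0)
dr = np.full((4, 8), 2.0)
print(left_right_consistency(dl, dr).all())  # True
```

Pixels failing the check are typically treated as occlusions and excluded from (or down-weighted in) the matching loss, which is what makes the check useful as a supervision signal.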