no code implementations • 29 Aug 2023 • Taotao Jing, Lichen Wang, Naji Khosravan, Zhiqiang Wan, Zachary Bessinger, Zhengming Ding, Sing Bing Kang
iBARLE consists of (1) Appearance Variation Generation (AVG) module, which promotes visual appearance domain generalization, (2) Complex Structure Mix-up (CSMix) module, which enhances generalizability w.r.t.
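The CSMix module itself is not reproduced here; as a rough illustration of the mix-up idea it builds on, a standard image-level mix-up can be sketched as follows (function name and the alpha value are assumptions, not the paper's code):

```python
# Minimal sketch of the generic mix-up idea that a structure mix-up module
# builds on (standard image-level mix-up for illustration only).
import torch

def mixup(x_a, x_b, y_a, y_b, alpha=0.4):
    """Blend two samples and their labels with a Beta-sampled weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x_a + (1.0 - lam) * x_b   # mixed input
    y = lam * y_a + (1.0 - lam) * y_b   # mixed (soft) target
    return x, y
```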
no code implementations • CVPR 2022 • Yi Xu, Lichen Wang, Yizhou Wang, Yun Fu
To the best of our knowledge, our work is the first to fill the gap in benchmarks and techniques for practical pedestrian trajectory prediction across different domains.
1 code implementation • 12 Dec 2021 • Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu
Semi-supervised domain adaptation (SSDA) is a challenging problem that requires methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
1 code implementation • NeurIPS 2021 • Can Qin, Handong Zhao, Lichen Wang, Huan Wang, Yulun Zhang, Yun Fu
For slow learning of graph similarity, this paper proposes a novel early-fusion approach by designing a co-attention-based feature fusion network on multilevel GNN features.
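As a loose illustration of co-attention-based fusion between the node features of two graphs (a hypothetical module with assumed shapes, not the paper's implementation):

```python
# Minimal sketch of co-attention feature fusion between two graphs' node features.
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)  # learned affinity projection

    def forward(self, h1, h2):
        # h1: (n1, dim) node features of graph 1; h2: (n2, dim) of graph 2
        affinity = self.W(h1) @ h2.t()                # (n1, n2) cross-graph scores
        a1 = torch.softmax(affinity, dim=1) @ h2      # graph-2 context for each node of graph 1
        a2 = torch.softmax(affinity, dim=0).t() @ h1  # graph-1 context for each node of graph 2
        return torch.cat([h1, a1], dim=-1), torch.cat([h2, a2], dim=-1)
```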
2 code implementations • 12 Oct 2021 • Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li, Yun Fu
Current Sign Language Recognition (SLR) methods usually extract features via deep neural networks and suffer from overfitting due to limited and noisy data.
no code implementations • 29 Sep 2021 • Yi Xu, Lichen Wang, Yizhou Wang, Can Qin, Yulun Zhang, Yun Fu
In this paper, we propose a novel framework, MemREIN, which considers Memorized, Restitution, and Instance Normalization for cross-domain few-shot learning.
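A simplified reading of the instance-normalization-plus-restitution idea is sketched below, with assumed layer choices (this is not the MemREIN implementation):

```python
# Minimal sketch: strip instance-specific statistics with IN, then gate back
# a residual of the removed information (simplified illustration only).
import torch.nn as nn

class NormalizeAndRestitute(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.inorm = nn.InstanceNorm2d(channels, affine=False)
        # a learned gate decides how much of the removed "style" to add back
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        normalized = self.inorm(x)        # remove per-instance statistics
        removed = x - normalized          # information discarded by IN
        restituted = self.gate(removed) * removed
        return normalized + restituted    # add back task-relevant residual
```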
3 code implementations • 16 Mar 2021 • Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li, Yun Fu
Sign language is commonly used by deaf or speech-impaired people to communicate but requires significant effort to master.
Ranked #2 on Sign Language Recognition on AUTSL (using extra training data)
no code implementations • 1 Jan 2021 • Lichen Wang, Bo Zong, Yunyu Liu, Can Qin, Wei Cheng, Wenchao Yu, Xuchao Zhang, Haifeng Chen, Yun Fu
As texts always contain a large proportion of task-irrelevant words, accurate alignment between aspects and their sentiment descriptions is the most crucial and challenging step.
no code implementations • 16 Dec 2020 • Jiahua Dong, Yang Cong, Gan Sun, Bingtao Ma, Lichen Wang
Moreover, the performance of advanced approaches degrades dramatically for past learned classes (i.e., catastrophic forgetting), due to the irregular and redundant geometric structures of 3D point cloud data.
no code implementations • 16 Sep 2020 • Denghui Zhang, Junming Liu, HengShu Zhu, Yanchi Liu, Lichen Wang, Pengyang Wang, Hui Xiong
However, it is still a challenging task since (1) the job title and job transition (job-hopping) data are messy and contain many subjective, non-standard names for the same position (e.g., Programmer, Software Development Engineer, SDE, Implementation Engineer), (2) there is a large amount of missing title/transition information, and (3) each individual seeks only a limited number of jobs, which introduces incompleteness and randomness into modeling job transition patterns.
no code implementations • 14 Sep 2020 • Yue Bai, Zhiqiang Tao, Lichen Wang, Sheng Li, Yu Yin, Yun Fu
Extensive experiments on four action datasets show that the proposed CAM achieves better results for each view and also boosts multi-view performance.
no code implementations • ICLR 2020 • Lichen Wang, Bo Zong, Qianqian Ma, Wei Cheng, Jingchao Ni, Wenchao Yu, Yanchi Liu, Dongjin Song, Haifeng Chen, Yun Fu
Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain.
1 code implementation • 6 Feb 2020 • Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu
Current adversarial adaptation methods attempt to align the cross-domain features, whereas two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
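For context, a minimal sketch of the gradient-reversal trick that such adversarial adaptation methods commonly rely on (a generic DANN-style layer, not this paper's model):

```python
# Gradient reversal: identity in the forward pass, negated gradient in the
# backward pass, so the feature extractor learns to fool a domain classifier.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# usage: domain_logits = domain_classifier(grad_reverse(features))
```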
no code implementations • 24 Nov 2019 • Yue Bai, Lichen Wang, Zhiqiang Tao, Sheng Li, Yun Fu
Multi-view time series classification (MVTSC) aims to improve the performance by fusing the distinctive temporal information from multiple views.
2 code implementations • NeurIPS 2019 • Can Qin, Haoxuan You, Lichen Wang, C. -C. Jay Kuo, Yun Fu
Specifically, most general-purpose DA methods, which strive for global feature alignment while ignoring local geometric information, are not suitable for 3D domain alignment.
Ranked #1 on Unsupervised Domain Adaptation on PreSIL to KITTI
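As a generic illustration of the local geometric information mentioned above, a k-nearest-neighbour grouping over a point cloud can be sketched as follows (a standard building block with assumed names, not this paper's exact module):

```python
# Minimal sketch: gather each point's k nearest neighbours and express them
# in relative coordinates, which captures local geometric structure.
import torch

def knn_group(points, k=16):
    """points: (N, 3). Returns (N, k, 3) local neighbourhoods, centred per point."""
    dists = torch.cdist(points, points)          # (N, N) pairwise distances
    idx = dists.topk(k, largest=False).indices   # (N, k) nearest neighbours
    neighbours = points[idx]                     # (N, k, 3)
    return neighbours - points.unsqueeze(1)      # relative coordinates
```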
no code implementations • ICCV 2019 • Lichen Wang, Zhengming Ding, Zhiqiang Tao, Yunyu Liu, Yun Fu
Multi-view action recognition aims to integrate complementary information from different views to improve classification performance.
1 code implementation • 20 Apr 2019 • Lichen Wang, Bin Sun, Joseph Robinson, Taotao Jing, Yun Fu
To fill this gap, we introduce the new large-scale EV-Action dataset, which consists of RGB, depth, electromyography (EMG), and two skeleton modalities.
Ranked #4 on Multimodal Activity Recognition on EV-Action
no code implementations • 22 Nov 2018 • Lichen Wang, Jiaxiang Wu, Shao-Lun Huang, Lizhong Zheng, Xiangxiang Xu, Lin Zhang, Junzhou Huang
We further generalize the framework to handle more than two modalities and missing modalities.
20 code implementations • ECCV 2018 • Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu
To solve these problems, we propose the very deep residual channel attention networks (RCAN).
Ranked #21 on Image Super-Resolution on BSD100 - 4x upscaling
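A minimal sketch of the channel attention unit at the core of RCAN (simplified from the paper's description; the full model arranges such blocks in a residual-in-residual structure with long skip connections):

```python
# Channel attention: squeeze spatial information with global average pooling,
# then learn per-channel rescaling weights.
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global spatial statistics
            nn.Conv2d(channels, channels // reduction, 1),  # squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # excite
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.body(x)   # rescale each channel by its learned weight
```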