no code implementations • ICML 2020 • Ning Xu, Yun-Peng Liu, Jun Shu, Xin Geng
Label distribution covers a certain number of labels, representing the degree to which each label describes the instance.
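For intuition, a minimal toy sketch of how a label distribution differs from a one-hot label (the numbers are illustrative, not taken from the paper):

```python
import numpy as np

# A single-label (one-hot) annotation: exactly one label describes the instance.
one_hot = np.array([0.0, 1.0, 0.0])

# A label distribution: every label gets a description degree, and the degrees
# sum to one, so each label can partially describe the instance.
label_distribution = np.array([0.1, 0.7, 0.2])

assert np.isclose(label_distribution.sum(), 1.0)
print("description degree of label 1:", label_distribution[1])
```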
no code implementations • 11 Aug 2022 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
To address the task, we first analyze the theory of multi-domain learning, which highlights that 1) mitigating the impact of the domain gap and 2) exploiting all samples to train the model can effectively reduce the generalization error in each source domain and thereby improve the quality of pseudo-labels.
1 code implementation • 11 Aug 2022 • Tiankai Hang, Huan Yang, Bei Liu, Jianlong Fu, Xin Geng, Baining Guo
Specifically, we propose a recurrent motion generator to extract a series of semantic and motion information from the language and feed it along with visual information to a pre-trained StyleGAN to generate high-quality frames.
no code implementations • 2 Jun 2022 • Ning Xu, Jiaqi Lv, Biao Liu, Congyu Qiao, Xin Geng
Partial label learning (PLL) aims to train multi-class classifiers from instances with partial labels (PLs); a PL for an instance is a set of candidate labels in which a fixed but unknown candidate is the true label.
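To make the setup concrete, here is a minimal sketch of how candidate label sets can be represented and turned into a naive training target by spreading probability mass uniformly over the candidates (an illustrative baseline for the problem setting, not the method proposed here):

```python
import numpy as np

NUM_CLASSES = 5

# Each training instance carries a candidate set; exactly one (unknown)
# candidate is the true label.
candidate_sets = [
    {0, 2},     # true label is somewhere in {0, 2}
    {1, 3, 4},  # true label is somewhere in {1, 3, 4}
]

def uniform_candidate_target(candidates, num_classes=NUM_CLASSES):
    """Naive PLL target: equal weight on each candidate, zero elsewhere."""
    target = np.zeros(num_classes)
    target[list(candidates)] = 1.0 / len(candidates)
    return target

# Each row puts equal mass on its candidate labels only.
targets = np.stack([uniform_candidate_target(c) for c in candidate_sets])
print(targets)
```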
1 code implementation • 1 Jun 2022 • Ning Xu, Congyu Qiao, Jiaqi Lv, Xin Geng, Min-Ling Zhang
To cope with the challenge, we investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label, and show that one can successfully learn a theoretically grounded multi-label classifier for the problem.
no code implementations • 12 Apr 2022 • Lei Qi, Jiaying Shen, Jiaqi Liu, Yinghuan Shi, Xin Geng
Besides, we further revise the label distribution of each class to give more, and equal, attention to the other domains to which the class does not belong, which effectively reduces the domain gap across different domains and yields domain-invariant features.
no code implementations • 8 Apr 2022 • Jin Yuan, Feng Hou, Yangzhou Du, Zhongchao Shi, Xin Geng, Jianping Fan, Yong Rui
Domain adaptation (DA) tries to tackle scenarios in which the test data does not fully follow the same distribution as the training data, and multi-source domain adaptation (MSDA) is very attractive for real-world applications.
no code implementations • 8 Apr 2022 • Congyu Qiao, Ning Xu, Xin Geng
Most existing PLL approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
no code implementations • 8 Mar 2022 • Jin Yuan, Shikai Chen, Yao Zhang, Zhongchao Shi, Xin Geng, Jianping Fan, Yong Rui
Subsequently, we design a graph attention transformer layer to transfer this adjacency matrix so that it adapts to the current domain.
no code implementations • 24 Jan 2022 • Lei Qi, Lei Wang, Yinghuan Shi, Xin Geng
Different from conventional data augmentation, the proposed domain-aware mix-normalization enhances the diversity of features during training from the normalization view of the neural network, which effectively alleviates the model's overfitting to the source domains and thus boosts its generalization capability in unseen domains.
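A minimal PyTorch sketch of the general idea of mixing per-instance normalization statistics between samples drawn from different source domains (in the spirit of statistic-mixing layers such as MixStyle; this is not the authors' exact layer, and the Beta mixing weight and placement are assumptions):

```python
import torch
import torch.nn as nn

class DomainAwareStatMix(nn.Module):
    """Mix per-instance feature statistics across a shuffled batch.

    Assumes the batch contains samples from several source domains, so
    swapping statistics implicitly mixes domain styles during training.
    """

    def __init__(self, alpha: float = 0.1):
        super().__init__()
        self.beta = torch.distributions.Beta(alpha, alpha)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        if not self.training:
            return x
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
        x_norm = (x - mu) / sigma

        perm = torch.randperm(x.size(0), device=x.device)
        lam = self.beta.sample((x.size(0), 1, 1, 1)).to(x.device)
        mixed_mu = lam * mu + (1 - lam) * mu[perm]
        mixed_sigma = lam * sigma + (1 - lam) * sigma[perm]
        return x_norm * mixed_sigma + mixed_mu
```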
no code implementations • 15 Jan 2022 • Wenyan Pan, Zhili Zhou, Miaogen Ling, Xin Geng, Q. M. Jonathan Wu
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
no code implementations • 30 Nov 2021 • Lei Qi, Lei Wang, Yinghuan Shi, Xin Geng
The significance of our work lies in showing the potential of unsupervised domain generalization for person ReID and setting a strong baseline for further research on this topic.
2 code implementations • 22 Nov 2021 • Boyu Zhang, Jiayuan Chen, Yinfei Xu, Hui Zhang, Xu Yang, Xin Geng
Traditionally, AQA is treated as a regression problem to learn the underlying mappings between videos and action scores.
Ranked #1 on Action Quality Assessment on JIGSAWS
1 code implementation • NeurIPS 2021 • Ning Xu, Congyu Qiao, Xin Geng, Min-Ling Zhang
In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by the real-valued description degree of each label, representing the degree to which each label describes the feature.
1 code implementation • 12 Jun 2021 • Qiufeng Wang, Xin Geng, Shuxia Lin, Shiyu Xia, Lei Qi, Ning Xu
Moreover, the learngene, i.e., the gene for learning initialization rules of the target model, is proposed to inherit the meta-knowledge from the collective model and reconstruct a lightweight individual model on the target task.
no code implementations • 11 Jun 2021 • Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) utilizes instances with PLs, where a PL includes several candidate labels but only one is the true label (TL).
no code implementations • 1 Apr 2021 • Hao Yang, Youzhi Jin, Ziyin Li, Deng-Bao Wang, Lei Miao, Xin Geng, Min-Ling Zhang
During the training process, DLT records the loss value of each sample and calculates dynamic loss thresholds.
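A minimal sketch of the general idea of keeping only the samples whose loss falls below a threshold recomputed from the current loss statistics (the quantile rule and keep ratio here are illustrative assumptions, not the exact thresholding scheme of DLT):

```python
import torch

def select_by_dynamic_threshold(per_sample_losses: torch.Tensor,
                                keep_ratio: float = 0.8) -> torch.Tensor:
    """Return a boolean mask keeping samples whose loss is below a
    threshold derived from the current batch/epoch loss statistics."""
    threshold = torch.quantile(per_sample_losses, keep_ratio)
    return per_sample_losses <= threshold

# Usage inside a training step (losses computed with reduction="none"):
losses = torch.tensor([0.2, 1.5, 0.4, 3.0, 0.1])
mask = select_by_dynamic_threshold(losses)
filtered_loss = losses[mask].mean()
```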
no code implementations • 18 Sep 2020 • Jiaqi Lv, Tianran Wu, Chenglun Peng, Yun-Peng Liu, Ning Xu, Xin Geng
In this paper, we present a compact learning (CL) framework to embed the features and labels simultaneously and with mutual guidance.
no code implementations • NeurIPS 2020 • Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
1 code implementation • 3 Jul 2020 • Bin-Bin Gao, Xin-Xin Liu, Hong-Yu Zhou, Jianxin Wu, Xin Geng
The effectiveness of our approach has been demonstrated on both facial age and attractiveness estimation tasks.
Ranked #1 on Age Estimation on ChaLearn 2016
1 code implementation • ICML 2020 • Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
no code implementations • 2 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Xin Geng, Yang Gao
To achieve the camera alignment, we develop a Multi-Camera Adversarial Learning (MCAL) to map images of different cameras into a shared subspace.
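A minimal PyTorch sketch of adversarial camera alignment via a gradient-reversal camera classifier (this follows the generic domain-adversarial recipe; the actual MCAL architecture and losses in the paper may differ):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

class CameraDiscriminator(nn.Module):
    """Predicts the source camera from ReID features; the reversed gradient
    pushes the feature extractor toward a camera-invariant subspace."""
    def __init__(self, feat_dim: int, num_cameras: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_cameras),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(GradReverse.apply(features, self.lambd))

# Usage: camera_logits = CameraDiscriminator(2048, 6)(reid_features)
#        camera_loss = nn.CrossEntropyLoss()(camera_logits, camera_ids)
```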
no code implementations • 14 May 2019 • Dongdong Yu, Kai Su, Xin Geng, Changhu Wang
In this paper, a novel Context-and-Spatial Aware Network (CSANet), which integrates both a Context Aware Path and Spatial Aware Path, is proposed to obtain effective features involving both context information and spatial information.
no code implementations • CVPR 2019 • Kai Su, Dongdong Yu, Zhenqi Xu, Xin Geng, Changhu Wang
Multi-person pose estimation is an important but challenging problem in computer vision.
1 code implementation • 13 Jul 2018 • Bin-Bin Gao, Hong-Yu Zhou, Jianxin Wu, Xin Geng
Age estimation performance has been greatly improved by using convolutional neural networks.
no code implementations • 26 Jun 2017 • Ruifeng Shao, Ning Xu, Xin Geng
To solve this problem, we assume that each multi-label instance is described by a vector of latent real-valued labels, which can reflect the importance of the corresponding labels.
1 code implementation • 6 Nov 2016 • Bin-Bin Gao, Chao Xing, Chen-Wei Xie, Jianxin Wu, Xin Geng
However, it is difficult to collect sufficient training images with precise labels in some domains such as apparent age estimation, head pose estimation, multi-label classification and semantic segmentation.
Ranked #1 on Head Pose Estimation on BJUT-3D
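Label distribution learning, as in the entry above, sidesteps the scarcity of precise labels by replacing a single annotation with a distribution over neighboring labels. A minimal sketch of one common construction (a discretized Gaussian centered on the annotated age, trained against the predicted distribution with a KL-style loss; the bin range and sigma are illustrative assumptions, not necessarily the paper's exact choices):

```python
import numpy as np

def gaussian_label_distribution(age: float, ages=np.arange(0, 101), sigma: float = 2.0):
    """Turn a single age label into a discrete distribution over age bins."""
    dist = np.exp(-0.5 * ((ages - age) / sigma) ** 2)
    return dist / dist.sum()

def kl_loss(target: np.ndarray, predicted: np.ndarray, eps: float = 1e-12) -> float:
    """KL(target || predicted): a typical objective when the network outputs
    a predicted label distribution (e.g., a softmax over age bins)."""
    return float(np.sum(target * (np.log(target + eps) - np.log(predicted + eps))))

target = gaussian_label_distribution(age=25.0)
# `predicted` would come from a softmax over the network's 101 age bins.
```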
no code implementations • CVPR 2016 • Chao Xing, Xin Geng, Hui Xue
In order to learn this general model family, this paper uses a method called Logistic Boosting Regression (LogitBoost), which can be seen as an additive weighted function regression from the statistical viewpoint.
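A minimal sketch of additive function regression in the boosting style (generic fitting of weak regressors to residuals; this illustrates the additive-regression viewpoint only, not the LDL-specific LogitBoost derivation in the paper):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_additive_regressor(X, y, n_rounds: int = 50, lr: float = 0.1):
    """Build F(x) = F_0 + sum_m lr * h_m(x) by fitting each weak learner
    h_m to the residuals of the current additive model."""
    f0 = float(np.mean(y))
    prediction = np.full(len(y), f0, dtype=float)
    learners = []
    for _ in range(n_rounds):
        residuals = y - prediction
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
        prediction += lr * stump.predict(X)
        learners.append(stump)
    return f0, learners

def predict_additive(X, f0, learners, lr: float = 0.1):
    return f0 + lr * sum(learner.predict(X) for learner in learners)
```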
no code implementations • 26 Aug 2014 • Xin Geng
This paper proposes six working LDL algorithms in three ways: problem transformation, algorithm adaptation, and specialized algorithm design.
no code implementations • CVPR 2014 • Xin Geng, Yu Xia
Accurate ground truth pose is essential to the training of most existing head pose estimation algorithms.
no code implementations • CVPR 2014 • Xin Geng, Longrun Luo
The key idea is to learn a latent preference distribution for each instance.