Search Results for author: Xiyu Yu

Found 8 papers, 2 papers with code

Label-Noise Robust Domain Adaptation

no code implementations ICML 2020 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, Dacheng Tao

Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.

Denoising Domain Adaptation

FaceController: Controllable Attribute Editing for Face in the Wild

no code implementations 23 Feb 2021 Zhiliang Xu, Xiyu Yu, Zhibin Hong, Zhen Zhu, Junyu Han, Jingtuo Liu, Errui Ding, Xiang Bai

By simply employing existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.

Ranked #1 on Face Swapping on FaceForensics++ (FID metric)

Disentanglement Face Swapping

An Efficient and Provable Approach for Mixture Proportion Estimation Using Linear Independence Assumption

no code implementations CVPR 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, Dacheng Tao

In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution.
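To make the MPE setting concrete, here is a minimal synthetic sketch (not the paper's estimator): in one dimension, a simple moment-matching identity recovers the proportion whenever the component means differ. The paper's linear independence assumption generalises this kind of identifiability to the full distributions. All names and the data-generating setup below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D setting: mixture F = pi*G + (1-pi)*H with a known
# ground-truth proportion pi, which we then try to recover.
true_pi = 0.3
n = 100_000
from_g = rng.random(n) < true_pi
mixture = np.where(from_g, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))
component_g = rng.normal(0.0, 1.0, n)  # samples from component G
component_h = rng.normal(4.0, 1.0, n)  # samples from component H

# Moment matching: E[F] = pi*E[G] + (1-pi)*E[H], hence
# pi = (E[F] - E[H]) / (E[G] - E[H]).
pi_hat = (mixture.mean() - component_h.mean()) / (
    component_g.mean() - component_h.mean()
)
print(pi_hat)  # close to 0.3
```

This first-moment trick fails when the component means coincide; identifiability of the whole distribution is exactly what a stronger assumption such as linear independence buys.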

Learning with Biased Complementary Labels

1 code implementation ECCV 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, Dacheng Tao

We therefore reason that the transition probabilities will be different.
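The transition probabilities here describe how a true label maps to an observed complementary label ("this is not class j"). A rough sketch of the idea, with a hypothetical 4-class transition matrix (the numbers are illustrative, not from the paper): earlier work assumes each wrong class is picked uniformly, while a biased matrix lets annotators favour some classes.

```python
import numpy as np

K = 4  # number of classes (illustrative)

# Uniform assumption: each wrong class is chosen as the complementary
# label with equal probability 1/(K-1).
Q_uniform = (np.ones((K, K)) - np.eye(K)) / (K - 1)

# Biased alternative: rows sum to 1, diagonal is 0 because the
# complementary label is never the true label (values are made up).
Q_biased = np.array([
    [0.0, 0.6, 0.3, 0.1],
    [0.5, 0.0, 0.3, 0.2],
    [0.2, 0.4, 0.0, 0.4],
    [0.3, 0.3, 0.4, 0.0],
])

# Forward correction: if p estimates P(y|x), then Q^T @ p is the induced
# distribution over complementary labels, which can be trained against
# the observed complementary label.
p = np.array([0.7, 0.1, 0.1, 0.1])
p_bar = Q_biased.T @ p
print(p_bar, p_bar.sum())  # a valid distribution over complementary labels
```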

Transfer Learning with Label Noise

no code implementations 31 Jul 2017 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, Dacheng Tao

However, when learning this invariant knowledge, existing methods assume that the labels in source domain are uncontaminated, while in reality, we often have access to source data with noisy labels.

Denoising Transfer Learning

On Compressing Deep Models by Low Rank and Sparse Decomposition

no code implementations CVPR 2017 Xiyu Yu, Tongliang Liu, Xinchao Wang, Dacheng Tao

Deep compression refers to removing the redundancy of parameters and feature maps for deep learning models.
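A minimal sketch of the low-rank-plus-sparse idea in the title, assuming a generic weight matrix (this is the generic decomposition W ≈ L + S, not the paper's specific algorithm): the low-rank part comes from a truncated SVD and the sparse part keeps only the largest residual entries.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))  # stand-in for a trained weight matrix

# Low-rank part L: keep the top-r singular directions of W.
r = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
L = (U[:, :r] * s[:r]) @ Vt[:r, :]

# Sparse part S: keep only the largest-magnitude entries of the residual.
R = W - L
keep = np.abs(R) >= np.quantile(np.abs(R), 0.9)  # top 10% of entries
S = np.where(keep, R, 0.0)

approx = L + S
# Rough parameter count: low-rank factors plus the kept sparse entries.
stored = r * (W.shape[0] + W.shape[1]) + int(keep.sum())
rel_err = np.linalg.norm(W - approx) / np.linalg.norm(W)
print(stored, W.size, rel_err)
```

The sparse term mops up the large residual entries that a purely low-rank approximation misses, which is why the combination compresses better than either piece alone at the same budget.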

Variance-Reduced Proximal Stochastic Gradient Descent for Non-Convex Composite Optimization

no code implementations 2 Jun 2016 Xiyu Yu, Dacheng Tao

To the best of our knowledge, this is the first analysis of convergence rate of variance-reduced proximal stochastic gradient for non-convex composite optimization.
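For orientation, here is a compact sketch of a prox-SVRG-style loop on a composite objective F(x) = f(x) + g(x), with a least-squares f and an l1 penalty handled by its proximal operator (soft-thresholding). The problem instance, step size, and epoch count are illustrative choices, not taken from the paper (whose analysis targets non-convex f).

```python
import numpy as np

rng = np.random.default_rng(0)

# Composite objective: f(x) = (1/2n) * ||Ax - b||^2, g(x) = lam * ||x||_1.
n, d = 200, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam = 0.1

def grad_i(x, i):            # gradient of one component f_i
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):            # full gradient of f
    return A.T @ (A @ x - b) / n

def prox_l1(x, t):           # prox of t*lam*||.||_1: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def objective(x):
    return 0.5 * np.mean((A @ x - b) ** 2) + lam * np.abs(x).sum()

eta = 0.01
x = np.zeros(d)
for epoch in range(30):
    x_snap = x.copy()
    mu = full_grad(x_snap)               # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced gradient estimate (SVRG correction).
        v = grad_i(x, i) - grad_i(x_snap, i) + mu
        x = prox_l1(x - eta * v, eta)    # proximal step for the l1 term

print(objective(np.zeros(d)), objective(x))
```

The SVRG correction keeps the gradient estimate unbiased while its variance shrinks as the iterate approaches the snapshot, which is what enables the improved convergence rates the paper analyses.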
