Search Results for author: Zheyan Shen

Found 19 papers, 7 papers with code

Meta Adaptive Task Sampling for Few-Domain Generalization

no code implementations 25 May 2023 Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun Zhou, Furui Liu

Moreover, we propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.

Domain Generalization
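The entry above only hints at the sampling mechanism, so here is a minimal sketch of similarity-weighted base-task sampling in the spirit of MATS. The cosine-similarity scoring, softmax temperature, and function names are illustrative assumptions, not the authors' procedure.

```python
# Sketch: sample base tasks with probability increasing in their similarity
# to the novel task. Purely illustrative; not the MATS implementation.
import numpy as np

def sample_base_tasks(base_task_feats, novel_task_feat, n_samples, temp=1.0, rng=None):
    """Return indices of base tasks, sampled proportionally to a softmax over
    their cosine similarity to the novel task's feature representation."""
    rng = np.random.default_rng(rng)
    B = np.asarray(base_task_feats, dtype=float)   # (num_tasks, dim)
    v = np.asarray(novel_task_feat, dtype=float)   # (dim,)
    sims = B @ v / (np.linalg.norm(B, axis=1) * np.linalg.norm(v) + 1e-12)
    logits = sims / temp
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(B), size=n_samples, p=probs)

# Example usage with random task embeddings:
# idx = sample_base_tasks(np.random.randn(10, 16), np.random.randn(16), n_samples=5)
```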

Stable Learning via Sparse Variable Independence

no code implementations 2 Dec 2022 Han Yu, Peng Cui, Yue He, Zheyan Shen, Yong Lin, Renzhe Xu, Xingxuan Zhang

The problem of covariate-shift generalization has attracted intensive research attention.

Variable Selection

NICO++: Towards Better Benchmarking for Domain Generalization

2 code implementations CVPR 2023 Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui

Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise given the limited number of domains.

Benchmarking Domain Generalization +2

Regulatory Instruments for Fair Personalized Pricing

1 code implementation 9 Feb 2022 Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu

Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors.

Integrated Latent Heterogeneity and Invariance Learning in Kernel Space

no code implementations NeurIPS 2021 Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.

A Theoretical Analysis on Independence-driven Importance Weighting for Covariate-shift Generalization

1 code implementation 3 Nov 2021 Renzhe Xu, Xingxuan Zhang, Zheyan Shen, Tong Zhang, Peng Cui

Afterward, we prove that under ideal conditions, independence-driven importance weighting algorithms could identify the variables in this set.

feature selection
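As a rough illustration of what an independence-driven importance weighting algorithm does, the sketch below learns sample weights that shrink the off-diagonal entries of the weighted feature covariance, then reads variable relevance off a weighted least-squares fit. It is an illustrative approximation (the gradient treats the weighted centering as fixed within each step), not the paper's algorithm.

```python
# Sketch: independence-driven sample reweighting followed by weighted
# regression for variable identification. Assumes X is standardized.
import numpy as np

def decorrelation_weights(X, steps=500, lr=0.05):
    """Gradient descent on the sum of squared weighted pairwise covariances."""
    n, d = X.shape
    w = np.ones(n) / n
    for _ in range(steps):
        Xc = X - w @ X                      # weighted-mean-centered features
        C = (Xc * w[:, None]).T @ Xc        # weighted covariance matrix
        off = C - np.diag(np.diag(C))       # penalize off-diagonal terms only
        grad = 2 * np.einsum('ij,ni,nj->n', off, Xc, Xc)
        w = np.clip(w - lr * grad, 1e-8, None)
        w /= w.sum()
    return w

def weighted_variable_scores(X, y, w):
    """Coefficients of a weighted least-squares fit; under the reweighted
    (near-independent) design, large coefficients flag relevant variables."""
    W = np.diag(w)
    return np.linalg.pinv(X.T @ W @ X) @ (X.T @ W @ y)
```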

Kernelized Heterogeneous Risk Minimization

1 code implementation 24 Oct 2021 Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.

Towards Out-Of-Distribution Generalization: A Survey

no code implementations 31 Aug 2021 Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui

This paper represents the first comprehensive, systematic review of OOD generalization, encompassing a spectrum of aspects from problem definition, methodological development, and evaluation procedures, to the implications and future directions of the field.

Out-of-Distribution Generalization Representation Learning

Towards Unsupervised Domain Generalization

no code implementations CVPR 2022 Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin Liu

Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains.

Domain Generalization Representation Learning

Distributionally Robust Learning with Stable Adversarial Training

no code implementations 30 Jun 2021 Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li

In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and conduct differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target.

Heterogeneous Risk Minimization

1 code implementation 9 May 2021 Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

In this paper, we propose Heterogeneous Risk Minimization (HRM) framework to achieve joint learning of latent heterogeneity among the data and invariant relationship, which leads to stable prediction despite distributional shifts.
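A minimal sketch of the HRM-style alternation described above, for a linear model: alternate between fitting a predictor penalized by the variance of per-environment risks (encouraging invariance) and re-inferring latent environments by clustering residual patterns. The residual-clustering heuristic and all hyperparameters are illustrative stand-ins for the paper's heterogeneity identification and invariant learning modules.

```python
# Sketch of joint heterogeneity identification and invariant learning.
# Purely illustrative; not the authors' HRM implementation.
import numpy as np
from sklearn.cluster import KMeans

def hrm_sketch(X, y, n_envs=2, n_rounds=5, lam=1.0, lr=0.1, steps=200):
    n, d = X.shape
    w = np.zeros(d)
    env = np.random.randint(0, n_envs, size=n)   # initial random environments
    for _ in range(n_rounds):
        # Invariant-learning step: penalize the variance of per-environment risks.
        for _ in range(steps):
            risks, grads = [], []
            for e in range(n_envs):
                idx = env == e
                if idx.sum() == 0:
                    continue
                err = X[idx] @ w - y[idx]
                risks.append(np.mean(err ** 2))
                grads.append(2 * X[idx].T @ err / idx.sum())
            risks = np.array(risks)
            mean_grad = np.mean(grads, axis=0)
            # gradient of the variance-of-risks penalty
            var_grad = np.mean([(2 * (r - risks.mean())) * g
                                for r, g in zip(risks, grads)], axis=0)
            w -= lr * (mean_grad + lam * var_grad)
        # Heterogeneity step: re-infer environments by clustering residual patterns.
        resid = (X @ w - y).reshape(-1, 1)
        env = KMeans(n_clusters=n_envs, n_init=10).fit_predict(np.hstack([X, resid]))
    return w, env
```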

Deep Stable Learning for Out-Of-Distribution Generalization

2 code implementations CVPR 2021 Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, Zheyan Shen

Approaches based on deep neural networks have achieved striking performance when testing and training data share a similar distribution, but can fail significantly otherwise.

Domain Generalization Out-of-Distribution Generalization

Sample Balancing for Improving Generalization under Distribution Shifts

no code implementations 1 Jan 2021 Xingxuan Zhang, Peng Cui, Renzhe Xu, Yue He, Linjun Zhou, Zheyan Shen

We propose to address this problem by removing dependencies between features via reweighting of training samples, which yields a more balanced distribution, helps deep models discard spurious correlations, and in turn lets them concentrate on the true connection between features and labels.

Domain Adaptation Object Recognition

Counterfactual Prediction for Bundle Treatment

no code implementations NeurIPS 2020 Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He

Estimating counterfactual outcome of different treatments from observational data is an important problem to assist decision making in a variety of fields.

counterfactual Decision Making +2

Algorithmic Decision Making with Conditional Fairness

1 code implementation 18 Jun 2020 Renzhe Xu, Peng Cui, Kun Kuang, Bo Li, Linjun Zhou, Zheyan Shen, Wei Cui

In practice, there frequently exists a certain set of variables, which we term fair variables, that are pre-decision covariates such as users' choices.

Decision Making Fairness
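To make the notion of fair variables concrete, the sketch below measures parity between sensitive groups conditioned on a fair variable rather than marginally. The column names ("decision", "group", "user_choice") and data are illustrative assumptions, not from the paper.

```python
# Sketch: parity gaps between sensitive groups, computed within each stratum
# of a hypothetical fair variable. Illustrative only.
import pandas as pd

def conditional_parity_gaps(df, decision="decision", sensitive="group",
                            fair_var="user_choice"):
    """For each value of the fair variable, return the maximum gap in positive
    decision rates across sensitive groups."""
    gaps = {}
    for val, stratum in df.groupby(fair_var):
        rates = stratum.groupby(sensitive)[decision].mean()
        gaps[val] = float(rates.max() - rates.min())
    return gaps

# Example usage:
# df = pd.DataFrame({"decision": [1, 0, 1, 1, 0, 1],
#                    "group":    ["a", "a", "b", "b", "a", "b"],
#                    "user_choice": ["x", "x", "x", "y", "y", "y"]})
# print(conditional_parity_gaps(df))
```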

Stable Adversarial Learning under Distributional Shifts

no code implementations 8 Jun 2020 Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin

Machine learning algorithms trained with empirical risk minimization are vulnerable to distributional shifts because they greedily adopt all correlations found in the training data.

Stable Learning via Sample Reweighting

no code implementations 28 Nov 2019 Zheyan Shen, Peng Cui, Tong Zhang, Kun Kuang

We consider the problem of learning linear prediction models with model misspecification bias.

Variable Selection

Towards Non-I.I.D. Image Classification: A Dataset and Baselines

no code implementations 7 Jun 2019 Yue He, Zheyan Shen, Peng Cui

The experimental results demonstrate that NICO can well support the training of a ConvNet model from scratch, and that a batch balancing module can help ConvNets perform better in Non-I.I.D. settings.

Classification General Classification +1

Causally Regularized Learning with Agnostic Data Selection Bias

no code implementations 22 Aug 2017 Zheyan Shen, Peng Cui, Kun Kuang, Bo Li, Peixuan Chen

However, this ideal assumption is often violated in real applications, where selection bias may arise between the training and testing processes.

regression Selection bias +1
