no code implementations • ICML 2020 • Ching-Wei Cheng, Xingye Qiao, Guang Cheng
In this article, we study a new paradigm called mutual transfer learning, in which, among many heterogeneous data domains, every data domain can potentially be the target of interest and can also serve as a useful source that helps learning in the other domains.
no code implementations • 20 Sep 2022 • Zhou Wang, Xingye Qiao
Set-valued classification, a new classification paradigm that aims to identify all the plausible classes that an observation belongs to, can be obtained by learning the acceptance regions for all classes.
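The acceptance-region idea can be sketched in a few lines (an illustrative toy, not the paper's estimator: scores, the quantile-based thresholds, and all names here are hypothetical, and thresholds are simply empirical in-class quantiles so each region covers its class at a target rate):

```python
import numpy as np

def acceptance_regions(scores, labels, classes, alpha=0.05):
    """For each class k, pick a threshold so the region
    {x : score_k(x) >= t_k} covers class-k points at rate >= 1 - alpha.
    scores[i, j] is an estimated score for class classes[j] at point i."""
    thresholds = {}
    for j, k in enumerate(classes):
        s = scores[labels == k, j]
        thresholds[k] = np.quantile(s, alpha)  # empirical alpha-quantile
    return thresholds

def set_valued_predict(scores, classes, thresholds):
    """Return, for each row, all plausible classes (possibly several)."""
    return [
        {k for j, k in enumerate(classes) if row[j] >= thresholds[k]}
        for row in scores
    ]

# toy data: two well-separated 1-d classes, signed distance as the score
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
y = np.array([0] * 200 + [1] * 200)
scores = np.column_stack([-x, x])
thr = acceptance_regions(scores, y, classes=[0, 1], alpha=0.05)
pred_sets = set_valued_predict(scores, [0, 1], thr)
```

Points falling in several regions get a set of plausible labels rather than a forced single prediction.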
no code implementations • 26 Feb 2022 • Jiexin Duan, Xingye Qiao, Guang Cheng
In machine learning, crowdsourcing is an economical way to label a large amount of data.
no code implementations • NeurIPS 2020 • Jiexin Duan, Xingye Qiao, Guang Cheng
It is interesting to note that the weighted voting scheme allows a larger number of subsamples than the majority voting one.
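The two voting schemes over subsample classifiers can be sketched as follows (a toy illustration, not the paper's construction: the base learner here is a 1-NN rule on disjoint subsamples, and the weights are hypothetical placeholders):

```python
import numpy as np

def subsample_votes(train_sets, x, predict):
    """Collect one vote per subsample classifier at query point x."""
    return np.array([predict(X, y, x) for X, y in train_sets])

def majority_vote(votes):
    # unweighted: each subsample classifier counts equally
    return int(np.bincount(votes).argmax())

def weighted_vote(votes, weights):
    # weighted: votes combined with per-classifier weights
    return int(np.bincount(votes, weights=weights).argmax())

def one_nn(X, y, x):
    # 1-nearest-neighbor base classifier on a 1-d subsample
    return int(y[np.argmin(np.abs(X - x))])

rng = np.random.default_rng(1)
X = rng.normal(size=100)
y = (X > 0).astype(int)
# split the toy sample into 5 disjoint subsamples
parts = np.array_split(rng.permutation(100), 5)
train_sets = [(X[idx], y[idx]) for idx in parts]

votes = subsample_votes(train_sets, x=1.5, predict=one_nn)
maj = majority_vote(votes)
wgt = weighted_vote(votes, weights=np.ones(len(votes)))
```

With uniform weights the two schemes coincide; non-uniform weights let some subsample classifiers count more than others.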
no code implementations • 26 Jun 2020 • Zhao Ren, Sungkyu Jung, Xingye Qiao
The convergence rates of estimation errors and risk of the CLIPS classifier are established to show that having multiple observations in a set leads to faster convergence rates, compared to the standard classification situation in which there is only one observation in the set.
no code implementations • 21 Apr 2020 • Haomiao Meng, Xingye Qiao
The consistency of our CATE estimator is guaranteed if either the main effect model or the propensity score model is correctly specified.
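This "either model correct" property is characteristic of doubly robust constructions. A minimal sketch of the standard AIPW-style pseudo-outcome (not the paper's estimator; the nuisance fits are supplied by hand here, and in the constant-effect toy below the pseudo-outcomes simply average to the true effect):

```python
import numpy as np

def aipw_pseudo(y, a, mu1, mu0, e):
    """Doubly robust (AIPW) pseudo-outcomes for a treatment effect.
    mu1, mu0: fitted outcome models E[Y|A=1,X], E[Y|A=0,X] at each x;
    e: fitted propensity P(A=1|X). The average (or a regression on x)
    is consistent if either nuisance fit is correct."""
    return (mu1 - mu0
            + a * (y - mu1) / e
            - (1 - a) * (y - mu0) / (1 - e))

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
e_true = 0.5 * np.ones(n)                          # randomized assignment
a = rng.binomial(1, e_true)
y = x + 2.0 * a + rng.normal(scale=0.5, size=n)    # true effect = 2

# correctly specified outcome models and propensity
psi = aipw_pseudo(y, a, mu1=x + 2.0, mu0=x, e=e_true)
est = psi.mean()
```

Regressing the pseudo-outcomes `psi` on covariates, rather than averaging them, would target a covariate-dependent (CATE-type) effect.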
1 code implementation • 6 Apr 2020 • Haomiao Meng, Ying-Qi Zhao, Haoda Fu, Xingye Qiao
These numerical studies have shown the usefulness of the proposed A-ITR framework.
no code implementations • 3 Nov 2019 • Andrew Cohen, Lei Yu, Xingye Qiao, Xiangrong Tong
A theoretical investigation shows that the set of policies learned by MEDE capture the same modalities as the optimal maximum entropy policy.
1 code implementation • NeurIPS 2019 • Xingye Qiao, Jiexin Duan, Guang Cheng
Nearest neighbor is a popular class of classification methods with many desirable properties.
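For reference, the plain k-nearest-neighbor rule is only a few lines (a standard textbook sketch, unrelated to any specific variant studied in the paper):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbor majority vote under Euclidean distance."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]              # indices of the k nearest
    votes = y_train[nn]                            # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
pred = knn_predict(X, y, np.array([[-1.0, -1.0], [1.0, 1.0]]), k=5)
```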
no code implementations • 7 Jun 2019 • Angel Beltre, Shehtab Zaman, Kenneth Chiu, Sudhakar Pamidighantam, Xingye Qiao, Madhusudhan Govindaraju
Most codes are not so arbitrary, however, and there has been significant prior research on predicting the run time of applications and workloads.
no code implementations • 10 Feb 2019 • Andrew Cohen, Xingye Qiao, Lei Yu, Elliot Way, Xiangrong Tong
We address the challenge of effective exploration while maintaining good performance in policy gradient methods.
no code implementations • NeurIPS 2018 • Wenbo Wang, Xingye Qiao
The goal of confidence-set learning in the binary classification setting is to construct two sets, each with a specific probability guarantee to cover a class.
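A minimal quantile-threshold sketch of the two-set construction (illustrative only, assuming a one-dimensional score where higher values favor the positive class; thresholds are empirical in-class quantiles, so each set covers its class at roughly the target rate):

```python
import numpy as np

rng = np.random.default_rng(4)
s_pos = rng.normal(1.0, 1.0, 1000)    # scores of class +1 points
s_neg = rng.normal(-1.0, 1.0, 1000)   # scores of class -1 points
alpha = 0.05

# C_plus  = {x : s(x) >= t_plus}  should cover class +1 at rate >= 1 - alpha
t_plus = np.quantile(s_pos, alpha)
# C_minus = {x : s(x) <= t_minus} should cover class -1 at rate >= 1 - alpha
t_minus = np.quantile(s_neg, 1 - alpha)

def confidence_sets(s):
    return s >= t_plus, s <= t_minus

# points in both sets are "ambiguous"; points in neither look anomalous
in_plus, in_minus = confidence_sets(np.concatenate([s_pos, s_neg]))
ambiguity = np.mean(in_plus & in_minus)
```

The overlap of the two sets quantifies where the classes cannot be confidently separated.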
no code implementations • 9 Jan 2017 • Chong Zhang, Wenbo Wang, Xingye Qiao
In many real applications of statistical learning, a decision made from misclassification can be too costly to afford; in this case, a reject option, which defers the decision until further investigation is conducted, is often preferred.
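A simple confidence-threshold sketch of a reject option (an illustration of the general idea, not the paper's method; the threshold and probability estimates are hypothetical):

```python
import numpy as np

def classify_with_reject(p_hat, threshold=0.8):
    """Predict the argmax class when the top estimated probability clears
    `threshold`; otherwise return -1 for 'reject' (defer the decision).
    The threshold trades off misclassification cost against rejection cost."""
    top = p_hat.max(axis=1)
    labels = p_hat.argmax(axis=1)
    return np.where(top >= threshold, labels, -1)

p_hat = np.array([
    [0.95, 0.05],   # confident: class 0
    [0.55, 0.45],   # ambiguous: reject
    [0.10, 0.90],   # confident: class 1
])
decisions = classify_with_reject(p_hat, threshold=0.8)
```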
no code implementations • 21 Sep 2015 • Qiyi Lu, Xingye Qiao
Although both are challenging questions for high-dimensional, low-sample-size data, there has been some recent progress on each.
no code implementations • 17 Sep 2015 • Qiyi Lu, Xingye Qiao
In many real applications, it is costly to manually place labels on observations; hence it is often the case that only a small portion of the data is labeled, while a large number of observations are left unlabeled.
no code implementations • 13 May 2015 • Xingye Qiao
We show by simulated and data examples that the proposed method can improve the classification performance for ordinal data without the ambiguity caused by boundary crossings.
no code implementations • 26 May 2014 • Wei Sun, Xingye Qiao, Guang Cheng
In this paper, we introduce a general measure of classification instability (CIS) to quantify the sampling variability of the prediction made by a classification method.
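Sampling variability of predictions can be estimated by a simple Monte-Carlo scheme (a rough sketch of the instability idea only, not the paper's CIS definition: two bootstrap resamples stand in for two independent training samples, and the base learner is a toy 1-NN rule):

```python
import numpy as np

def estimate_instability(train, fit_predict, X_eval, n_pairs=20, rng=None):
    """Average disagreement between predictions of the same procedure
    trained on two independently resampled training sets."""
    if rng is None:
        rng = np.random.default_rng()
    X, y = train
    n = len(y)
    dis = []
    for _ in range(n_pairs):
        i1 = rng.integers(0, n, n)              # bootstrap resample 1
        i2 = rng.integers(0, n, n)              # bootstrap resample 2
        p1 = fit_predict(X[i1], y[i1], X_eval)
        p2 = fit_predict(X[i2], y[i2], X_eval)
        dis.append(np.mean(p1 != p2))
    return float(np.mean(dis))

def one_nn_fit_predict(Xtr, ytr, Xte):
    d = np.abs(Xte[:, None] - Xtr[None, :])
    return ytr[d.argmin(axis=1)]

rng = np.random.default_rng(5)
X = rng.normal(size=200)
y = (X + rng.normal(scale=0.5, size=200) > 0).astype(int)
cis = estimate_instability((X, y), one_nn_fit_predict,
                           np.linspace(-2, 2, 101), rng=rng)
```

A larger value indicates that the classifier's decisions depend more heavily on the particular training sample drawn.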
no code implementations • 19 Feb 2014 • Sungkyu Jung, Xingye Qiao
In particular, the method of principal component analysis is used to extract the features of major variation.
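Extracting the directions of major variation by principal component analysis reduces to an SVD of the centered data matrix (a standard sketch, not specific to this paper):

```python
import numpy as np

def principal_components(X, k=2):
    """Top-k directions of major variation via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]              # principal directions (rows)
    scores = Xc @ components.T       # projected features
    explained = s[:k] ** 2 / (s ** 2).sum()  # variance ratio per component
    return components, scores, explained

rng = np.random.default_rng(6)
# toy data whose variation is essentially one-dimensional
Z = rng.normal(size=(300, 1)) * 3.0
X = np.hstack([Z, 0.5 * Z + rng.normal(scale=0.3, size=(300, 1))])
comps, scores, explained = principal_components(X, k=1)
```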
no code implementations • 11 Oct 2013 • Xingye Qiao, Lingsong Zhang
The proposed Distance-weighted Support Vector Machine method can be viewed as a hybrid of SVM and DWD: it finds the classification direction primarily by minimizing the DWD loss, and determines the intercept term in the SVM manner.
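The hybrid idea can be loosely sketched as follows (an illustrative toy, not the paper's algorithm: plain gradient descent with a small ridge penalty stands in for the actual optimization, and the intercept is re-picked by a simple grid search over the hinge loss):

```python
import numpy as np

def dwd_phi_grad(u):
    # derivative of the DWD loss (q = 1): phi(u) = 1/u if u >= 1, else 2 - u
    return np.where(u >= 1, -1.0 / u ** 2, -1.0)

def fit_dwsvm(X, y, lr=0.01, steps=2000):
    """Direction w by gradient descent on the DWD loss, then intercept b
    chosen to minimize the SVM hinge loss along that direction."""
    n, p = X.shape
    w = np.zeros(p)
    b = 0.0
    for _ in range(steps):
        u = y * (X @ w + b)
        g = dwd_phi_grad(u)
        w -= lr * ((g * y) @ X / n + 0.01 * w)  # DWD loss + small ridge
        b -= lr * (g * y).mean()
    # determine the intercept in the SVM manner: minimize hinge loss in b
    proj = X @ w
    bs = np.linspace(-3, 3, 601)
    hinge = [np.maximum(0, 1 - y * (proj + bb)).mean() for bb in bs]
    b = bs[int(np.argmin(hinge))]
    return w, b

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-1.5, 1, (100, 2)), rng.normal(1.5, 1, (100, 2))])
y = np.array([-1] * 100 + [1] * 100)
w, b = fit_dwsvm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

The split mirrors the description above: the DWD loss shapes the direction, while the hinge loss fixes where the boundary sits along it.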
no code implementations • 11 Oct 2013 • Xingye Qiao, Lingsong Zhang
Simulations and real data applications are investigated to illustrate the usefulness of the FLAME classifiers.