Search Results for author: Masashi Sugiyama

Found 218 papers, 67 papers with code

Variational Imitation Learning with Diverse-quality Demonstrations

1 code implementation ICML 2020 Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama

Learning from demonstrations can be challenging when the quality of demonstrations is diverse, and even more so when the quality is unknown and there is no additional information to estimate the quality.

Continuous Control Imitation Learning +1

Accelerating the diffusion-based ensemble sampling by non-reversible dynamics

no code implementations ICML 2020 Futoshi Futami, Issei Sato, Masashi Sugiyama

Compared with naive parallel-chain SGLD, which updates multiple particles independently, ensemble methods update particles by taking their interactions into account.

Bayesian Inference

Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients

1 code implementation 7 Apr 2022 Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model.

Federated Learning
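
The recovery step of the pipeline described above admits a compact sketch. Assuming each client holds m unlabeled sets whose class priors are known (stacked in a matrix Pi), Bayes' rule links the surrogate posterior p(set | x) linearly to the wanted posterior p(y | x), so the latter can be recovered by solving a small linear system. All names and shapes below are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Hypothetical sketch of the recovery step. Assumes Pi[j, c] = p(y=c | set=j),
# the class prior p(y=c), and the set prior p(set=j) are all known.
def recover_class_posterior(surrogate_posterior, Pi, class_prior, set_prior):
    # surrogate_posterior: (n, m) predictions p(set=j | x) from the modified
    # model trained by supervised FL on surrogate (set-index) labels.
    # Bayes: p(set=j | x) = set_prior[j] * sum_c Pi[j,c] / class_prior[c] * p(y=c | x)
    A = set_prior[:, None] * Pi / class_prior[None, :]             # (m, k) mixing matrix
    q, *_ = np.linalg.lstsq(A, surrogate_posterior.T, rcond=None)  # solve A q = s
    q = np.clip(q.T, 1e-12, None)
    return q / q.sum(axis=1, keepdims=True)                        # (n, k) recovered p(y | x)
```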

On the Effectiveness of Adversarial Training against Backdoor Attacks

no code implementations 22 Feb 2022 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

To explore whether adversarial training could defend against backdoor attacks or not, we conduct extensive experiments across different threat models and perturbation budgets, and find the threat model in adversarial training matters.

Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests

no code implementations 7 Feb 2022 Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

First, we theoretically show that an adversary can upper-bound the distributional shift which guarantees the attack's invisibility.

Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification

no code implementations 1 Feb 2022 Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama

There is a fundamental limitation in the prediction performance that a machine learning model can achieve due to the inevitable uncertainty of the prediction target.

Towards Adversarially Robust Deep Image Denoising

no code implementations 12 Jan 2022 Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan

Secondly, to robustify DIDs, we propose an adversarial training strategy, hybrid adversarial training (HAT), that jointly trains DIDs with adversarial and non-adversarial noisy data to ensure that the reconstruction quality is high and the denoisers around non-adversarial data are locally smooth.

Adversarial Attack Adversarial Robustness +1
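
A rough sketch of what one hybrid training step for a denoiser could look like under the description above: jointly fit non-adversarial noisy images and PGD-perturbed ones. The PGD routine, the weight lam, and the step sizes are assumptions for illustration, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def hat_step(denoiser, clean, noisy, optimizer, lam=0.5, eps=2 / 255, steps=5):
    # craft a perturbation of the noisy input that maximizes reconstruction
    # error (simple PGD on the denoising loss)
    adv = noisy.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.mse_loss(denoiser(adv), clean)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + (eps / steps) * grad.sign()
            adv = noisy + (adv - noisy).clamp(-eps, eps)
        adv = adv.detach().requires_grad_(True)

    optimizer.zero_grad()
    recon_nat = F.mse_loss(denoiser(noisy), clean)         # non-adversarial term
    recon_adv = F.mse_loss(denoiser(adv.detach()), clean)  # adversarial term
    ((1 - lam) * recon_nat + lam * recon_adv).backward()
    optimizer.step()
```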

Learning with Proper Partial Labels

no code implementations 23 Dec 2021 Zhenguo Wu, Masashi Sugiyama

Recently, various approaches on partial-label learning have been proposed under different generation models of candidate label sets.

Partial Label Learning

Rethinking Importance Weighting for Transfer Learning

no code implementations 19 Dec 2021 Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama

A key assumption in supervised learning is that training and test data follow the same probability distribution.

Selection bias Transfer Learning

Can Label-Noise Transition Matrix Help to Improve Sample Selection and Label Correction?

no code implementations 29 Sep 2021 Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama

Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.

Learning with noisy labels

Rethinking Class-Prior Estimation for Positive-Unlabeled Learning

no code implementations ICLR 2022 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao

Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.

Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum

no code implementations 29 Sep 2021 Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama

Specifically, we disentangle the effects of Adaptive Learning Rate and Momentum of the Adam dynamics on saddle-point escaping and flat minima selection.

Unsupervised Federated Learning is Possible

no code implementations ICLR 2022 Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model.

Federated Learning

Collaborate to Defend Against Adversarial Attacks

no code implementations 29 Sep 2021 Sen Cui, Jingfeng Zhang, Jian Liang, Masashi Sugiyama, ChangShui Zhang

However, an ensemble still wastes the limited capacity of multiple models.

Meta Discovery: Learning to Discover Novel Classes given Very Limited Data

no code implementations ICLR 2022 Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In this paper, we demystify assumptions behind L2DNC and find that high-level semantic features should be shared among the seen and unseen classes.

Meta-Learning

Exploiting Class Activation Value for Partial-Label Learning

no code implementations ICLR 2022 Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama

As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.

Multi-class Classification Partial Label Learning

Active Refinement for Multi-Label Learning: A Pseudo-Label Approach

no code implementations 29 Sep 2021 Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts.

Active Learning Multi-Label Learning

Does Adversarial Robustness Really Imply Backdoor Vulnerability?

no code implementations 29 Sep 2021 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Based on thorough experiments, we find that such trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.

Adversarial Robustness

Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences

1 code implementation 16 Jul 2021 Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama

In this paper, we consider the task of predicting $Y$ from $X$ when we have no paired data of them, but we have two separate, independent datasets of $X$ and $Y$ each observed with some mediating variable $U$, that is, we have two datasets $S_X = \{(X_i, U_i)\}$ and $S_Y = \{(U'_j, Y'_j)\}$.
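
The data setup suggests a naive two-step baseline, sketched below with synthetic stand-ins for $S_X$ and $S_Y$: fit h: X→U on $S_X$, fit g: U→Y on $S_Y$, and predict g(h(x)). The paper's estimator is designed to improve on exactly this kind of plug-in composition; the sketch only illustrates the problem setting:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-ins for the two datasets in the abstract:
# S_X = {(X_i, U_i)} and S_Y = {(U'_j, Y'_j)}, with no paired (X, Y).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
U = X @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(500, 3))
U2 = rng.normal(size=(400, 3))
Y2 = U2 @ rng.normal(size=3) + 0.1 * rng.normal(size=400)

h = Ridge().fit(X, U)             # step 1: X -> U on S_X
g = Ridge().fit(U2, Y2)           # step 2: U -> Y on S_Y
y_hat = g.predict(h.predict(X))   # composed plug-in predictor for Y given X
```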

Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations

no code implementations 17 Jun 2021 Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou

In this work, we model the observation mismatch in the imitation learning problem with the above two challenges as a two-phase learning process, namely Heterogeneously Observable Imitation Learning (HOIL).

Imitation Learning

Multi-Class Classification from Single-Class Data with Confidences

no code implementations 16 Jun 2021 Yuzhou Cao, Lei Feng, Senlin Shu, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

We show that without any assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier from only data of a single class with a rigorous consistency guarantee when confidences (i.e., the class-posterior probabilities for all the classes) are available.

Classification Multi-class Classification

Probabilistic Margins for Instance Reweighting in Adversarial Training

1 code implementation NeurIPS 2021 Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.

Adversarial Robustness

On the Robustness of Average Losses for Partial-Label Learning

no code implementations 11 Jun 2021 Jiaqi Lv, Lei Feng, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label (PL) learning is a typical weakly supervised classification problem, where a PL of an instance is a set of candidate labels such that a fixed but unknown candidate is the true label.

Partial Label Learning Weakly Supervised Classification

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations NeurIPS 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels

Instance Correction for Learning with Open-set Noisy Labels

no code implementations 1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.

NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?

no code implementations 31 May 2021 Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama

Adversarial training (AT) based on minimax optimization is a popular learning style that enhances the model's adversarial robustness.

Adversarial Robustness

A unified view of likelihood ratio and reparameterization gradients

no code implementations 31 May 2021 Paavo Parmas, Masashi Sugiyama

Reparameterization (RP) and likelihood ratio (LR) gradient estimators are used to estimate gradients of expectations throughout machine learning and reinforcement learning; however, they are usually explained as simple mathematical tricks, with no insight into their nature.

reinforcement-learning

Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization

2 code implementations 31 Mar 2021 Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama

It is well-known that stochastic gradient noise (SGN) acts as implicit regularization for deep learning and is essentially important for both optimization and generalization of deep networks.

Approximating Instance-Dependent Noise via Instance-Confidence Embedding

1 code implementation 25 Mar 2021 Yivan Zhang, Masashi Sugiyama

Label noise in multiclass classification is a major obstacle to the deployment of learning systems.

Text Classification

Discovering Diverse Solutions in Deep Reinforcement Learning by Maximizing State-Action-Based Mutual Information

1 code implementation 12 Mar 2021 Takayuki Osa, Voot Tangkaratt, Masashi Sugiyama

In our method, a policy conditioned on a continuous or discrete latent variable is trained by directly maximizing the variational lower bound of the mutual information, instead of using the mutual information as unsupervised rewards as in previous studies.

Continuous Control reinforcement-learning

Lower-Bounded Proper Losses for Weakly Supervised Classification

1 code implementation 4 Mar 2021 Shuhei M. Yoshida, Takashi Takenouchi, Masashi Sugiyama

To this end, we derive a representation theorem for proper losses in supervised learning, which dualizes the Savage representation.

Classification General Classification +1

LocalDrop: A Hybrid Regularization for Deep Neural Networks

no code implementations 1 Mar 2021 Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama

In neural networks, developing regularization algorithms to address overfitting is one of the major study areas.

Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation

1 code implementation 27 Feb 2021 Takeshi Teshima, Masashi Sugiyama

Causal graphs (CGs) are compact representations of the knowledge of the data generating processes behind the data distributions.

Data Augmentation

Guided Interpolation for Adversarial Training

no code implementations 15 Feb 2021 Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama

To enhance adversarial robustness, adversarial training learns deep neural networks on adversarial variants generated from their natural data.

Adversarial Robustness

Learning from Similarity-Confidence Data

no code implementations 13 Feb 2021 Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

Weakly supervised learning has drawn considerable attention recently as a way to reduce the expensive time and labor required to label massive data.

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection

2 code implementations 10 Feb 2021 Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama

By comparing non-robust (normally trained) and robustified (adversarially trained) models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.

Adversarial Robustness feature selection

Demystifying Assumptions in Learning to Discover Novel Classes

no code implementations 8 Feb 2021 Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In learning to discover novel classes (L2DNC), we are given labeled data from seen classes and unlabeled data from unseen classes, and we train clustering models for the unseen classes.

Meta-Learning

Understanding the Interaction of Adversarial Training with Noisy Labels

no code implementations 6 Feb 2021 Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama

A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.
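
The measure described here is easy to state in code: count PGD steps until the prediction flips. The step size alpha, budget eps, cap max_steps, and batch-of-one interface below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def pgd_steps_to_flip(model, x, y, eps=8 / 255, alpha=2 / 255, max_steps=10):
    # x: a single example with batch dimension; y: its label tensor of shape (1,)
    adv = x.clone().detach()
    for t in range(1, max_steps + 1):
        adv.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(adv), y), adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            adv = x + (adv - x).clamp(-eps, eps)    # stay in the eps-ball
            if (model(adv).argmax(1) != y).item():  # prediction flipped
                return t
    return max_steps + 1  # never flipped within budget: treat as most robust
```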

Provably End-to-end Label-Noise Learning without Anchor Points

1 code implementation 4 Feb 2021 Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama

In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.

Learning with noisy labels

Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization

1 code implementation 4 Feb 2021 Yivan Zhang, Gang Niu, Masashi Sugiyama

To estimate the transition matrix from noisy data, existing methods often need to estimate the noisy class-posterior, which could be unreliable due to the overconfidence of neural networks.

Weakly Supervised Classification

Learning Diverse-Structured Networks for Adversarial Robustness

1 code implementation 3 Feb 2021 Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama

In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).

Adversarial Robustness

Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification

1 code implementation 1 Feb 2021 Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama

SSC can be solved by a standard (multi-class) classification method, and we use the SSC solution to obtain the final binary classifier through a certain linear-fractional transformation.

Classification General Classification +1

Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics

no code implementations 19 Jan 2021 Masato Ishii, Masashi Sugiyama

In this setting, we cannot access source data during adaptation, while unlabeled target data and a model pretrained with source data are given.

Domain Adaptation
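
One way to read "matching batch normalization statistics" is a penalty between the target batch's feature statistics and the running statistics stored in the pretrained model's BN layers. The KL-between-Gaussians form below is an assumption for illustration, not necessarily the paper's exact loss:

```python
import torch

def bn_matching_loss(features_per_bn):
    # features_per_bn: dict mapping each BN layer to its (N, C, H, W) input
    # activations on the current target batch (collected via forward hooks)
    loss = 0.0
    for bn, feat in features_per_bn.items():
        mu = feat.mean(dim=(0, 2, 3))
        var = feat.var(dim=(0, 2, 3), unbiased=False)
        s_mu, s_var = bn.running_mean, bn.running_var  # source statistics
        # KL( N(mu, var) || N(s_mu, s_var) ) per channel, summed
        loss = loss + 0.5 * (torch.log(s_var / var)
                             + (var + (mu - s_mu) ** 2) / s_var - 1).sum()
    return loss
```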

A Symmetric Loss Perspective of Reliable Machine Learning

no code implementations 5 Jan 2021 Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama

When minimizing the empirical risk in binary classification, it is a common practice to replace the zero-one loss with a surrogate loss to make the learning objective feasible to optimize.

General Classification Robust classification

On the Role of Pre-training for Meta Few-Shot Learning

no code implementations 1 Jan 2021 Chia-You Chen, Hsuan-Tien Lin, Gang Niu, Masashi Sugiyama

One is to (pre-)train a classifier with examples from known classes, and then transfer the pre-trained classifier to unknown classes using the new examples.

Disentanglement Few-Shot Learning

Understanding and Scheduling Weight Decay

2 code implementations 23 Nov 2020 Zeke Xie, Issei Sato, Masashi Sugiyama

Our work aims at theoretically understanding novel behaviors of weight decay and designing schedulers for weight decay in deep learning.

On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective

no code implementations CVPR 2021 Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama

In this paper, we first prove that the focal loss is classification-calibrated, i.e., its minimizer surely yields the Bayes-optimal classifier and thus the use of the focal loss in classification can be theoretically justified.

Classification General Classification +2

Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting

1 code implementation 12 Nov 2020 Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, DaCheng Tao, Masashi Sugiyama

Thus it motivates us to design a similar mechanism named artificial neural variability (ANV), which helps artificial neural networks learn some advantages from "natural" neural networks.

A Survey of Label-noise Representation Learning: Past, Present and Future

1 code implementation 9 Nov 2020 Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama

Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.

Learning Theory Representation Learning

Binary classification with ambiguous training data

no code implementations 5 Nov 2020 Naoya Otani, Yosuke Otsubo, Tetsuya Koike, Masashi Sugiyama

This problem is substantially different from semi-supervised learning since unlabeled samples are not necessarily difficult samples.

Classification General Classification

Classification with Rejection Based on Cost-sensitive Classification

no code implementations 22 Oct 2020 Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama

The goal of classification with rejection is to avoid risky misclassification in error-critical applications such as medical diagnosis and product inspection.

Classification General Classification +1

Maximum Mean Discrepancy Test is Aware of Adversarial Attacks

2 code implementations 22 Oct 2020 Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.

Adversarial Attack Detection
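
For reference alongside this entry, here is the plain kernel-fixed MMD^2 statistic that the abstract says fails to detect adversarial data; the paper's remedy lies in how the kernel is chosen, which this sketch deliberately leaves fixed (Gaussian kernel, unbiased estimator):

```python
import torch

def mmd2_unbiased(X, Y, sigma=1.0):
    # unbiased estimate of MMD^2 between samples X (n, d) and Y (m, d)
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    term_x = (Kxx.sum() - Kxx.diag().sum()) / (n * (n - 1))  # drop diagonal
    term_y = (Kyy.sum() - Kyy.diag().sum()) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()
```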

Robust Imitation Learning from Noisy Demonstrations

1 code implementation 20 Oct 2020 Voot Tangkaratt, Nontawat Charoenphakdee, Masashi Sugiyama

Robust learning from noisy demonstrations is a practical but highly challenging problem in imitation learning.

Classification Continuous Control +2

Geometry-aware Instance-reweighted Adversarial Training

1 code implementation ICLR 2021 Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli

The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy.

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations 5 Oct 2020 Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed.

Classification General Classification

Stable Weight Decay Regularization

no code implementations 28 Sep 2020 Zeke Xie, Issei Sato, Masashi Sugiyama

Loshchilov and Hutter (2018) demonstrated that $L_{2}$ regularization is not identical to weight decay for adaptive gradient methods, such as Adaptive Momentum Estimation (Adam), and proposed Adam with Decoupled Weight Decay (AdamW).
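
The cited distinction fits in a few lines: with $L_{2}$ regularization the penalty gradient passes through Adam's adaptive rescaling, whereas decoupled weight decay shrinks the weights directly. A minimal single-tensor sketch of the two variants (not the scheduling scheme this paper proposes):

```python
import torch

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              wd=1e-2, decoupled=True):
    if not decoupled:
        grad = grad + wd * w            # L2: penalty enters the moment estimates
    m.mul_(b1).add_(grad, alpha=1 - b1)            # first moment
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)  # second moment
    m_hat = m / (1 - b1 ** t)                      # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (v_hat.sqrt() + eps)
    if decoupled:
        w -= lr * wd * w                # AdamW: shrink weights directly
    return w
```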

Provably Consistent Partial-Label Learning

no code implementations NeurIPS 2020 Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama

Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.

Multi-class Classification Partial Label Learning

A One-step Approach to Covariate Shift Adaptation

no code implementations 8 Jul 2020 Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama

A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.

Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels

no code implementations ICML 2020 Yu-Ting Chou, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

In weakly supervised learning, an unbiased risk estimator (URE) is a powerful tool for training classifiers when training and test data are drawn from different distributions.

Adai: Separating the Effects of Adaptive Learning Rate and Momentum Inertia

1 code implementation 29 Jun 2020 Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, Masashi Sugiyama

Specifically, we disentangle the effects of Adaptive Learning Rate and Momentum of the Adam dynamics on saddle-point escaping and minima selection.

Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent

1 code implementation 21 Jun 2020 Mehdi Abbana Bennani, Thang Doan, Masashi Sugiyama

In this framework, we prove that OGD is robust to Catastrophic Forgetting and then derive the first generalisation bound for SGD and OGD for Continual Learning.

Continual Learning Transfer Learning

Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators

no code implementations NeurIPS 2020 Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama

We answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases.

Image Generation Representation Learning

Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring

no code implementations NeurIPS 2020 Taira Tsuchiya, Junya Honda, Masashi Sugiyama

We investigate finite stochastic partial monitoring, which is a general model for sequential learning with limited feedback.

Decision Making

LFD-ProtoNet: Prototypical Network Based on Local Fisher Discriminant Analysis for Few-shot Learning

no code implementations 15 Jun 2020 Kei Mukaiyama, Issei Sato, Masashi Sugiyama

The prototypical network (ProtoNet) is a few-shot learning framework that performs metric learning and classification using the distance to prototype representations of each class.

Few-Shot Learning General Classification +1

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama

Learning with instance-dependent label noise is challenging, because it is hard to model such real-world noise.

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

1 code implementation NeurIPS 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama

By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.

γ-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator

no code implementations 13 Jun 2020 Masahiro Fujisawa, Takeshi Teshima, Issei Sato, Masashi Sugiyama

Approximate Bayesian computation (ABC) is a likelihood-free inference method that has been employed in various applications.

Pairwise Supervision Can Provably Elicit a Decision Boundary

no code implementations 11 Jun 2020 Han Bao, Takuya Shimada, Liyuan Xu, Issei Sato, Masashi Sugiyama

A classifier built upon the representations is expected to perform well in downstream classification; however, little theory has been given in the literature so far, and thus the relationship between similarity and classification has remained elusive.

Classification Contrastive Learning +3

Rethinking Importance Weighting for Deep Learning under Distribution Shift

1 code implementation NeurIPS 2020 Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama

Under distribution shift (DS) where the training data distribution differs from the test one, a powerful technique is importance weighting (IW) which handles DS in two separate steps: weight estimation (WE) estimates the test-over-training density ratio and weighted classification (WC) trains the classifier from weighted training data.
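
The two separate steps named here, WE and WC, form the classic pipeline sketched below, with the density ratio estimated by a domain discriminator. The paper argues for replacing this decoupled pipeline, so the sketch shows the baseline being rethought:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(X_train, X_test):
    # WE step: a probabilistic classifier discriminating training (d=0)
    # from test (d=1) samples yields the density ratio via
    # p_test(x)/p_train(x) = (n_train/n_test) * p(d=1|x) / p(d=0|x)
    Z = np.vstack([X_train, X_test])
    d = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]
    disc = LogisticRegression(max_iter=1000).fit(Z, d)
    p = disc.predict_proba(X_train)[:, 1]
    return (len(X_train) / len(X_test)) * p / (1 - p)

# WC step: any loss that accepts per-sample weights works, e.g.
# clf = LogisticRegression().fit(
#     X_train, y_train,
#     sample_weight=estimate_importance_weights(X_train, X_test))
```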

Calibrated Surrogate Losses for Adversarially Robust Classification

no code implementations 28 May 2020 Han Bao, Clayton Scott, Masashi Sugiyama

Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns.

Classification General Classification +1

Learning from Aggregate Observations

1 code implementation NeurIPS 2020 Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama

We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals.

Classification General Classification +1

Do Public Datasets Assure Unbiased Comparisons for Registration Evaluation?

no code implementations 20 Mar 2020 Jie Luo, Guangshen Ma, Sarah Frisken, Parikshit Juvekar, Nazim Haouchine, Zhe Xu, Yiming Xiao, Alexandra Golby, Patrick Codd, Masashi Sugiyama, William Wells III

In this study, we use the variogram to screen the manually annotated landmarks in two datasets used to benchmark registration in image-guided neurosurgeries.

Image Registration

Do We Need Zero Training Loss After Achieving Zero Training Error?

1 code implementation ICML 2020 Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama

We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
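
The flooding regularizer this entry refers to is essentially a one-liner: with flood level b, minimizing |L - b| + b performs gradient descent while the training loss is above b and gradient ascent below it, so the loss "floats" around b instead of reaching zero. A minimal sketch (the flood level b is a hyperparameter to tune):

```python
import torch

def flooded_loss(loss, b=0.05):
    # loss: scalar training loss; b: flood level
    return (loss - b).abs() + b

# usage inside a training loop:
# loss = flooded_loss(criterion(model(x), y), b=0.05)
# loss.backward()
```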

Progressive Identification of True Labels for Partial-Label Learning

1 code implementation ICML 2020 Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.

Partial Label Learning Stochastic Optimization

Towards Mixture Proportion Estimation without Irreducibility

no code implementations 10 Feb 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao

It is worthwhile to change the problem: we prove that if the assumption holds, our method does not affect anything; if the assumption does not hold, the bias from changing the problem is smaller than the bias from violating the irreducibility assumption in the original problem.

Few-shot Domain Adaptation by Causal Mechanism Transfer

1 code implementation ICML 2020 Takeshi Teshima, Issei Sato, Masashi Sugiyama

We take the structural equations in causal modeling as an example and propose a novel DA method, which is shown to be useful both theoretically and experimentally.

Domain Adaptation

Learning from Noisy Similar and Dissimilar Data

no code implementations 3 Feb 2020 Soham Dan, Han Bao, Masashi Sugiyama

We perform a detailed investigation of this problem under two realistic noise models and propose two algorithms to learn from noisy S-D data.

Binary Classification from Positive Data with Skewed Confidence

no code implementations 29 Jan 2020 Kazuhiko Shinoda, Hirotaka Kaji, Masashi Sugiyama

Positive-confidence (Pconf) classification [Ishida et al., 2018] is a promising weakly-supervised learning method which trains a binary classifier only from positive data equipped with confidence.

Classification General Classification

Confidence Scores Make Instance-dependent Label-noise Learning Possible

no code implementations 11 Jan 2020 Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

We find that, with the help of confidence scores, the transition distribution of each instance can be approximately estimated.

Learning with noisy labels

Learning with Multiple Complementary Labels

no code implementations ICML 2020 Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama

In this paper, we propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs.

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?

no code implementations 20 Nov 2019 Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.

Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning

1 code implementation EACL 2021 Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama

We consider the situation in which a user has collected a small set of documents on a cohesive topic, and they want to retrieve additional documents on this topic from a large collection.

Information Retrieval

Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach

no code implementations 20 Oct 2019 Nan Lu, Tianyi Zhang, Gang Niu, Masashi Sugiyama

The recently proposed unlabeled-unlabeled (UU) classification method allows us to train a binary classifier only from two unlabeled datasets with different class priors.

Classification General Classification

A unified view of likelihood ratio and reparameterization gradients and an optimal importance sampling scheme

no code implementations 14 Oct 2019 Paavo Parmas, Masashi Sugiyama

Reparameterization (RP) and likelihood ratio (LR) gradient estimators are used throughout machine and reinforcement learning; however, they are usually explained as simple mathematical tricks without providing any insight into their nature.

reinforcement-learning

Learning Only from Relevant Keywords and Unlabeled Documents

no code implementations IJCNLP 2019 Nontawat Charoenphakdee, Jongyeong Lee, Yiping Jin, Dittaya Wanvarie, Masashi Sugiyama

We consider a document classification problem where document labels are absent but only relevant keywords of a target class and unlabeled documents are given.

Document Classification General Classification

Learning from Indirect Observations

1 code implementation 10 Oct 2019 Yivan Zhang, Nontawat Charoenphakdee, Masashi Sugiyama

Weakly-supervised learning is a paradigm for alleviating the scarcity of labeled data by leveraging lower-quality but larger-scale supervision signals.

Reducing Overestimation Bias in Multi-Agent Domains Using Double Centralized Critics

2 code implementations 3 Oct 2019 Johannes Ackermann, Volker Gabler, Takayuki Osa, Masashi Sugiyama

Finally, we investigate the application of multi-agent methods to high-dimensional robotic tasks and show that our approach can be used to learn decentralized policies in this domain.

Multi-agent Reinforcement Learning reinforcement-learning

Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution

no code implementations 25 Sep 2019 Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama

Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD; we name it wildly UDA (WUDA).

Unsupervised Domain Adaptation Wildly Unsupervised Domain Adaptation

VILD: Variational Imitation Learning with Diverse-quality Demonstrations

no code implementations 15 Sep 2019 Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama

However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs.

Continuous Control Imitation Learning

Constraint Learning for Control Tasks with Limited Duration Barrier Functions

no code implementations 26 Aug 2019 Motoya Ohnishi, Gennaro Notomista, Masashi Sugiyama, Magnus Egerstedt

When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness oftentimes outweigh optimality as a primary consideration.

Are Registration Uncertainty and Error Monotonically Associated?

no code implementations 21 Aug 2019 Jie Luo, Sarah Frisken, Duo Wang, Alexandra Golby, Masashi Sugiyama, William M. Wells III

Probabilistic image registration (PIR) methods provide measures of registration uncertainty, which could be a surrogate for assessing the registration error.

Image Registration

Classification from Triplet Comparison Data

1 code implementation 24 Jul 2019 Zhenghang Cui, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama

Although learning from triplet comparison data has been considered in many applications, an important fundamental question of whether we can learn a classifier only from triplet comparison data has remained unanswered.

Classification General Classification +1

Direction Matters: On Influence-Preserving Graph Summarization and Max-cut Principle for Directed Graphs

no code implementations 22 Jul 2019 Wenkai Xu, Gang Niu, Aapo Hyvärinen, Masashi Sugiyama

On the other hand, compressing the vertices while preserving the directed edge information provides a way to learn the small-scale representation of a directed graph.

Are Anchor Points Really Indispensable in Label-Noise Learning?

1 code implementation NeurIPS 2019 Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama

Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely).

Learning with noisy labels

Uncoupled Regression from Pairwise Comparison Data

1 code implementation NeurIPS 2019 Liyuan Xu, Junya Honda, Gang Niu, Masashi Sugiyama

We propose two practical methods for uncoupled regression from pairwise comparison data and show that the learned regression model converges to the optimal model at the optimal parametric convergence rate when the target variable is uniformly distributed.

Learning-To-Rank

Fast and Robust Rank Aggregation against Model Misspecification

no code implementations 29 May 2019 Yuangang Pan, WeiJie Chen, Gang Niu, Ivor W. Tsang, Masashi Sugiyama

In rank aggregation, preferences from different users are summarized into a total order under the homogeneous data assumption.

Bayesian Inference

Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification

no code implementations 29 May 2019 Han Bao, Masashi Sugiyama

A clue to tackle their direct optimization is a calibrated surrogate utility, which is a tractable lower bound of the true utility function representing a given metric.

Classification General Classification +2

Solving NP-Hard Problems on Graphs with Extended AlphaGo Zero

2 code implementations 28 May 2019 Kenshin Abe, Zijian Xu, Issei Sato, Masashi Sugiyama

There have been increasing efforts to solve combinatorial optimization problems with machine learning.

Combinatorial Optimization Q-Learning

Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation

1 code implementation 19 May 2019 Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama

Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD; we name it wildly UDA (WUDA).

Unsupervised Domain Adaptation Wildly Unsupervised Domain Adaptation

Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization

no code implementations 26 Apr 2019 Takuya Shimada, Han Bao, Issei Sato, Masashi Sugiyama

In this paper, we derive an unbiased risk estimator which can handle all of similarities/dissimilarities and unlabeled data.

General Classification

A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision

no code implementations ICLR Workshop LLD 2019 Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.

Meta-Learning Multi-Label Learning

Zero-shot Domain Adaptation Based on Attribute Information

no code implementations 13 Mar 2019 Masato Ishii, Takashi Takenouchi, Masashi Sugiyama

In this paper, we propose a novel domain adaptation method that can be applied without target data.

Domain Adaptation

Polynomial-time Algorithms for Multiple-arm Identification with Full-bandit Feedback

no code implementations 27 Feb 2019 Yuko Kuroki, Liyuan Xu, Atsushi Miyauchi, Junya Honda, Masashi Sugiyama

Based on our approximation algorithm, we propose novel bandit algorithms for the top-k selection problem, and prove that our algorithms run in polynomial time.

Online Multiclass Classification Based on Prediction Margin for Partial Feedback

no code implementations 4 Feb 2019 Takuo Kaneko, Issei Sato, Masashi Sugiyama

We consider the problem of online multiclass classification with partial feedback, where an algorithm predicts a class for a new instance in each round and only receives its correctness.

Classification General Classification +1

Semi-Supervised Ordinal Regression Based on Empirical Risk Minimization

no code implementations 31 Jan 2019 Taira Tsuchiya, Nontawat Charoenphakdee, Issei Sato, Masashi Sugiyama

We further provide an estimation error bound to show that our risk estimator is consistent.

New Tricks for Estimating Gradients of Expectations

no code implementations 31 Jan 2019 Christian J. Walder, Paul Roussel, Richard Nock, Cheng Soon Ong, Masashi Sugiyama

We introduce a family of pairwise stochastic gradient estimators for gradients of expectations, which are related to the log-derivative trick, but involve pairwise interactions between samples.

On the Calibration of Multiclass Classification with Rejection

1 code implementation NeurIPS 2019 Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama

First, we consider an approach based on simultaneous training of a classifier and a rejector, which achieves the state-of-the-art performance in the binary case.

Classification General Classification

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative

no code implementations 29 Jan 2019 Miao Xu, Bingcong Li, Gang Niu, Bo Han, Masashi Sugiyama

Might there be a new sample selection method that can outperform the latest importance-reweighting method in the deep learning age?

Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric

1 code implementation 28 Jan 2019 Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, Myunghee Cho Paik

We consider the problem of learning a binary classifier from only positive and unlabeled observations (called PU learning).

Hyperparameter Optimization

Active Deep Q-learning with Demonstration

no code implementations 6 Dec 2018 Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama

In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training.

Q-Learning reinforcement-learning

Complementary-Label Learning for Arbitrary Losses and Models

1 code implementation Proceedings of the 36th International Conference on Machine Learning, 2019 Takashi Ishida, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama

In contrast to the standard classification paradigm where the true class is given to each training pattern, complementary-label learning only uses training patterns each equipped with a complementary label, which only specifies one of the classes that the pattern does not belong to.

General Classification Image Classification

Classification from Positive, Unlabeled and Biased Negative Data

1 code implementation ICLR 2019 Yu-Guan Hsieh, Gang Niu, Masashi Sugiyama

In binary classification, there are situations where negative (N) data are too diverse to be fully labeled and we often resort to positive-unlabeled (PU) learning in these scenarios.

Classification General Classification

SIGUA: Forgetting May Make Learning with Noisy Labels More Robust

1 code implementation ICML 2020 Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, Masashi Sugiyama

Given data with noisy labels, over-parameterized deep networks can gradually memorize the data, and fit everything in the end.

Learning with noisy labels

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels

no code implementations 27 Sep 2018 Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama

To handle these issues, by using the memorization effects of deep neural networks, we may train deep neural networks on the whole dataset for only the first few iterations.

Improving Generative Adversarial Imitation Learning with Non-expert Demonstrations

no code implementations 27 Sep 2018 Voot Tangkaratt, Masashi Sugiyama

Imitation learning aims to learn an optimal policy from expert demonstrations and its recent combination with deep learning has shown impressive performance.

Continuous Control Imitation Learning

Positive-Unlabeled Classification under Class Prior Shift and Asymmetric Error

no code implementations 19 Sep 2018 Nontawat Charoenphakdee, Masashi Sugiyama

Based on the analysis of the Bayes optimal classifier, we show that given a test class prior, PU classification under class prior shift is equivalent to PU classification with asymmetric error.

Classification Density Ratio Estimation +1

Alternate Estimation of a Classifier and the Class-Prior from Positive and Unlabeled Data

no code implementations 15 Sep 2018 Masahiro Kato, Liyuan Xu, Gang Niu, Masashi Sugiyama

In this paper, we propose a novel unified approach to estimating the class-prior and training a classifier alternately.

Dueling Bandits with Qualitative Feedback

no code implementations 14 Sep 2018 Liyuan Xu, Junya Honda, Masashi Sugiyama

We formulate and study a novel multi-armed bandit problem called the qualitative dueling bandit (QDB) problem, where an agent observes not numeric but qualitative feedback by pulling each arm.

Clipped Matrix Completion: A Remedy for Ceiling Effects

no code implementations 13 Sep 2018 Takeshi Teshima, Miao Xu, Issei Sato, Masashi Sugiyama

On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various information deficits by using the principle of low-rank completion.

Matrix Completion Recommendation Systems

Unsupervised Domain Adaptation Based on Source-guided Discrepancy

no code implementations 11 Sep 2018 Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, Masashi Sugiyama

A previously proposed discrepancy that does not use the source domain labels requires high computational cost to estimate and may lead to a loose generalization error bound in the target domain.

Unsupervised Domain Adaptation

On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data

1 code implementation ICLR 2019 Nan Lu, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama

In this paper, we study training an arbitrary (from linear to deep) binary classifier from only unlabeled (U) data by ERM.

Continuous-time Value Function Approximation in Reproducing Kernel Hilbert Spaces

no code implementations NeurIPS 2018 Motoya Ohnishi, Masahiro Yukawa, Mikael Johansson, Masashi Sugiyama

Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems.

Atari Games Gaussian Processes +1

Matrix Co-completion for Multi-label Classification with Missing Features and Labels

no code implementations 23 May 2018 Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama

We consider a challenging multi-label classification problem where both the feature matrix $X$ and the label matrix $Y$ have missing entries.

General Classification Matrix Completion +1

Masking: A New Perspective of Noisy Supervision

2 code implementations NeurIPS 2018 Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya zhang, Masashi Sugiyama

It is important to learn various types of classifiers given training data with noisy labels.

Ranked #29 on Image Classification on Clothing1M (using extra training data)

Image Classification

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations NeurIPS 2018 Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.

Learning with noisy labels
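
The Co-teaching update itself is compact: each network ranks the batch by loss, keeps its small-loss (likely clean) portion, and its peer trains on that selection. A minimal sketch; the full method anneals remember_rate over epochs:

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, remember_rate=0.8):
    k = int(remember_rate * len(y))
    with torch.no_grad():
        # each network selects its k smallest-loss samples
        idx1 = F.cross_entropy(net1(x), y, reduction="none").argsort()[:k]
        idx2 = F.cross_entropy(net2(x), y, reduction="none").argsort()[:k]
    # cross update: net1 learns from net2's selection and vice versa
    for net, opt, idx in ((net1, opt1, idx2), (net2, opt2, idx1)):
        opt.zero_grad()
        F.cross_entropy(net(x[idx]), y[idx]).backward()
        opt.step()
```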

On the Applicability of Registration Uncertainty

no code implementations 14 Mar 2018 Jie Luo, Alireza Sedghi, Karteek Popuri, Dana Cobzas, Miaomiao Zhang, Frank Preiswerk, Matthew Toews, Alexandra Golby, Masashi Sugiyama, William M. Wells III, Sarah Frisken

For probabilistic image registration (PIR), the predominant way to quantify the registration uncertainty is using summary statistics of the distribution of transformation parameters.

Image Registration

Uplift Modeling from Separate Labels

1 code implementation NeurIPS 2018 Ikko Yamane, Florian Yger, Jamal Atif, Masashi Sugiyama

Uplift modeling is aimed at estimating the incremental impact of an action on an individual's behavior, which is useful in various application domains such as targeted marketing (advertisement campaigns) and personalized medicine (medical treatments).

Binary Matrix Completion Using Unobserved Entries

no code implementations 13 Mar 2018 Masayoshi Hayashi, Tomoya Sakai, Masashi Sugiyama

In this paper, motivated by a semi-supervised classification method recently proposed by Sakai et al. (2017), we develop a method for the BMC problem which can use all of positive, negative, and unobserved entries, by combining the risks of Davenport et al. (2014) and Hsieh et al. (2015).

General Classification Matrix Completion +1

Variational Inference for Gaussian Process with Panel Count Data

no code implementations 12 Mar 2018 Hongyi Ding, Young Lee, Issei Sato, Masashi Sugiyama

We present the first framework for Gaussian-process-modulated Poisson processes when the temporal data appear in the form of panel counts.

Variational Inference

Active Feature Acquisition with Supervised Matrix Completion

no code implementations 15 Feb 2018 Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen

Feature missing is a serious problem in many applications, which may lead to low quality of training data and further significantly degrade the learning performance.

Matrix Completion

Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model

1 code implementation ICML 2018 Hideaki Imamura, Issei Sato, Masashi Sugiyama

In this paper, we derive a minimax error rate under a more practical setting for a broader class of crowdsourcing models, including the DS model as a special case.

Classification from Pairwise Similarity and Unlabeled Data

2 code implementations ICML 2018 Han Bao, Gang Niu, Masashi Sugiyama

Supervised learning needs a huge amount of labeled data, which can be a big bottleneck under the situation where there is a privacy concern or labeling cost is high.

Classification General Classification

Gaussian Process Classification with Privileged Information by Soft-to-Hard Labeling Transfer

no code implementations 12 Feb 2018 Ryosuke Kamesawa, Issei Sato, Masashi Sugiyama

A state-of-the-art method of Gaussian process classification (GPC) with privileged information is GPC+, which incorporates privileged information into a noise term of the likelihood.

Gaussian Processes General Classification +1

Hierarchical Policy Search via Return-Weighted Density Estimation

no code implementations 28 Nov 2017 Takayuki Osa, Masashi Sugiyama

Learning an optimal policy from a multi-modal reward function is a challenging problem in reinforcement learning (RL).

Density Estimation Motion Planning

Variational Inference based on Robust Divergences

1 code implementation 18 Oct 2017 Futoshi Futami, Issei Sato, Masashi Sugiyama

In this paper, based on Zellner's optimization and variational formulation of Bayesian inference, we propose an outlier-robust pseudo-Bayesian variational method by replacing the Kullback-Leibler divergence used for data fitting with a robust divergence such as the beta- and gamma-divergences.

Bayesian Inference Variational Inference

Good Arm Identification via Bandit Feedback

no code implementations 17 Oct 2017 Hideaki Kano, Junya Honda, Kentaro Sakamaki, Kentaro Matsuura, Atsuyoshi Nakamura, Masashi Sugiyama

We consider a novel stochastic multi-armed bandit problem called good arm identification (GAI), where a good arm is defined as an arm with expected reward greater than or equal to a given threshold.

Fully adaptive algorithm for pure exploration in linear bandits

no code implementations 16 Oct 2017 Liyuan Xu, Junya Honda, Masashi Sugiyama

We propose the first fully adaptive algorithm for pure exploration in linear bandits: the task of finding the arm with the largest expected reward, which depends linearly on an unknown parameter.

Information-Theoretic Representation Learning for Positive-Unlabeled Classification

no code implementations 15 Oct 2017 Tomoya Sakai, Gang Niu, Masashi Sugiyama

Recent advances in weakly supervised classification allow us to train a classifier only from positive and unlabeled (PU) data.

Classification Dimensionality Reduction +3

Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios

no code implementations 6 Jul 2017 Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama

Based on the proposed estimator, novel methods both for mode-seeking clustering and density ridge estimation are developed, and the respective convergence rates to the mode and ridge of the underlying density are also established.

Density Estimation

Expectation Propagation for t-Exponential Family Using Q-Algebra

no code implementations NeurIPS 2017 Futoshi Futami, Issei Sato, Masashi Sugiyama

Exponential family distributions are highly useful in machine learning since their calculation can be performed efficiently through natural parameters.

Learning from Complementary Labels

1 code implementation NeurIPS 2017 Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama

Collecting complementary labels would be less laborious than collecting ordinary labels, since users do not have to carefully choose the correct class from a long list of candidate classes.

Classification General Classification +1

Guide Actor-Critic for Continuous Control

1 code implementation ICLR 2018 Voot Tangkaratt, Abbas Abdolmaleki, Masashi Sugiyama

First, we show that GAC updates the guide actor by performing second-order optimization in the action space where the curvature matrix is based on the Hessians of the critic.

Continuous Control reinforcement-learning

Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling

1 code implementation 19 May 2017 Hongyi Ding, Mohammad Emtiyaz Khan, Issei Sato, Masashi Sugiyama

We model the intensity of each sequence as an infinite mixture of latent functions, each of which is obtained using a function drawn from a Gaussian process.

Variational Inference

Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning

no code implementations 4 May 2017 Tomoya Sakai, Gang Niu, Masashi Sugiyama

Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification.

imbalanced classification

Stochastic Divergence Minimization for Biterm Topic Model

no code implementations 1 May 2017 Zhenghang Cui, Issei Sato, Masashi Sugiyama

With the emergence and thriving development of social networks, a huge number of short texts are accumulated and need to be processed.

Topic Models Variational Inference

Misdirected Registration Uncertainty

no code implementations 26 Apr 2017 Jie Luo, Karteek Popuri, Dana Cobzas, Hongyi Ding, William M. Wells III, Masashi Sugiyama

Since the transformation is such an essential component of registration, most existing studies conventionally quantify the registration uncertainty, which is the confidence in the estimated spatial correspondences, by the transformation uncertainty.

Image Registration Medical Image Registration

Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags

1 code implementation 22 Apr 2017 Han Bao, Tomoya Sakai, Issei Sato, Masashi Sugiyama

Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available.

Content-Based Image Retrieval Multiple Instance Learning +1

Positive-Unlabeled Learning with Non-Negative Risk Estimator

1 code implementation NeurIPS 2017 Ryuichi Kiryo, Gang Niu, Marthinus C. Du Plessis, Masashi Sugiyama

From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning.
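
The non-negative correction at the heart of this line of work: the unbiased PU risk estimates the negative-class risk as R_u^- − π R_p^-, which flexible models can drive negative; clamping it at zero gives the non-negative estimator. A simplified sketch (the paper additionally adjusts gradients when the clamped term is active):

```python
import torch

def nnpu_risk(g_pos, g_unl, prior, loss=lambda z: torch.sigmoid(-z)):
    # g_pos, g_unl: classifier outputs on positive and unlabeled data,
    # g(x) > 0 meaning positive; prior: the class prior pi of the positives
    r_p_pos = loss(g_pos).mean()             # positive risk on P data
    r_p_neg = loss(-g_pos).mean()            # "negative" risk on P data
    r_u_neg = loss(-g_unl).mean()            # negative risk on U data
    neg_part = r_u_neg - prior * r_p_neg     # unbiased negative-class risk
    return prior * r_p_pos + torch.clamp(neg_part, min=0.0)
```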

Learning Discrete Representations via Information Maximizing Self-Augmented Training

2 code implementations ICML 2017 Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama

Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation.

Ranked #3 on Unsupervised Image Classification on SVHN (using extra training data)

Data Augmentation Unsupervised Image Classification

Policy Search with High-Dimensional Context Variables

no code implementations 10 Nov 2016 Voot Tangkaratt, Herke van Hoof, Simone Parisi, Gerhard Neumann, Jan Peters, Masashi Sugiyama

A naive application of unsupervised dimensionality reduction methods to the context variables, such as principal component analysis, is insufficient as task-relevant input may be ignored.

Dimensionality Reduction

Does Distributionally Robust Supervised Learning Give Robust Classifiers?

no code implementations ICML 2018 Weihua Hu, Gang Niu, Issei Sato, Masashi Sugiyama

Since the DRSL is explicitly formulated for a distribution shift scenario, we naturally expect it to give a robust classifier that can aggressively handle shifted distributions.

General Classification

Class-prior Estimation for Learning from Positive and Unlabeled Data

no code implementations 5 Nov 2016 Marthinus C. du Plessis, Gang Niu, Masashi Sugiyama

Under the assumption that an additional labeled dataset is available, the class prior can be estimated by fitting a mixture of class-wise data distributions to the unlabeled data distribution.

Geometry-aware stationary subspace analysis

no code implementations 25 May 2016 Inbal Horev, Florian Yger, Masashi Sugiyama

The classic SSA method finds a matrix that projects the data onto a stationary subspace by optimizing a cost function based on a matrix divergence.

Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data

no code implementations ICML 2017 Tomoya Sakai, Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama

Most of the semi-supervised classification methods developed so far use unlabeled data for regularization purposes under particular distributional assumptions such as the cluster assumption.

Classification General Classification

Reinterpreting the Transformation Posterior in Probabilistic Image Registration

no code implementations 7 Apr 2016 Jie Luo, Karteek Popuri, Dana Cobzas, Hongyi Ding, Masashi Sugiyama

Meanwhile, summary statistics of the posterior are employed to evaluate the registration uncertainty, that is the trustworthiness of the registered image.

Image Registration

Whitening-Free Least-Squares Non-Gaussian Component Analysis

1 code implementation 3 Mar 2016 Hiroaki Shiino, Hiroaki Sasaki, Gang Niu, Masashi Sugiyama

Non-Gaussian component analysis (NGCA) is an unsupervised linear dimension reduction method that extracts low-dimensional non-Gaussian "signals" from high-dimensional data contaminated with Gaussian noise.

Dimensionality Reduction

Non-Gaussian Component Analysis with Log-Density Gradient Estimation

no code implementations 28 Jan 2016 Hiroaki Sasaki, Gang Niu, Masashi Sugiyama

Non-Gaussian component analysis (NGCA) is aimed at identifying a linear subspace such that the projected data follows a non-Gaussian distribution.

Condition for Perfect Dimensionality Recovery by Variational Bayesian PCA

1 code implementation 15 Dec 2015 Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. Derin Babacan

In this paper, we clarify the behavior of VB learning in probabilistic PCA (or fully-observed matrix factorization).