Search Results for author: Miao Xu

Found 30 papers, 9 papers with code

Speedup Matrix Completion with Side Information: Application to Multi-Label Learning

no code implementations NeurIPS 2013 Miao Xu, Rong Jin, Zhi-Hua Zhou

In standard matrix completion theory, it is required to have at least $O(n\ln^2 n)$ observed entries to perfectly recover a low-rank matrix $M$ of size $n\times n$, leading to a large number of observations when $n$ is large.

Matrix Completion Multi-Label Learning
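The $O(n\ln^2 n)$ sample bound mentioned above can be put in perspective with a quick numeric sketch (plain NumPy; the constant factor is hypothetical, and real bounds also depend on rank and incoherence):

```python
import numpy as np

# Illustrative comparison of the standard matrix-completion sample bound
# O(n * ln^2 n) against the n^2 total entries of an n x n matrix.
def required_samples(n, c=1.0):
    return c * n * np.log(n) ** 2

for n in [100, 1_000, 10_000]:
    frac = required_samples(n) / n ** 2
    print(f"n={n}: ~{required_samples(n):.0f} samples, {frac:.2%} of entries")
```

The required fraction of observed entries shrinks as $n$ grows, which is why side information that further reduces this count matters for large matrices.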

CUR Algorithm for Partially Observed Matrices

no code implementations 4 Nov 2014 Miao Xu, Rong Jin, Zhi-Hua Zhou

In particular, the proposed algorithm computes the low rank approximation of the target matrix based on (i) the randomly sampled rows and columns, and (ii) a subset of observed entries that are randomly sampled from the matrix.

Matrix Completion
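The row/column sampling step in (i) is the classical CUR decomposition; a minimal NumPy sketch (with a fully observed matrix for simplicity — handling partial observations as in (ii) is the paper's contribution, not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a low-rank target matrix (rank 5).
n, r = 200, 5
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Uniformly sample rows and columns; k = 20 is a hypothetical choice.
k = 20
rows = rng.choice(n, size=k, replace=False)
cols = rng.choice(n, size=k, replace=False)

C = A[:, cols]                                  # sampled columns
R = A[rows, :]                                  # sampled rows
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # linking matrix

A_hat = C @ U @ R
err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")  # near zero when k >= rank
```

With more sampled rows/columns than the rank, the sampled columns span the column space (almost surely), so the approximation is essentially exact here.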

Active Feature Acquisition with Supervised Matrix Completion

no code implementations 15 Feb 2018 Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen

Feature missing is a serious problem in many applications, which may lead to low quality of training data and further significantly degrade the learning performance.

Matrix Completion

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations NeurIPS 2018 Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.

Learning with noisy labels Memorization
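Co-teaching's core mechanic is that two networks each select their small-loss samples and hand them to the peer for its update, so selection errors don't self-reinforce. A toy sketch of that selection step (per-sample losses stand in for two real networks; the fixed forget rate is hypothetical — the paper ramps it up over epochs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-sample losses from two "networks" on the same mini-batch.
loss_a = rng.exponential(1.0, size=32)
loss_b = rng.exponential(1.0, size=32)

def small_loss_indices(losses, forget_rate):
    """Keep the (1 - forget_rate) fraction with the smallest loss."""
    keep = int(round(len(losses) * (1.0 - forget_rate)))
    return np.argsort(losses)[:keep]

forget_rate = 0.25
batch_for_b = small_loss_indices(loss_a, forget_rate)  # A teaches B
batch_for_a = small_loss_indices(loss_b, forget_rate)  # B teaches A
print(len(batch_for_a), len(batch_for_b))  # 24 24
```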

Matrix Co-completion for Multi-label Classification with Missing Features and Labels

no code implementations 23 May 2018 Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama

We consider a challenging multi-label classification problem where both the feature matrix $\mathbf{X}$ and the label matrix $\mathbf{Y}$ have missing entries.

General Classification Matrix Completion +1

Clipped Matrix Completion: A Remedy for Ceiling Effects

no code implementations 13 Sep 2018 Takeshi Teshima, Miao Xu, Issei Sato, Masashi Sugiyama

On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various information deficits by using the principle of low-rank completion.

Matrix Completion Recommendation Systems

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels

no code implementations 27 Sep 2018 Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama

To handle these issues, by exploiting the memorization effects of deep neural networks, we may train deep neural networks on the whole dataset only in the first few iterations.

Memorization

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative

no code implementations 29 Jan 2019 Miao Xu, Bingcong Li, Gang Niu, Bo Han, Masashi Sugiyama

May there be a new sample selection method that can outperform the latest importance reweighting method in the deep learning age?

Memorization

A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision

no code implementations ICLR Workshop LLD 2019 Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.

Meta-Learning Multi-Label Learning +1

Progressive Identification of True Labels for Partial-Label Learning

1 code implementation ICML 2020 Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.

Partial Label Learning Stochastic Optimization +1
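The progressive-identification idea can be illustrated in a few lines: the model's current confidences are renormalized over the candidate set, so weight gradually concentrates on the likely true label. A toy sketch (the class probabilities and candidate set are hypothetical):

```python
import numpy as np

# Toy softmax outputs for one instance over 5 classes, and its
# candidate-label set (the true label is somewhere inside it).
probs = np.array([0.05, 0.40, 0.10, 0.35, 0.10])
candidates = np.array([1, 3])

# Re-weight candidates by the current model's confidence,
# restricted to (and normalized over) the candidate set.
weights = np.zeros_like(probs)
weights[candidates] = probs[candidates] / probs[candidates].sum()
print(weights)  # mass only on the candidate labels 1 and 3
```

These weights would then serve as soft targets for the next training step, and are recomputed as the model improves.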

Provably Consistent Partial-Label Learning

no code implementations NeurIPS 2020 Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama

Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.

Multi-class Classification Partial Label Learning

Learning to Infer User Hidden States for Online Sequential Advertising

no code implementations 3 Sep 2020 Zhaoqing Peng, Junqi Jin, Lan Luo, Yaodong Yang, Rui Luo, Jun Wang, Wei-Nan Zhang, Haiyang Xu, Miao Xu, Chuan Yu, Tiejian Luo, Han Li, Jian Xu, Kun Gai

To drive purchases in online advertising, it is of great interest to the advertiser to optimize the sequential advertising strategy, whose performance and interpretability are both important.

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations 5 Oct 2020 Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed.

Binary Classification Classification +2

Optimizing Multiple Performance Metrics with Deep GSP Auctions for E-commerce Advertising

no code implementations 5 Dec 2020 Zhilin Zhang, Xiangyu Liu, Zhenzhe Zheng, Chenrui Zhang, Miao Xu, Junwei Pan, Chuan Yu, Fan Wu, Jian Xu, Kun Gai

In e-commerce advertising, the ad platform usually relies on auction mechanisms to optimize different performance metrics, such as user experience, advertiser utility, and platform revenue.

Particle-hole asymmetric superconducting coherence peaks in overdoped cuprates

no code implementations 10 Mar 2021 Changwei Zou, Zhenqi Hao, Xiangyu Luo, Shusen Ye, Qiang Gao, Xintong Li, Miao Xu, Peng Cai, Chengtian Lin, Xingjiang Zhou, Dung-Hai Lee, Yayu Wang

To elucidate the superconductor-to-metal transition at the end of the superconducting dome, the overdoped regime has recently stepped onto the center stage of cuprate research.

Superconductivity

On the Robustness of Average Losses for Partial-Label Learning

no code implementations 11 Jun 2021 Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) utilizes instances with PLs, where a PL includes several candidate labels but only one is the true label (TL).

Partial Label Learning Weakly Supervised Classification

Active Refinement for Multi-Label Learning: A Pseudo-Label Approach

no code implementations 29 Sep 2021 Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts.

Active Learning Multi-Label Learning +1

Personalized On-Device E-health Analytics with Decentralized Block Coordinate Descent

no code implementations 17 Dec 2021 Guanhua Ye, Hongzhi Yin, Tong Chen, Miao Xu, Quoc Viet Hung Nguyen, Jiangning Song

Driven by growing attention to personal healthcare and by the pandemic, E-health is rapidly gaining popularity.

Benchmarking Fairness +1

Towards 3D Face Reconstruction in Perspective Projection: Estimating 6DoF Face Pose from Monocular Image

1 code implementation 9 May 2022 Yueying Kao, Bowen Pan, Miao Xu, Jiangjing Lyu, Xiangyu Zhu, Yuanzhang Chang, Xiaobo Li, Zhen Lei

In 3D face reconstruction, orthogonal projection has been widely employed to substitute perspective projection to simplify the fitting process.

3D Face Reconstruction

A Boosting Algorithm for Positive-Unlabeled Learning

no code implementations 19 May 2022 Yawen Zhao, Mingzhe Zhang, Chenhao Zhang, Weitong Chen, Nan Ye, Miao Xu

This is because AdaPU learns a weak classifier and its weight using a weighted positive-negative (PN) dataset in which some data weights are negative: the dataset is derived from the original PU data, and the weights are determined by the current weighted classifier combination.

Action Detection Activity Detection +1
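The PU-to-weighted-PN conversion behind such methods can be sketched directly: each labeled positive contributes both as a positive and as a negatively weighted negative, while unlabeled data are treated as negatives. A minimal sketch of the weighting scheme used in unbiased PU risk estimation (the class prior `pi` and sample sizes here are hypothetical):

```python
import numpy as np

# pi is the class prior P(y = +1); n_p / n_u are the positive and
# unlabeled sample counts.
pi = 0.4
n_p, n_u = 50, 200

# Each positive appears twice: once as a positive with weight pi/n_p,
# once as a negative with weight -pi/n_p; each unlabeled sample appears
# as a negative with weight 1/n_u.
w_pos = np.full(n_p, pi / n_p)           # positives, labeled +1
w_pos_as_neg = np.full(n_p, -pi / n_p)   # positives, relabeled -1
w_unl_as_neg = np.full(n_u, 1.0 / n_u)   # unlabeled, treated as -1

# The negative-class weights sum to approximately 1 - pi = P(y = -1).
neg_total = w_pos_as_neg.sum() + w_unl_as_neg.sum()
print(neg_total)
```

The negative weights on relabeled positives are exactly what cancels the positive mass hiding inside the unlabeled set, and they are the source of the negative data weights the abstract refers to.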

Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer

no code implementations 13 Aug 2022 Tong Wang, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, Ting Wang

Existing defenses are mainly built upon the observation that the backdoor trigger is usually of small size or affects the activation of only a few neurons.

Backdoor Attack backdoor defense

Positive-Unlabeled Learning using Random Forests via Recursive Greedy Risk Minimization

1 code implementation 16 Oct 2022 Jonathan Wilton, Abigail M. Y. Koay, Ryan K. L. Ko, Miao Xu, Nan Ye

Key to our approach is a new interpretation of decision tree algorithms for positive and negative data as \emph{recursive greedy risk minimization algorithms}.

Feature Importance Weakly Supervised Classification

Complementary to Multiple Labels: A Correlation-Aware Correction Approach

no code implementations 25 Feb 2023 Yi Gao, Miao Xu, Min-Ling Zhang

Moreover, theoretical findings reveal that estimating a transition matrix from label correlations in multi-labeled CLL (ML-CLL) requires multi-labeled data, which is unavailable in the ML-CLL setting.

Binary Classification

CaMU: Disentangling Causal Effects in Deep Model Unlearning

1 code implementation 30 Jan 2024 Shaofei Shen, Chenhao Zhang, Alina Bialkowski, Weitong Chen, Miao Xu

To address this shortcoming, the present study undertakes a causal analysis of unlearning and introduces a novel framework termed Causal Machine Unlearning (CaMU).

Machine Unlearning

MEBS: Multi-task End-to-end Bid Shading for Multi-slot Display Advertising

no code implementations 5 Mar 2024 Zhen Gong, Lvyin Niu, Yang Zhao, Miao Xu, Zhenzhe Zheng, Haoqi Zhang, Zhilin Zhang, Fan Wu, Rongquan Bai, Chuan Yu, Jian Xu, Bo Zheng

Through extensive offline and online experiments, we demonstrate the effectiveness and efficiency of our method, and we obtain a 7.01% lift in Gross Merchandise Volume, a 7.42% lift in Return on Investment, and a 3.26% lift in ad buy count.

Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models

1 code implementation 31 Mar 2024 Shaofei Shen, Chenhao Zhang, Yawen Zhao, Alina Bialkowski, Weitong Chen, Miao Xu

Leveraging this approximation, we adapt the original model to eliminate information from the forgotten data at the representation level.

Machine Unlearning
