Search Results for author: Nan Lu

Found 12 papers, 5 papers with code

Electrical Load Forecasting Model Using Hybrid LSTM Neural Networks with Online Correction

no code implementations • 6 Mar 2024 • Nan Lu, Quan Ouyang, Yang Li, Changfu Zou

Accurate electrical load forecasting is of great importance for the efficient operation and control of modern power systems.

Load Forecasting · Time Series

A General Framework for Learning under Corruption: Label Noise, Attribute Noise, and Beyond

no code implementations • 17 Jul 2023 • Laura Iacovissi, Nan Lu, Robert C. Williamson

Corruption is frequently observed in collected data and has been extensively studied in machine learning under different corruption models.

Attribute

Multi-class Classification from Multiple Unlabeled Datasets with Partial Risk Regularization

1 code implementation • 4 Jul 2022 • Yuting Tang, Nan Lu, Tianyi Zhang, Masashi Sugiyama

Recent years have witnessed a great success of supervised deep learning, where predictive models were trained from a large amount of fully labeled data.

Multi-class Classification

Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients

1 code implementation • 7 Apr 2022 • Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model.

Federated Learning
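
The FedUL recipe in this abstract is concrete enough to sketch. Below is a minimal Python illustration (hypothetical helper names, numpy only) of the two client-side ingredients it describes: labeling each point with the index of the unlabeled set it came from, and recovering true-class posteriors from surrogate posteriors through the known matrix of set-wise class priors. This is a sketch under the assumptions of equally sized sets and a full-rank prior matrix, not the authors' reference implementation (their code is linked from this entry).

    import numpy as np

    def surrogate_labels(unlabeled_sets):
        # Label each sample with the index of the unlabeled set it came from.
        X = np.vstack(unlabeled_sets)
        y_surr = np.concatenate(
            [np.full(len(U), i) for i, U in enumerate(unlabeled_sets)])
        return X, y_surr

    def recover_posterior(q_surr, priors):
        # q_surr: surrogate posteriors p(set = i | x) from the trained model.
        # priors[i, k]: known proportion of class k in unlabeled set i.
        # With equally sized sets, q_surr is proportional to priors @ p_true,
        # so solve the linear system and renormalize.
        p, *_ = np.linalg.lstsq(priors, q_surr, rcond=None)
        p = np.clip(p, 1e-12, None)
        return p / p.sum()

The supervised-FL step in between is ordinary federated training on (X, y_surr); only the surrogate labeling and the posterior recovery are FedUL-specific.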

Rethinking Importance Weighting for Transfer Learning

no code implementations • 19 Dec 2021 • Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama

A key assumption in supervised learning is that training and test data follow the same probability distribution.

Selection bias · Transfer Learning

Unsupervised Federated Learning is Possible

no code implementations • ICLR 2022 • Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model.

Federated Learning

Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification

1 code implementation • 1 Feb 2021 • Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama

SSC can be solved by a standard (multi-class) classification method, and we use the SSC solution to obtain the final binary classifier through a certain linear-fractional transformation.

Binary Classification · Classification · +2
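
As an illustration of what a linear-fractional recovery can look like, here is the special case of two unlabeled sets, worked out from Bayes' rule (my own derivation for illustration; the snippet does not spell out the paper's general transformation, which may take a different form). It assumes equally sized sets; q1 is the surrogate posterior p(set = 1 | x) from the trained classifier, theta1 and theta2 are the known class priors of the two sets, and pi_test is the test-time class prior.

    def binary_posterior(q1, theta1, theta2, pi_test):
        # Density ratio of the two set densities: p1(x) / p2(x).
        r = q1 / (1.0 - q1)
        # Invert p_i(x) = theta_i * p(x|+) + (1 - theta_i) * p(x|-) to get
        # the class-conditional ratio s = p(x|+) / p(x|-).
        s = (r * (1.0 - theta2) - (1.0 - theta1)) / (theta1 - r * theta2)
        s = max(s, 0.0)  # clip negatives caused by estimation error
        # Bayes' rule with the test-time class prior.
        return pi_test * s / (pi_test * s + (1.0 - pi_test))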

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations • 5 Oct 2020 • Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed.

Binary Classification · Classification · +2

A One-step Approach to Covariate Shift Adaptation

no code implementations • 8 Jul 2020 • Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama

A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.

Rethinking Importance Weighting for Deep Learning under Distribution Shift

1 code implementation • NeurIPS 2020 • Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama

Under distribution shift (DS) where the training data distribution differs from the test one, a powerful technique is importance weighting (IW) which handles DS in two separate steps: weight estimation (WE) estimates the test-over-training density ratio and weighted classification (WC) trains the classifier from weighted training data.
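
The two-step pipeline described in that snippet is easy to sketch. The following Python sketch uses a probabilistic domain classifier for the WE step, which is one standard density-ratio estimator and not necessarily the one analyzed in the paper; the paper's point is precisely that running WE and WC as separate steps can be problematic with deep models, so treat this as the baseline being rethought.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def estimate_weights(X_train, X_test):
        # WE step: if d(x) = p(domain = test | x), the test-over-training
        # density ratio is (n_train / n_test) * d(x) / (1 - d(x)).
        X = np.vstack([X_train, X_test])
        d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p = clf.predict_proba(X_train)[:, 1]
        return (len(X_train) / len(X_test)) * p / (1.0 - p)

    def weighted_classification(X_train, y_train, weights):
        # WC step: fit the final classifier on importance-weighted data.
        return LogisticRegression(max_iter=1000).fit(
            X_train, y_train, sample_weight=weights)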

Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach

no code implementations • 20 Oct 2019 • Nan Lu, Tianyi Zhang, Gang Niu, Masashi Sugiyama

The recently proposed unlabeled-unlabeled (UU) classification method allows us to train a binary classifier only from two unlabeled datasets with different class priors.

Classification · General Classification
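
The correction mechanism can be sketched generically. In the unbiased UU risk, losses on the two unlabeled sets enter with coefficients derived from the class priors, some of which are negative, so the empirical risk can be driven below zero and the model overfits; the fix is to clamp each class-wise partial risk at zero before summing. The coefficients a, b, c, d below are placeholders for the prior-dependent values derived in the paper, and this is a sketch of the correction idea, not the paper's exact estimator.

    import numpy as np

    def corrected_uu_risk(losses_pos_1, losses_neg_1,
                          losses_pos_2, losses_neg_2, a, b, c, d):
        # losses_*_i: per-sample losses on unlabeled set i against the
        # positive / negative pseudo-target; a, b, c, d: prior-dependent
        # coefficients (possibly negative) from the unbiased UU risk.
        r_pos = a * losses_pos_1.mean() + c * losses_pos_2.mean()
        r_neg = b * losses_neg_1.mean() + d * losses_neg_2.mean()
        # Consistent correction: each partial risk estimates a non-negative
        # quantity, so clamp it at zero to prevent over-minimization.
        return max(r_pos, 0.0) + max(r_neg, 0.0)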

On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data

1 code implementation • ICLR 2019 • Nan Lu, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama

In this paper, we study training arbitrary (from linear to deep) binary classifiers from only unlabeled (U) data by empirical risk minimization (ERM).
