no code implementations • 3 Jan 2025 • Zeke Xie, Zheng He, Nan Lu, Lichen Bai, Bao Li, Shuo Yang, Mingming Sun, Ping Li
Real-world data often contains intrinsic ambiguity that the common single-hard-label annotation paradigm ignores.
no code implementations • 6 Mar 2024 • Nan Lu, Quan Ouyang, Yang Li, Changfu Zou
Accurate electrical load forecasting is of great importance for the efficient operation and control of modern power systems.
no code implementations • 17 Jul 2023 • Laura Iacovissi, Nan Lu, Robert C. Williamson
We generalize the definition of corruption beyond the concept of distributional shift: corruption includes all modifications of a learning problem, including changes in model class and loss function.
1 code implementation • 4 Jul 2022 • Yuting Tang, Nan Lu, Tianyi Zhang, Masashi Sugiyama
Recent years have witnessed great success of supervised deep learning, where predictive models are trained from large amounts of fully labeled data.
1 code implementation • 7 Apr 2022 • Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the desired model is recovered from the modified model.
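A minimal sketch of how those three steps could fit together, assuming each client holds as many unlabeled sets as there are classes and that the per-set and test-time class priors are known; function names such as `make_surrogate_data` and `recovery_matrix` are illustrative, not FedUL's released API, and the recovery step simply inverts the linear relation between surrogate-set posteriors and class posteriors implied by those priors.

```python
import numpy as np

def make_surrogate_data(unlabeled_sets):
    """Step 1: pool a client's unlabeled sets and use the set index as a surrogate label."""
    xs = np.concatenate(unlabeled_sets, axis=0)
    ss = np.concatenate([np.full(len(u), j) for j, u in enumerate(unlabeled_sets)])
    return xs, ss

# Step 2 (omitted): train a surrogate-set classifier f(x) ~ q(x) = p(s | x) with any
# standard supervised federated learning algorithm, e.g. FedAvg, on (xs, ss).

def recovery_matrix(set_class_priors, set_sizes, test_class_priors):
    """Linear map A with q(x) proportional to A @ eta(x), where eta(x) = p(y | x) is the
    desired posterior; built purely from the known class priors and set sizes."""
    rho = np.asarray(set_sizes, dtype=float)
    rho /= rho.sum()
    return rho[:, None] * (np.asarray(set_class_priors) / np.asarray(test_class_priors)[None, :])

def recover_class_posteriors(surrogate_posteriors, A):
    """Step 3: recover the desired model's posteriors by inverting the linear relation
    (assumes as many surrogate sets as classes; use a pseudo-inverse otherwise)."""
    eta = surrogate_posteriors @ np.linalg.inv(A).T
    eta = np.clip(eta, 1e-12, None)
    return eta / eta.sum(axis=1, keepdims=True)
```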
no code implementations • 19 Dec 2021 • Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama
A key assumption in supervised learning is that training and test data follow the same probability distribution.
1 code implementation • 1 Feb 2021 • Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama
The surrogate set classification (SSC) problem can be solved by a standard (multi-class) classification method, and we use the SSC solution to obtain the final binary classifier through a linear-fractional transformation.
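A hedged sketch of that second step, turning the multi-class SSC outputs into a binary posterior via a linear-fractional transformation. The coefficients below follow a standard mixture-inversion argument under the assumption that each unlabeled set's positive-class prior and the test-time prior are known; the paper's exact parameterization may differ (for instance, it may combine all surrogate components rather than a single one).

```python
import numpy as np

def binary_posterior_from_ssc(q, set_pos_priors, set_sizes, test_pos_prior, j=0):
    """Map the j-th surrogate-set posterior q[:, j] to an estimate of p(y=+1 | x)."""
    pi = np.asarray(set_pos_priors, dtype=float)        # positive-class prior of each set
    rho = np.asarray(set_sizes, dtype=float); rho /= rho.sum()
    theta = test_pos_prior
    # q_j(x) = (alpha_j * eta + beta_j) / (gamma * eta + delta), with eta = p(y=+1 | x)
    alpha = rho * (pi / theta - (1 - pi) / (1 - theta))
    beta = rho * (1 - pi) / (1 - theta)
    gamma, delta = alpha.sum(), beta.sum()
    # invert the linear-fractional relation for component j
    qj = q[:, j]
    eta = (delta * qj - beta[j]) / (alpha[j] - gamma * qj)
    return np.clip(eta, 0.0, 1.0)

# Usage: q are the softmax outputs of the multi-class SSC model;
# np.sign(eta - 0.5) then gives the binary predictions.
```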
no code implementations • 5 Oct 2020 • Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama
To alleviate the data requirement for training effective binary classifiers, many weakly supervised learning settings have been proposed.
no code implementations • 8 Jul 2020 • Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
1 code implementation • NeurIPS 2020 • Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama
Under distribution shift (DS), where the training data distribution differs from the test distribution, a powerful technique is importance weighting (IW), which handles DS in two separate steps: weight estimation (WE), which estimates the test-over-training density ratio, and weighted classification (WC), which trains the classifier from the weighted training data.
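A minimal sketch of that classic two-step pipeline as the sentence describes it (the paper itself argues for going beyond this static split). Weight estimation here uses the common probabilistic-classification estimator of the density ratio, which is only one of several options, and plain scikit-learn classifiers are used for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(x_tr, x_te):
    """WE: w(x) ~ p_te(x) / p_tr(x) via a domain classifier (train=0, test=1)."""
    x = np.vstack([x_tr, x_te])
    d = np.concatenate([np.zeros(len(x_tr)), np.ones(len(x_te))])
    domain_clf = LogisticRegression(max_iter=1000).fit(x, d)
    p_test = domain_clf.predict_proba(x_tr)[:, 1]
    # Bayes' rule: p_te(x)/p_tr(x) = (p(test|x)/p(train|x)) * (n_tr/n_te)
    return p_test / (1 - p_test) * (len(x_tr) / len(x_te))

def weighted_classification(x_tr, y_tr, weights):
    """WC: train the final classifier on importance-weighted training data."""
    return LogisticRegression(max_iter=1000).fit(x_tr, y_tr, sample_weight=weights)
```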
no code implementations • 20 Oct 2019 • Nan Lu, Tianyi Zhang, Gang Niu, Masashi Sugiyama
The recently proposed unlabeled-unlabeled (UU) classification method allows us to train a binary classifier only from two unlabeled datasets with different class priors.
1 code implementation • ICLR 2019 • Nan Lu, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama
In this paper, we study training arbitrary (from linear to deep) binary classifiers from only unlabeled (U) data by empirical risk minimization (ERM).
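Both this entry and the UU setting above rest on rewriting the classification risk as a linear combination of losses evaluated on the two unlabeled sets, with coefficients fixed by the two class priors and the test-time prior. Below is a hedged sketch of that unbiased risk estimator, assuming the priors theta1, theta2 (theta1 != theta2) and the test-time positive prior pi are known; the logistic loss and the function name are illustrative, and the papers give the formal treatment and its overfitting analysis.

```python
import torch
import torch.nn.functional as F

def uu_risk(out1, out2, theta1, theta2, pi, loss=lambda z, y: F.softplus(-y * z)):
    """Unbiased risk estimate from model outputs g(x) on two unlabeled sets
    whose (known) positive-class priors are theta1 and theta2."""
    d = theta1 - theta2
    pos1 = loss(out1, +1).mean()   # E_{p1}[l(g(x), +1)]
    neg1 = loss(out1, -1).mean()   # E_{p1}[l(g(x), -1)]
    pos2 = loss(out2, +1).mean()   # E_{p2}[l(g(x), +1)]
    neg2 = loss(out2, -1).mean()   # E_{p2}[l(g(x), -1)]
    return (pi * ((1 - theta2) * pos1 - (1 - theta1) * pos2)
            + (1 - pi) * (theta1 * neg2 - theta2 * neg1)) / d

# Minimizing uu_risk over any model g (linear or deep) is ordinary ERM:
# backpropagate through it exactly like a supervised loss.
```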