Search Results for author: Zhaowei Zhu

Found 20 papers, 11 papers with code

FedFixer: Mitigating Heterogeneous Label Noise in Federated Learning

no code implementations25 Mar 2024 Xinyuan Ji, Zhaowei Zhu, Wei Xi, Olga Gadyatskaya, Zilong Song, Yong Cai, Yang Liu

Under heterogeneous label noise, client-specific samples incur high losses, which makes them hard to distinguish from noisily labeled samples and limits the effectiveness of existing label noise learning approaches.

Federated Learning

Fair Classifiers Without Fair Training: An Influence-Guided Data Sampling Approach

no code implementations20 Feb 2024 Jinlong Pang, Jialu Wang, Zhaowei Zhu, Yuanshun Yao, Chen Qian, Yang Liu

A fair classifier should ensure that people from different groups benefit equally, yet group information is often sensitive and unsuitable for use in model training.

Attribute Fairness

Unmasking and Improving Data Credibility: A Study with Datasets for Training Harmless Language Models

1 code implementation19 Nov 2023 Zhaowei Zhu, Jialu Wang, Hao Cheng, Yang Liu

Given the cost and difficulty of having humans clean these datasets, we introduce a systematic framework for evaluating dataset credibility, identifying label errors, and assessing the influence of noisy labels in curated language data, focusing specifically on unsafe comment and conversation classification.

Language Modelling

Fairness Improves Learning from Noisily Labeled Long-Tailed Data

no code implementations22 Mar 2023 Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu

Both long-tailed and noisily labeled data frequently appear in real-world applications and pose significant challenges for learning.

Fairness

Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes

1 code implementation6 Oct 2022 Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hang Li, Yang Liu

Our theoretical analyses show that directly using proxy models can give a false sense of (un)fairness.

Fairness
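
The abstract's warning about proxy models can be made concrete with a small, hypothetical NumPy sketch (not the paper's method or data): measure the demographic parity gap once with the true sensitive attribute and once with a noisy proxy of it. The proxy-based estimate is attenuated and can report near-fairness for an unfair classifier.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
n = 10_000
true_group = rng.integers(0, 2, size=n)                          # true (often unobserved) attribute
y_pred = (rng.random(n) < 0.3 + 0.2 * true_group).astype(int)    # classifier that favors group 1

# A weak proxy flips the attribute with 30% probability.
flip = rng.random(n) < 0.3
proxy_group = np.where(flip, 1 - true_group, true_group)

print("gap w/ true attributes :", demographic_parity_gap(y_pred, true_group))
print("gap w/ proxy attributes:", demographic_parity_gap(y_pred, proxy_group))
```

The naive proxy-based gap shrinks toward zero as proxy noise grows, which is one way such a "false sense of (un)fairness" can arise.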

To Aggregate or Not? Learning with Separate Noisy Labels

no code implementations14 Jun 2022 Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, Yang Liu

Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing).

Learning with noisy labels
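
For readers unfamiliar with the setting, a minimal sketch (hypothetical labels, not the paper's procedure) of the two options the title contrasts: aggregating the separate annotator labels into one majority-vote label per example, versus keeping every (example, annotator label) pair as its own training sample.

```python
import numpy as np

# Hypothetical annotations: rows = examples, columns = annotators.
labels = np.array([[0, 0, 1],
                   [1, 1, 1],
                   [2, 0, 2]])

# Option 1: aggregate into one label per example via majority vote.
def majority_vote(row):
    values, counts = np.unique(row, return_counts=True)
    return values[np.argmax(counts)]

aggregated = np.apply_along_axis(majority_vote, 1, labels)        # shape (3,)

# Option 2: keep separate labels -- each (example, annotator label) pair
# becomes its own training sample, preserving the noise structure.
example_idx, _ = np.meshgrid(np.arange(labels.shape[0]),
                             np.arange(labels.shape[1]), indexing="ij")
separate_pairs = np.stack([example_idx.ravel(), labels.ravel()], axis=1)  # shape (9, 2)

print(aggregated)
print(separate_pairs)
```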

Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features

2 code implementations2 Feb 2022 Zhaowei Zhu, Jialu Wang, Yang Liu

We observe that tasks with lower-quality features fail to meet the anchor-point or clusterability condition, due to the coexistence of both uninformative and informative representations.

Text Classification
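
As background only (this is standard forward loss correction, not the estimator proposed in the paper): once a noise transition matrix T is estimated, with T[i][j] the probability that true class i is observed as class j, it is typically used along these lines.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy against noisy labels after pushing clean-class
    probabilities through the transition matrix T (forward correction)."""
    clean_probs = F.softmax(logits, dim=1)          # p(y_clean | x)
    noisy_probs = clean_probs @ T                   # p(y_noisy | x) = p(y_clean | x) T
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# Hypothetical 3-class example with a symmetric 20% noise transition matrix.
T = torch.full((3, 3), 0.1)
T.fill_diagonal_(0.8)
logits = torch.randn(4, 3)
noisy_labels = torch.tensor([0, 2, 1, 0])
print(forward_corrected_loss(logits, noisy_labels, T))
```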

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

2 code implementations ICLR 2022 Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.

Benchmarking Learning with noisy labels +1

Mitigating Memorization of Noisy Labels via Regularization between Representations

1 code implementation18 Oct 2021 Hao Cheng, Zhaowei Zhu, Xing Sun, Yang Liu

Designing robust loss functions is popular in learning with noisy labels, but existing designs do not explicitly consider the overfitting property of deep neural networks (DNNs).

Learning with noisy labels Memorization +1

Detecting Corrupted Labels Without Training a Model to Predict

2 code implementations12 Oct 2021 Zhaowei Zhu, Zihao Dong, Yang Liu

In this paper, from a more data-centric perspective, we propose a training-free solution to detect corrupted labels.
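
A hedged sketch in the spirit of training-free detection, not the paper's exact algorithm: flag an example as possibly corrupted when its label disagrees with the majority label of its k nearest neighbors in a given feature space (the function and data below are hypothetical).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspect_labels(features, labels, k=10):
    """Return a boolean mask of examples whose label disagrees with the
    majority label among their k nearest neighbors (excluding themselves)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)                # idx[:, 0] is the point itself
    neighbor_labels = labels[idx[:, 1:]]            # shape (n, k)
    majority = np.array([np.bincount(row).argmax() for row in neighbor_labels])
    return majority != labels

# Hypothetical usage with pre-extracted features and (possibly noisy) labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))
labels = rng.integers(0, 3, size=200)
print(flag_suspect_labels(features, labels).sum(), "examples flagged")
```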

The Rich Get Richer: Disparate Impact of Semi-Supervised Learning

1 code implementation ICLR 2022 Zhaowei Zhu, Tianyi Luo, Yang Liu

Semi-supervised learning (SSL) has demonstrated its potential to improve model accuracy for a variety of learning tasks when high-quality supervised data is severely limited.

Fairness Pseudo Label +2

A Good Representation Detects Noisy Labels

no code implementations29 Sep 2021 Zhaowei Zhu, Zihao Dong, Hao Cheng, Yang Liu

In this paper, given good representations, we propose a universally applicable and training-free solution to detect noisy labels.

A Second-Order Approach to Learning with Instance-Dependent Label Noise

1 code implementation CVPR 2021 Zhaowei Zhu, Tongliang Liu, Yang Liu

We first provide evidence that heterogeneous instance-dependent label noise effectively down-weights examples with higher noise rates in a non-uniform way and thus causes imbalances, rendering the strategy of directly applying methods designed for class-dependent label noise questionable.

Image Classification Image Classification with Label Noise

Federated Bandit: A Gossiping Approach

no code implementations24 Oct 2020 Zhaowei Zhu, Jingxuan Zhu, Ji Liu, Yang Liu

Motivated by the proposal of federated learning, we aim for a solution in which agents never share their local observations with a central entity and are only allowed to share a private copy of their own information with their neighbors.

Federated Learning
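
A hedged illustration of the gossip primitive the abstract alludes to (illustrative only, with a made-up neighbor graph, not the paper's algorithm): each agent keeps its raw observations private and repeatedly averages a shared copy of its statistics with a random neighbor, so no central entity is involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_arms = 4, 3
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # hypothetical line graph

# Private local estimates (kept by each agent) and a shared gossip copy.
local_means = rng.random((n_agents, n_arms))
gossip_copy = local_means.copy()

for _ in range(50):                                   # gossip rounds
    i = int(rng.integers(n_agents))
    j = int(rng.choice(neighbors[i]))
    # Pairwise averaging of the shared copies only; raw observations stay local.
    avg = (gossip_copy[i] + gossip_copy[j]) / 2
    gossip_copy[i] = gossip_copy[j] = avg

print(gossip_copy)   # converges toward the network-wide average of the copies
```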

Policy Learning Using Weak Supervision

1 code implementation NeurIPS 2021 Jingkang Wang, Hongyi Guo, Zhaowei Zhu, Yang Liu

Most existing policy learning solutions require the learning agents to receive high-quality supervision signals such as well-designed rewards in reinforcement learning (RL) or high-quality expert demonstrations in behavioral cloning (BC).

Reinforcement Learning (RL)

Learning with Instance-Dependent Label Noise: A Sample Sieve Approach

1 code implementation ICLR 2021 Hao Cheng, Zhaowei Zhu, Xingyu Li, Yifei Gong, Xing Sun, Yang Liu

This high-quality sample sieve allows us to treat clean examples and the corrupted ones separately in training a DNN solution, and such a separation is shown to be advantageous in the instance-dependent noise setting.

Image Classification with Label Noise Learning with noisy labels
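
A hedged sketch of the generic sample-sieve idea (the paper's sieve uses a different, confidence-regularized criterion; the threshold below is a made-up constant): partition a batch into likely-clean and likely-corrupted examples by thresholding per-example losses, then treat the two sets differently during training.

```python
import torch
import torch.nn.functional as F

def sieve_batch(logits, labels, threshold):
    """Split a batch into likely-clean / likely-corrupted examples by
    thresholding the per-example cross-entropy loss."""
    per_example_loss = F.cross_entropy(logits, labels, reduction="none")
    clean_mask = per_example_loss < threshold
    return clean_mask, per_example_loss

# Hypothetical usage inside a training step: train normally on the sieved-in
# examples and, e.g., down-weight or relabel the sieved-out ones.
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
clean_mask, losses = sieve_batch(logits, labels, threshold=2.3)
clean_loss = losses[clean_mask].mean() if clean_mask.any() else logits.new_zeros(())
print(clean_mask.tolist(), float(clean_loss))
```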

Online optimal task offloading with one-bit feedback

no code implementations27 Jun 2018 Shangshu Zhao, Zhaowei Zhu, Fuqian Yang, Xiliang Luo

In this paper, we investigate a stochastic task offloading model and formulate it within a multi-armed bandit framework.
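
A hedged illustration of what a multi-armed bandit formulation of offloading can look like (hypothetical delay model and standard UCB1, not the paper's algorithm): each candidate offloading target is an arm, and the reward is the negative observed delay.

```python
import math
import random

def simulate_delay(node):
    """Hypothetical environment: reward = negative offloading delay."""
    base = [0.8, 0.5, 0.9][node]
    return -random.gauss(base, 0.1)

n_nodes, horizon = 3, 2000
counts = [0] * n_nodes
mean_reward = [0.0] * n_nodes

for t in range(1, horizon + 1):
    if t <= n_nodes:                      # play each arm once first
        node = t - 1
    else:                                 # pick the arm with the largest UCB1 index
        node = max(range(n_nodes),
                   key=lambda a: mean_reward[a] + math.sqrt(2 * math.log(t) / counts[a]))
    r = simulate_delay(node)
    counts[node] += 1
    mean_reward[node] += (r - mean_reward[node]) / counts[node]

print("picks per node:", counts)          # the lowest-delay node should dominate
```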

Learn and Pick Right Nodes to Offload

no code implementations20 Apr 2018 Zhaowei Zhu, Ting Liu, Shengda Jin, Xiliang Luo

An effective task offloading strategy is needed to utilize the computational resources efficiently.
