Search Results for author: Jiaheng Wei

Found 14 papers, 5 papers with code

Measuring and Reducing LLM Hallucination without Gold-Standard Answers via Expertise-Weighting

no code implementations16 Feb 2024 Jiaheng Wei, Yuanshun Yao, Jean-Francois Ton, Hongyi Guo, Andrew Estornell, Yang Liu

In this work, we propose Factualness Evaluations via Weighting LLMs (FEWL), the first hallucination metric designed specifically for settings in which gold-standard answers are absent.

Hallucination · In-Context Learning
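As a rough illustration of the expertise-weighting idea behind FEWL (not the paper's actual implementation), the hypothetical sketch below scores a candidate answer against several reference LLMs and weights each reference by an estimated expertise; the reference answers, expertise weights, and `similarity` function are placeholders.

```python
# Hypothetical sketch of expertise-weighted factualness scoring (not the
# official FEWL implementation): each reference LLM's answer contributes to
# the score in proportion to an estimated expertise weight.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap stand-in for a semantic similarity model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def expertise_weighted_score(candidate: str,
                             reference_answers: list[str],
                             expertise: list[float]) -> float:
    """Weight agreement with each reference LLM by its (estimated) expertise."""
    total = sum(expertise)
    return sum(w * similarity(candidate, ref)
               for ref, w in zip(reference_answers, expertise)) / total

# Toy usage with made-up reference answers and expertise weights.
score = expertise_weighted_score(
    candidate="The Eiffel Tower is in Paris.",
    reference_answers=["The Eiffel Tower is located in Paris, France.",
                       "It is in Paris.",
                       "The tower is in Lyon."],
    expertise=[0.9, 0.7, 0.2],
)
print(f"weighted factualness score: {score:.2f}")
```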

Human-Instruction-Free LLM Self-Alignment with Limited Samples

no code implementations6 Jan 2024 Hongyi Guo, Yuanshun Yao, Wei Shen, Jiaheng Wei, Xiaoying Zhang, Zhaoran Wang, Yang Liu

The key idea is to first retrieve high-quality samples related to the target domain and use them as In-context Learning examples to generate more samples.

In-Context Learning · Instruction Following
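A rough sketch of the retrieve-then-generate loop described in the snippet above: retrieve a few high-quality in-domain samples, assemble them into an in-context prompt, and ask the base model to produce more samples. The `retrieve_top_k` and `generate` functions below are hypothetical placeholders for a real retriever and LLM call, not the paper's pipeline.

```python
# Hypothetical sketch of self-alignment via retrieval + in-context generation.
# `retrieve_top_k` and `generate` stand in for a real retriever and LLM API.

def retrieve_top_k(pool: list[str], domain_query: str, k: int = 4) -> list[str]:
    """Placeholder retriever: pick samples mentioning the domain keyword."""
    hits = [s for s in pool if domain_query.lower() in s.lower()]
    return hits[:k]

def build_icl_prompt(examples: list[str], instruction: str) -> str:
    """Format retrieved samples as in-context examples for generation."""
    demo = "\n".join(f"Example {i + 1}: {ex}" for i, ex in enumerate(examples))
    return f"{demo}\n\n{instruction}\nNew example:"

def generate(prompt: str) -> str:
    """Placeholder for a call to the base LLM."""
    raise NotImplementedError("plug in your LLM client here")

pool = ["Q: How do I file taxes? A: ...", "Q: What is a W-2 form? A: ..."]
examples = retrieve_top_k(pool, domain_query="tax")
prompt = build_icl_prompt(examples, "Write a new high-quality Q&A pair about taxes.")
# new_sample = generate(prompt)  # would call the model in a real pipeline
```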

Distributionally Robust Post-hoc Classifiers under Prior Shifts

1 code implementation16 Sep 2023 Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wen-Sheng Chu, Yang Liu, Abhishek Kumar

We investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors.
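The setting is post-hoc: an already-trained classifier is adjusted rather than retrained. The snippet below is not the paper's distributionally robust method; it only sketches the standard post-hoc logit-adjustment baseline for class-prior shift, where the log-ratio of target to training priors is added to the logits.

```python
# Generic post-hoc prior correction for class-prior shift (a standard
# baseline, not the DRO method proposed in the paper): shift the logits by
# the log-ratio of target priors to the priors seen during training.
import numpy as np

def adjust_logits(logits: np.ndarray,
                  train_priors: np.ndarray,
                  target_priors: np.ndarray) -> np.ndarray:
    """logits: (n_samples, n_classes); priors: (n_classes,) summing to 1."""
    return logits + np.log(target_priors) - np.log(train_priors)

logits = np.array([[2.0, 0.5, 0.1]])             # toy scores for 3 classes
train_priors = np.array([0.7, 0.2, 0.1])         # long-tailed training priors
target_priors = np.array([1 / 3, 1 / 3, 1 / 3])  # balanced test distribution
preds = adjust_logits(logits, train_priors, target_priors).argmax(axis=1)
print(preds)
```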

Client-side Gradient Inversion Against Federated Learning from Poisoning

no code implementations14 Sep 2023 Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jun Zhang, Yang Xiang

For the first time, we show that a client-side adversary with limited knowledge can recover training samples from the aggregated global model.

Federated Learning
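For context on what "recovering training samples" means here, the sketch below shows the generic gradient-inversion objective from the broader literature (optimize a dummy input so its gradients match observed gradients); it is not the client-side poisoning attack proposed in the paper.

```python
# Generic gradient-inversion objective (in the spirit of "deep leakage from
# gradients"), not the specific client-side attack from the paper: optimize a
# dummy input (with a fixed guessed label here) so its gradients match the
# observed gradients.
import torch

def inversion_step(model, loss_fn, observed_grads, dummy_x, dummy_y, opt):
    opt.zero_grad()
    loss = loss_fn(model(dummy_x), dummy_y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # Distance between the dummy data's gradients and the observed gradients.
    grad_diff = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
    grad_diff.backward()
    opt.step()
    return grad_diff.item()

# Toy setup: a linear model and gradients "observed" from a secret sample.
model = torch.nn.Linear(4, 3)
loss_fn = torch.nn.CrossEntropyLoss()
secret_x, secret_y = torch.randn(1, 4), torch.tensor([2])
observed_grads = torch.autograd.grad(loss_fn(model(secret_x), secret_y),
                                     model.parameters())
dummy_x = torch.randn(1, 4, requires_grad=True)
dummy_y = torch.tensor([0])
opt = torch.optim.Adam([dummy_x], lr=0.1)
for _ in range(200):
    inversion_step(model, loss_fn, observed_grads, dummy_x, dummy_y, opt)
```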

Do humans and machines have the same eyes? Human-machine perceptual differences on image classification

no code implementations18 Apr 2023 Minghao Liu, Jiaheng Wei, Yang Liu, James Davis

Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels.

Image Classification

Fairness Improves Learning from Noisily Labeled Long-Tailed Data

no code implementations22 Mar 2023 Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu

Both long-tailed and noisily labeled data frequently appear in real-world applications and impose significant challenges for learning.

Fairness

To Aggregate or Not? Learning with Separate Noisy Labels

no code implementations14 Jun 2022 Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, Yang Liu

Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing).

Learning with noisy labels
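The question in the title is whether to aggregate the separate annotations (e.g., by majority vote) before training or to keep each annotator's label as its own training signal. The toy snippet below only contrasts the two data-preparation choices; it is not the paper's analysis or method.

```python
# Two ways to use separate noisy labels from multiple annotators (illustration
# only, not the paper's method): aggregate by majority vote, or keep every
# (sample, label) pair as a separate training example.
from collections import Counter

samples = ["img_0", "img_1"]
annotations = {"img_0": [1, 1, 0], "img_1": [0, 2, 2]}  # three annotators

# Option A: aggregate first (majority vote), one label per sample.
aggregated = {x: Counter(labels).most_common(1)[0][0]
              for x, labels in annotations.items()}

# Option B: no aggregation, one training pair per individual annotation.
separate = [(x, y) for x in samples for y in annotations[x]]

print(aggregated)  # {'img_0': 1, 'img_1': 2}
print(separate)    # [('img_0', 1), ('img_0', 1), ('img_0', 0), ...]
```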

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

2 code implementations ICLR 2022 Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets will facilitate the development and evaluation of future learning-with-noisy-labels solutions.

Benchmarking · Learning with noisy labels +1

Understanding Generalized Label Smoothing when Learning with Noisy Labels

no code implementations29 Sep 2021 Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Yang Liu

Label smoothing (LS) has been shown to serve as a regularizer for training with hard labels and therefore to improve model generalization.

Learning with noisy labels
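For reference, standard label smoothing mixes the one-hot label with a uniform distribution; the "generalized" family studied here extends the smoothing rate beyond the usual $[0, 1]$ interval (so, e.g., negative rates sharpen rather than smooth the targets). A minimal statement of the smoothed target:

```latex
% Standard label smoothing with rate r in [0, 1]; the generalized family
% studied in the paper allows r outside this range (e.g., negative r).
\tilde{y}_k = (1 - r)\, y_k + \frac{r}{K}, \qquad k = 1, \dots, K
```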

DuelGAN: A Duel Between Two Discriminators Stabilizes the GAN Training

no code implementations19 Jan 2021 Jiaheng Wei, Minghao Liu, Jiahao Luo, Andrew Zhu, James Davis, Yang Liu

In this paper, we introduce DuelGAN, a generative adversarial network (GAN) solution to improve the stability of the generated samples and to mitigate mode collapse.

Generative Adversarial Network · Image Generation +1
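As a rough illustration of a two-discriminator setup (not DuelGAN's actual objective, which additionally couples the discriminators through a "duel" term), the sketch below trains a generator against two independent discriminators and averages their feedback.

```python
# Minimal two-discriminator GAN sketch (illustration only; DuelGAN's actual
# objective also includes an interaction term between the discriminators).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D1 = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
D2 = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()), lr=2e-4)

def train_step(real):
    n = real.size(0)
    fake = G(torch.randn(n, 8))

    # Discriminators: each is trained to separate real from fake samples.
    opt_d.zero_grad()
    d_loss = sum(bce(D(real), torch.ones(n, 1)) +
                 bce(D(fake.detach()), torch.zeros(n, 1))
                 for D in (D1, D2))
    d_loss.backward()
    opt_d.step()

    # Generator: fool both discriminators (their feedback is averaged here).
    opt_g.zero_grad()
    g_loss = 0.5 * (bce(D1(fake), torch.ones(n, 1)) +
                    bce(D2(fake), torch.ones(n, 1)))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

real_batch = torch.randn(32, 2)  # toy "real" data
train_step(real_batch)
```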

When Optimizing $f$-divergence is Robust with Label Noise

2 code implementations ICLR 2021 Jiaheng Wei, Yang Liu

We characterize when maximizing a properly defined $f$-divergence measure between a classifier's predictions and the supervised labels is robust to label noise.

Learning with noisy labels
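The practical handle on such an objective is the standard variational (Fenchel-dual) representation of an $f$-divergence, which lets one maximize a lower bound using a learned variational function $g$; this identity is textbook material, not something specific to the paper:

```latex
% Variational representation of an f-divergence (f convex with f(1) = 0,
% f^* its convex conjugate); restricting g to a model class gives a lower bound.
D_f(P \,\|\, Q) = \sup_{g} \; \mathbb{E}_{x \sim P}\!\left[ g(x) \right]
                  - \mathbb{E}_{x \sim Q}\!\left[ f^{*}\!\big(g(x)\big) \right]
```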

Incentives for Federated Learning: a Hypothesis Elicitation Approach

no code implementations21 Jul 2020 Yang Liu, Jiaheng Wei

The success of a credible federated learning system builds on the assumption that decentralized, self-interested users will be willing to participate and contribute their local models in a trustworthy way.

BIG-bench Machine Learning · Federated Learning

Sample Elicitation

1 code implementation8 Oct 2019 Jiaheng Wei, Zuyue Fu, Yang Liu, Xingyu Li, Zhuoran Yang, Zhaoran Wang

We also show a connection between this sample elicitation problem and $f$-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.
