Search Results for author: Jiaheng Wei

Found 20 papers, 6 papers with code

Reassessing Layer Pruning in LLMs: New Insights and Methods

1 code implementation • 23 Nov 2024 • Yao Lu, Hao Cheng, Yujie Fang, Zeyu Wang, Jiaheng Wei, Dongwei Xu, Qi Xuan, Xiaoniu Yang, Zhaowei Zhu

Layer pruning is a simple yet effective compression method that directly removes layers of a model to reduce computational overhead.

Benchmarking
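
The general idea behind layer pruning can be sketched in a few lines: drop a contiguous block of layers from a model's layer stack. This is an illustrative toy, not the paper's specific pruning criterion; the function and block names are placeholders.

```python
# Minimal sketch of layer pruning: remove a contiguous block of layers
# from a model's layer list. Indices and names are illustrative only.

def prune_layers(layers, start, num_pruned):
    """Return a new layer list with layers[start:start + num_pruned] removed."""
    return layers[:start] + layers[start + num_pruned:]

# Toy example: strings stand in for transformer blocks.
model_layers = [f"block_{i}" for i in range(12)]
pruned = prune_layers(model_layers, start=8, num_pruned=4)
```

How to choose which block to remove (e.g., by measuring each layer's contribution to the output) is the substantive question the paper studies.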

LLM Unlearning via Loss Adjustment with Only Forget Data

no code implementations • 14 Oct 2024 • Yaxuan Wang, Jiaheng Wei, Chris Yuhao Liu, Jinlong Pang, Quan Liu, Ankit Parag Shah, Yujia Bao, Yang Liu, Wei Wei

Existing approaches to LLM unlearning often rely on retain data or a reference LLM, yet they struggle to adequately balance unlearning performance with overall model utility.

Improving Data Efficiency via Curating LLM-Driven Rating Systems

no code implementations • 9 Oct 2024 • Jinlong Pang, Jiaheng Wei, Ankit Parag Shah, Zhaowei Zhu, Yaxuan Wang, Chen Qian, Yang Liu, Yujia Bao, Wei Wei

Instruction tuning is critical for adapting large language models (LLMs) to downstream tasks, and recent studies have demonstrated that small amounts of human-curated data can outperform larger datasets, challenging traditional data scaling laws.

Diversity

Memorization in deep learning: A survey

no code implementations • 6 Jun 2024 • Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Ming Ding, Chao Chen, Kok-Leong Ong, Jun Zhang, Yang Xiang

Deep Learning (DL) powered by Deep Neural Networks (DNNs) has revolutionized various domains, yet understanding the intricacies of DNN decision-making and learning processes remains a significant challenge.

Decision Making Deep Learning +2

Human-Instruction-Free LLM Self-Alignment with Limited Samples

no code implementations • 6 Jan 2024 • Hongyi Guo, Yuanshun Yao, Wei Shen, Jiaheng Wei, Xiaoying Zhang, Zhaoran Wang, Yang Liu

The key idea is to first retrieve high-quality samples related to the target domain and use them as In-context Learning examples to generate more samples.

In-Context Learning Instruction Following
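
The described pipeline (retrieve high-quality in-domain samples, then use them as in-context demonstrations to generate more) can be sketched as follows. The scoring function and prompt format are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: retrieve top-scoring samples, assemble them as
# in-context examples, and form a generation prompt. The scoring
# function and prompt layout are placeholders.

def retrieve_top_k(pool, score, k):
    """Pick the k highest-scoring samples from a candidate pool."""
    return sorted(pool, key=score, reverse=True)[:k]

def build_icl_prompt(examples, instruction):
    """Concatenate retrieved samples as in-context demonstrations."""
    demos = "\n\n".join(f"Example: {e}" for e in examples)
    return f"{demos}\n\n{instruction}"

pool = ["good sample A", "ok sample B", "good sample C"]
top = retrieve_top_k(pool, score=len, k=2)  # len() is a stand-in scorer
prompt = build_icl_prompt(top, "Generate a new sample in the same style:")
```

The prompt would then be sent to the LLM itself, so that no human-written instructions are needed, which is the self-alignment aspect of the paper.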

Distributionally Robust Post-hoc Classifiers under Prior Shifts

1 code implementation • 16 Sep 2023 • Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wen-Sheng Chu, Yang Liu, Abhishek Kumar

We investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors.
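
A standard post-hoc correction for class-prior shift reweights the classifier's predictive probabilities by the ratio of target to training priors and renormalizes; this generic sketch illustrates the setting, though it is not necessarily the paper's exact estimator.

```python
import numpy as np

def adjust_for_prior_shift(probs, train_priors, target_priors):
    """Reweight p(y|x) by target_prior(y) / train_prior(y), then renormalize.

    probs: (n_samples, n_classes) probabilities predicted under the training prior.
    """
    w = np.asarray(target_priors, float) / np.asarray(train_priors, float)
    adjusted = probs * w  # broadcasts the per-class weights over samples
    return adjusted / adjusted.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.3]])
adjusted = adjust_for_prior_shift(probs,
                                  train_priors=[0.5, 0.5],
                                  target_priors=[0.2, 0.8])
```

Note the correction needs no retraining: it only rescales the model's outputs at prediction time, which is what makes it "post-hoc".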

Client-side Gradient Inversion Against Federated Learning from Poisoning

no code implementations • 14 Sep 2023 • Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jun Zhang, Yang Xiang

For the first time, we show that a client-side adversary with limited knowledge can recover the training samples from the aggregated global model.

Federated Learning

Do humans and machines have the same eyes? Human-machine perceptual differences on image classification

no code implementations • 18 Apr 2023 • Minghao Liu, Jiaheng Wei, Yang Liu, James Davis

Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels.

Image Classification

Fairness Improves Learning from Noisily Labeled Long-Tailed Data

no code implementations • 22 Mar 2023 • Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu

Both long-tailed and noisily labeled data frequently appear in real-world applications and impose significant challenges for learning.

Fairness

To Aggregate or Not? Learning with Separate Noisy Labels

no code implementations • 14 Jun 2022 • Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, Yang Liu

Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing).

Learning with noisy labels
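
The choice in the title is between aggregating the annotators' labels before training (e.g., by majority vote) and keeping each noisy label as a separate training target. A toy illustration of the two options:

```python
from collections import Counter

def majority_vote(labels_per_annotator):
    """Aggregate multiple annotators' labels for one example into a single label."""
    return Counter(labels_per_annotator).most_common(1)[0][0]

# Three annotators label the same example.
annotations = [1, 1, 0]
aggregated = majority_vote(annotations)

# The "not aggregate" alternative keeps each noisy label as its own target:
separate_targets = [(0, y) for y in annotations]  # (example index, noisy label) pairs
```

Which side wins depends on the noise structure; the paper's contribution is characterizing when each choice is preferable.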

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

3 code implementations • ICLR 2022 • Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.

Benchmarking Learning with noisy labels +1

Understanding Generalized Label Smoothing when Learning with Noisy Labels

no code implementations • 29 Sep 2021 • Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Yang Liu

It was shown that label smoothing (LS) serves as a regularizer for training data with hard labels and therefore improves the generalization of the model.

Learning with noisy labels
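
Standard label smoothing mixes the one-hot target with the uniform distribution over the K classes; the "generalized" variant studied here also allows a negative smoothing rate. A sketch of the standard form:

```python
import numpy as np

def smooth_labels(one_hot, alpha):
    """Label smoothing: (1 - alpha) * one_hot + alpha / K.

    In the generalized setting studied in the paper, alpha may be negative.
    """
    k = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / k

y = np.array([0.0, 1.0, 0.0])
smoothed = smooth_labels(y, alpha=0.1)
```

With alpha = 0.1 and K = 3, the true class keeps probability 0.9 + 0.1/3 while each wrong class gets 0.1/3, and the vector still sums to one.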

DuelGAN: A Duel Between Two Discriminators Stabilizes the GAN Training

no code implementations • 19 Jan 2021 • Jiaheng Wei, Minghao Liu, Jiahao Luo, Andrew Zhu, James Davis, Yang Liu

In this paper, we introduce DuelGAN, a generative adversarial network (GAN) solution to improve the stability of the generated samples and to mitigate mode collapse.

Generative Adversarial Network Image Generation +1

When Optimizing $f$-divergence is Robust with Label Noise

2 code implementations • ICLR 2021 • Jiaheng Wei, Yang Liu

We show when maximizing a properly defined $f$-divergence measure between a classifier's predictions and the supervised labels is robust to label noise.

Learning with noisy labels
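
For intuition, an $f$-divergence between discrete distributions is $D_f(P \| Q) = \sum_x q(x)\, f(p(x)/q(x))$ for a convex generator $f$ with $f(1) = 0$. The plug-in estimate below illustrates the quantity; the paper's actual training objective uses a variational form over classifier outputs rather than this direct computation.

```python
import numpy as np

def f_divergence(p, q, f):
    """Plug-in estimate of D_f(P || Q) = sum_x q(x) * f(p(x) / q(x))
    for discrete distributions p and q with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

kl_generator = lambda t: t * np.log(t)  # f(t) = t log t recovers KL divergence

# Toy distributions: classifier predictions vs. (possibly noisy) labels.
p_pred = np.array([0.6, 0.4])
p_label = np.array([0.5, 0.5])
d = f_divergence(p_pred, p_label, kl_generator)
```

Choosing a different generator $f$ (total variation, Jenson-Shannon, etc.) yields a different member of the divergence family, which is what makes the framework flexible.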

Incentives for Federated Learning: a Hypothesis Elicitation Approach

no code implementations • 21 Jul 2020 • Yang Liu, Jiaheng Wei

The success of a credible federated learning system builds on the assumption that the decentralized and self-interested users will be willing to participate to contribute their local models in a trustworthy way.

BIG-bench Machine Learning Federated Learning +1

Sample Elicitation

1 code implementation • 8 Oct 2019 • Jiaheng Wei, Zuyue Fu, Yang Liu, Xingyu Li, Zhuoran Yang, Zhaoran Wang

We also show a connection between this sample elicitation problem and $f$-GAN, and how this connection can help reconstruct an estimator of the distribution based on collected samples.
