Search Results for author: Yingchun Wang

Found 9 papers, 2 papers with code

Flames: Benchmarking Value Alignment of LLMs in Chinese

1 code implementation • 12 Nov 2023 • Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin

The widespread adoption of large language models (LLMs) across various regions underscores the urgent need to evaluate their alignment with human values.

Benchmarking • Fairness

Fake Alignment: Are LLMs Really Aligned Well?

1 code implementation • 10 Nov 2023 • Yixu Wang, Yan Teng, Kexin Huang, Chengqi Lyu, Songyang Zhang, Wenwei Zhang, Xingjun Ma, Yu-Gang Jiang, Yu Qiao, Yingchun Wang

The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety.

Multiple-choice

SFP: Spurious Feature-targeted Pruning for Out-of-Distribution Generalization

no code implementations • 19 May 2023 • Yingchun Wang, Jingcai Guo, Yi Liu, Song Guo, Weizhan Zhang, Xiangyong Cao, Qinghua Zheng

Based on the idea that in-distribution (ID) data with spurious features may incur a lower empirical risk, in this paper we propose a novel Spurious Feature-targeted model Pruning framework, dubbed SFP, to automatically explore invariant substructures while avoiding the above drawbacks.

Out-of-Distribution Generalization

Towards Fairer and More Efficient Federated Learning via Multidimensional Personalized Edge Models

no code implementations • 9 Feb 2023 • Yingchun Wang, Jingcai Guo, Jie Zhang, Song Guo, Weizhan Zhang, Qinghua Zheng

Federated learning (FL) is an emerging technique that trains models on massive, geographically distributed edge data while preserving privacy.

Computational Efficiency • Fairness • +1

Data Quality-aware Mixed-precision Quantization via Hybrid Reinforcement Learning

no code implementations • 9 Feb 2023 • Yingchun Wang, Jingcai Guo, Song Guo, Weizhan Zhang

Mixed-precision quantization mostly predetermines the model bit-width settings before actual training, due to the non-differentiable bit-width sampling process, which yields sub-optimal performance.

Quantization • reinforcement-learning • +1

Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning

no code implementations • 19 Dec 2022 • Yingchun Wang, Jingcai Guo, Song Guo, Weizhan Zhang, Jie Zhang

Recent studies show that even highly biased dense networks contain an unbiased substructure that can achieve better out-of-distribution (OOD) generalization than the original model.

Out-of-Distribution Generalization

Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning

no code implementations • 7 Dec 2022 • Yingchun Wang, Song Guo, Jingcai Guo, Weizhan Zhang, Yida Xu, Jie Zhang, Yi Liu

Extensive experiments on the small CIFAR-10 and large-scale ImageNet datasets demonstrate that our method can obtain sparser networks with strong generalization performance while providing quantified reliability for the pruned model.

Network Pruning • Variational Inference

Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning

no code implementations • 14 Nov 2022 • Yi Liu, Song Guo, Jie Zhang, Qihua Zhou, Yingchun Wang, Xiaohan Zhao

We prove that FedFoA is a model-agnostic training framework that is readily compatible with state-of-the-art unsupervised FL methods.

Feature Correlation • Federated Learning • +4
