Search Results for author: Weizhan Zhang

Found 8 papers, 0 papers with code

OneActor: Consistent Character Generation via Cluster-Conditioned Guidance

no code implementations · 16 Apr 2024 · Jiahao Wang, Caixia Yan, Haonan Lin, Weizhan Zhang

Comprehensive experiments show that our method outperforms a variety of baselines, achieving satisfactory character consistency, superior prompt conformity, and high image quality.

Consistent Character Generation, Denoising +1

SFP: Spurious Feature-targeted Pruning for Out-of-Distribution Generalization

no code implementations · 19 May 2023 · Yingchun Wang, Jingcai Guo, Yi Liu, Song Guo, Weizhan Zhang, Xiangyong Cao, Qinghua Zheng

Based on the idea that in-distribution (ID) data with spurious features may have a lower empirical risk, in this paper we propose a novel Spurious Feature-targeted model Pruning framework, dubbed SFP, to automatically explore invariant substructures without the drawbacks of existing approaches.

Out-of-Distribution Generalization
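
The snippet's core heuristic — ID samples that lean on spurious features tend to show unusually low empirical risk — can be made concrete with a small two-stage pruning loop. The sketch below is a hypothetical stand-in, not the authors' SFP code: the loss-quantile threshold, the first-order Taylor saliency score, and all function names are assumptions.

```python
# Hypothetical sketch of spurious-feature-targeted pruning (not official SFP code).
# Assumption: samples with unusually low loss rely on spurious features.
import torch

def spurious_feature_pruning(model, loader, loss_fn, loss_quantile=0.2, prune_ratio=0.3):
    """Prune weights whose saliency is dominated by low-loss (suspected
    spurious-feature) samples. loss_fn is assumed to use reduction='none'."""
    model.eval()
    # 1) Collect per-sample losses to find suspiciously easy examples.
    losses, batches = [], []
    for x, y in loader:
        with torch.no_grad():
            losses.append(loss_fn(model(x), y))
        batches.append((x, y))
    threshold = torch.quantile(torch.cat(losses), loss_quantile)

    # 2) Accumulate gradient saliency on the low-loss subset only.
    saliency = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for (x, y), batch_loss in zip(batches, losses):
        mask = batch_loss <= threshold
        if mask.sum() == 0:
            continue
        model.zero_grad()
        loss_fn(model(x[mask]), y[mask]).mean().backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                saliency[n] += (p.grad * p).abs()   # first-order Taylor score

    # 3) Zero out the most spurious-aligned weights, layer by layer.
    for n, p in model.named_parameters():
        k = int(prune_ratio * p.numel())
        if k == 0:
            continue
        cutoff = torch.topk(saliency[n].flatten(), k).values.min()
        p.data[saliency[n] >= cutoff] = 0.0
    return model
```

A faithful implementation would prune structurally and fine-tune between rounds; the sketch only demonstrates the two-stage logic of flagging suspiciously easy samples and removing the weights they most activate.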

Data Quality-aware Mixed-precision Quantization via Hybrid Reinforcement Learning

no code implementations · 9 Feb 2023 · Yingchun Wang, Jingcai Guo, Song Guo, Weizhan Zhang

Mixed-precision quantization mostly predetermines the model's bit-width settings before actual training, because the bit-width sampling process is non-differentiable, which yields sub-optimal performance.

Quantization, Reinforcement Learning +1
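
To make the idea of learning bit-widths during training concrete, here is a minimal sketch in which a per-layer epsilon-greedy bandit stands in for the paper's hybrid RL agent. The action space, reward shape, and every name below are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical RL-driven per-layer bit-width selection (illustrative only).
import random

BIT_CHOICES = [2, 4, 8]

class BitWidthBandit:
    def __init__(self, num_layers, epsilon=0.1):
        self.epsilon = epsilon
        # Running value estimate and visit count per (layer, bit-width) action.
        self.q = [[0.0] * len(BIT_CHOICES) for _ in range(num_layers)]
        self.n = [[0] * len(BIT_CHOICES) for _ in range(num_layers)]

    def select(self):
        """Pick a bit-width index per layer, exploring with probability epsilon."""
        config = []
        for layer_q in self.q:
            if random.random() < self.epsilon:
                config.append(random.randrange(len(BIT_CHOICES)))
            else:
                config.append(max(range(len(BIT_CHOICES)), key=layer_q.__getitem__))
        return config

    def update(self, config, reward):
        """Incremental-mean update of the chosen actions' values."""
        for layer, action in enumerate(config):
            self.n[layer][action] += 1
            self.q[layer][action] += (reward - self.q[layer][action]) / self.n[layer][action]

def reward(accuracy, avg_bits, cost_weight=0.05):
    # Trade off task accuracy against quantization cost.
    return accuracy - cost_weight * avg_bits
```

In use, each round one would call select(), quantize and briefly train or evaluate the model at those bit-widths, then pass reward(accuracy, mean_bits) back through update() — letting the bit-width choice adapt during training rather than being fixed beforehand.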

Towards Fairer and More Efficient Federated Learning via Multidimensional Personalized Edge Models

no code implementations · 9 Feb 2023 · Yingchun Wang, Jingcai Guo, Jie Zhang, Song Guo, Weizhan Zhang, Qinghua Zheng

Federated learning (FL) is an emerging technique that trains models on massive, geographically distributed edge data while preserving privacy.

Computational Efficiency, Fairness +1
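
As a reference point for "personalized edge models", the sketch below layers a locally kept head on top of standard FedAvg aggregation. It is a generic sketch, not the paper's multidimensional personalization scheme; the backbone/head split and the client interface are assumptions.

```python
# Minimal FedAvg with a locally personalized head (generic sketch, not the paper's method).
import copy
import torch

def fedavg_personalized(global_backbone, clients, rounds, local_steps):
    """clients: objects exposing .head (parameters that never leave the device),
    .train_step(backbone, head), and .num_samples. All assumed for illustration."""
    for _ in range(rounds):
        updates, weights = [], []
        for client in clients:
            backbone = copy.deepcopy(global_backbone)
            for _ in range(local_steps):
                client.train_step(backbone, client.head)  # head stays local
            updates.append(backbone.state_dict())
            weights.append(client.num_samples)
        # Weighted average of the shared backbone parameters only.
        total = sum(weights)
        avg = {k: sum(w * u[k] for w, u in zip(weights, updates)) / total
               for k in updates[0]}
        global_backbone.load_state_dict(avg)
    return global_backbone
```

Keeping the head out of aggregation is the simplest personalization axis; the paper's multidimensional scheme goes further, which this sketch does not attempt.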

Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning

no code implementations · 19 Dec 2022 · Yingchun Wang, Jingcai Guo, Song Guo, Weizhan Zhang, Jie Zhang

Recent studies show that even highly biased dense networks contain an unbiased substructure that can achieve better out-of-distribution (OOD) generalization than the original model.

Out-of-Distribution Generalization
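
To make the "unbiased substructure" claim tangible, the toy search below samples random binary masks over a layer's weights and keeps whichever scores best on out-of-distribution validation data. This is a didactic stand-in, far cruder than the feature-targeted pruning the paper proposes; ood_eval and all parameters are assumptions.

```python
# Toy substructure search by random masking (didactic only, not the paper's method).
import torch

def search_substructure(model, layer, ood_eval, keep_ratio=0.5, trials=20):
    """ood_eval(model) -> OOD validation accuracy (assumed helper)."""
    weight = layer.weight.data.clone()
    best_mask, best_acc = None, -1.0
    for _ in range(trials):
        # Random binary mask keeping roughly `keep_ratio` of the weights.
        mask = (torch.rand_like(weight) < keep_ratio).float()
        layer.weight.data = weight * mask
        acc = ood_eval(model)
        if acc > best_acc:
            best_mask, best_acc = mask, acc
    layer.weight.data = weight * best_mask   # commit the best substructure found
    return best_mask, best_acc
```

Random search is hopelessly inefficient at scale; it merely illustrates that masking alone, with no retraining, can change OOD behavior — the phenomenon the paper exploits with targeted scoring.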

Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning

no code implementations · 7 Dec 2022 · Yingchun Wang, Song Guo, Jingcai Guo, Weizhan Zhang, Yida Xu, Jie Zhang, Yi Liu

Extensive experiments on the small-scale CIFAR-10 and large-scale ImageNet datasets demonstrate that our method can obtain sparser networks with strong generalization performance while providing quantified reliability for the pruned model.

Network Pruning, Variational Inference
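
Stein variational inference underpins the reliability claim above: an ensemble of weight "particles" yields calibrated uncertainty for the pruned model. Below is a minimal Stein variational gradient descent (SVGD) step — the generic building block, not the paper's sparsity-aware formulation; the log_prob callback is an assumption.

```python
# Minimal SVGD step over weight particles (generic building block, illustrative).
import torch

def rbf_kernel(x):
    """x: (n, d) particles -> RBF kernel matrix and its gradient term."""
    diff = x.unsqueeze(1) - x.unsqueeze(0)            # (n, n, d), diff[j, i] = x_j - x_i
    sq_dist = diff.pow(2).sum(-1)                      # (n, n)
    n = x.shape[0]
    h = sq_dist.median() / (torch.log(torch.tensor(float(n))) + 1e-8)  # median heuristic
    k = torch.exp(-sq_dist / h)
    grad_k = (-2.0 / h) * k.unsqueeze(-1) * diff       # d k(x_j, x_i) / d x_j
    return k, grad_k

def svgd_step(particles, log_prob, lr=1e-2):
    """One SVGD update. particles: (n, d); log_prob: (n, d) -> (n,) log-posterior."""
    particles = particles.detach().requires_grad_(True)
    score = torch.autograd.grad(log_prob(particles).sum(), particles)[0]  # (n, d)
    k, grad_k = rbf_kernel(particles.detach())
    n = particles.shape[0]
    # Attraction toward high-probability regions plus a repulsive spread term.
    phi = (k @ score + grad_k.sum(0)) / n
    return (particles + lr * phi).detach()
```

For pruning, log_prob would plausibly combine the data likelihood with a sparsity-inducing prior (e.g., -loss(w) - λ‖w‖₁, an assumption here), so the particle ensemble concentrates on sparse, well-fitting weights whose spread quantifies reliability.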
