Search Results for author: Shuheng Shen

Found 8 papers, 2 papers with code

Clean-image Backdoor Attacks

no code implementations • 22 Mar 2024 • Dazhong Rong, Guoyao Yu, Shuheng Shen, Xinyi Fu, Peng Qian, Jianhai Chen, Qinming He, Xing Fu, Weiqiang Wang

To gather the large quantities of annotated training data needed for high-performance image classification models, many companies enlist third-party providers to label their unlabeled data.

Fairness • Image Classification

Joint Local Relational Augmentation and Global Nash Equilibrium for Federated Learning with Non-IID Data

no code implementations • 17 Aug 2023 • Xinting Liao, Chaochao Chen, Weiming Liu, Pengyang Zhou, Huabin Zhu, Shuheng Shen, Weiqiang Wang, Mengling Hu, Yanchao Tan, Xiaolin Zheng

On the server, GNE reaches an agreement among the inconsistent and discrepant model deviations sent from clients to the server, which encourages the global model to update in the direction of the global optimum without disrupting the clients' optimization toward their local optimums.

Federated Learning
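For context, the simplest server-side aggregation that equilibrium-based methods like GNE refine is plain federated averaging. The sketch below shows that standard FedAvg baseline only; it is not the paper's GNE procedure, and the names (aggregate, client_sizes) are illustrative.

```python
# Standard FedAvg aggregation step, shown only as the baseline that
# equilibrium-based aggregators such as GNE refine; NOT the paper's
# algorithm. Names (`aggregate`, `client_sizes`) are illustrative.
from typing import Dict, List
import torch

def aggregate(client_states: List[Dict[str, torch.Tensor]],
              client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Average client model states, weighted by local dataset size."""
    total = sum(client_sizes)
    return {
        key: sum((n / total) * state[key]
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```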

Differentially Private Learning with Per-Sample Adaptive Clipping

no code implementations • 1 Dec 2022 • Tianyu Xia, Shuheng Shen, Su Yao, Xinyi Fu, Ke Xu, Xiaolong Xu, Xing Fu

As one way to implement privacy-preserving AI, differentially private learning is a framework for training AI models under the guarantees of differential privacy (DP).

Privacy Preserving
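As background for the title, here is a minimal sketch of one DP-SGD step with per-sample gradient clipping and Gaussian noise. The paper's contribution is making the clipping threshold adaptive per sample; this sketch uses a fixed clip_norm for illustration, and all names are assumptions rather than the paper's API.

```python
# Minimal sketch of one DP-SGD step with per-sample clipping; the paper
# replaces the fixed `clip_norm` below with a per-sample adaptive
# threshold. Names are illustrative.
import torch

def dp_sgd_step(per_sample_grads: torch.Tensor,  # shape (batch, dim)
                clip_norm: float,
                noise_multiplier: float) -> torch.Tensor:
    # Clip each sample's gradient to an L2 norm of at most clip_norm.
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    scale = (clip_norm / norms).clamp(max=1.0)
    clipped = per_sample_grads * scale
    # Sum, add calibrated Gaussian noise, and average over the batch.
    noise = torch.randn(clipped.shape[1]) * noise_multiplier * clip_norm
    return (clipped.sum(dim=0) + noise) / per_sample_grads.shape[0]
```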

STL-SGD: Speeding Up Local SGD with Stagewise Communication Period

no code implementations • 11 Jun 2020 • Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu

Distributed parallel stochastic gradient descent algorithms are workhorses for large-scale machine learning tasks.
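The stagewise idea named in the title can be illustrated with a simplified, single-process simulation of local SGD in which models are averaged only every `period` iterations and the period is enlarged whenever the learning rate is decayed at a stage boundary. The schedule constants and the `grad_fn` callable below are illustrative assumptions, not the paper's exact algorithm.

```python
# Simplified single-process simulation of local SGD with a stagewise
# communication period: workers take local steps and only average every
# `period` iterations; at each stage boundary the learning rate is decayed
# and the period enlarged. Constants are illustrative, not the paper's.
import numpy as np

def stagewise_local_sgd(grad_fn, x0, workers=4, stages=3,
                        steps_per_stage=100, lr=0.1, period=2, seed=0):
    rng = np.random.default_rng(seed)
    xs = [x0.copy() for _ in range(workers)]
    for stage in range(stages):
        for t in range(steps_per_stage):
            for w in range(workers):
                xs[w] = xs[w] - lr * grad_fn(xs[w], rng)  # local SGD step
            if (t + 1) % period == 0:                     # periodic averaging
                avg = np.mean(xs, axis=0)
                xs = [avg.copy() for _ in range(workers)]
        lr *= 0.5        # decay the learning rate at the stage boundary...
        period *= 2      # ...and enlarge the communication period
    return np.mean(xs, axis=0)
```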

Variance Reduced Local SGD with Lower Communication Complexity

1 code implementation • 30 Dec 2019 • Xianfeng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng

To accelerate the training of machine learning models, distributed stochastic gradient descent (SGD) and its variants, which apply multiple workers in parallel, have been widely adopted.

BIG-bench Machine Learning

Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent

no code implementations • 28 Jun 2019 • Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng

Although distributed stochastic gradient descent (SGD) algorithms can achieve a linear iteration speedup, in practice they are significantly limited by communication cost, which makes a linear time speedup difficult to achieve.
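The decoupling named in the title amounts to overlapping communication with computation so that communication cost is hidden. The schematic below sketches that general idea only, under that interpretation; it is not the paper's exact CoCoD-SGD update rule, and `allreduce`, `compute_grad`, and `apply_update` are hypothetical callables standing in for a real collective (e.g. torch.distributed.all_reduce) and model hooks.

```python
# Schematic of computation/communication decoupling: the all-reduce of the
# previous step's gradient runs in a background thread while the next
# gradient is computed. An interpretation of the idea, not the paper's
# exact algorithm; the callables passed in are hypothetical.
import threading

def decoupled_step(model, batch, compute_grad, allreduce, apply_update, state):
    prev = state.get("pending")
    comm = None
    if prev is not None:
        comm = threading.Thread(target=allreduce, args=(prev,))
        comm.start()                      # communicate previous gradient...
    grad = compute_grad(model, batch)     # ...while computing the next one
    if comm is not None:
        comm.join()
        apply_update(model, prev)         # apply the synchronized update
    state["pending"] = grad               # reduced on the next step
    return model
```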

Asynchronous Stochastic Composition Optimization with Variance Reduction

no code implementations • 15 Nov 2018 • Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling

Composition optimization has drawn a lot of attention in a wide variety of machine learning domains, from risk management to reinforcement learning.

Management
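For reference, composition optimization usually refers to minimizing a nested objective in which both layers are expectations; a standard formulation (not necessarily the paper's exact setting) is:

```latex
\min_{x \in \mathbb{R}^d} \; f\big(g(x)\big),
\qquad f(y) = \mathbb{E}_i\!\left[F_i(y)\right],
\quad g(x) = \mathbb{E}_j\!\left[G_j(x)\right].
```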
