1 code implementation • 20 Jun 2024 • Yunfei Liu, Jintang Li, Yuehe Chen, Ruofan Wu, Ericbk Wang, Jing Zhou, Sheng Tian, Shuheng Shen, Xing Fu, Changhua Meng, Weiqiang Wang, Liang Chen
Another promising line of research involves the adoption of modularity maximization, a popular and effective measure for community detection, as the guiding principle for clustering tasks.
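To make the modularity measure mentioned above concrete (this illustrates the classical Newman-Girvan definition, not the paper's contrastive method), here is a minimal sketch that computes Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j) for a given partition; the toy graph and labels are hypothetical.

```python
import numpy as np

def modularity(adj: np.ndarray, labels: np.ndarray) -> float:
    """Newman-Girvan modularity: fraction of edges inside communities minus
    the fraction expected under a degree-preserving random null model."""
    degrees = adj.sum(axis=1)                       # node degrees k_i
    two_m = degrees.sum()                           # 2m = twice the edge count
    expected = np.outer(degrees, degrees) / two_m   # null-model term k_i k_j / 2m
    same = labels[:, None] == labels[None, :]       # delta(c_i, c_j)
    return ((adj - expected) * same).sum() / two_m

# Hypothetical toy graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))  # ~0.357: the natural split scores high
```

Maximizing Q over partitions is what "modularity maximization as the guiding principle" refers to.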
no code implementations • 20 Jun 2024 • Ke Wang, Tianyu Xia, Zhangxuan Gu, Yi Zhao, Shuheng Shen, Changhua Meng, Weiqiang Wang, Ke Xu
Online GUI navigation on mobile devices has attracted a lot of attention in recent years, since it contributes to many real-world applications.
no code implementations • 22 Mar 2024 • Dazhong Rong, Guoyao Yu, Shuheng Shen, Xinyi Fu, Peng Qian, Jianhai Chen, Qinming He, Xing Fu, Weiqiang Wang
To gather the large quantities of annotated training data needed for high-performance image classification models, many companies enlist third-party providers to label their unlabeled data.
no code implementations • 17 Aug 2023 • Xinting Liao, Chaochao Chen, Weiming Liu, Pengyang Zhou, Huabin Zhu, Shuheng Shen, Weiqiang Wang, Mengling Hu, Yanchao Tan, Xiaolin Zheng
On the server side, GNE reconciles the inconsistent and discrepant model deviations sent from clients, encouraging the global model to update in the direction of the global optimum without disrupting the clients' optimization toward their local optima.
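The snippet above does not spell out how GNE computes its agreement, so as context only, the sketch below shows the baseline server-side aggregation that such methods refine: client model deviations combined by a plain weighted average (a stand-in for GNE's agreement step, not the paper's algorithm). All names and values are hypothetical.

```python
import numpy as np

def aggregate(global_w: np.ndarray, client_ws: list[np.ndarray],
              weights: list[float]) -> np.ndarray:
    """FedAvg-style server update expressed through client deviations.

    Each client i contributes a deviation delta_i = w_i - w_global; the server
    combines them. A plain weighted average is used here as a placeholder for
    GNE's agreement step, which the snippet above does not specify.
    """
    weights = np.asarray(weights) / np.sum(weights)        # normalize client weights
    deltas = [w - global_w for w in client_ws]             # per-client model deviations
    agreed = sum(a * d for a, d in zip(weights, deltas))   # placeholder "agreement"
    return global_w + agreed                               # move toward the consensus

# Hypothetical round with three clients pulling in different directions.
g = np.zeros(4)
clients = [g + np.array([1., 0, 0, 0]),
           g + np.array([0, 2., 0, 0]),
           g - np.array([0, 0, 1., 0])]
print(aggregate(g, clients, weights=[1.0, 1.0, 1.0]))  # [0.333, 0.667, -0.333, 0.]
```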
no code implementations • 1 Dec 2022 • Tianyu Xia, Shuheng Shen, Su Yao, Xinyi Fu, Ke Xu, Xiaolong Xu, Xing Fu
As one way to implement privacy-preserving AI, differentially private learning is a framework that trains AI models under the guarantees of differential privacy (DP).
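To fix ideas, here is a minimal sketch of one DP-SGD step, the standard recipe of per-example gradient clipping followed by Gaussian noise; this illustrates differentially private learning in general, not this paper's specific contribution, and the clip norm and noise multiplier are hypothetical choices.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.1, seed=0):
    """One DP-SGD step: clip each per-example gradient, average, add noise."""
    rng = np.random.default_rng(seed)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]        # bound each example's influence
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound; the (eps, delta) guarantee
    # then follows from standard privacy accounting over noise_mult and steps.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Hypothetical batch of per-example gradients for a 3-dim parameter vector.
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0]),
         np.array([0.1, 0.1, 0.1]), np.array([0.0, 0.0, 2.0])]
print(dp_sgd_step(np.zeros(3), grads))
```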
1 code implementation • 15 Jan 2022 • Chengqiang Lu, Mingyang Yin, Shuheng Shen, Luo Ji, Qi Liu, Hongxia Yang
Recommendation systems have been widely studied in both academia and industry.
no code implementations • 11 Jun 2020 • Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu
Distributed parallel stochastic gradient descent algorithms are the workhorses of large-scale machine learning.
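As generic background for this line of work (a sketch of the common local SGD pattern, not the paper's exact algorithm), workers can take several local steps between model averages to save communication; the quadratic objective and step counts below are hypothetical.

```python
import numpy as np

def local_sgd(workers=4, rounds=20, local_steps=5, lr=0.1, seed=0):
    """Generic local SGD: each worker runs `local_steps` SGD steps on its own
    data, then all workers average their models (one communication per round)."""
    rng = np.random.default_rng(seed)
    targets = rng.normal(size=(workers, 2))       # each worker's local optimum
    w = np.zeros((workers, 2))                    # replicated model copies
    for _ in range(rounds):
        for _ in range(local_steps):              # local computation, no communication
            noise = rng.normal(scale=0.01, size=w.shape)
            w -= lr * (w - targets + noise)       # stochastic gradient of
                                                  # f_i(w) = 0.5 * ||w - t_i||^2
        w[:] = w.mean(axis=0)                     # one all-reduce: model average
    return w[0]

print(local_sgd())  # converges near targets.mean(axis=0)
```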
1 code implementation • 30 Dec 2019 • Xianfeng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng
To accelerate the training of machine learning models, distributed stochastic gradient descent (SGD) and its variants have been widely adopted, employing multiple workers in parallel.
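For contrast with the local-update pattern sketched above, here is a minimal sketch of fully synchronous data-parallel SGD, where workers average gradients at every step (one communication round per iteration); the objective and settings are again hypothetical, not taken from the paper.

```python
import numpy as np

def sync_distributed_sgd(workers=4, steps=100, lr=0.1, seed=0):
    """Fully synchronous data-parallel SGD: every step, each worker computes a
    stochastic gradient on its shard and the gradients are averaged (all-reduce)."""
    rng = np.random.default_rng(seed)
    targets = rng.normal(size=(workers, 2))      # per-worker data (local optima)
    w = np.zeros(2)                              # single shared model
    for _ in range(steps):
        grads = (w - targets) + rng.normal(scale=0.01, size=targets.shape)
        w -= lr * grads.mean(axis=0)             # all-reduce average, every step
    return w

print(sync_distributed_sgd())  # approx targets.mean(axis=0), one communication per step
```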
no code implementations • 28 Jun 2019 • Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng
Nevertheless, although distributed stochastic gradient descent (SGD) algorithms can achieve a linear iteration speedup, in practice they are significantly limited by communication cost, making a linear time speedup difficult to achieve.
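A back-of-the-envelope calculation makes the gap between iteration speedup and time speedup concrete (illustrative numbers, not from the paper): with a fixed per-iteration communication cost, wall-clock speedup saturates even though iterations parallelize perfectly.

```python
# Illustrative only: time per iteration = compute/n + communication overhead.
compute = 100.0   # ms of computation per iteration on one worker (hypothetical)
comm = 10.0       # ms of communication per iteration (hypothetical)

for n in [1, 2, 4, 8, 16, 64]:
    t = compute / n + (comm if n > 1 else 0.0)   # a single worker pays no communication
    speedup = compute / t
    print(f"{n:3d} workers: {t:7.2f} ms/iter, time speedup {speedup:5.2f}x")
# With 64 workers the iteration speedup is 64x, but the time speedup is only
# ~8.6x because the 10 ms of communication does not shrink with n.
```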
no code implementations • 15 Nov 2018 • Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling
Composition optimization has drawn a lot of attention in a wide variety of machine learning domains, from risk management to reinforcement learning.
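To fix ideas, the sketch below implements a basic stochastic compositional gradient descent (SCGD)-style update for a toy two-level objective f(g(x)), where both f and g are expectations: plugging a single noisy sample of g into the gradient of f would be biased, so a running estimate of g(x) is maintained instead. The toy problem and step sizes are hypothetical, and this is the classical baseline, not the asynchronous variance-reduced method of the paper.

```python
import numpy as np

def scgd(steps=5000, alpha=0.01, beta=0.1, seed=0):
    """SCGD-style sketch for min_x f(g(x)) with
    g(x) = E_w[2x + w] and f(y) = E_v[0.5 * (y - v)^2], v ~ N(1, 1).

    Key trick: keep a running estimate y of the inner expectation g(x),
    since a single noisy sample of g inside grad f gives a biased gradient.
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    for _ in range(steps):
        g_sample = 2.0 * x + rng.normal()         # noisy inner function value
        y = (1 - beta) * y + beta * g_sample      # track g(x) with a moving average
        v = rng.normal(loc=1.0)                   # noisy outer sample
        grad = 2.0 * (y - v)                      # chain rule: g'(x) * f'(y)
        x -= alpha * grad
    return x

print(scgd())  # approx 0.5, the minimizer of 0.5 * (2x - 1)^2
```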