Search Results for author: Chaoyue Niu

Found 12 papers, 4 papers with code

DC-CCL: Device-Cloud Collaborative Controlled Learning for Large Vision Models

no code implementations18 Mar 2023 Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lyu, Guihai Chen

In this work, we propose a device-cloud collaborative controlled learning framework, called DC-CCL, enabling a cloud-side large vision model that cannot be directly deployed on the mobile device to still benefit from the device-side local samples.

Knowledge Distillation
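
As a point of reference for the Knowledge Distillation tag above, here is a minimal distillation step in PyTorch: a small on-device student fits softened logits produced by the cloud model alongside its local labels. The function name, the `student`/`teacher_logits` inputs, and the hyperparameters are illustrative assumptions, not the paper's actual DC-CCL control design.

    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher_logits, x, y, optimizer, T=4.0, alpha=0.5):
        # One distillation step: the student mimics the cloud model's
        # softened predictions while also fitting the device's labels.
        student_logits = student(x)
        kd_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)                      # rescale gradients for temperature T
        ce_loss = F.cross_entropy(student_logits, y)
        loss = alpha * kd_loss + (1 - alpha) * ce_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()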

One-Time Model Adaptation to Heterogeneous Clients: An Intra-Client and Inter-Image Attention Design

no code implementations11 Nov 2022 Yikai Yan, Chaoyue Niu, Fan Wu, Qinya Li, Shaojie Tang, Chengfei Lyu, Guihai Chen

The mainstream workflow of image recognition applications is to first train one global model on the cloud over a wide range of classes and then serve numerous clients, each with heterogeneous images from a small subset of classes to be recognized.
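
To make the attention design concrete, here is a hypothetical PyTorch sketch of inter-image attention: each image's feature attends to the other images of the same client, so predictions are conditioned on the client's local class subset. The module and shapes are assumptions for illustration, not the paper's exact architecture.

    import torch.nn as nn

    class InterImageAttention(nn.Module):
        # Each of a client's image features attends to its peers,
        # injecting client-level context into every prediction.
        def __init__(self, dim, num_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, feats):           # feats: (num_images, dim)
            x = feats.unsqueeze(0)          # one client's images as a sequence
            out, _ = self.attn(x, x, x)
            return out.squeeze(0)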

On-Device Model Fine-Tuning with Label Correction in Recommender Systems

no code implementations21 Oct 2022 Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lyu, Guihai Chen

To meet the practical requirements of low latency, low cost, and good privacy in online intelligent services, more and more deep learning models are offloaded from the cloud to mobile devices.

Click-Through Rate Prediction, Recommendation Systems

Walle: An End-to-End, General-Purpose, and Large-Scale Production System for Device-Cloud Collaborative Machine Learning

no code implementations30 May 2022 Chengfei Lv, Chaoyue Niu, Renjie Gu, Xiaotang Jiang, Zhaode Wang, Bin Liu, Ziqi Wu, Qiulin Yao, Congyu Huang, Panos Huang, Tao Huang, Hui Shu, Jinde Song, Bin Zou, Peng Lan, Guohuan Xu, Fei Wu, Shaojie Tang, Fan Wu, Guihai Chen

Walle consists of a deployment platform, distributing ML tasks to billion-scale devices in time; a data pipeline, efficiently preparing task input; and a compute container, providing a cross-platform and high-performance execution environment, while facilitating daily task iteration.
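
The three-layer split can be pictured with a hypothetical task descriptor; the field names below are illustrative only and not Walle's real API.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class MLTask:
        name: str
        target_devices: List[str]   # deployment platform: where the task is pushed
        preprocess: Callable        # data pipeline: prepares the task input on device
        script: str                 # compute container: cross-platform script to run
        version: int = 1            # bumped on each daily task iteration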

On-Device Learning with Cloud-Coordinated Data Augmentation for Extreme Model Personalization in Recommender Systems

no code implementations24 Jan 2022 Renjie Gu, Chaoyue Niu, Yikai Yan, Fan Wu, Shaojie Tang, Rongfeng Jia, Chengfei Lyu, Guihai Chen

Data heterogeneity is an intrinsic property of recommender systems, making models trained over the global data on the cloud, which is the mainstream practice in industry, suboptimal for each individual user's local data distribution.

Data Augmentation, Recommendation Systems
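
One way to read "cloud-coordinated data augmentation" is the NumPy sketch below, in which the cloud enriches a user's small local dataset with globally pooled samples near her local distribution; the nearest-centroid rule is an assumption for illustration, not the paper's algorithm.

    import numpy as np

    def cloud_coordinated_augment(local_x, global_pool, k=100):
        # The cloud returns the k pooled samples closest to the centroid
        # of the user's local data, to be mixed into on-device training.
        centroid = local_x.mean(axis=0)
        dists = np.linalg.norm(global_pool - centroid, axis=1)
        nearest = np.argsort(dists)[:k]
        return np.concatenate([local_x, global_pool[nearest]], axis=0)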

Federated Submodel Optimization for Hot and Cold Data Features

1 code implementation16 Sep 2021 Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lv, Yanghe Feng, Guihai Chen

We theoretically prove the convergence rate of FedSubAvg by deriving an upper bound under a new metric called the element-wise gradient norm.

Federated Learning
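
The core of submodel averaging can be sketched as element-wise aggregation, where each coordinate is averaged only over the clients whose submodels involve it, so hot features are averaged over many clients and cold ones over few. This dense NumPy version is a minimal sketch; the paper works with sparse submodel updates.

    import numpy as np

    def fed_sub_avg(global_w, client_updates, client_masks, lr=1.0):
        # Average each element's update over only the clients that touch it.
        num = np.zeros_like(global_w)
        den = np.zeros_like(global_w)
        for upd, mask in zip(client_updates, client_masks):
            num += upd * mask           # mask is 1 where this client's submodel covers w
            den += mask
        avg = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        return global_w + lr * avg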

Toward Understanding the Influence of Individual Clients in Federated Learning

no code implementations20 Dec 2020 Yihao Xue, Chaoyue Niu, Zhenzhe Zheng, Shaojie Tang, Chengfei Lv, Fan Wu, Guihai Chen

Federated learning allows mobile clients to jointly train a global model without sending their private data to a central server.

Federated Learning
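
A brute-force way to define a client's influence is leave-one-out aggregation, shown below for intuition; the paper's contribution is estimating this quantity efficiently, without the repeated re-aggregation done here.

    import numpy as np

    def client_influence(updates, weights):
        # Influence of client i = how far the aggregate moves when
        # her update is excluded. Assumes at least two clients.
        total = sum(w * u for w, u in zip(weights, updates)) / sum(weights)
        influences = []
        for i in range(len(updates)):
            w_rest = [w for j, w in enumerate(weights) if j != i]
            u_rest = [u for j, u in enumerate(updates) if j != i]
            loo = sum(w * u for w, u in zip(w_rest, u_rest)) / sum(w_rest)
            influences.append(np.linalg.norm(total - loo))
        return influences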

Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability

1 code implementation18 Feb 2020 Yikai Yan, Chaoyue Niu, Yucheng Ding, Zhenzhe Zheng, Fan Wu, Guihai Chen, Shaojie Tang, Zhihua Wu

In this work, we consider a practical and ubiquitous issue when deploying federated learning in mobile environments: intermittent client availability, where the set of eligible clients may change during the training process.

Benchmarking, Federated Learning
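
The setting is easy to simulate: in each round only the clients that happen to be online contribute, so the participating set drifts over training. The Bernoulli availability model and helper names below are assumptions for illustration.

    import random

    def run_round(global_w, clients, local_update, p_available=0.3):
        # Only online clients are eligible; the set changes every round.
        online = [c for c in clients if random.random() < p_available]
        if not online:
            return global_w                 # no eligible client this round
        updates = [local_update(global_w, c) for c in online]
        return global_w + sum(updates) / len(online)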

Online Pricing with Reserve Price Constraint for Personal Data Markets

1 code implementation28 Nov 2019 Chaoyue Niu, Zhenzhe Zheng, Fan Wu, Shaojie Tang, Guihai Chen

The analysis and evaluation results reveal that our proposed pricing mechanism incurs low practical regret, online latency, and memory overhead. They also demonstrate that a reserve price can mitigate the cold-start problem in a posted-price mechanism and thus reduce the cumulative regret.
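
As intuition for how a reserve price helps, consider a toy posted-price loop: candidate prices below the reserve are pruned up front, which shrinks the space a learner must explore from a cold start. The UCB-style rule below is a stand-in, not the paper's mechanism.

    import math

    def posted_price_loop(rounds, candidates, reserve, buyer_value):
        # Drop candidates below the reserve, then learn which surviving
        # posted price earns the most revenue (UCB over price arms).
        prices = [p for p in candidates if p >= reserve]
        counts, revenue = [0] * len(prices), [0.0] * len(prices)
        for t in range(1, rounds + 1):
            ucb = [
                revenue[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
                if counts[i] else float("inf")
                for i in range(len(prices))
            ]
            i = ucb.index(max(ucb))
            sold = buyer_value() >= prices[i]   # buyer accepts the posted price?
            counts[i] += 1
            revenue[i] += prices[i] if sold else 0.0
        return prices, revenue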

Secure Federated Submodel Learning

1 code implementation6 Nov 2019 Chaoyue Niu, Fan Wu, Shaojie Tang, Lifeng Hua, Rongfei Jia, Chengfei Lv, Zhihua Wu, Guihai Chen

Nevertheless, the "position" of a client's truly required submodel corresponds to her private data, and its disclosure to the cloud server during interactions inevitably breaks the tenet of federated learning.

Federated Learning, Position
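
The leak is easy to see in a recommendation model: a user's required submodel is exactly the embedding rows of the items she interacted with, so requesting them plainly reveals her data. The naive sketch below illustrates the problem, not the paper's secure protocol, which hides each client's real index set from the server.

    import numpy as np

    def naive_submodel_pull(embedding_table, item_ids):
        # The server learns item_ids, i.e., the submodel's "position",
        # which is precisely the user's private interaction history.
        return embedding_table[np.asarray(item_ids)]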

From Server-Based to Client-Based Machine Learning: A Comprehensive Survey

no code implementations18 Sep 2019 Renjie Gu, Chaoyue Niu, Fan Wu, Guihai Chen, Chun Hu, Chengfei Lyu, Zhihua Wu

Another benefit is bandwidth reduction, because various kinds of local data can be involved in the training process without being uploaded.

BIG-bench Machine Learning
