no code implementations • 2 Sep 2024 • Divyansh Jhunjhunwala, Neharika Jali, Gauri Joshi, Shiqiang Wang
Erasure-coded computing has been successfully used in cloud systems to reduce tail latency caused by factors such as straggling servers and heterogeneous traffic variations.
no code implementations • 5 Aug 2024 • Cho-Chun Chiu, Tuan Nguyen, Ting He, Shiqiang Wang, Beom-Su Kim, Ki-Il Kim
These challenges make our problem fundamentally different from classical active learning, where unlabeled samples are free and labels can be queried in real time.
1 code implementation • 25 Jul 2024 • Yujia Wang, Shiqiang Wang, Songtao Lu, Jinghui Chen
Federated learning (FL) has emerged as a widely adopted training paradigm for privacy-preserving machine learning.
no code implementations • 22 Jul 2024 • Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji
In particular, for many FL algorithms, the convergence bound grows dramatically when the number of local updates becomes large, especially when the product of the gradient divergence and local Lipschitz constant is large.
1 code implementation • 22 Apr 2024 • Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
This paper aims to design an adaptive client sampling algorithm for FL over wireless networks that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time.
no code implementations • 15 Apr 2024 • Chong Yu, Shuaiqi Shen, Shiqiang Wang, Kuan Zhang, Hai Zhao
In this paper, we provide a thorough study of an effective integration of HFL and VFL to achieve communication efficiency and overcome the above limitations when data is both horizontally and vertically partitioned.
1 code implementation • 19 Mar 2024 • Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
Standard federated learning (FL) algorithms typically require multiple rounds of communication between the server and the clients, which has several drawbacks, including requiring constant network connectivity, repeated investment of computational resources, and susceptibility to privacy attacks.
no code implementations • 5 Feb 2024 • Herbert Woisetschläger, Alexander Erben, Bill Marino, Shiqiang Wang, Nicholas D. Lane, Ruben Mayer, Hans-Arno Jacobsen
The age of AI regulation is upon us, with the European Union Artificial Intelligence Act (AI Act) leading the way.
no code implementations • 9 Jan 2024 • Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, Ruben Mayer, Hans-Arno Jacobsen
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications, elaborate on the readiness of FL frameworks to work with FMs, and provide future research opportunities on how to evaluate generative models in FL as well as the interplay of privacy and PEFT.
no code implementations • 20 Dec 2023 • Pengchao Han, Shiqiang Wang, Yang Jiao, Jianwei Huang
To address these challenges, we propose an online problem approximation that reduces the problem complexity and optimizes the resources to balance the needs of model training and inference.
no code implementations • 17 Dec 2023 • Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li
Decentralized learning has emerged as an alternative to the popular parameter-server framework, which suffers from a high communication burden, a single point of failure, and scalability issues due to its need for a central server.
no code implementations • 4 Oct 2023 • Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, Ruben Mayer, Hans-Arno Jacobsen
Large Language Models (LLMs) and foundation models are popular as they offer new opportunities for individuals and businesses to improve natural language processing, interact with data, and retrieve information faster.
no code implementations • 11 Jun 2023 • Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li
With the increasing demand for large-scale training of machine learning models, fully decentralized optimization methods have recently been advocated as alternatives to the popular parameter server framework.
no code implementations • 6 Jun 2023 • Shiqiang Wang, Mingyue Ji
In this paper, we address this problem by adapting the aggregation weights in federated averaging (FedAvg) based on the participation history of each client.
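As an illustration of the weighting idea only (a minimal sketch; the function name and the inverse-frequency heuristic are assumptions, not the paper's exact rule), aggregation weights could be derived from each client's participation history as follows:

```python
import numpy as np

def adaptive_weights(participation_counts):
    """Compute aggregation weights that compensate for unequal participation.

    participation_counts: per-client number of rounds each client has participated in.
    Clients that participated less often receive proportionally larger weights, so that
    every client contributes more evenly to the global model over time.
    (Illustrative heuristic only; the paper derives its own weighting rule.)
    """
    counts = np.asarray(participation_counts, dtype=float)
    inv = 1.0 / np.maximum(counts, 1.0)   # avoid division by zero for new clients
    return inv / inv.sum()                # normalize so the weights sum to 1

# Example: the client that joined only 2 rounds gets the largest weight
print(adaptive_weights([10, 2, 5]))
```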
no code implementations • 15 May 2023 • Xiaonan Liu, Shiqiang Wang, Yansha Deng, Arumugam Nallanathan
We present the convergence analysis of an upper bound on the l2 norm of gradients for HFL with model pruning, analyze the computation and communication latency of the proposed model pruning scheme, and formulate an optimization problem to maximize the convergence rate under a given latency threshold by jointly optimizing the pruning ratio and wireless resource allocation.
no code implementations • 3 May 2023 • Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, Stacy Patterson
As part of the training, the parties wish to remove unimportant features in the system to improve generalization, efficiency, and explainability.
no code implementations • 17 Apr 2023 • Bing Luo, Yutong Feng, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
An incentive mechanism is crucial for federated learning (FL) when rational clients do not have the same interest in the global model as the server.
no code implementations • CVPR 2023 • Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, Qiang Ji
Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.
2 code implementations • 23 Jan 2023 • Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi
Federated Averaging (FedAvg) remains the most popular algorithm for Federated Learning (FL) optimization due to its simple implementation, stateless nature, and privacy guarantees combined with secure aggregation.
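For reference, a minimal sketch of the standard server-side FedAvg aggregation step (the function name and the PyTorch state-dict representation are illustrative assumptions; secure aggregation is not shown):

```python
import torch

def fedavg_aggregate(client_states, client_num_samples):
    """Server-side FedAvg step: average client models weighted by local dataset size.

    client_states: list of state_dicts (parameter name -> tensor) returned by clients.
    client_num_samples: list of local dataset sizes used as aggregation weights.
    """
    total = float(sum(client_num_samples))
    global_state = {}
    for name in client_states[0]:
        global_state[name] = sum(
            (n / total) * state[name].float()
            for state, n in zip(client_states, client_num_samples)
        )
    return global_state
```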
no code implementations • 16 Dec 2022 • Shiqiang Wang, Jake Perazzone, Mingyue Ji, Kevin S. Chan
In this paper, we address this problem and propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly.
no code implementations • 16 Jun 2022 • Timothy Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson
Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data.
no code implementations • 26 May 2022 • Shiqiang Wang, Mingyue Ji
Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.
no code implementations • 13 Apr 2022 • Hanlin Lu, Changchang Liu, Shiqiang Wang, Ting He, Vijay Narayanan, Kevin S. Chan, Stephen Pasteris
Coresets are small, weighted summaries of larger datasets, aiming at providing provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs.
no code implementations • 19 Jan 2022 • Jake Perazzone, Shiqiang Wang, Mingyue Ji, Kevin Chan
Then, using the derived convergence bound, we use stochastic optimization to develop a new client selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a transmit power constraint.
1 code implementation • 3 Jan 2022 • Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas
We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves the model interpretability compared with conventional GNN models.
no code implementations • 21 Dec 2021 • Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
This paper aims to design an adaptive client sampling algorithm that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time.
no code implementations • 12 Sep 2021 • Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model under the coordination of a central server without sharing their raw data.
no code implementations • 19 Aug 2021 • Anirban Das, Timothy Castiglia, Shiqiang Wang, Stacy Patterson
Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients.
no code implementations • 8 Feb 2021 • Hanlin Lu, Ting He, Shiqiang Wang, Changchang Liu, Mehrdad Mahdavi, Vijaykrishnan Narayanan, Kevin S. Chan, Stephen Pasteris
We consider the problem of computing the k-means centers for a large high-dimensional dataset in the context of edge-based machine learning, where data sources offload machine learning computation to nearby edge servers.
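A hedged sketch of the general coreset idea for k-means, using a simple importance-sampling construction (a generic illustration under assumed names, not the specific construction studied in the paper):

```python
import numpy as np

def kmeans_coreset(X, m, rng=None):
    """Build a small weighted summary of X for approximate k-means.

    Points far from the data mean tend to dominate the clustering cost, so they are
    sampled with higher probability and reweighted by the inverse probability to keep
    the weighted cost unbiased. (Illustrative importance-sampling heuristic only.)
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1) ** 2
    prob = 0.5 / n + 0.5 * dist / dist.sum()   # mix with uniform sampling for stability
    idx = rng.choice(n, size=m, replace=True, p=prob)
    weights = 1.0 / (m * prob[idx])            # inverse-probability reweighting
    return X[idx], weights

# Usage: run weighted k-means on the coreset instead of the full dataset
X = np.random.randn(10000, 16)
C, w = kmeans_coreset(X, m=200, rng=0)
```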
no code implementations • 17 Jan 2021 • Yiwen Han, Shihao Shen, Xiaofei Wang, Shiqiang Wang, Victor C. M. Leung
In this paper, we introduce KaiS, a learning-based scheduling framework for such edge-cloud systems to improve the long-term throughput rate of request processing.
no code implementations • 1 Jan 2021 • Tiffany Tuor, Shiqiang Wang, Kin Leung
Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when the new task is sufficiently different from the previous ones.
no code implementations • 15 Dec 2020 • Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas
In this paper, we analyze how to design adaptive FL that optimally chooses these essential control variables to minimize the total cost while ensuring convergence.
1 code implementation • 7 Nov 2020 • Pengchao Han, Jihong Park, Shiqiang Wang, Yejun Liu
Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer.
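As background, a minimal sketch of the standard knowledge-distillation objective (Hinton-style softened-label matching); the temperature T and mixing weight alpha are illustrative defaults, and this is not the specific method proposed in the paper:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard knowledge-distillation objective.

    Combines cross-entropy on the ground-truth labels with a KL term that pushes the
    student's softened predictions toward the teacher's softened predictions.
    """
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft-label gradients match the CE term's scale
    return alpha * ce + (1 - alpha) * kl
```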
1 code implementation • 24 Oct 2020 • Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji
Furthermore, we extend our analytical approach based on "upward" and "downward" divergences to study the convergence for the general case of H-SGD with more than two levels, where the "sandwich behavior" still holds.
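For intuition, a simplified two-level hierarchical local SGD (H-SGD) skeleton is sketched below (PyTorch-based; the structure, function names, and hyper-parameters are illustrative assumptions rather than the paper's exact algorithm, and the upward/downward divergences refer to quantities in the analysis, not to code):

```python
import copy
import torch
import torch.nn.functional as F

def average(states):
    """Element-wise average of a list of model state_dicts."""
    return {k: sum(s[k].float() for s in states) / len(states) for k in states[0]}

def hierarchical_sgd(global_model, groups, local_steps, group_period, rounds, lr=0.01):
    """Two-level hierarchical local SGD skeleton.

    groups: list of groups, each a list of client data loaders.
    Clients run `local_steps` SGD steps; each group averages its clients every round;
    the global server averages across groups every `group_period` rounds.
    """
    group_models = [copy.deepcopy(global_model) for _ in groups]
    for r in range(rounds):
        for g, clients in enumerate(groups):
            client_states = []
            for loader in clients:
                model = copy.deepcopy(group_models[g])
                opt = torch.optim.SGD(model.parameters(), lr=lr)
                for (x, y), _ in zip(loader, range(local_steps)):
                    opt.zero_grad()
                    F.cross_entropy(model(x), y).backward()
                    opt.step()
                client_states.append(model.state_dict())
            group_models[g].load_state_dict(average(client_states))  # group-level aggregation
        if (r + 1) % group_period == 0:
            global_state = average([m.state_dict() for m in group_models])  # global aggregation
            for m in group_models + [global_model]:
                m.load_state_dict(global_state)
    return global_model
```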
no code implementations • 6 Jul 2020 • Stephen Pasteris, Ting He, Fabio Vitale, Shiqiang Wang, Mark Herbster
In this paper, we provide a rigorous theoretical investigation of an online learning version of the Facility Location problem which is motivated by emerging problems in real-world applications.
no code implementations • 6 Jul 2020 • Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan
Distributed machine learning generally aims to train a global model on distributed data without collecting all the data at a centralized location; two different approaches have been proposed: collecting and aggregating local models (federated learning) and collecting and training over representative data summaries (coresets).
no code implementations • 25 Feb 2020 • Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, M. Hadi Amini
Nowadays, devices are equipped with advanced sensors and higher processing/computing capabilities.
1 code implementation • NeurIPS 2020 • Shufan Wang, Jian Li, Shiqiang Wang
We obtain both deterministic and randomized online algorithms with provably improved performance when either a single or multiple ML predictions are used to make decisions.
no code implementations • 22 Jan 2020 • Tiffany Tuor, Shiqiang Wang, Bong Jun Ko, Changchang Liu, Kin K. Leung
A challenge is that, among the large variety of data collected at each client, it is likely that only a subset is relevant for a learning task, while the rest of the data has a negative impact on model training.
no code implementations • 14 Jan 2020 • Pengchao Han, Shiqiang Wang, Kin K. Leung
Then, with the goal of minimizing the overall training time, we propose a novel online learning formulation and algorithm for automatically determining the near-optimal communication and computation trade-off that is controlled by the degree of gradient sparsity.
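As a minimal illustration of the control knob involved, top-k gradient sparsification with an adjustable keep ratio is sketched below (the online learning algorithm that tunes this degree automatically is not reproduced; names and values are assumptions):

```python
import torch

def sparsify_topk(grad, keep_ratio):
    """Keep only the largest-magnitude entries of a gradient tensor.

    keep_ratio is the knob trading communication (fewer values transmitted) against
    computation/accuracy (more local work to reach the same loss).
    """
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    idx = torch.topk(flat.abs(), k).indices
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)

g = torch.randn(1000, 100)
g_sparse = sparsify_topk(g, keep_ratio=0.01)  # transmit roughly 1% of the entries
```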
2 code implementations • 26 Sep 2019 • Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas
To overcome this challenge, we propose PruneFL -- a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining an accuracy similar to that of the original model.
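A hedged sketch of the basic operation such an approach builds on, unstructured magnitude pruning at an adjustable ratio (the adaptive model-size selection logic of PruneFL is not reproduced; the function name is an assumption):

```python
import torch

def magnitude_prune(model, prune_ratio):
    """Zero out the smallest-magnitude weights across all layers (unstructured pruning).

    Returns per-parameter binary masks that can also be used to skip communicating
    and computing on the pruned entries during FL rounds.
    """
    all_weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_weights, prune_ratio)
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            mask = (p.abs() > threshold).float()
            p.mul_(mask)          # apply the mask in place
            masks[name] = mask
    return masks
```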
no code implementations • 16 Aug 2019 • Jihong Park, Shiqiang Wang, Anis Elgabli, Seungeun Oh, Eunjeong Jeong, Han Cha, Hyesung Kim, Seong-Lyun Kim, Mehdi Bennis
Devices at the edge of wireless networks are the last mile data sources for machine learning (ML).
no code implementations • 22 May 2019 • Tiffany Tuor, Shiqiang Wang, Kin K. Leung, Bong Jun Ko
Monitoring the conditions of these nodes is important for system management purposes, which, however, can be extremely resource-demanding, as it requires collecting local measurements from each individual node and constantly sending those measurements to a central controller.
no code implementations • 11 Apr 2019 • Hanlin Lu, Ming-Ju Li, Ting He, Shiqiang Wang, Vijaykrishnan Narayanan, Kevin S. Chan
A coreset, which is a summary of the original dataset in the form of a small weighted set in the same sample space, provides a promising approach to enable machine learning over distributed data.
no code implementations • 28 Oct 2018 • Stephen Pasteris, Fabio Vitale, Kevin Chan, Shiqiang Wang, Mark Herbster
We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set.
1 code implementation • 14 Apr 2018 • Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, Kevin Chan
Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches.
1 code implementation • 17 Jun 2015 • Shiqiang Wang, Rahul Urgaonkar, Murtaza Zafer, Ting He, Kevin Chan, Kin K. Leung
In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations.
Distributed, Parallel, and Cluster Computing • Networking and Internet Architecture • Optimization and Control