Search Results for author: Shiqiang Wang

Found 42 papers, 9 papers with code

Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process

1 code implementation17 Jun 2015 Shiqiang Wang, Rahul Urgaonkar, Murtaza Zafer, Ting He, Kevin Chan, Kin K. Leung

In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations.

Distributed, Parallel, and Cluster Computing Networking and Internet Architecture Optimization and Control
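
A minimal sketch of the kind of decision problem this paper studies: treating the user-to-service distance as the MDP state and deciding each slot whether to migrate the service (paying a one-time cost) or keep it (paying a distance-dependent transmission cost), solved by value iteration. The state space, cost constants, and random-walk mobility model below are illustrative assumptions, not the paper's exact formulation.

```python
# Toy value iteration for a 1-D service-migration MDP (illustrative only; the
# state, costs, and mobility model are assumptions, not the paper's formulation).
import numpy as np

D_MAX = 10          # maximum user--service distance tracked
GAMMA = 0.9         # discount factor
C_TX = 1.0          # per-slot transmission cost per unit distance
C_MIG = 5.0         # one-time migration cost
P_AWAY = 0.6        # probability the user moves one hop away per slot

def step_distribution(d):
    """Distribution over the next distance if the service is NOT migrated."""
    probs = np.zeros(D_MAX + 1)
    probs[min(d + 1, D_MAX)] += P_AWAY        # user moves away from the service
    probs[max(d - 1, 0)] += 1.0 - P_AWAY      # user moves back toward the service
    return probs

V = np.zeros(D_MAX + 1)
for _ in range(500):                          # value iteration until convergence
    V_new = np.empty_like(V)
    for d in range(D_MAX + 1):
        stay = C_TX * d + GAMMA * step_distribution(d) @ V
        migrate = C_MIG + GAMMA * step_distribution(0) @ V   # co-located after migration
        V_new[d] = min(stay, migrate)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

policy = ["migrate" if C_MIG + GAMMA * step_distribution(0) @ V
          < C_TX * d + GAMMA * step_distribution(d) @ V else "stay"
          for d in range(D_MAX + 1)]
print(dict(enumerate(policy)))                # typically a threshold on the distance
```

The resulting policy is usually of threshold type: stay while the user is close, migrate once the distance exceeds some value, which is the qualitative structure such MDP formulations tend to produce.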

MaxHedge: Maximising a Maximum Online

no code implementations28 Oct 2018 Stephen Pasteris, Fabio Vitale, Kevin Chan, Shiqiang Wang, Mark Herbster

We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set.

Robust Coreset Construction for Distributed Machine Learning

no code implementations11 Apr 2019 Hanlin Lu, Ming-Ju Li, Ting He, Shiqiang Wang, Vijaykrishnan Narayanan, Kevin S. Chan

A coreset, which is a summary of the original dataset in the form of a small weighted set in the same sample space, provides a promising approach to enable machine learning over distributed data.

BIG-bench Machine Learning Clustering
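
For concreteness, a generic importance-sampling coreset looks like the sketch below: a small sample of points, each carrying a weight equal to its inverse sampling probability, so weighted sums over the coreset approximate sums over the full data. This is a standard lightweight construction for illustration, not the robust construction proposed in the paper.

```python
# Generic weighted-coreset sketch (not the paper's robust algorithm).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))              # stand-in for the (distributed) dataset
m = 200                                       # coreset size

# Sensitivity-style proposal: mix of a uniform term and a distance-to-mean term.
d2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
p = 0.5 / len(X) + 0.5 * d2 / d2.sum()
idx = rng.choice(len(X), size=m, replace=True, p=p)
coreset, weights = X[idx], 1.0 / (m * p[idx])

# Weighted statistics over the coreset approximate those of the full data.
print(X.mean(axis=0))
print(np.average(coreset, axis=0, weights=weights))
```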

Online Collection and Forecasting of Resource Utilization in Large-Scale Distributed Systems

no code implementations22 May 2019 Tiffany Tuor, Shiqiang Wang, Kin K. Leung, Bong Jun Ko

Monitoring the conditions of these nodes is important for system management purposes but can be extremely resource-demanding, as it requires collecting local measurements from each individual node and constantly sending those measurements to a central controller.

Anomaly Detection Distributed Computing +2

Model Pruning Enables Efficient Federated Learning on Edge Devices

2 code implementations26 Sep 2019 Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas

To overcome this challenge, we propose PruneFL -- a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining accuracy similar to that of the original model.

Federated Learning
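
A bare-bones sketch of pruning-aware federated averaging in the spirit of the abstract: a global binary mask keeps only the largest-magnitude weights, and only those entries are trained and communicated. The magnitude criterion and the fixed kept fraction below are simplifications; PruneFL's adaptive importance-based schedule is not reproduced.

```python
# Minimal pruning-aware FedAvg sketch (simplified; not the exact PruneFL method).
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients = 1000, 10
w_global = rng.normal(size=dim)
keep_frac = 0.3                                # fraction of weights kept (fixed here)

def local_update(w, mask):
    grad = rng.normal(size=w.shape)            # stand-in for a real local gradient
    return (w - 0.1 * grad) * mask             # pruned entries stay at zero

for rnd in range(5):
    # Adapt the mask: keep the top `keep_frac` weights of the global model by magnitude.
    k = int(keep_frac * dim)
    threshold = np.partition(np.abs(w_global), -k)[-k]
    mask = (np.abs(w_global) >= threshold).astype(float)

    # Clients train only the unpruned entries; the server averages them.
    updates = [local_update(w_global * mask, mask) for _ in range(n_clients)]
    w_global = np.mean(updates, axis=0)
    print(f"round {rnd}: {int(mask.sum())} of {dim} weights communicated")
```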

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach

no code implementations14 Jan 2020 Pengchao Han, Shiqiang Wang, Kin K. Leung

Then, with the goal of minimizing the overall training time, we propose a novel online learning formulation and algorithm for automatically determining the near-optimal communication and computation trade-off that is controlled by the degree of gradient sparsity.

Fairness Federated Learning
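
The communication/computation trade-off mentioned above is controlled by how many gradient entries each client transmits. Below is a plain top-k gradient sparsification sketch; the paper's contribution is an online-learning rule for choosing the sparsity degree k, which is only a fixed parameter here.

```python
# Top-k gradient sparsification: send only the k largest-magnitude entries.
# (k is fixed here; the paper adapts it online.)
import numpy as np

def sparsify_top_k(grad, k):
    """Return (indices, values) of the k largest-magnitude entries of grad."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

rng = np.random.default_rng(2)
grad = rng.normal(size=1_000)
idx, vals = sparsify_top_k(grad, k=50)         # 95% fewer values to transmit

dense = np.zeros_like(grad)
dense[idx] = vals                              # what the server reconstructs
print("kept fraction of gradient mass:",
      np.linalg.norm(dense) / np.linalg.norm(grad))
```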

Overcoming Noisy and Irrelevant Data in Federated Learning

no code implementations22 Jan 2020 Tiffany Tuor, Shiqiang Wang, Bong Jun Ko, Changchang Liu, Kin K. Leung

A challenge is that among the large variety of data collected at each client, it is likely that only a subset is relevant for a learning task while the rest of the data has a negative impact on model training.

Federated Learning

Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice

1 code implementation NeurIPS 2020 Shufan Wang, Jian Li, Shiqiang Wang

We obtain both deterministic and randomized online algorithms with provably improved performance when either a single or multiple ML predictions are used to make decisions.

Decision Making
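
As background for the setting this paper generalizes, the sketch below shows the textbook single-shop ski-rental problem with a single prediction: rent for 1 per day or buy once for price b, with the season length unknown. The break-even rule (buy on day b) is 2-competitive, while naively following the prediction can be much better or much worse; the multi-shop, multi-prediction algorithms with provable guarantees are the paper's subject and are not reproduced here.

```python
# Textbook single-shop ski rental, with and without an ML prediction.
def ski_rental_cost(days, buy_day, b):
    """Total cost if we rent until `buy_day`, then buy (if the season lasts that long)."""
    return days if days < buy_day else (buy_day - 1) + b

b = 10
for true_days, predicted_days in [(3, 4), (30, 25), (30, 2)]:
    break_even = ski_rental_cost(true_days, buy_day=b, b=b)
    # Naive prediction-following rule: buy immediately if the predicted season
    # exceeds the buy price, otherwise always rent.
    follow_pred = ski_rental_cost(
        true_days, buy_day=1 if predicted_days >= b else float("inf"), b=b)
    optimal = min(true_days, b)
    print(f"true={true_days} pred={predicted_days} "
          f"break-even={break_even} follow-pred={follow_pred} OPT={optimal}")
```

The last case (a badly wrong prediction) shows why robustness guarantees matter when ML advice is used.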

Online Learning of Facility Locations

no code implementations6 Jul 2020 Stephen Pasteris, Ting He, Fabio Vitale, Shiqiang Wang, Mark Herbster

In this paper, we provide a rigorous theoretical investigation of an online learning version of the Facility Location problem which is motivated by emerging problems in real-world applications.

Sharing Models or Coresets: A Study based on Membership Inference Attack

no code implementations6 Jul 2020 Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan

Distributed machine learning generally aims at training a global model based on distributed data without collecting all the data to a centralized location; two different approaches have been proposed: collecting and aggregating local models (federated learning) and collecting and training over representative data summaries (coreset).

Federated Learning Inference Attack +1

Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD

1 code implementation24 Oct 2020 Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji

Furthermore, we extend our analytical approach based on "upward" and "downward" divergences to study the convergence for the general case of H-SGD with more than two levels, where the "sandwich behavior" still holds.

Federated Learning
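
The update structure being analyzed is two-level hierarchical SGD: workers run local SGD, groups (e.g., clients under one edge server) average frequently, and all groups average less frequently. The toy quadratic objective and the specific periods below are assumptions for illustration; the "upward"/"downward" divergence analysis is the paper's contribution and is not reproduced.

```python
# Two-level hierarchical SGD (H-SGD) skeleton on a toy quadratic objective.
import numpy as np

rng = np.random.default_rng(3)
n_groups, workers_per_group, dim = 3, 4, 10
targets = rng.normal(size=(n_groups, workers_per_group, dim))  # each worker minimizes ||w - target||^2
w = np.zeros((n_groups, workers_per_group, dim))               # per-worker models

GROUP_PERIOD, GLOBAL_PERIOD, LR = 5, 20, 0.1

for t in range(1, 101):
    grad = 2 * (w - targets) + 0.1 * rng.normal(size=w.shape)  # stochastic gradients
    w -= LR * grad
    if t % GROUP_PERIOD == 0:                                  # local (group-level) aggregation
        w = np.repeat(w.mean(axis=1, keepdims=True), workers_per_group, axis=1)
    if t % GLOBAL_PERIOD == 0:                                 # global aggregation
        w[:] = w.mean(axis=(0, 1))

print("distance to global optimum:",
      np.linalg.norm(w.mean(axis=(0, 1)) - targets.mean(axis=(0, 1))))
```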

Cost-Effective Federated Learning Design

no code implementations15 Dec 2020 Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

In this paper, we analyze how to design adaptive FL that optimally chooses these essential control variables to minimize the total cost while ensuring convergence.

Federated Learning

Continual Learning Without Knowing Task Identities: Rethinking Occam's Razor

no code implementations1 Jan 2021 Tiffany Tuor, Shiqiang Wang, Kin Leung

Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when the new task is sufficiently different from the previous ones.

Continual Learning Model Selection

Tailored Learning-Based Scheduling for Kubernetes-Oriented Edge-Cloud System

no code implementations17 Jan 2021 Yiwen Han, Shihao Shen, Xiaofei Wang, Shiqiang Wang, Victor C. M. Leung

In this paper, we introduce KaiS, a learning-based scheduling framework for such edge-cloud systems to improve the long-term throughput rate of request processing.

Scheduling

Communication-efficient k-Means for Edge-based Machine Learning

no code implementations8 Feb 2021 Hanlin Lu, Ting He, Shiqiang Wang, Changchang Liu, Mehrdad Mahdavi, Vijaykrishnan Narayanan, Kevin S. Chan, Stephen Pasteris

We consider the problem of computing the k-means centers for a large high-dimensional dataset in the context of edge-based machine learning, where data sources offload machine learning computation to nearby edge servers.

BIG-bench Machine Learning Dimensionality Reduction +1
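
A rough sketch of the offloading pattern described above: the data source transmits a dimension-reduced sample to an edge server, which then computes the k centers. Random projection, subsampling, and plain Lloyd iterations are stand-ins here; the paper's specific reduction techniques and error guarantees are not reproduced.

```python
# Communication-efficient k-means offloading sketch (stand-in reduction method).
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(5_000, 100))             # high-dimensional data at the source
k, reduced_dim = 5, 20

# Data source: random projection + subsampling before transmission.
P = rng.normal(size=(100, reduced_dim)) / np.sqrt(reduced_dim)
sample = rng.choice(len(X), size=500, replace=False)
X_sent = X[sample] @ P                        # what actually goes over the network

# Edge server: Lloyd's algorithm on the reduced data.
centers = X_sent[rng.choice(len(X_sent), size=k, replace=False)]
for _ in range(20):
    assign = np.argmin(((X_sent[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X_sent[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])

print("bytes sent:", X_sent.nbytes, "vs full data:", X.nbytes)
```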

Cross-Silo Federated Learning for Multi-Tier Networks with Vertical and Horizontal Data Partitioning

no code implementations19 Aug 2021 Anirban Das, Timothy Castiglia, Shiqiang Wang, Stacy Patterson

Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients.

Federated Learning

Cost-Effective Federated Learning in Mobile Edge Networks

no code implementations12 Sep 2021 Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model under the coordination of a central server without sharing their raw data.

Federated Learning
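
The FL paradigm described above corresponds to the standard FedAvg loop sketched below: each device updates the global model on its own data, and only model parameters (never raw data) are sent to the server for weighted averaging. The least-squares local objective is a stand-in; the paper's cost model and control-variable optimization are omitted.

```python
# Bare-bones FedAvg rounds on a toy least-squares task.
import numpy as np

rng = np.random.default_rng(5)
dim, n_devices = 20, 8
w_global = np.zeros(dim)
local_data = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(n_devices)]

def local_sgd(w, X, y, steps=5, lr=0.05):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

for rnd in range(10):
    updates, sizes = [], []
    for X, y in local_data:                      # runs in parallel on devices in practice
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    w_global = np.average(updates, axis=0, weights=sizes)   # server-side FedAvg step
    loss = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in local_data])
    print(f"round {rnd}: avg loss {loss:.3f}")
```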

Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling

no code implementations21 Dec 2021 Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

This paper aims to design an adaptive client sampling algorithm that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time.

Federated Learning

KerGNNs: Interpretable Graph Neural Networks with Graph Kernels

1 code implementation3 Jan 2022 Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas

We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves the model interpretability compared with conventional GNN models.

Graph Classification

Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization

no code implementations19 Jan 2022 Jake Perazzone, Shiqiang Wang, Mingyue Ji, Kevin Chan

Then, using the derived convergence bound, we use stochastic optimization to develop a new client selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a transmit power constraint.

Federated Learning Privacy Preserving +2

Joint Coreset Construction and Quantization for Distributed Machine Learning

no code implementations13 Apr 2022 Hanlin Lu, Changchang Liu, Shiqiang Wang, Ting He, Vijay Narayanan, Kevin S. Chan, Stephen Pasteris

Coresets are small, weighted summaries of larger datasets, aiming to provide provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs.

BIG-bench Machine Learning Quantization

A Unified Analysis of Federated Learning with Arbitrary Client Participation

no code implementations26 May 2022 Shiqiang Wang, Mingyue Ji

Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.

Federated Learning

Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data

no code implementations16 Jun 2022 Timothy Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson

Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data.

Quantization Vertical Federated Learning

Federated Learning with Flexible Control

no code implementations16 Dec 2022 Shiqiang Wang, Jake Perazzone, Mingyue Ji, Kevin S. Chan

In this paper, we address this problem and propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly.

Federated Learning Stochastic Optimization

FedExP: Speeding Up Federated Averaging via Extrapolation

2 code implementations23 Jan 2023 Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi

Federated Averaging (FedAvg) remains the most popular algorithm for Federated Learning (FL) optimization due to its simple implementation, stateless nature, and privacy guarantees combined with secure aggregation.

Federated Learning
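
The mechanism FedExP builds on is server-side extrapolation: instead of moving exactly to the average of the client models, the server moves further along the averaged update direction. The fixed factor below is illustrative only; FedExP's contribution is an adaptive rule for choosing it, which is not reproduced here.

```python
# Server-side extrapolation over FedAvg (fixed factor for illustration).
import numpy as np

def server_step(w_global, client_models, extrapolation=1.5):
    """One aggregation with extrapolation; extrapolation=1.0 recovers plain FedAvg."""
    avg_delta = np.mean([w_i - w_global for w_i in client_models], axis=0)
    return w_global + extrapolation * avg_delta

w = np.zeros(4)
clients = [np.array([1.0, 0.0, 0.5, 0.2]), np.array([0.8, 0.4, 0.3, 0.1])]
print(server_step(w, clients))                       # extrapolated step
print(server_step(w, clients, extrapolation=1.0))    # plain FedAvg step
```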

Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning

no code implementations CVPR 2023 Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, Qiang Ji

Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.

Uncertainty Quantification

Incentive Mechanism Design for Unbiased Federated Learning with Randomized Client Participation

no code implementations17 Apr 2023 Bing Luo, Yutong Feng, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

An incentive mechanism is crucial for federated learning (FL) when rational clients do not have the same interests in the global model as the server.

Federated Learning

LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning

no code implementations3 May 2023 Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, Stacy Patterson

As part of the training, the parties wish to remove unimportant features in the system to improve generalization, efficiency, and explainability.

feature selection Vertical Federated Learning

Adaptive Federated Pruning in Hierarchical Wireless Networks

no code implementations15 May 2023 Xiaonan Liu, Shiqiang Wang, Yansha Deng, Arumugam Nallanathan

We present the convergence analysis of an upper bound on the l2 norm of gradients for HFL with model pruning, analyze the computation and communication latency of the proposed model pruning scheme, and formulate an optimization problem to maximize the convergence rate under a given latency threshold by jointly optimizing the pruning ratio and wireless resource allocation.

Federated Learning Privacy Preserving

A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging

no code implementations6 Jun 2023 Shiqiang Wang, Mingyue Ji

In this paper, we address this problem by adapting the aggregation weights in federated averaging (FedAvg) based on the participation history of each client.

Federated Learning
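
A simplified illustration of the idea in the abstract: scale each participating client's update by the inverse of its empirical participation frequency, so rarely available clients are not underrepresented in the aggregate. The synthetic updates, the specific weighting rule, and its stabilization are assumptions here; the paper's exact algorithm and guarantees are not reproduced.

```python
# Aggregation-weight adaptation from observed participation history (sketch).
import numpy as np

rng = np.random.default_rng(6)
n_clients, dim, rounds = 10, 5, 200
participate_prob = rng.uniform(0.1, 0.9, size=n_clients)   # unknown to the server
seen = np.zeros(n_clients)                                  # participation counts
w_global = np.zeros(dim)

for t in range(1, rounds + 1):
    online = rng.random(n_clients) < participate_prob
    seen += online
    freq = np.where(seen > 0, seen / t, 1.0)                # empirical participation frequency
    agg = np.zeros(dim)
    for i in np.flatnonzero(online):
        delta = rng.normal(loc=i * 0.01, size=dim)          # stand-in for client i's update
        agg += delta / freq[i]                              # reweight by 1/frequency
    w_global += agg / n_clients

print("estimated frequencies:", np.round(seen / rounds, 2))
print("true probabilities:   ", np.round(participate_prob, 2))
```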

Straggler-Resilient Decentralized Learning via Adaptive Asynchronous Updates

no code implementations11 Jun 2023 Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li

With the increasing demand for large-scale training of machine learning models, fully decentralized optimization methods have recently been advocated as alternatives to the popular parameter server framework.

Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly

no code implementations4 Oct 2023 Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, Ruben Mayer, Hans-Arno Jacobsen

Large Language Models (LLMs) and foundation models are popular as they offer new opportunities for individuals and businesses to improve natural language processing, interact with data, and retrieve information faster.

Computational Efficiency Edge-computing +2

DePRL: Achieving Linear Convergence Speedup in Personalized Decentralized Learning with Shared Representations

no code implementations17 Dec 2023 Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li

Decentralized learning has emerged as an alternative to the popular parameter-server framework, which suffers from a high communication burden, a single point of failure, and scalability issues due to the need for a central server.

Learning Theory Representation Learning

Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization

no code implementations20 Dec 2023 Pengchao Han, Shiqiang Wang, Yang Jiao, Jianwei Huang

To address these challenges, we propose an online problem approximation to reduce the problem complexity and optimize the resources to balance the needs of model training and inference.

Federated Learning Inference Optimization

A Survey on Efficient Federated Learning Methods for Foundation Model Training

no code implementations9 Jan 2024 Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, Ruben Mayer, Hans-Arno Jacobsen

We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications, elaborate on the readiness of FL frameworks to work with FMs, and provide future research opportunities on how to evaluate generative models in FL, as well as the interplay of privacy and PEFT.

Federated Learning Privacy Preserving

FedFisher: Leveraging Fisher Information for One-Shot Federated Learning

1 code implementation19 Mar 2024 Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi

Standard federated learning (FL) algorithms typically require multiple rounds of communication between the server and the clients, which has several drawbacks, including requiring constant network connectivity, repeated investment of computational resources, and susceptibility to privacy attacks.

Federated Learning
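
As a rough illustration of one-shot aggregation with Fisher information, the sketch below forms a diagonal-Fisher-weighted average of client models in a single round, so parameters a client is more certain about count more. This is a simplified illustration of the general idea only, not the exact FedFisher algorithm.

```python
# One-shot, Fisher-weighted model averaging (simplified illustration).
import numpy as np

def fisher_weighted_average(models, fishers, eps=1e-8):
    """models, fishers: lists of equally shaped arrays (diagonal Fisher per client)."""
    models, fishers = np.stack(models), np.stack(fishers)
    return (fishers * models).sum(axis=0) / (fishers.sum(axis=0) + eps)

rng = np.random.default_rng(7)
models = [rng.normal(size=6) for _ in range(3)]
fishers = [np.abs(rng.normal(size=6)) for _ in range(3)]   # stand-ins for squared-gradient estimates
print(fisher_weighted_average(models, fishers))
```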

Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning

no code implementations15 Apr 2024 Chong Yu, Shuaiqi Shen, Shiqiang Wang, Kuan Zhang, Hai Zhao

In this paper, we provide a thorough study on an effective integration of HFL and VFL, to achieve communication efficiency and overcome the above limitations when data is both horizontally and vertically partitioned.

Vertical Federated Learning
