Search Results for author: Shiqiang Wang

Found 35 papers, 8 papers with code

Straggler-Resilient Decentralized Learning via Adaptive Asynchronous Updates

no code implementations • 11 Jun 2023 • Guojun Xiong, Gang Yan, Shiqiang Wang, Jian Li

With the increasing demand for large-scale training of machine learning models, fully decentralized optimization methods have recently been advocated as alternatives to the popular parameter server framework.

A Lightweight Method for Tackling Unknown Participation Probabilities in Federated Averaging

no code implementations • 6 Jun 2023 • Shiqiang Wang, Mingyue Ji

Our theoretical results reveal important and interesting insights, showing that FedAU converges to an optimal solution of the original objective and has desirable properties such as linear speedup.

Federated Learning
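The FedAU approach summarized above handles unknown participation probabilities by estimating them on the fly. A minimal sketch of that weighting idea, assuming client updates are reweighted by the inverse of each client's observed participation frequency (the function name and estimation rule here are illustrative assumptions, not the paper's exact algorithm):

```python
def participation_weights(participation_history):
    """Estimate per-client aggregation weights from observed participation.

    participation_history: list of rounds, each a set of participating client ids.
    Returns a dict mapping client id -> weight proportional to 1 / p_hat,
    where p_hat is the client's observed participation frequency.
    (Illustrative sketch only, not FedAU's exact estimator.)
    """
    num_rounds = len(participation_history)
    counts = {}
    for round_clients in participation_history:
        for cid in round_clients:
            counts[cid] = counts.get(cid, 0) + 1
    # Weight inversely to the estimated participation probability so that
    # rarely participating clients are not under-represented on average.
    return {cid: num_rounds / c for cid, c in counts.items()}

history = [{0, 1}, {0}, {0, 1}, {0}]
weights = participation_weights(history)
# client 0 joined every round (p_hat = 1.0), client 1 half the time (p_hat = 0.5)
```

A client seen in half the rounds gets twice the weight of one seen in every round, so its contribution is unbiased in expectation.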

Adaptive Federated Pruning in Hierarchical Wireless Networks

no code implementations • 15 May 2023 • Xiaonan Liu, Shiqiang Wang, Yansha Deng, Arumugam Nallanathan

We present a convergence analysis of an upper bound on the L2 norm of gradients for HFL with model pruning, analyze the computation and communication latency of the proposed model pruning scheme, and formulate an optimization problem that maximizes the convergence rate under a given latency threshold by jointly optimizing the pruning ratio and wireless resource allocation.

Federated Learning • Privacy Preserving

LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning

no code implementations • 3 May 2023 • Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, Stacy Patterson

As part of the training, the parties wish to remove unimportant features in the system to improve generalization, efficiency, and explainability.

feature selection • Federated Learning

Incentive Mechanism Design for Unbiased Federated Learning with Randomized Client Participation

no code implementations • 17 Apr 2023 • Bing Luo, Yutong Feng, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

Incentive mechanism is crucial for federated learning (FL) when rational clients do not have the same interests in the global model as the server.

Federated Learning

Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning

no code implementations • CVPR 2023 • Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, Qiang Ji

Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.

FedExP: Speeding Up Federated Averaging via Extrapolation

1 code implementation • 23 Jan 2023 • Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi

Federated Averaging (FedAvg) remains the most popular algorithm for Federated Learning (FL) optimization due to its simple implementation, stateless nature, and privacy guarantees combined with secure aggregation.

Federated Learning
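FedExP builds on FedAvg's server-side averaging by extrapolating the averaged client update. A minimal sketch of that server step, with a fixed extrapolation factor standing in for the paper's adaptive rule (all names are hypothetical):

```python
def server_update(global_model, client_models, extrapolation=1.0):
    """One FedAvg-style server step: average client deltas, optionally
    scaled by an extrapolation factor > 1.  FedExP's idea is to choose
    this factor adaptively; here it is a fixed illustrative constant.
    Models are plain lists of floats of equal length.
    """
    n = len(client_models)
    avg_delta = [
        sum(cm[i] - global_model[i] for cm in client_models) / n
        for i in range(len(global_model))
    ]
    return [g + extrapolation * d for g, d in zip(global_model, avg_delta)]

g = [0.0, 0.0]
clients = [[1.0, 2.0], [3.0, 0.0]]
plain = server_update(g, clients)         # standard FedAvg step: [2.0, 1.0]
boosted = server_update(g, clients, 1.5)  # extrapolated step: [3.0, 1.5]
```

With `extrapolation=1.0` this reduces exactly to vanilla FedAvg aggregation, which is why such a server step can be layered on without changing the clients.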

Federated Learning with Flexible Control

no code implementations • 16 Dec 2022 • Shiqiang Wang, Jake Perazzone, Mingyue Ji, Kevin S. Chan

In this paper, we address this problem and propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly.

Federated Learning • Stochastic Optimization

Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data

no code implementations • 16 Jun 2022 • Timothy Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson

Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data.

Federated Learning • Quantization

A Unified Analysis of Federated Learning with Arbitrary Client Participation

no code implementations • 26 May 2022 • Shiqiang Wang, Mingyue Ji

Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.

Federated Learning

Joint Coreset Construction and Quantization for Distributed Machine Learning

no code implementations • 13 Apr 2022 • Hanlin Lu, Changchang Liu, Shiqiang Wang, Ting He, Vijay Narayanan, Kevin S. Chan, Stephen Pasteris

Coresets are small, weighted summaries of larger datasets, aiming at providing provable error bounds for machine learning (ML) tasks while significantly reducing the communication and computation costs.

BIG-bench Machine Learning • Quantization
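The weighted-summary idea behind coresets can be illustrated with the simplest possible construction, uniform sampling with inverse-probability weights; real coreset algorithms (and the joint quantization studied in this paper) use importance sampling to obtain the provable bounds the abstract mentions, which this sketch does not:

```python
import random

def uniform_coreset(points, m, seed=0):
    """Sample m points uniformly and weight each by n/m, so that weighted
    sums over the coreset are unbiased estimates of sums over the full data.
    (Illustrative only: provable coreset constructions use importance
    sampling, not uniform sampling.)
    """
    rng = random.Random(seed)
    n = len(points)
    sample = rng.sample(points, m)
    weight = n / m  # inverse sampling probability
    return [(p, weight) for p in sample]

data = list(range(100))                      # toy 1-D dataset
coreset = uniform_coreset(data, 10)
approx_sum = sum(w * p for p, w in coreset)  # unbiased estimate of sum(data)
```

Downstream ML tasks then run on the 10 weighted points instead of the 100 originals, which is where the communication savings come from.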

Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization

no code implementations • 19 Jan 2022 • Jake Perazzone, Shiqiang Wang, Mingyue Ji, Kevin Chan

Then, using the derived convergence bound, we use stochastic optimization to develop a new client selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a transmit power constraint.

Federated Learning • Privacy Preserving • +2

KerGNNs: Interpretable Graph Neural Networks with Graph Kernels

1 code implementation • 3 Jan 2022 • Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas

We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves the model interpretability compared with conventional GNN models.

Graph Classification

Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling

no code implementations • 21 Dec 2021 • Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

This paper aims to design an adaptive client sampling algorithm that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time.

Federated Learning

Cost-Effective Federated Learning in Mobile Edge Networks

no code implementations • 12 Sep 2021 • Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model under the coordination of a central server without sharing their raw data.

Federated Learning

Cross-Silo Federated Learning for Multi-Tier Networks with Vertical and Horizontal Data Partitioning

no code implementations • 19 Aug 2021 • Anirban Das, Timothy Castiglia, Shiqiang Wang, Stacy Patterson

Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients.

Federated Learning

Communication-efficient k-Means for Edge-based Machine Learning

no code implementations • 8 Feb 2021 • Hanlin Lu, Ting He, Shiqiang Wang, Changchang Liu, Mehrdad Mahdavi, Vijaykrishnan Narayanan, Kevin S. Chan, Stephen Pasteris

We consider the problem of computing the k-means centers for a large high-dimensional dataset in the context of edge-based machine learning, where data sources offload machine learning computation to nearby edge servers.

BIG-bench Machine Learning • Dimensionality Reduction • +1
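For reference, the baseline computation an edge server would perform here is plain Lloyd's k-means; the paper's contribution is making this communication-efficient across data sources, which the sketch below does not model:

```python
def kmeans_1d(points, k, iters=20):
    """Plain Lloyd's iterations for 1-D data: assign each point to its
    nearest center, then recompute each center as the mean of its cluster.
    (Baseline only; the communication-efficient protocol is not shown.)
    """
    centers = sorted(points)[:: max(1, len(points) // k)][:k]  # spread-out init
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

centers = kmeans_1d([0.0, 0.1, 0.2, 9.8, 9.9, 10.0], 2)
# two well-separated clusters -> centers near 0.1 and 9.9
```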

Tailored Learning-Based Scheduling for Kubernetes-Oriented Edge-Cloud System

no code implementations • 17 Jan 2021 • Yiwen Han, Shihao Shen, Xiaofei Wang, Shiqiang Wang, Victor C. M. Leung

In this paper, we introduce KaiS, a learning-based scheduling framework for such edge-cloud systems to improve the long-term throughput rate of request processing.


Continual Learning Without Knowing Task Identities: Rethinking Occam's Razor

no code implementations • 1 Jan 2021 • Tiffany Tuor, Shiqiang Wang, Kin Leung

Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when the new task is sufficiently different from the previous ones.

Continual Learning • Model Selection

Cost-Effective Federated Learning Design

no code implementations • 15 Dec 2020 • Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

In this paper, we analyze how to design adaptive FL that optimally chooses these essential control variables to minimize the total cost while ensuring convergence.

Federated Learning

Robustness and Diversity Seeking Data-Free Knowledge Distillation

1 code implementation • 7 Nov 2020 • Pengchao Han, Jihong Park, Shiqiang Wang, Yejun Liu

Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer.

Knowledge Distillation • Model Compression • +1

Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD

1 code implementation • 24 Oct 2020 • Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji

Furthermore, we extend our analytical approach based on "upward" and "downward" divergences to study the convergence for the general case of H-SGD with more than two levels, where the "sandwich behavior" still holds.

Federated Learning
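The "local aggregation" studied in this analysis is a two-level averaging step: workers first average within their group, and the group means are then averaged globally. A minimal sketch of that structure (assumed from the abstract; H-SGD's aggregation schedule and the multi-level extension are not modeled):

```python
def hierarchical_average(group_models):
    """Two-level aggregation: average within each group ("local
    aggregation"), then average the group means globally, weighted by
    group size so the result equals the plain average over all workers.
    group_models: list of groups, each a list of models (lists of floats).
    """
    def mean(models):
        return [sum(vals) / len(models) for vals in zip(*models)]

    group_means = [mean(g) for g in group_models]
    total = sum(len(g) for g in group_models)
    global_mean = [
        sum(len(g) * gm[i] for g, gm in zip(group_models, group_means)) / total
        for i in range(len(group_means[0]))
    ]
    return group_means, global_mean

groups = [[[1.0], [3.0]], [[5.0]]]           # two groups of 2 and 1 workers
gm, gl = hierarchical_average(groups)        # group means [2.0], [5.0]; global [3.0]
```

Because local aggregation happens more often than global aggregation, the within-group ("downward") and between-group ("upward") divergences the analysis tracks behave differently, which is the source of the "sandwich behavior" mentioned above.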

Sharing Models or Coresets: A Study based on Membership Inference Attack

no code implementations • 6 Jul 2020 • Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan

Distributed machine learning generally aims at training a global model based on distributed data without collecting all the data to a centralized location, where two different approaches have been proposed: collecting and aggregating local models (federated learning) and collecting and training over representative data summaries (coreset).

Federated Learning • Inference Attack • +1

Online Learning of Facility Locations

no code implementations • 6 Jul 2020 • Stephen Pasteris, Ting He, Fabio Vitale, Shiqiang Wang, Mark Herbster

In this paper, we provide a rigorous theoretical investigation of an online learning version of the Facility Location problem which is motivated by emerging problems in real-world applications.

Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice

1 code implementation • NeurIPS 2020 • Shufan Wang, Jian Li, Shiqiang Wang

We obtain both deterministic and randomized online algorithms with provably improved performance when either a single or multiple ML predictions are used to make decisions.

Decision Making
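For context, the single-shop baseline that such algorithms improve on is the deterministic break-even rule for classic ski rental, which is 2-competitive; the paper's ML-advice and multi-shop extensions are not modeled in this sketch:

```python
def ski_rental_cost(days, buy_price, rent_price=1):
    """Cost incurred by the classic break-even rule: rent until the total
    rent paid would exceed the buy price, then buy.  This deterministic
    rule pays at most twice the offline optimum (2-competitive).
    """
    threshold = buy_price // rent_price          # break-even day
    if days <= threshold:
        return days * rent_price                 # rented every day
    return threshold * rent_price + buy_price    # rented, then bought

# buying costs 10x the daily rent: stop renting after day 10
short_trip = ski_rental_cost(5, 10)    # 5 (optimum is also 5)
long_trip = ski_rental_cost(30, 10)    # 20 (optimum is 10, ratio 2)
```

An ML prediction of the number of ski days lets an algorithm shift the buy/rent decision earlier or later, trading worst-case robustness for better performance when the prediction is accurate.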

Overcoming Noisy and Irrelevant Data in Federated Learning

no code implementations • 22 Jan 2020 • Tiffany Tuor, Shiqiang Wang, Bong Jun Ko, Changchang Liu, Kin K. Leung

A challenge is that among the large variety of data collected at each client, it is likely that only a subset is relevant for a learning task while the rest of data has a negative impact on model training.

Federated Learning

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach

no code implementations • 14 Jan 2020 • Pengchao Han, Shiqiang Wang, Kin K. Leung

Then, with the goal of minimizing the overall training time, we propose a novel online learning formulation and algorithm for automatically determining the near-optimal communication and computation trade-off that is controlled by the degree of gradient sparsity.

Fairness • Federated Learning
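Gradient sparsification, whose degree this paper tunes online, typically keeps only the top-k entries by magnitude and accumulates the dropped entries as a local residual for later rounds. A minimal sketch of that primitive (the online tuning of k itself is not shown):

```python
def topk_sparsify(grad, k):
    """Keep the k largest-magnitude gradient entries and zero the rest,
    returning the sparse gradient to transmit and the dropped "residual"
    that sparsification methods typically accumulate locally.
    """
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    keep = set(idx)
    sparse = [g if i in keep else 0.0 for i, g in enumerate(grad)]
    residual = [0.0 if i in keep else g for i, g in enumerate(grad)]
    return sparse, residual

sparse, residual = topk_sparsify([0.1, -2.0, 0.5, 3.0], 2)
# transmits 3.0 and -2.0; 0.1 and 0.5 stay in the local residual
```

Increasing k raises communication cost but lowers the approximation error per round, which is exactly the trade-off the online learning formulation above controls.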

Model Pruning Enables Efficient Federated Learning on Edge Devices

2 code implementations • 26 Sep 2019 • Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas

To overcome this challenge, we propose PruneFL -- a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining a similar accuracy as the original model.

Federated Learning
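The pruning primitive underlying an approach like PruneFL is magnitude-based: drop the smallest-magnitude fraction of parameters. A minimal sketch of that primitive; PruneFL's actual contribution, adapting the kept model size during federated training, is not modeled here:

```python
def magnitude_prune(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of parameters.
    (Basic one-shot magnitude pruning; the adaptive, distributed schedule
    described in the abstract is not modeled.)
    """
    n_prune = int(len(weights) * ratio)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned_idx = set(order[:n_prune])
    return [0.0 if i in pruned_idx else w for i, w in enumerate(weights)]

pruned = magnitude_prune([0.05, -0.9, 0.2, 1.5], 0.5)
# prunes the two smallest-magnitude weights, 0.05 and 0.2
```

Only the surviving parameters need to be communicated and updated, which is how pruning cuts both communication and computation on edge devices.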

Online Collection and Forecasting of Resource Utilization in Large-Scale Distributed Systems

no code implementations • 22 May 2019 • Tiffany Tuor, Shiqiang Wang, Kin K. Leung, Bong Jun Ko

Monitoring the conditions of these nodes is important for system management purposes, which, however, can be extremely resource demanding as this requires collecting local measurements of each individual node and constantly sending those measurements to a central controller.

Anomaly Detection • Distributed Computing • +2

Robust Coreset Construction for Distributed Machine Learning

no code implementations • 11 Apr 2019 • Hanlin Lu, Ming-Ju Li, Ting He, Shiqiang Wang, Vijaykrishnan Narayanan, Kevin S. Chan

Coreset, which is a summary of the original dataset in the form of a small weighted set in the same sample space, provides a promising approach to enable machine learning over distributed data.

BIG-bench Machine Learning • Clustering

MaxHedge: Maximising a Maximum Online

no code implementations • 28 Oct 2018 • Stephen Pasteris, Fabio Vitale, Kevin Chan, Shiqiang Wang, Mark Herbster

We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set.

Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process

1 code implementation • 17 Jun 2015 • Shiqiang Wang, Rahul Urgaonkar, Murtaza Zafer, Ting He, Kevin Chan, Kin K. Leung

In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations.

Distributed, Parallel, and Cluster Computing • Networking and Internet Architecture • Optimization and Control
