Search Results for author: Shiqiang Wang

Found 26 papers, 7 papers with code

Joint Coreset Construction and Quantization for Distributed Machine Learning

no code implementations • 13 Apr 2022 • Hanlin Lu, Changchang Liu, Shiqiang Wang, Ting He, Vijay Narayanan, Kevin S. Chan, Stephen Pasteris

Coresets are small, weighted summaries of larger datasets that aim to provide provable error bounds for machine learning (ML) tasks while significantly reducing communication and computation costs.
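To make the coreset idea concrete, here is a minimal sketch of the simplest construction, uniform sampling with reweighting, so that weighted statistics over the coreset approximate those of the full dataset. This is only an illustration of the concept; it is not the paper's joint construction-and-quantization method.

```python
import numpy as np

def uniform_coreset(X, m, seed=0):
    """Build a simple weighted coreset by uniform sampling.

    Each of the m sampled points gets weight n/m, so weighted sums over
    the coreset are unbiased estimates of sums over the full dataset.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    weights = np.full(m, n / m)
    return X[idx], weights

# The weighted coreset mean approximates the full-data mean.
X = np.random.default_rng(1).normal(size=(10000, 5))
C, w = uniform_coreset(X, m=500)
approx_mean = (w[:, None] * C).sum(axis=0) / w.sum()
```

Stronger constructions (e.g., sensitivity-based sampling) give worst-case error guarantees that uniform sampling lacks, which is where the provable bounds mentioned above come in.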


Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization

no code implementations • 19 Jan 2022 • Jake Perazzone, Shiqiang Wang, Mingyue Ji, Kevin Chan

Then, leveraging the derived convergence bound, we develop a stochastic optimization-based client selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a transmit power constraint.

Federated Learning · Stochastic Optimization

KerGNNs: Interpretable Graph Neural Networks with Graph Kernels

1 code implementation • 3 Jan 2022 • Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas

We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves the model interpretability compared with conventional GNN models.

Graph Classification

Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling

no code implementations • 21 Dec 2021 • Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

This paper aims to design an adaptive client sampling algorithm that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time.

Federated Learning

Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data

no code implementations • 29 Sep 2021 • Timothy Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson

Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data.

Federated Learning · Quantization

Cost-Effective Federated Learning in Mobile Edge Networks

no code implementations • 12 Sep 2021 • Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model under the coordination of a central server without sharing their raw data.

Federated Learning

Cross-Silo Federated Learning for Multi-Tier Networks with Vertical and Horizontal Data Partitioning

no code implementations • 19 Aug 2021 • Anirban Das, Timothy Castiglia, Shiqiang Wang, Stacy Patterson

Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients.

Federated Learning

Communication-efficient k-Means for Edge-based Machine Learning

no code implementations • 8 Feb 2021 • Hanlin Lu, Ting He, Shiqiang Wang, Changchang Liu, Mehrdad Mahdavi, Vijaykrishnan Narayanan, Kevin S. Chan, Stephen Pasteris

We consider the problem of computing the k-means centers for a large high-dimensional dataset in the context of edge-based machine learning, where data sources offload machine learning computation to nearby edge servers.

Dimensionality Reduction · Quantization
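To illustrate the general pattern of communication-efficient edge clustering, here is a hedged sketch in which each data source summarizes its data as local k-means centers with cluster sizes and the server runs weighted k-means on those summaries instead of the raw points. All function names are hypothetical, and the paper's actual method (which also uses dimensionality reduction and quantization) is more elaborate.

```python
import numpy as np

def _init_centers(X, k):
    # Deterministic farthest-point initialization.
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[d.argmax()])
    return np.array(centers, dtype=float)

def lloyd(X, k, w=None, iters=20):
    """Weighted Lloyd's k-means (uniform weights by default)."""
    w = np.ones(len(X)) if w is None else w
    centers = _init_centers(X, k)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            m = assign == j
            if w[m].sum() > 0:
                centers[j] = (w[m][:, None] * X[m]).sum(0) / w[m].sum()
    return centers, assign

def edge_kmeans(local_datasets, k):
    """Each source sends only k local centers plus cluster sizes;
    the server runs weighted k-means on the weighted summaries."""
    all_centers, all_weights = [], []
    for X in local_datasets:
        c, assign = lloyd(X, k)
        all_centers.append(c)
        all_weights.append(np.bincount(assign, minlength=k).astype(float))
    C = np.vstack(all_centers)
    w = np.concatenate(all_weights)
    keep = w > 0
    centers, _ = lloyd(C[keep], k, w=w[keep])
    return centers

# Two sources, each holding a split of two well-separated clusters.
rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
B = rng.normal([10.0, 10.0], 0.3, size=(50, 2))
sources = [np.vstack([A[:25], B[:25]]), np.vstack([A[25:], B[25:]])]
centers = edge_kmeans(sources, k=2)
```

Each source communicates only k centers and k counts, independent of its dataset size, which is the source of the communication savings.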

Tailored Learning-Based Scheduling for Kubernetes-Oriented Edge-Cloud System

no code implementations • 17 Jan 2021 • Yiwen Han, Shihao Shen, Xiaofei Wang, Shiqiang Wang, Victor C. M. Leung

In this paper, we introduce KaiS, a learning-based scheduling framework for such edge-cloud systems to improve the long-term throughput rate of request processing.

Continual Learning Without Knowing Task Identities: Rethinking Occam's Razor

no code implementations • 1 Jan 2021 • Tiffany Tuor, Shiqiang Wang, Kin Leung

Due to the catastrophic forgetting phenomenon of deep neural networks (DNNs), models trained in standard ways tend to forget what they have learned from previous tasks, especially when a new task is sufficiently different from the previous ones.

Continual Learning · Model Selection

Cost-Effective Federated Learning Design

no code implementations • 15 Dec 2020 • Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas

In this paper, we analyze how to design adaptive FL that optimally chooses these essential control variables to minimize the total cost while ensuring convergence.

Federated Learning

Robustness and Diversity Seeking Data-Free Knowledge Distillation

1 code implementation • 7 Nov 2020 • Pengchao Han, Jihong Park, Shiqiang Wang, Yejun Liu

Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer.

Knowledge Distillation · Model Compression +1
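For context, here is a minimal sketch of the standard Hinton-style distillation loss that data-free KD methods build on: a temperature-softened cross-entropy against the teacher's outputs mixed with the usual hard-label cross-entropy. This is only the generic KD objective, not the paper's robustness- and diversity-seeking variant.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style KD loss: alpha * soft cross-entropy against the
    teacher's temperature-softened outputs (scaled by T^2, the usual
    gradient-magnitude correction) plus (1 - alpha) * hard
    cross-entropy against the true labels."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_t * log_p_s).sum(axis=-1).mean() * (T ** 2)
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

# A student matching the teacher incurs a lower loss than one that
# contradicts it.
teacher = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
labels = np.array([0, 1])
loss_match = distillation_loss(teacher, teacher, labels)
bad = np.array([[0.0, 5.0, 0.0], [5.0, 0.0, 0.0]])
loss_wrong = distillation_loss(bad, teacher, labels)
```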

Demystifying Why Local Aggregation Helps: Convergence Analysis of Hierarchical SGD

1 code implementation • 24 Oct 2020 • Jiayi Wang, Shiqiang Wang, Rong-Rong Chen, Mingyue Ji

Furthermore, we extend our analytical approach based on "upward" and "downward" divergences to study the convergence for the general case of H-SGD with more than two levels, where the "sandwich behavior" still holds.

Federated Learning
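As an illustration of the hierarchical (two-level) SGD pattern analyzed above, the following toy sketch runs local SGD on each worker, averages within each group every round, and averages across groups periodically. The quadratic per-worker loss f_i(w) = ||w - mu_i||^2 / 2 and all hyperparameters are hypothetical choices for the sketch, not the paper's setup.

```python
import numpy as np

def hierarchical_sgd(data_means, groups, lr=0.1, local_steps=5,
                     group_rounds=4, global_rounds=3):
    """Two-level H-SGD on toy quadratic losses f_i(w) = ||w - mu_i||^2 / 2,
    whose gradient is (w - mu_i). Workers run local SGD steps, group
    servers average their workers' models each group round ("upward"
    local aggregation), and the global server averages group models
    every `group_rounds` rounds."""
    w = {i: np.zeros_like(data_means[0]) for g in groups for i in g}
    for _ in range(global_rounds):
        for _ in range(group_rounds):
            for g in groups:
                for i in g:
                    for _ in range(local_steps):
                        w[i] = w[i] - lr * (w[i] - data_means[i])
                avg = sum(w[i] for i in g) / len(g)   # group aggregation
                for i in g:
                    w[i] = avg
        global_avg = sum(w[i] for i in w) / len(w)    # global aggregation
        for i in w:
            w[i] = global_avg
    return global_avg

# Four workers in two groups; the global model approaches the mean
# of all workers' local optima (here, 3.0).
mus = [np.array([0.0]), np.array([2.0]), np.array([4.0]), np.array([6.0])]
w_final = hierarchical_sgd(mus, groups=[(0, 1), (2, 3)])
```

The frequent cheap group averaging and infrequent expensive global averaging is what places H-SGD's convergence between the one-level local-SGD extremes (the "sandwich behavior" mentioned above).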

Sharing Models or Coresets: A Study based on Membership Inference Attack

no code implementations • 6 Jul 2020 • Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang, Kevin S. Chan

Distributed machine learning generally aims to train a global model over distributed data without collecting all the data at a centralized location. Two different approaches have been proposed: collecting and aggregating local models (federated learning), and collecting and training over representative data summaries (coresets).

Federated Learning · Inference Attack +1

Online Learning of Facility Locations

no code implementations • 6 Jul 2020 • Stephen Pasteris, Ting He, Fabio Vitale, Shiqiang Wang, Mark Herbster

In this paper, we provide a rigorous theoretical investigation of an online learning version of the Facility Location problem which is motivated by emerging problems in real-world applications.

Online Learning

Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice

1 code implementation • NeurIPS 2020 • Shufan Wang, Jian Li, Shiqiang Wang

We obtain both deterministic and randomized online algorithms with provably improved performance when either a single or multiple ML predictions are used to make decisions.

Decision Making
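For background, here is a hedged sketch of deterministic single-shop ski rental with one ML prediction, in the style of Purohit et al. (NeurIPS 2018): rent at 1 per day, buy for `buy_price`, and let a trust parameter trade off consistency (cost when the prediction is accurate) against robustness (worst-case cost). The paper's multi-shop and multi-prediction algorithms generalize this setting; the sketch is not their algorithm.

```python
import math

def ski_rental_cost(buy_price, true_days, predicted_days, lam=0.5):
    """Deterministic ski rental with one ML prediction.

    lam in (0, 1] trades robustness for consistency: small lam trusts
    the prediction more. If the prediction says we will ski at least
    buy_price days, buy early (day ceil(lam * buy_price)); otherwise
    buy late (day ceil(buy_price / lam)).
    """
    b = buy_price
    buy_day = math.ceil(lam * b) if predicted_days >= b else math.ceil(b / lam)
    if true_days < buy_day:
        return true_days            # rented every day, never bought
    return (buy_day - 1) + b        # rented buy_day - 1 days, then bought

# Accurate prediction: cost 14 vs. optimal offline cost min(100, 10) = 10,
# within the (1 + lam) consistency bound.
cost = ski_rental_cost(buy_price=10, true_days=100, predicted_days=100, lam=0.5)
```

Even with an arbitrarily wrong prediction, the cost stays within a (1 + 1/lam) factor of optimal, which is the robustness side of the trade-off.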

Overcoming Noisy and Irrelevant Data in Federated Learning

no code implementations • 22 Jan 2020 • Tiffany Tuor, Shiqiang Wang, Bong Jun Ko, Changchang Liu, Kin K. Leung

A challenge is that, among the large variety of data collected at each client, it is likely that only a subset is relevant to a learning task, while the rest of the data has a negative impact on model training.

Federated Learning

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach

no code implementations • 14 Jan 2020 • Pengchao Han, Shiqiang Wang, Kin K. Leung

Then, with the goal of minimizing the overall training time, we propose a novel online learning formulation and algorithm for automatically determining the near-optimal communication and computation trade-off that is controlled by the degree of gradient sparsity.

Fairness · Federated Learning +1
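A minimal sketch of the top-k gradient sparsification primitive whose degree of sparsity such an online approach would tune: only the k largest-magnitude gradient entries are communicated, and the rest are zeroed. This illustrates the primitive only, not the paper's online tuning algorithm.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude gradient entries; zero the rest.

    k controls the communication/computation trade-off: smaller k means
    less to transmit but a coarser gradient.
    """
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy(), np.arange(flat.size)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape), idx

# Only the two largest-magnitude entries (-3.0 and 2.0) survive.
g = np.array([0.1, -3.0, 0.05, 2.0, -0.2])
sg, idx = topk_sparsify(g, k=2)
```

In practice only the k values and their indices would be sent, so the per-round communication scales with k rather than with the model size.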

Model Pruning Enables Efficient Federated Learning on Edge Devices

2 code implementations • 26 Sep 2019 • Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas

To overcome this challenge, we propose PruneFL -- a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining a similar accuracy as the original model.

Federated Learning
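To ground the idea of parameter pruning, here is a hedged sketch of plain magnitude pruning, the building block such approaches adapt: zero out the smallest-magnitude fraction of weights and keep a binary mask of the survivors. PruneFL's adaptive, distributed pruning schedule is not shown; this is only the primitive.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    Returns the pruned weights and a boolean mask of surviving entries.
    In an FL setting, clients would train only the unmasked parameters
    and communicate the correspondingly smaller updates.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    thresh = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > thresh              # ties at the threshold are pruned
    return weights * mask, mask

# Prune half the weights: the two smallest-magnitude entries are zeroed.
W = np.array([[0.5, -0.01], [0.003, -1.2]])
pruned, mask = magnitude_prune(W, sparsity=0.5)
```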

Distilling On-Device Intelligence at the Network Edge

no code implementations • 16 Aug 2019 • Jihong Park, Shiqiang Wang, Anis Elgabli, Seungeun Oh, Eunjeong Jeong, Han Cha, Hyesung Kim, Seong-Lyun Kim, Mehdi Bennis

Devices at the edge of wireless networks are the last-mile data sources for machine learning (ML).

Online Collection and Forecasting of Resource Utilization in Large-Scale Distributed Systems

no code implementations • 22 May 2019 • Tiffany Tuor, Shiqiang Wang, Kin K. Leung, Bong Jun Ko

Monitoring the conditions of these nodes is important for system management purposes but can be extremely resource-demanding, as it requires collecting local measurements of each individual node and constantly sending those measurements to a central controller.

Anomaly Detection · Distributed Computing +2

Robust Coreset Construction for Distributed Machine Learning

no code implementations • 11 Apr 2019 • Hanlin Lu, Ming-Ju Li, Ting He, Shiqiang Wang, Vijaykrishnan Narayanan, Kevin S. Chan

A coreset, which is a summary of the original dataset in the form of a small weighted set in the same sample space, provides a promising approach to enabling machine learning over distributed data.

MaxHedge: Maximising a Maximum Online

no code implementations • 28 Oct 2018 • Stephen Pasteris, Fabio Vitale, Kevin Chan, Shiqiang Wang, Mark Herbster

We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set.

Online Learning

Dynamic Service Migration in Mobile Edge Computing Based on Markov Decision Process

1 code implementation • 17 Jun 2015 • Shiqiang Wang, Rahul Urgaonkar, Murtaza Zafer, Ting He, Kevin Chan, Kin K. Leung

In mobile edge computing, local edge servers can host cloud-based services, which reduces network overhead and latency but requires service migrations as users move to new locations.

Distributed, Parallel, and Cluster Computing · Networking and Internet Architecture · Optimization and Control
