Search Results for author: Jiawei Shao

Found 24 papers, 10 papers with code

Content-aware Masked Image Modeling Transformer for Stereo Image Compression

no code implementations13 Mar 2024 Xinjie Zhang, Shenyuan Gao, Zhening Liu, Jiawei Shao, Xingtong Ge, Dailan He, Tongda Xu, Yan Wang, Jun Zhang

Existing learning-based stereo image codecs adopt sophisticated transformations with simple entropy models derived from single-image codecs to encode latent representations.

Image Compression

FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features

no code implementations30 Aug 2023 Zijian Li, Zehong Lin, Jiawei Shao, Yuyi Mao, Jun Zhang

However, devices often have non-independent and identically distributed (non-IID) data, meaning their local data distributions can vary significantly.

Federated Learning · Representation Learning

Feature Matching Data Synthesis for Non-IID Federated Learning

no code implementations9 Aug 2023 Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary, with which the synthetic data not only improve the model generalization but also erase the information of real features.

Data Augmentation · Federated Learning +1
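The excerpt only gestures at the hard-feature-augmentation idea. A minimal numpy sketch of moving a real feature toward a decision boundary (a toy linear classifier here; all names and numbers are illustrative, not from the paper) might look like:

```python
import numpy as np

def push_toward_boundary(x, w, b, step=0.5):
    """Hard feature augmentation sketch: move feature x a fraction `step`
    of the way toward the decision hyperplane w.x + b = 0, erasing part
    of the class-specific information it carries."""
    offset = (w @ x + b) / (w @ w)   # signed distance along w (scaled)
    return x - step * offset * w

w = np.array([1.0, -2.0])            # toy linear decision boundary
b = 0.5
x = np.array([3.0, 1.0])             # a "real" feature, far from the boundary
x_hard = push_toward_boundary(x, w, b, step=0.5)
# x_hard is strictly closer to the boundary than x
```

With `step=1.0` the feature is projected exactly onto the hyperplane; intermediate steps trade off utility against how much of the original feature is erased.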

Large Language Models Empowered Autonomous Edge AI for Connected Intelligence

no code implementations6 Jul 2023 Yifei Shen, Jiawei Shao, Xinjie Zhang, Zehong Lin, Hao Pan, Dongsheng Li, Jun Zhang, Khaled B. Letaief

The evolution of wireless networks gravitates towards connected intelligence, a concept that envisions seamless interconnectivity among humans, objects, and intelligence in a hyper-connected cyber-physical world.

Code Generation · Federated Learning +3

Task-Oriented Communication with Out-of-Distribution Detection: An Information Bottleneck Framework

1 code implementation21 May 2023 Hongru Li, Wentao Yu, Hengtao He, Jiawei Shao, Shenghui Song, Jun Zhang, Khaled B. Letaief

Task-oriented communication is an emerging paradigm for next-generation communication networks, which extracts and transmits task-relevant information, instead of raw data, for downstream applications.

Informativeness · Out-of-Distribution Detection

Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher

1 code implementation4 Apr 2023 Jiawei Shao, Fangzhao Wu, Jun Zhang

While federated learning is promising for privacy-preserving collaborative learning without revealing local data, it remains vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.

Federated Learning · Knowledge Distillation +2
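In federated distillation, clients exchange predictions rather than weights; the "selective" part can be sketched as sharing only soft labels the local model is confident about (a toy illustration, with an assumed confidence threshold, not the paper's exact criterion):

```python
import numpy as np

def select_confident(logits, threshold=0.8):
    """Selective knowledge sharing sketch: a client shares its soft
    predictions on public samples only where its local model's max
    softmax probability exceeds `threshold`."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    mask = probs.max(axis=1) >= threshold
    return probs[mask], mask

logits = np.array([[5.0, 0.0, 0.0],    # confident prediction -> shared
                   [0.1, 0.0, 0.2]])   # near-uniform prediction -> withheld
shared, mask = select_confident(logits)
```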

Low-complexity Deep Video Compression with A Distributed Coding Architecture

1 code implementation21 Mar 2023 Xinjie Zhang, Jiawei Shao, Jun Zhang

This has inspired a distributed coding architecture aiming at reducing the encoding complexity.

Motion Estimation · Video Compression

AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning

no code implementations24 Feb 2023 Xuefeng Wang, Xinran Li, Jiawei Shao, Jun Zhang

Learning communication strategies in cooperative multi-agent reinforcement learning (MARL) has recently attracted intensive attention.

Multi-agent Reinforcement Learning · reinforcement-learning +2

Task-Oriented Communication for Edge Video Analytics

1 code implementation25 Nov 2022 Jiawei Shao, Xinjie Zhang, Jun Zhang

With the development of artificial intelligence (AI) techniques and the increasing popularity of camera-equipped devices, many edge video analytics applications are emerging, calling for the deployment of computation-intensive AI models at the network edge.

Informativeness

Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design

no code implementations8 Nov 2022 Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang

During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.

Federated Learning · Privacy Preserving
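The key property behind "gradients on the global coded dataset" can be illustrated for a linear model: with a random encoding matrix G whose entries have variance 1/m, E[GᵀG] = I, so the gradient computed on the coded data is an unbiased surrogate for the gradient on the raw data. A Monte Carlo sketch (linear regression stands in for the general model; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 3, 40
X = rng.normal(size=(n, d))          # raw client data (never sent)
y = rng.normal(size=n)
w = rng.normal(size=d)

g_true = X.T @ (X @ w - y)           # gradient on the raw dataset

# Average the coded gradient over many random encodings G;
# since E[G^T G] = I, it matches the raw gradient in expectation.
trials = 2000
g_avg = np.zeros(d)
for _ in range(trials):
    G = rng.normal(scale=1 / np.sqrt(m), size=(m, n))
    Xc, yc = G @ X, G @ y            # coded dataset held by the server
    g_avg += Xc.T @ (Xc @ w - yc)    # gradient computed on coded data only
g_avg /= trials
```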

DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing

no code implementations6 Oct 2022 Jiawei Shao, Yuchang Sun, Songze Li, Jun Zhang

Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.

Federated Learning

Federated Learning with GAN-based Data Synthesis for Non-IID Clients

no code implementations11 Jun 2022 Zijian Li, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

A combination of the local private dataset and synthetic dataset with confident pseudo labels leads to nearly identical data distributions among clients, which improves the consistency among local models and benefits the global aggregation.

Federated Learning · Generative Adversarial Network +1
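The mixing step described in the excerpt can be sketched directly: keep only synthetic samples with confident pseudo labels and pool them with the skewed local data, which pulls the client's label distribution toward uniform (toy numbers; the threshold and class counts are illustrative):

```python
import numpy as np

def mix_with_synthetic(local_labels, synth_probs, threshold=0.9):
    """Keep synthetic samples whose pseudo-label confidence exceeds
    `threshold`, then pool their pseudo labels with the local labels."""
    conf = synth_probs.max(axis=1)
    pseudo = synth_probs.argmax(axis=1)
    kept = pseudo[conf >= threshold]
    return np.concatenate([local_labels, kept])

# A client with heavily skewed local labels (non-IID: mostly class 0).
local = np.array([0] * 90 + [1] * 10)
# A synthetic pool predicted confidently as class 1.
synth_probs = np.tile([0.02, 0.98], (80, 1))
mixed = mix_with_synthetic(local, synth_probs)
dist = np.bincount(mixed, minlength=2) / len(mixed)
# dist is now uniform, versus the original 90/10 split
```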

Stochastic Coded Federated Learning with Convergence and Privacy Guarantees

no code implementations25 Jan 2022 Yuchang Sun, Jiawei Shao, Songze Li, Yuyi Mao, Jun Zhang

Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data.

Federated Learning · Privacy Preserving

Semi-Decentralized Federated Edge Learning with Data and Device Heterogeneity

no code implementations20 Dec 2021 Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data, while enjoying much lower latency compared with conventional federated learning.

Federated Learning · Privacy Preserving
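The semi-decentralized topology can be sketched as two-level averaging: each edge server runs FedAvg over its own clients, then edge servers mix their aggregates over the low-latency inter-server links (scalar "models" and a doubly stochastic mixing matrix here, purely for illustration):

```python
import numpy as np

def sd_feel_round(client_models, clusters, mix):
    """One aggregation round sketch: intra-cluster FedAvg at each edge
    server, followed by model mixing among edge servers via `mix`."""
    edge = np.stack([np.mean([client_models[i] for i in c], axis=0)
                     for c in clusters])
    return mix @ edge

# 4 clients (scalar "models"), 2 edge servers with 2 clients each
models = np.array([[1.0], [3.0], [5.0], [7.0]])
clusters = [[0, 1], [2, 3]]
mix = np.array([[0.5, 0.5],     # full mixing: both servers reach consensus
                [0.5, 0.5]])
after = sd_feel_round(models, clusters, mix)
```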

Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients

no code implementations9 Dec 2021 Yuchang Sun, Jiawei Shao, Yuyi Mao, Jun Zhang

Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.

Privacy Preserving

Task-Oriented Communication for Multi-Device Cooperative Edge Inference

2 code implementations1 Sep 2021 Jiawei Shao, Yuyi Mao, Jun Zhang

To enable low-latency cooperative inference, we propose a learning-based communication scheme that optimizes local feature extraction and distributed feature encoding in a task-oriented manner, i.e., to remove data redundancy and transmit information that is essential for the downstream inference task rather than reconstructing the data samples at the edge server.
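The task-oriented principle above can be illustrated with a toy example: when the task depends on only a few coordinates of a high-dimensional sample, an encoder that transmits just those coordinates preserves task accuracy at a fraction of the communication cost (a synthetic task and a trivial encoder, not the paper's learned scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
D, k, n = 64, 2, 500
X = rng.normal(size=(n, D))                     # raw sensor data on devices
y = (X[:, 0] + X[:, 1] > 0).astype(float)       # task depends on only 2 dims

def encode(x):
    """Task-oriented encoder sketch: transmit only the task-relevant
    coordinates instead of the raw 64-dim sample."""
    return x[:, :k]

Z = encode(X)
w = np.linalg.lstsq(Z, 2 * y - 1, rcond=None)[0]  # server-side linear head
acc = float(((Z @ w > 0) == (y > 0.5)).mean())
# accuracy stays high while sending 2 symbols per sample instead of 64
```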

Communication-Computation Efficient Device-Edge Co-Inference via AutoML

no code implementations30 Aug 2021 Xinjie Zhang, Jiawei Shao, Yuyi Mao, Jun Zhang

Device-edge co-inference, which partitions a deep neural network between a resource-constrained mobile device and an edge server, has recently emerged as a promising paradigm to support intelligent mobile applications.

AutoML · Feature Compression +1

Semi-Decentralized Federated Edge Learning for Fast Convergence on Non-IID Data

no code implementations26 Apr 2021 Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency in Cloud-based machine learning solutions, while preserving data privacy.

Federated Learning

Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach

1 code implementation8 Feb 2021 Jiawei Shao, Yuyi Mao, Jun Zhang

Extensive experiments evidence that the proposed task-oriented communication system achieves a better rate-distortion tradeoff than baseline methods and significantly reduces the feature transmission latency in dynamic channel conditions.

Informativeness

Branchy-GNN: a Device-Edge Co-Inference Framework for Efficient Point Cloud Processing

1 code implementation27 Oct 2020 Jiawei Shao, Haowei Zhang, Yuyi Mao, Jun Zhang

The recent advancements of three-dimensional (3D) data acquisition devices have spurred a new breed of applications that rely on point cloud data processing.

Distributed, Parallel, and Cluster Computing

Communication-Computation Trade-Off in Resource-Constrained Edge Inference

1 code implementation3 Jun 2020 Jiawei Shao, Jun Zhang

The recent breakthrough in artificial intelligence (AI), especially deep neural networks (DNNs), has affected every branch of science and technology.

Edge-computing · Model Compression

BottleNet++: An End-to-End Approach for Feature Compression in Device-Edge Co-Inference Systems

1 code implementation31 Oct 2019 Jiawei Shao, Jun Zhang

By exploiting the strong sparsity and the fault-tolerant property of the intermediate feature in a deep neural network (DNN), BottleNet++ achieves a much higher compression ratio than existing methods.

Edge-computing · Feature Compression
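The sparsity property the excerpt mentions is easy to see in numbers: a post-ReLU feature map is roughly half zeros, so even a naive sparse encoding (index-value pairs) already roughly halves the transmitted bytes before any learned compression (sizes and formats below are illustrative, not BottleNet++'s actual codec):

```python
import numpy as np

rng = np.random.default_rng(2)
# a post-ReLU intermediate feature map: about half the entries are zero
feat = np.maximum(rng.normal(size=(8, 16, 16)), 0)

nnz = int((feat != 0).sum())
dense_bytes = feat.size * 4        # send everything as float32
sparse_bytes = nnz * (2 + 2)       # send (uint16 index, float16 value) pairs
ratio = dense_bytes / sparse_bytes
# roughly 2x savings from sparsity alone
```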
