Search Results for author: Yuyi Mao

Found 28 papers, 4 papers with code

Decentralizing Coherent Joint Transmission Precoding via Fast ADMM with Deterministic Equivalents

no code implementations28 Mar 2024 Xinyu Bian, Yuhao Liu, Yizhou Xu, Tianqi Hou, Wenjie Wang, Yuyi Mao, Jun Zhang

Simulation results demonstrate the effectiveness of our proposed decentralized precoding scheme, which achieves performance similar to the optimal centralized precoding scheme.

Decentralizing Coherent Joint Transmission Precoding via Deterministic Equivalents

no code implementations15 Mar 2024 Yuhao Liu, Xinyu Bian, Yizhou Xu, Tianqi Hou, Wenjie Wang, Yuyi Mao, Jun Zhang

To control inter-cell interference in a multi-cell multi-user multiple-input multiple-output network, we consider precoder design for coordinated multi-point with downlink coherent joint transmission.

Joint Activity-Delay Detection and Channel Estimation for Asynchronous Massive Random Access: A Free Probability Theory Approach

no code implementations28 Feb 2024 Xinyu Bian, Yuyi Mao, Jun Zhang

Grant-free random access (RA) has been recognized as a promising solution to support massive connectivity due to the removal of the uplink grant request procedures.

Action Detection Activity Detection

Approximate Message Passing-Enhanced Graph Neural Network for OTFS Data Detection

no code implementations15 Feb 2024 Wenhao Zhuang, Yuyi Mao, Hengtao He, Lei Xie, Shenghui Song, Yao Ge, Zhi Ding

Orthogonal time frequency space (OTFS) modulation has emerged as a promising solution to support high-mobility wireless communications, for which cost-effective data detectors are critical.

Green Edge AI: A Contemporary Survey

no code implementations1 Dec 2023 Yuyi Mao, Xianghao Yu, Kaibin Huang, Ying-Jun Angela Zhang, Jun Zhang

Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems, including training data acquisition, edge training, and edge inference.

How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels

no code implementations25 Oct 2023 Linping Qu, Shenghui Song, Chi-Ying Tsui, Yuyi Mao

It is also shown that the uplink communication in FL can tolerate a higher bit error rate (BER) than downlink communication, and this difference is quantified by a proposed formula.

Federated Learning Privacy Preserving

FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features

no code implementations30 Aug 2023 Zijian Li, Zehong Lin, Jiawei Shao, Yuyi Mao, Jun Zhang

However, devices often have non-independent and identically distributed (non-IID) data, meaning their local data distributions can vary significantly.

Federated Learning Representation Learning

Feature Matching Data Synthesis for Non-IID Federated Learning

no code implementations9 Aug 2023 Zijian Li, Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary, with which the synthetic data not only improve the model generalization but also erase the information of real features.

Data Augmentation Federated Learning +1

MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates

1 code implementation21 Jun 2023 Yuchang Sun, Yuyi Mao, Jun Zhang

Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server.

Federated Learning Privacy Preserving

Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air Federated Learning

no code implementations26 May 2023 Yuchang Sun, Zehong Lin, Yuyi Mao, Shi Jin, Jun Zhang

In this paper, we propose a probabilistic device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise, where each device is scheduled according to a certain probability and its model update is reweighted using this probability in aggregation.
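The scheduling-and-reweighting rule described in this snippet can be sketched in a few lines. This is an illustrative reading, not the paper's implementation: the function name and interface are assumptions, and a scalar stands in for a full model update. Dividing each received update by its scheduling probability keeps the aggregate unbiased in expectation:

```python
import random

def po_fl_round(local_updates, sched_probs, rng):
    """One illustrative PO-FL aggregation round: device i participates
    with probability sched_probs[i], and its update is reweighted by
    1/sched_probs[i] so the aggregate remains an unbiased estimate of
    the full-participation average."""
    n = len(local_updates)
    agg = 0.0
    for update, p in zip(local_updates, sched_probs):
        if rng.random() < p:       # channel/importance-aware scheduling
            agg += update / p      # inverse-probability reweighting
    return agg / n
```

With all probabilities set to 1 this reduces to plain averaging; with partial participation, averaging many rounds recovers the same mean, which is the unbiasedness the reweighting is meant to provide.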

Federated Learning Privacy Preserving +1

Joint Activity-Delay Detection and Channel Estimation for Asynchronous Massive Random Access

no code implementations21 May 2023 Xinyu Bian, Yuyi Mao, Jun Zhang

Most existing studies on joint activity detection and channel estimation for grant-free massive random access (RA) systems assume perfect synchronization among all active users, which is hard to achieve in practice.

Action Detection Activity Detection

Grant-free Massive Random Access with Retransmission: Receiver Optimization and Performance Analysis

no code implementations12 Apr 2023 Xinyu Bian, Yuyi Mao, Jun Zhang

Specifically, by jointly leveraging the user activity correlation between adjacent transmission blocks and the historical channel estimation results, we first develop an activity-correlation-aware receiver for grant-free massive RA systems with retransmission based on the correlated approximate message passing (AMP) algorithm.

Action Detection Activity Detection

Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design

no code implementations8 Nov 2022 Yuchang Sun, Jiawei Shao, Yuyi Mao, Songze Li, Jun Zhang

During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices.
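As a rough sketch of this compensation step (the function name and scalar stand-ins for gradients are assumptions, not the paper's scheme): updates that arrive are aggregated as usual, and each straggler's missing update is substituted by the gradient the server computed on the global coded dataset.

```python
def coded_fl_aggregate(client_updates, received, coded_grad):
    """Illustrative aggregation with straggler compensation: received
    updates are summed normally, and the server-side coded-dataset
    gradient fills in for each client whose update did not arrive."""
    n = len(client_updates)
    total = sum(u for u, ok in zip(client_updates, received) if ok)
    missing = received.count(False)   # number of straggling devices
    return (total + missing * coded_grad) / n
```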

Federated Learning Privacy Preserving

Resource-Constrained Edge AI with Early Exit Prediction

no code implementations15 Jun 2022 Rongkang Dong, Yuyi Mao, Jun Zhang

In this paper, we propose an early exit prediction mechanism to reduce the on-device computation overhead in a device-edge co-inference system supported by early-exit networks.
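A minimal sketch of the idea, under the assumption (ours, not the paper's stated design) that a cheap binary predictor decides whether the early-exit classifier is worth evaluating at all; samples predicted to offload skip the exit head entirely, which is where the on-device savings come from:

```python
def device_edge_infer(x, backbone_early, exit_head, exit_predictor,
                      edge_model, conf_threshold=0.8):
    """Illustrative device-edge co-inference with early-exit prediction.
    The lightweight predictor guesses whether the early exit will fire;
    only then is the exit head run. Otherwise the intermediate feature
    is offloaded to the edge server."""
    feat = backbone_early(x)            # shallow on-device layers
    if exit_predictor(feat):            # cheap exit/offload prediction
        probs = exit_head(feat)         # early-exit classifier
        conf = max(probs)
        if conf >= conf_threshold:
            return ("device", probs.index(conf))
    return ("edge", edge_model(feat))   # offload the feature
```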

Federated Learning with GAN-based Data Synthesis for Non-IID Clients

no code implementations11 Jun 2022 Zijian Li, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

A combination of the local private dataset and synthetic dataset with confident pseudo labels leads to nearly identical data distributions among clients, which improves the consistency among local models and benefits the global aggregation.

Federated Learning Generative Adversarial Network +1

Stochastic Coded Federated Learning with Convergence and Privacy Guarantees

no code implementations25 Jan 2022 Yuchang Sun, Jiawei Shao, Songze Li, Yuyi Mao, Jun Zhang

Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework, where many clients collaboratively train a machine learning model by exchanging model updates with a parameter server instead of sharing their raw data.

Federated Learning Privacy Preserving

Semi-Decentralized Federated Edge Learning with Data and Device Heterogeneity

no code implementations20 Dec 2021 Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data, while enjoying much lower latency compared with conventional federated learning.

Federated Learning Privacy Preserving

Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients

no code implementations9 Dec 2021 Yuchang Sun, Jiawei Shao, Yuyi Mao, Jun Zhang

Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.

Privacy Preserving

Task-Oriented Communication for Multi-Device Cooperative Edge Inference

2 code implementations1 Sep 2021 Jiawei Shao, Yuyi Mao, Jun Zhang

To enable low-latency cooperative inference, we propose a learning-based communication scheme that optimizes local feature extraction and distributed feature encoding in a task-oriented manner, i.e., to remove data redundancy and transmit information that is essential for the downstream inference task rather than reconstructing the data samples at the edge server.

Communication-Computation Efficient Device-Edge Co-Inference via AutoML

no code implementations30 Aug 2021 Xinjie Zhang, Jiawei Shao, Yuyi Mao, Jun Zhang

Device-edge co-inference, which partitions a deep neural network between a resource-constrained mobile device and an edge server, has recently emerged as a promising paradigm to support intelligent mobile applications.

AutoML Feature Compression +1

Joint Activity Detection, Channel Estimation, and Data Decoding for Grant-free Massive Random Access

no code implementations12 Jul 2021 Xinyu Bian, Yuyi Mao, Jun Zhang

In particular, the common sparsity pattern in the received pilot and data signal has been ignored in most existing studies, and auxiliary information of channel decoding has not been utilized for user activity detection.

Action Detection Activity Detection

Joint Activity Detection and Data Decoding in Massive Random Access via a Turbo Receiver

no code implementations26 Apr 2021 Xinyu Bian, Yuyi Mao, Jun Zhang

In this paper, we propose a turbo receiver for joint activity detection and data decoding in grant-free massive random access, which iterates between a detector and a belief propagation (BP)-based channel decoder.
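The iterative structure described here can be sketched as a simple loop; the function names and soft-information interfaces are our assumptions for illustration, not the paper's receiver. The detector and BP decoder repeatedly exchange soft beliefs, each refining the other's output:

```python
def turbo_receive(rx, detector, decoder, num_iters=5):
    """Illustrative turbo-receiver loop: the activity detector and the
    belief-propagation channel decoder exchange soft information over
    several iterations, each using the other's latest output as prior."""
    prior = None
    for _ in range(num_iters):
        llrs = detector(rx, prior)   # detection with decoder feedback
        prior = decoder(llrs)        # BP decoding refines bit beliefs
    return prior
```

With toy detector/decoder functions that contract toward a fixed point, the loop converges, which is the behavior turbo iteration relies on.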

Action Detection Activity Detection

Semi-Decentralized Federated Edge Learning for Fast Convergence on Non-IID Data

no code implementations26 Apr 2021 Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang

Federated edge learning (FEEL) has emerged as an effective approach to reduce the large communication latency in Cloud-based machine learning solutions, while preserving data privacy.

Federated Learning

Supporting More Active Users for Massive Access via Data-assisted Activity Detection

no code implementations17 Feb 2021 Xinyu Bian, Yuyi Mao, Jun Zhang

Massive machine-type communication (mMTC) has been regarded as one of the most important use scenarios in the fifth generation (5G) and beyond wireless networks, which demands scalable access for a large number of devices.

Action Detection Activity Detection

Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach

1 code implementation8 Feb 2021 Jiawei Shao, Yuyi Mao, Jun Zhang

Extensive experiments evidence that the proposed task-oriented communication system achieves a better rate-distortion tradeoff than baseline methods and significantly reduces the feature transmission latency in dynamic channel conditions.

Informativeness

Branchy-GNN: a Device-Edge Co-Inference Framework for Efficient Point Cloud Processing

1 code implementation27 Oct 2020 Jiawei Shao, Haowei Zhang, Yuyi Mao, Jun Zhang

The recent advancements of three-dimensional (3D) data acquisition devices have spurred a new breed of applications that rely on point cloud data processing.

Distributed, Parallel, and Cluster Computing

Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices

no code implementations18 May 2016 Yuyi Mao, Jun Zhang, Khaled B. Letaief

Sample simulation results are presented to verify the theoretical analysis and validate the effectiveness of the proposed algorithm.

Information Theory
