Search Results for author: Liang Liang

Found 13 papers, 8 papers with code

Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection

no code implementations · 2 Jun 2022 · Linhai Ma, Liang Liang

It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks, and that the adversarial robustness of DNNs can be improved by adding adversarial noise to the training data, e.g., via standard adversarial training (SAT).

Tasks: Adversarial Robustness, Image Segmentation, +4 more
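To illustrate the standard adversarial training (SAT) idea the abstract refers to, here is a minimal numpy sketch on a toy logistic-regression model. The model, the one-step FGSM perturbation, and all hyperparameters are illustrative assumptions, not the paper's adaptive method.

```python
import numpy as np

def fgsm_perturb(x, y, w, eps):
    """One-step FGSM: move x by eps in the sign of the loss gradient.

    Toy logistic model: p = sigmoid(w @ x), label y in {0, 1}.
    The cross-entropy gradient w.r.t. the input is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-w @ x))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.5, epochs=200):
    """Standard adversarial training: fit the model on perturbed inputs."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = fgsm_perturb(x, y, w, eps)   # craft adversarial sample
            p = 1.0 / (1.0 + np.exp(-w @ x_adv))
            w -= lr * (p - y) * x_adv            # SGD step on the adversarial sample
    return w
```

The paper's contribution is making the noise level adaptive per sample rather than fixing a single `eps` as this sketch does.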

Optimal Distribution Design for Irregular Repetition Slotted ALOHA with Multi-Packet Reception

no code implementations · 15 Oct 2021 · Zhengchuan Chen, Yifan Feng, Chundie Feng, Liang Liang, Yunjian Jia, Tony Q. S. Quek

Combined with multi-packet reception at the access point, irregular repetition slotted ALOHA (IRSA) holds great potential for improving the access capacity of massive machine-type communication systems.
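The IRSA mechanism summarized above can be sketched as a small frame simulation: users draw a repetition degree from a distribution, transmit replicas in random slots, and the receiver peels collisions via successive interference cancellation, with multi-packet reception (MPR) letting it decode slots holding up to `mpr` packets. This is a generic IRSA sketch under simplified assumptions, not the paper's optimized distribution design.

```python
import random

def irsa_round(n_users, n_slots, degree_dist, mpr=1, rng=None):
    """Simulate one IRSA frame with MPR capability `mpr`.

    degree_dist: list of (degree, probability) pairs; each user draws a
    degree d and transmits d replicas in d distinct random slots.
    Decoding: any slot with at most `mpr` undecoded packets is resolved;
    replicas of resolved users are cancelled (SIC) and the process repeats
    until no further progress. Returns the number of resolved users.
    """
    rng = rng or random.Random()
    degrees, probs = zip(*degree_dist)
    slots_of = []
    for _ in range(n_users):
        d = rng.choices(degrees, weights=probs)[0]
        slots_of.append(set(rng.sample(range(n_slots), d)))
    resolved = set()
    progress = True
    while progress:
        progress = False
        load = [0] * n_slots                     # undecoded packets per slot
        for uid in range(n_users):
            if uid not in resolved:
                for s in slots_of[uid]:
                    load[s] += 1
        for uid in range(n_users):
            if uid in resolved:
                continue
            if any(load[s] <= mpr for s in slots_of[uid]):
                resolved.add(uid)                # decodable in some slot
                progress = True
    return len(resolved)
```

The paper's question is which `degree_dist` maximizes throughput for a given MPR capability; this sketch only evaluates a fixed distribution.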

Flexible Clustered Federated Learning for Client-Level Data Distribution Shift

1 code implementation · 22 Aug 2021 · Moming Duan, Duo Liu, Xinyuan Ji, Yu Wu, Liang Liang, Xianzhang Chen, Yujuan Tan

Federated Learning (FL) enables multiple participating devices to collaboratively contribute to a global neural network model while keeping their training data local.

Tasks: Federated Learning
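The basic FL loop described above can be sketched as federated averaging on a toy least-squares problem: each client runs a few local SGD steps, and the server averages the returned models weighted by local dataset size. This is a generic FedAvg-style sketch, not the paper's clustered method; all names and hyperparameters are illustrative.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local least-squares gradient descent on one client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=50, dim=2):
    """Server loop: broadcast the global model, collect locally trained
    models, and average them weighted by each client's dataset size."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_step(w_global, X, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0,
                              weights=np.array(sizes, dtype=float))
    return w_global
```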

Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images

2 code implementations · 4 Feb 2021 · Jiasong Chen, Linchen Qian, Timur Urakov, Weiyong Gu, Liang Liang

We utilized a PGD-based algorithm for IND adversarial attacks and extended it to OOD adversarial attacks, generating OOD adversarial samples for model testing.

Tasks: Adversarial Robustness, Data Augmentation
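The PGD attack mentioned in the abstract can be sketched in a few lines on a toy logistic model: repeatedly ascend the loss gradient with respect to the input, projecting back into an L-infinity ball around the clean sample. The model and all parameters below are illustrative assumptions, not the paper's segmentation setting.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, eps=0.3, alpha=0.05, steps=20):
    """PGD attack on a logistic model p = sigmoid(w @ x).

    Each step moves x_adv by alpha in the sign of the loss gradient,
    then clips back into the eps-ball around the clean input x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv)
        grad = (p - y) * w                       # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)    # gradient ascent on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps) # project into the eps-ball
    return x_adv
```

The paper's OOD extension changes how the starting samples are chosen; the inner projected-ascent loop is the same PGD machinery as above.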

CQ-VAE: Coordinate Quantized VAE for Uncertainty Estimation with Application to Disk Shape Analysis from Lumbar Spine MRI Images

no code implementations · 17 Oct 2020 · Linchen Qian, Jiasong Chen, Timur Urakov, Weiyong Gu, Liang Liang

In this paper, we propose a powerful generative model to learn a representation of ambiguity and to generate probabilistic outputs.


FedGroup: Efficient Clustered Federated Learning via Decomposed Data-Driven Measure

2 code implementations · 14 Oct 2020 · Moming Duan, Duo Liu, Xinyuan Ji, Renping Liu, Liang Liang, Xianzhang Chen, Yujuan Tan

In this paper, we propose FedGroup, a novel clustered federated learning (CFL) framework in which we 1) group the training of clients based on the similarities between the clients' optimization directions, for high training performance, and 2) construct a new data-driven distance measure to improve the efficiency of the client clustering procedure.

Tasks: Federated Learning
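The grouping idea in point 1) above can be sketched as greedy clustering of clients by the cosine similarity of their model-update directions. This is a deliberately simplified stand-in: FedGroup's actual contribution is a decomposed data-driven distance measure, which this sketch does not implement; the threshold and function names are illustrative.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def group_clients(updates, threshold=0.5):
    """Greedily group clients whose update directions are similar.

    Each client joins the first group whose representative update has
    cosine similarity >= threshold with its own; otherwise it starts
    a new group. Returns the groups as lists of client indices.
    """
    groups = []  # list of (representative_update, member_ids)
    for cid, u in enumerate(updates):
        for rep, members in groups:
            if cosine(u, rep) >= threshold:
                members.append(cid)
                break
        else:
            groups.append((u, [cid]))
    return [members for _, members in groups]
```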

An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder

1 code implementation · 17 Sep 2020 · Liang Liang, Linhai Ma, Linchen Qian, Jiasong Chen

Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks.

Tasks: Dimensionality Reduction, General Classification, +1 more

Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length

1 code implementation · 8 Aug 2020 · Linhai Ma, Liang Liang

Thus, it is challenging yet essential to improve the robustness of DNNs against adversarial noise for ECG signal classification, a life-critical application.

Tasks: Classification, ECG Classification, +1 more

Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks

no code implementations · 19 May 2020 · Linhai Ma, Liang Liang

However, adversarial training samples with excessive noise can harm standard accuracy, which may be unacceptable for many medical image analysis applications.

Tasks: Adversarial Robustness, General Classification, +4 more

Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective

2 code implementations · 18 May 2020 · Linhai Ma, Liang Liang

However, despite their excellent classification accuracy, DNNs have been shown to be highly vulnerable to adversarial attacks: subtle changes in the input of a DNN can lead to a wrong classification output with high confidence.

Tasks: Adversarial Attack, Adversarial Robustness, +2 more

Astraea: Self-balancing Federated Learning for Improving Classification Accuracy of Mobile Deep Learning Applications

1 code implementation · 2 Jul 2019 · Moming Duan, Duo Liu, Xianzhang Chen, Yujuan Tan, Jinting Ren, Lei Qiao, Liang Liang

However, unlike common training datasets, the data distribution of an edge computing system is imbalanced, which introduces bias into model training and decreases the accuracy of federated learning applications.

Tasks: Data Augmentation, Edge-computing, +2 more
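The rebalancing idea behind the abstract above can be sketched as oversampling minority classes until every class matches the majority class count. Astraea's actual mediator combines augmentation and client rescheduling; this sketch only illustrates the label-rebalancing step, and all names are illustrative.

```python
import random
from collections import Counter

def rebalance(samples, labels, rng=None):
    """Oversample minority classes toward a uniform label distribution.

    Groups samples by label, then duplicates randomly chosen samples in
    each minority class until every class reaches the majority count.
    """
    rng = rng or random.Random(0)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y
```

In a real pipeline the duplicated samples would also be augmented (rotation, noise, etc.) rather than copied verbatim.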
