Search Results for author: Linhai Ma

Found 8 papers, 7 papers with code

SymTC: A Symbiotic Transformer-CNN Net for Instance Segmentation of Lumbar Spine MRI

1 code implementation • 17 Jan 2024 • Jiasong Chen, Linchen Qian, Linhai Ma, Timur Urakov, Weiyong Gu, Liang Liang

In this work, we propose SymTC, an innovative lumbar spine MR image segmentation model that combines the strengths of the Transformer and the Convolutional Neural Network (CNN). A minimal sketch of such a hybrid block is given below.

Data Augmentation • Image Segmentation • +3
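
Below is a minimal, hypothetical PyTorch sketch of a parallel Transformer-CNN fusion block. It is not the SymTC architecture from the paper; the `HybridBlock` class, its layer sizes, and the 1x1-convolution fusion are illustrative assumptions that only show how a convolutional path and a self-attention path can be combined on one feature map.

```python
# Hypothetical sketch of a parallel Transformer-CNN fusion block.
# NOT the SymTC architecture; it only illustrates combining the
# local (convolutional) and global (self-attention) feature paths.
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # CNN path: local texture and boundary features.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Transformer path: long-range context via self-attention.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # 1x1 convolution fuses the two paths back into one feature map.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv(x)
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        ctx, _ = self.attn(tokens, tokens, tokens)
        ctx = self.norm(ctx + tokens)                  # residual + norm
        ctx = ctx.transpose(1, 2).reshape(b, c, h, w)  # back to a feature map
        return self.fuse(torch.cat([local, ctx], dim=1))

# Example: fuse a 64-channel map (channels must be divisible by num_heads).
y = HybridBlock(64)(torch.randn(2, 64, 32, 32))  # -> (2, 64, 32, 32)
```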

Adaptive Adversarial Training to Improve Adversarial Robustness of DNNs for Medical Image Segmentation and Detection

no code implementations • 2 Jun 2022 • Linhai Ma, Liang Liang

It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks, and their adversarial robustness can be improved by adding adversarial noise to the training data, e.g., via standard adversarial training (SAT). A simplified sketch of such a training loop is given below.

Adversarial Robustness • Image Segmentation • +4
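
As a point of reference for the SAT baseline the abstract mentions, here is a minimal sketch of one standard adversarial training epoch with an L-infinity PGD attack. The `pgd_attack` helper, its step sizes, and the training loop are generic placeholders, not the adaptive method proposed in this paper.

```python
# Minimal sketch of standard adversarial training (SAT): every batch
# is perturbed by PGD before the usual gradient step. Model, loader,
# and hyper-parameters are placeholders, not the paper's setup.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Projected gradient-ascent steps inside the L-inf eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back
    return x_adv.detach()

def sat_epoch(model, loader, optimizer, device="cpu"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)   # adversarial noise, on the fly
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```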

An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder

1 code implementation • 17 Sep 2020 • Liang Liang, Linhai Ma, Linchen Qian, Jiasong Chen

Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks.

Dimensionality Reduction • General Classification • +1

Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length

1 code implementation • 8 Aug 2020 • Linhai Ma, Liang Liang

Thus, it is both challenging and essential to improve the robustness of DNNs against adversarial noise in ECG signal classification, a life-critical application. A small sketch of a variable-length ECG classifier under noise follows below.

Classification • ECG Classification • +1
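
The sketch below illustrates, under stated assumptions, how a 1-D CNN with global pooling can classify 12-lead ECG records of variable length, and how additive noise can flip its prediction. `ECGNet`, the 9-class head, and the noise level are hypothetical choices, not the paper's model or evaluation protocol.

```python
# Hypothetical sketch: a tiny 1-D CNN that accepts variable-length
# 12-lead ECG records (global pooling removes the length dependence),
# evaluated on the same record with and without additive noise.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, num_classes: int = 9):  # 9 is a placeholder count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(12, 32, kernel_size=15, padding=7),  # 12 leads in
            nn.ReLU(inplace=True),
            nn.Conv1d(32, 64, kernel_size=15, padding=7),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)  # collapses any length to 1
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.pool(self.features(x)).squeeze(-1)
        return self.head(z)

model = ECGNet()
ecg = torch.randn(1, 12, 5000)              # e.g., ~10 s record at 500 Hz
noisy = ecg + 0.05 * torch.randn_like(ecg)  # small additive noise
clean_pred = model(ecg).argmax(dim=1)
noisy_pred = model(noisy).argmax(dim=1)
print(clean_pred.item(), noisy_pred.item())  # predictions may disagree
```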

Increasing-Margin Adversarial (IMA) Training to Improve Adversarial Robustness of Neural Networks

1 code implementation • 19 May 2020 • Linhai Ma, Liang Liang

However, adversarial training samples with excessive noise can harm standard accuracy, which may be unacceptable for many medical image analysis applications. A simplified sketch of the increasing-margin idea follows below.

Adversarial Robustness • General Classification • +4
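
Here is a deliberately simplified sketch of the increasing-margin idea: each sample keeps its own noise bound, which grows only while the model still classifies the noisy sample correctly, so the noise never becomes excessive for that sample. The `ima_step` function and its update rule are assumptions for illustration, not the exact IMA algorithm from the paper.

```python
# Hedged sketch of per-sample increasing-margin adversarial training.
# NOT the exact IMA algorithm: the margin update rule is simplified.
import torch
import torch.nn.functional as F

def ima_step(model, x, y, eps, eps_growth=0.001, eps_max=0.1):
    """One training step with per-sample noise bounds `eps` (shape: [B]).

    The caller initializes eps near zero and carries it across epochs,
    indexed per training sample; x is assumed to be image-like (B,C,H,W).
    """
    # One-step gradient attack, bounded per sample by its current margin.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    delta = eps.view(-1, 1, 1, 1) * grad.sign()
    x_adv = (x + delta).detach()

    # Grow the margin only for samples still classified correctly under
    # noise; excessive noise would start to harm standard accuracy.
    with torch.no_grad():
        still_correct = model(x_adv).argmax(dim=1).eq(y)
    eps = torch.where(still_correct, (eps + eps_growth).clamp(max=eps_max), eps)
    return x_adv, eps  # train on x_adv; keep eps for the next epoch
```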

Improve robustness of DNN for ECG signal classification: a noise-to-signal ratio perspective

2 code implementations • 18 May 2020 • Linhai Ma, Liang Liang

However, despite their excellent classification accuracy, DNNs have been shown to be highly vulnerable to adversarial attacks: subtle changes to a DNN's input can lead to a wrong classification output with high confidence. A worked noise-to-signal ratio example follows below.

Adversarial Attack • Adversarial Robustness • +2
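
One simple reading of the noise-to-signal ratio perspective is that a fixed-size perturbation is relatively larger, and therefore more damaging, on a weak signal than on a strong one. The short example below computes an L2 noise-to-signal ratio for the same perturbation applied to a strong and a weak signal; the `nsr` helper and the numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Worked example of a noise-to-signal ratio (NSR): the same L2-norm
# perturbation is 10x larger, relative to signal power, on a signal
# with 10x smaller amplitude. Values are illustrative only.
import torch

def nsr(signal: torch.Tensor, noise: torch.Tensor) -> float:
    """L2 noise-to-signal ratio of a perturbation."""
    return (noise.norm(p=2) / signal.norm(p=2)).item()

strong = torch.randn(12, 5000)         # high-amplitude 12-lead record
weak = 0.1 * strong                    # same shape, 10x lower amplitude
delta = 0.05 * torch.randn(12, 5000)   # one fixed perturbation

print(f"NSR on strong signal: {nsr(strong, delta):.3f}")
print(f"NSR on weak signal:   {nsr(weak, delta):.3f}")  # 10x larger
```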
