1 code implementation • 30 Mar 2024 • Linchen Qian, Jiasong Chen, Linhai Ma, Timur Urakov, Weiyong Gu, Liang Liang
The deformed template reveals the lumbar spine geometry in an image.
1 code implementation • 17 Jan 2024 • Jiasong Chen, Linchen Qian, Linhai Ma, Timur Urakov, Weiyong Gu, Liang Liang
In this work, we propose SymTC, an innovative lumbar spine MR image segmentation model that combines the strengths of the Transformer and the Convolutional Neural Network (CNN).
no code implementations • 2 Jun 2022 • Linhai Ma, Liang Liang
It is known that Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, and the adversarial robustness of DNNs can be improved by adding adversarial noise to the training data (e.g., standard adversarial training (SAT)).
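The idea behind SAT can be sketched in a few lines: at each training step, perturb the inputs in the direction that increases the loss, then update the model on those perturbed inputs. The sketch below uses a toy logistic-regression model, synthetic data, and an FGSM-style one-step inner attack; all sizes and hyperparameters are illustrative assumptions, not the setup from the paper.

```python
import numpy as np

# Toy, linearly separable data (illustrative, not from the paper).
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

w = np.zeros(5)          # logistic-regression weights
eps, lr = 0.1, 0.5       # perturbation budget and learning rate (assumed)

def grad_w(w, X, y):
    """Gradient of the logistic loss w.r.t. the weights."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

for _ in range(200):
    # Inner step: FGSM-style perturbation of each input toward higher loss.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad_x = np.outer(p - y, w)          # dLoss/dx for each sample
    X_adv = X + eps * np.sign(grad_x)
    # Outer step: standard gradient update, but on the adversarial batch.
    w -= lr * grad_w(w, X_adv, y)

acc = np.mean(((X @ w) > 0) == (y > 0.5))  # clean accuracy after SAT
```

The two nested steps (attack, then train on the attacked batch) are the core of SAT; real implementations use stronger multi-step attacks such as PGD inside the same loop.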
1 code implementation • 19 Oct 2021 • Linhai Ma, Liang Liang
Electrocardiogram (ECG) is the most widely used diagnostic tool to monitor the condition of the human heart.
1 code implementation • 17 Sep 2020 • Liang Liang, Linhai Ma, Linchen Qian, Jiasong Chen
Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks.
1 code implementation • 8 Aug 2020 • Linhai Ma, Liang Liang
Thus, it is challenging and essential to improve the robustness of DNNs against adversarial noise for ECG signal classification, a life-critical application.
1 code implementation • 19 May 2020 • Linhai Ma, Liang Liang
However, adversarial training samples with excessive noise can harm standard accuracy, which may be unacceptable for many medical image analysis applications.
2 code implementations • 18 May 2020 • Linhai Ma, Liang Liang
However, despite their excellent classification accuracy, DNNs have been shown to be highly vulnerable to adversarial attacks: subtle changes to the input of a DNN can lead to a wrong classification output with high confidence.
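Such an attack can be illustrated with a one-step FGSM-style perturbation: move the input a small step in the sign of the loss gradient with respect to the input. The sketch below uses a toy numpy linear classifier; the weights, input size, and budget `eps` are illustrative assumptions, not the models from these papers.

```python
import numpy as np

# Toy linear "classifier": logits = W @ x (stand-in for a trained DNN).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))      # 3 classes, 8 input features (assumed)
x = rng.normal(size=8)           # a clean input

def predict(W, x):
    return int(np.argmax(W @ x))

def loss_grad_wrt_input(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()                 # softmax probabilities
    p[y] -= 1.0                  # dLoss/dlogits for cross-entropy
    return W.T @ p               # chain rule back to the input

y = predict(W, x)                # attack the model's current prediction
eps = 0.5                        # perturbation budget (illustrative)
x_adv = x + eps * np.sign(loss_grad_wrt_input(W, x, y))  # FGSM-style step
```

Each element of `x_adv` differs from `x` by at most `eps`, so the change can be visually or clinically imperceptible while still pushing the model toward a different output.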