no code implementations • 19 May 2023 • Huaqing He, Li Lin, Zhiyuan Cai, Pujin Cheng, Xiaoying Tang
To address these issues, we propose a prior guided multi-task transformer framework for joint OD/OC segmentation and fovea detection, named JOINEDTrans.
1 code implementation • 13 Apr 2023 • Yuanyuan Wei, Roger Tam, Xiaoying Tang
Recent applications of deep convolutional neural networks in medical imaging raise concerns about their interpretability.
1 code implementation • 12 Apr 2023 • Li Lin, Jiewei Wu, Yixiang Liu, Kenneth K. Y. Wong, Xiaoying Tang
The statistical heterogeneity (e.g., non-IID data and domain shifts) is a primary obstacle in FL, impairing the generalization performance of the global model.
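Not from the paper itself: a minimal numpy sketch of the standard way non-IID client data is simulated in FL experiments, using a Dirichlet prior over per-class label proportions (the function name `dirichlet_partition` and the `alpha` parameter are illustrative; smaller `alpha` yields more heterogeneous splits).

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_partition(labels, n_clients, alpha=0.5, rng=rng):
    """Split sample indices across clients with a Dirichlet prior over
    label proportions; smaller alpha -> more heterogeneous (non-IID) splits."""
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw this class's share for each client, then cut the index list.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

labels = np.repeat(np.arange(3), 100)  # 3 classes, 100 samples each
parts = dirichlet_partition(labels, n_clients=5, alpha=0.3)
```

Every sample is assigned to exactly one client, but the class mixture on each client is skewed, which is the heterogeneity the global model must cope with.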
1 code implementation • 8 Mar 2023 • Pujin Cheng, Li Lin, Yijin Huang, Huaqing He, Wenhan Luo, Xiaoying Tang
In this paper, we introduce a novel diffusion model based framework, named Learning Enhancement from Degradation (LED), for enhancing fundus images.
no code implementations • 29 Jan 2023 • Lin Wang, Zhichao Wang, Xiaoying Tang
Ensuring fairness is a crucial aspect of Federated Learning (FL), which enables the model to perform consistently across all clients.
no code implementations • 29 Jan 2023 • Yongxin Guo, Xiaoying Tang, Tao Lin
In this paper, we identify the learning challenges posed by the simultaneous occurrence of diverse distribution shifts and propose a clustering principle to overcome these challenges.
no code implementations • 26 Dec 2022 • Mowen Yin, Weikai Huang, Zhichao Liang, Quanying Liu, Xiaoying Tang
Our work supports that cortical morphological connectivity, which is constructed based on correlations across subjects' cortical thickness, may serve as a tool to study topological abnormalities in neurological disorders.
no code implementations • 20 Dec 2022 • Juntao Chen, Li Lin, Pujin Cheng, Yijin Huang, Xiaoying Tang
Medical image quality assessment (MIQA) is a vital prerequisite in various medical image analysis applications.
no code implementations • 11 Dec 2022 • Li Lin, Linkai Peng, Huaqing He, Pujin Cheng, Jiewei Wu, Kenneth K. Y. Wong, Xiaoying Tang
With only one noisy skeleton annotation (respectively 0.14%, 0.03%, 1.40%, and 0.65% of the full annotation), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset.
1 code implementation • 20 Oct 2022 • Yijin Huang, Junyan Lyu, Pujin Cheng, Roger Tam, Xiaoying Tang
Specifically, two saliency-guided learning tasks are employed in SSiT: (1) We conduct saliency-guided contrastive learning based on the momentum contrast, wherein we utilize fundus images' saliency maps to remove trivial patches from the input sequences of the momentum-updated key encoder.
no code implementations • 28 Jul 2022 • Xi Leng, Xiaoying Tang, Yatao Bian
Machine learning algorithms minimizing the average training loss usually suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distributional shifts.
1 code implementation • 27 Jul 2022 • Junyan Lyu, Yiqi Zhang, Yijin Huang, Li Lin, Pujin Cheng, Xiaoying Tang
To address this issue, we propose a data manipulation based domain generalization method, called Automated Augmentation for Domain Generalization (AADG).
no code implementations • 27 May 2022 • Lin Wang, Yongxin Guo, Tao Lin, Xiaoying Tang
However, an inadequate client sampling scheme can lead to the selection of unrepresentative subsets, resulting in significant variance in model updates and slowed convergence.
no code implementations • 26 May 2022 • Yongxin Guo, Xiaoying Tang, Tao Lin
As a remedy, we propose FedDebias, a novel unified algorithm that reduces the local learning bias on features and classifiers to tackle these challenges.
no code implementations • 14 Mar 2022 • Ziqi Huang, Li Lin, Pujin Cheng, Kai Pan, Xiaoying Tang
Furthermore, with only 5% paired data, the proposed DS3-Net achieves performance competitive with state-of-the-art image translation methods utilizing 100% paired data, delivering an average SSIM of 0.8947 and an average PSNR of 23.60.
no code implementations • 12 Mar 2022 • Weikai Huang, Yijin Huang, Xiaoying Tang
Then, MixUp is adopted to paste patches from the lesion bank at random positions in normal images to synthesize anomalous samples for training.
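Not the authors' code: a minimal numpy sketch of the patch-pasting idea described above, blending a lesion patch into a normal image at a random position with a MixUp coefficient (the function name `synthesize_anomaly` and the `lam` parameter are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_anomaly(normal_img, lesion_patch, lam=0.7, rng=rng):
    """Paste a lesion patch onto a normal image at a random position,
    blending the pasted region with a MixUp coefficient `lam`.
    Returns the synthesized image and a binary mask of the pasted region."""
    img = normal_img.astype(np.float32).copy()
    ph, pw = lesion_patch.shape[:2]
    H, W = img.shape[:2]
    y = int(rng.integers(0, H - ph + 1))
    x = int(rng.integers(0, W - pw + 1))
    region = img[y:y + ph, x:x + pw]
    img[y:y + ph, x:x + pw] = lam * lesion_patch + (1 - lam) * region
    mask = np.zeros((H, W), dtype=np.uint8)
    mask[y:y + ph, x:x + pw] = 1
    return img, mask

normal = np.zeros((64, 64), dtype=np.float32)
lesion = np.ones((8, 8), dtype=np.float32)
anomalous, mask = synthesize_anomaly(normal, lesion, lam=0.7)
```

The returned mask doubles as a pixel-level label, so the synthesized pairs can directly supervise an anomaly segmenter.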
no code implementations • 9 Mar 2022 • Zhiyuan Cai, Li Lin, Huaqing He, Xiaoying Tang
We employ a Unified Patch Embedding module to replace the original patch embedding module in ViT for jointly processing both 2D and 3D input images.
no code implementations • 9 Mar 2022 • Ziqi Huang, Li Lin, Pujin Cheng, Linkai Peng, Xiaoying Tang
As such, it is clinically meaningful to develop a method to synthesize unavailable modalities, which can also serve as additional inputs to downstream tasks (e.g., brain tumor segmentation) for performance enhancement.
1 code implementation • 7 Mar 2022 • Linkai Peng, Li Lin, Pujin Cheng, Huaqing He, Xiaoying Tang
Afterwards, knowledge distillation is performed to iteratively distill different domain knowledge from teachers to a generic student.
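Not the paper's implementation: a minimal numpy sketch of the standard temperature-scaled knowledge-distillation loss (Hinton-style soft targets), averaged over several domain-specific teachers as the snippet above describes; the logits and temperature `T=4.0` are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in standard knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Two hypothetical domain-specific teachers and one generic student.
teachers = [np.array([[2.0, 0.5, -1.0]]), np.array([[1.5, 1.0, -0.5]])]
student = np.array([[1.8, 0.7, -0.8]])
loss = np.mean([distillation_loss(student, t) for t in teachers])
```

Minimizing the averaged loss pulls the student's softened predictions toward each teacher's, which is the mechanism behind distilling multiple domains into one generic model.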
1 code implementation • 1 Mar 2022 • Huaqing He, Li Lin, Zhiyuan Cai, Xiaoying Tang
At the coarse stage, we obtain the OD/OC coarse segmentation and the heatmap localization of fovea through a joint segmentation and detection module.
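Not from the paper: a minimal sketch of how a localization heatmap (such as the fovea heatmap mentioned above) is typically decoded to coordinates by taking the peak response; the helper name `heatmap_to_coord` is an assumption.

```python
import numpy as np

def heatmap_to_coord(heatmap):
    """Decode a 2D localization heatmap to the (row, col) of its peak."""
    idx = np.argmax(heatmap)
    return np.unravel_index(idx, heatmap.shape)

hm = np.zeros((32, 32), dtype=np.float32)
hm[10, 20] = 1.0  # synthetic peak standing in for a predicted fovea response
coord = heatmap_to_coord(hm)
```

In a coarse-to-fine pipeline, the coarse coordinate obtained this way defines the crop that the fine stage refines.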
no code implementations • 28 Feb 2022 • Xiaoying Tang, Chenxi Sun, Suzhi Bi, Shuoyao Wang, Angela Yingjun Zhang
The rapid growth of electric vehicles (EVs) promises a next-generation transportation system with reduced carbon emissions.
no code implementations • 14 Feb 2022 • Junde Wu, Huihui Fang, Fei Li, Huazhu Fu, Fengbin Lin, Jiongcheng Li, Lexing Huang, Qinji Yu, Sifan Song, Xinxing Xu, Yanyu Xu, Wensai Wang, Lingxiao Wang, Shuai Lu, Huiqi Li, Shihua Huang, Zhichao Lu, Chubin Ou, Xifei Wei, Bingyuan Liu, Riadh Kobbi, Xiaoying Tang, Li Lin, Qiang Zhou, Qiang Hu, Hrvoje Bogunovic, José Ignacio Orlando, Xiulan Zhang, Yanwu Xu
However, although numerous algorithms have been proposed for computer-aided diagnosis based on fundus images or OCT volumes, few methods leverage both modalities for glaucoma assessment.
1 code implementation • 13 Jan 2022 • Linkai Peng, Li Lin, Pujin Cheng, Ziqi Huang, Xiaoying Tang
The two models use labeled data (together with the corresponding transferred images) for supervised learning and perform collaborative consistency learning on unlabeled data.
1 code implementation • 11 Jan 2022 • Zhiyuan Cai, Li Lin, Huaqing He, Xiaoying Tang
In this paper, we propose an efficient multi-modality supervised contrastive learning framework, named COROLLA, for glaucoma grading.
no code implementations • 25 Dec 2021 • Yongxin Guo, Tao Lin, Xiaoying Tang
Federated Learning (FL) is a learning paradigm that protects privacy by keeping client data on edge devices.
2 code implementations • 27 Oct 2021 • Yijin Huang, Li Lin, Pujin Cheng, Junyan Lyu, Roger Tam, Xiaoying Tang
To identify the key components in a standard deep learning framework (ResNet-50) for DR grading, we systematically analyze the impact of several major components.
1 code implementation • 29 Sep 2021 • Huilin Yang, Junyan Lyu, Pujin Cheng, Roger Tam, Xiaoying Tang
We innovatively propose a flexible and consistent cross-annotation face alignment framework, LDDMM-Face, the key contribution of which is a deformation layer that naturally embeds facial geometry in a diffeomorphic way.
no code implementations • 14 Sep 2021 • Pinyuan Zhong, Yue Zhang, Xiaoying Tang
The hippocampal surface was then generated from the mean shape and the shape variation parameters.
no code implementations • 2 Aug 2021 • Huilin Yang, Junyan Lyu, Pujin Cheng, Xiaoying Tang
Instead of predicting facial landmarks via heatmap or coordinate regression, we formulate this task as diffeomorphic registration: we predict momenta that uniquely parameterize the deformation between the initial boundary and the true boundary, then perform large deformation diffeomorphic metric mapping (LDDMM) jointly on curves and landmarks to localize the facial landmarks.
2 code implementations • 17 Jul 2021 • Yijin Huang, Li Lin, Pujin Cheng, Junyan Lyu, Xiaoying Tang
Instead of taking entire images as the input in the common contrastive learning scheme, lesion patches are employed to encourage the feature extractor to learn representations that are highly discriminative for DR grading.
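Not the authors' code: a minimal numpy sketch of the contrastive objective typically used in such schemes, an InfoNCE loss over embeddings of lesion patches where each anchor's matching augmented view is the positive and the other rows act as negatives (the embeddings here are random stand-ins; `temperature` is illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; every other row serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

patch_a = rng.normal(size=(8, 16))                    # lesion-patch embeddings
patch_b = patch_a + 0.01 * rng.normal(size=(8, 16))   # augmented views
loss = info_nce(patch_a, patch_b)
```

Because only lesion patches (rather than whole fundus images) are embedded, the negatives force the encoder to separate lesion appearances, which is what makes the features discriminative for DR grading.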
1 code implementation • 10 Jul 2021 • Li Lin, Zhonghua Wang, Jiewei Wu, Yijin Huang, Junyan Lyu, Pujin Cheng, Jiong Wu, Xiaoying Tang
Moreover, both low-level and high-level features from the aforementioned three branches, including shape, size, boundary, and signed directional distance map of FAZ, are fused hierarchically with features from the diagnostic classifier.
1 code implementation • 26 Aug 2020 • Pengyi Zhang, Yunxin Zhong, Yulin Deng, Xiaoying Tang, Xiaoqiong Li
The infection-aware DRR generator is able to produce DRRs with adjustable strength of radiological signs of COVID-19 infection, and generate pixel-level infection annotations that match the DRRs precisely.
1 code implementation • arXiv:2006.12220, 2020 • Pengyi Zhang, Yunxin Zhong, Xiaoying Tang, Yulin Deng, Xiaoqiong Li
To address this problem, we explore the feasibility of learning deep models for COVID-19 diagnosis from a single radiological image by resorting to synthesizing diverse radiological images.
1 code implementation • 1 Aug 2019 • Pengyi Zhang, Yunxin Zhong, Yulin Deng, Xiaoying Tang, Xiaoqiong Li
In order to accelerate the clinical usage of biomedical image analysis based on deep learning techniques, we intentionally expand this survey to include the explanation methods for deep models that are important to clinical decision making.
no code implementations • 5 Jan 2019 • Jiong Wu, Xiaoying Tang
To address this limitation, we trained a 3D FCN model for each ROI using patches of adaptive size and embedded outputs of the convolutional layers in the deconvolutional layers to further capture the local and global context patterns.
no code implementations • 21 Jul 2018 • Yue Zhang, Wanli Chen, Yi-fan Chen, Xiaoying Tang
Random initialization is usually used to initialize the model weights in the U-net.
no code implementations • 12 Jul 2018 • Wanli Chen, Yue Zhang, Junjun He, Yu Qiao, Yi-fan Chen, Hongjian Shi, Xiaoying Tang
To address the aforementioned three problems, we propose and validate a deeper network that can fit medical image datasets that are usually small in sample size.