no code implementations • 27 Aug 2024 • Sirui Li, Li Lin, Yijin Huang, Pujin Cheng, Xiaoying Tang
In medical contexts, the imbalanced data distribution in long-tailed datasets, due to scarce labels for rare diseases, greatly impairs the diagnostic accuracy of deep learning models.
1 code implementation • 5 Aug 2024 • Yijin Huang, Pujin Cheng, Roger Tam, Xiaoying Tang
In this paper, we introduce Fine-grained Prompt Tuning plus (FPT+), a PETL method designed for high-resolution medical image classification, which significantly reduces memory consumption compared to other PETL methods.
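For readers unfamiliar with parameter-efficient transfer learning (PETL), the generic visual prompt tuning idea underlying methods like FPT+ can be sketched as follows: the pre-trained backbone stays frozen and only a handful of prompt tokens plus a small head are trained. This is a minimal PyTorch sketch with assumed module names and dimensions, not FPT+'s actual fine-grained design.

```python
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    """Generic visual prompt tuning: the backbone is frozen and only a small
    set of learnable prompt tokens plus a linear head are updated."""
    def __init__(self, backbone, embed_dim=768, num_prompts=16, num_classes=5):
        super().__init__()
        self.backbone = backbone                       # pretrained ViT-style encoder (frozen)
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        self.head = nn.Linear(embed_dim, num_classes)  # only prompts + head are trained

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, D) token embeddings from a frozen patch embedder
        b = patch_tokens.size(0)
        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        feats = self.backbone(tokens)                  # frozen transformer blocks
        return self.head(feats[:, 0])                  # classify from the first token
```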
no code implementations • 16 Jun 2024 • Tianyunxi Wei, Yijin Huang, Li Lin, Pujin Cheng, Sirui Li, Xiaoying Tang
Medical image datasets often exhibit long-tailed distributions due to the inherent challenges in medical data collection and annotation.
1 code implementation • 12 Mar 2024 • Yijin Huang, Pujin Cheng, Roger Tam, Xiaoying Tang
To achieve this, we first freeze the weights of the LPM and construct a learnable lightweight side network.
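A minimal sketch of the frozen-LPM-plus-side-network idea described above, assuming a generic large pre-trained model (LPM) that exposes pooled per-stage features; the adapter design and dimensions are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SideTunedClassifier(nn.Module):
    """Freeze a large pre-trained model (LPM) and train only a lightweight
    side network that reads the LPM's intermediate features."""
    def __init__(self, lpm, stage_dims=(256, 512, 1024), side_dim=64, num_classes=5):
        super().__init__()
        self.lpm = lpm
        for p in self.lpm.parameters():                 # LPM weights stay frozen
            p.requires_grad = False
        self.adapters = nn.ModuleList(nn.Linear(d, side_dim) for d in stage_dims)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(side_dim, num_classes))

    def forward(self, x):
        with torch.no_grad():                           # no gradients flow through the LPM
            stage_feats = self.lpm(x)                   # assumed: list of pooled per-stage features
        fused = sum(a(f) for a, f in zip(self.adapters, stage_feats))
        return self.head(fused)
```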
1 code implementation • 27 Feb 2024 • Li Lin, Yixiang Liu, Jiewei Wu, Pujin Cheng, Zhiyuan Cai, Kenneth K. Y. Wong, Xiaoying Tang
In such context, we propose a novel personalized FL framework with learnable prompt and aggregation (FedLPPA) to uniformly leverage heterogeneous weak supervision for medical image segmentation.
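As a rough illustration of learnable, personalized aggregation (one ingredient FedLPPA's name points to), the sketch below mixes client models with per-client softmax weights that could themselves be learned; this is a generic construction, not the paper's actual aggregation rule.

```python
import torch

def personalized_aggregate(client_states, agg_logits):
    """Mix per-client model states with learnable aggregation weights.

    client_states: list of state_dicts, one per client
    agg_logits:    (K, K) tensor; row k holds client k's learnable logits over all clients
    Returns a list of personalized state_dicts, one per client.
    """
    weights = torch.softmax(agg_logits, dim=1)          # each row sums to 1
    personalized = []
    for k in range(len(client_states)):
        mixed = {}
        for name in client_states[0]:
            stacked = torch.stack([s[name].float() for s in client_states])  # (K, ...)
            w = weights[k].view(-1, *([1] * (stacked.dim() - 1)))
            mixed[name] = (w * stacked).sum(dim=0)
        personalized.append(mixed)
    return personalized
```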
no code implementations • 13 Dec 2023 • Shiyun Chen, Li Lin, Pujin Cheng, Xiaoying Tang
Recently, Segment Anything Model (SAM) has shown promising performance in some medical image segmentation tasks, but it performs poorly for liver tumor segmentation.
1 code implementation • 12 Dec 2023 • Kai Pan, Linyang Li, Li Lin, Pujin Cheng, Junyan Lyu, Lei Xi, Xiaoyin Tang
Recently, there has been a trend toward incorporating deep learning into the scanning process to further increase scanning speed. However, most such attempts target raster scanning, while those for rotational scanning remain relatively rare.
1 code implementation • ICCV 2023 • Pujin Cheng, Li Lin, Junyan Lyu, Yijin Huang, Wenhan Luo, Xiaoying Tang
In this paper, we present a prototype representation learning framework incorporating both global and local alignment between medical images and reports.
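A hedged sketch of the two alignment terms such frameworks typically combine: a CLIP-style global contrastive loss over pooled image/report embeddings, plus a local term matching each report token to its most similar image patch. The paper's prototype-based formulation is not reproduced here; shapes and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def global_local_alignment_loss(img_global, txt_global, img_patches, txt_tokens, tau=0.07):
    """img_global/txt_global: (B, D) pooled embeddings; img_patches: (B, P, D); txt_tokens: (B, T, D)."""
    # global alignment: symmetric InfoNCE over matched image-report pairs
    img_g = F.normalize(img_global, dim=-1)
    txt_g = F.normalize(txt_global, dim=-1)
    logits = img_g @ txt_g.t() / tau
    targets = torch.arange(len(img_g), device=logits.device)
    loss_global = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    # local alignment: pull each word token toward its most similar image patch
    img_p = F.normalize(img_patches, dim=-1)
    txt_t = F.normalize(txt_tokens, dim=-1)
    sim = torch.einsum('btd,bpd->btp', txt_t, img_p)     # (B, T, P) token-patch similarities
    loss_local = (1.0 - sim.max(dim=-1).values).mean()
    return loss_global + loss_local
```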
no code implementations • 19 May 2023 • Huaqing He, Li Lin, Zhiyuan Cai, Pujin Cheng, Xiaoying Tang
To address these issues, we propose a prior guided multi-task transformer framework for joint OD/OC segmentation and fovea detection, named JOINEDTrans.
no code implementations • 20 Dec 2022 • Juntao Chen, Li Lin, Pujin Cheng, Yijin Huang, Xiaoying Tang
Medical image quality assessment (MIQA) is a vital prerequisite in various medical image analysis applications.
1 code implementation • 11 Dec 2022 • Li Lin, Linkai Peng, Huaqing He, Pujin Cheng, Jiewei Wu, Kenneth K. Y. Wong, Xiaoying Tang
With only one noisy skeleton annotation (respectively 0.14%, 0.03%, 1.40%, and 0.65% of the full annotation), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset.
1 code implementation • 20 Oct 2022 • Yijin Huang, Junyan Lyu, Pujin Cheng, Roger Tam, Xiaoying Tang
Specifically, two saliency-guided learning tasks are employed in SSiT: (1) Saliency-guided contrastive learning is conducted based on momentum contrast, wherein saliency maps of fundus images are utilized to remove trivial patches from the input sequences of the momentum-updated key encoder.
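The saliency-guided patch removal step can be illustrated as follows: patch-level saliency scores are pooled from the saliency map and only the most salient patches are kept in the key encoder's input sequence. The patch size, keep ratio, and tensor shapes below are assumptions, not SSiT's exact settings.

```python
import torch
import torch.nn.functional as F

def drop_trivial_patches(patch_tokens, saliency_map, patch_size=16, keep_ratio=0.75):
    """Keep only the most salient patch tokens.

    patch_tokens: (B, N, D) token sequence for the key encoder
    saliency_map: (B, H, W) saliency map aligned with the input image,
                  where (H // patch_size) * (W // patch_size) == N
    """
    b, n, d = patch_tokens.shape
    # average saliency inside each patch -> (B, N)
    pooled = F.avg_pool2d(saliency_map.unsqueeze(1), patch_size).flatten(1)
    k = max(1, int(keep_ratio * n))
    idx = pooled.topk(k, dim=1).indices                       # k most salient patches
    return patch_tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))
```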
1 code implementation • 27 Jul 2022 • Junyan Lyu, Yiqi Zhang, Yijin Huang, Li Lin, Pujin Cheng, Xiaoying Tang
To address this issue, we propose a data manipulation based domain generalization method, called Automated Augmentation for Domain Generalization (AADG).
no code implementations • 14 Mar 2022 • Ziqi Huang, Li Lin, Pujin Cheng, Kai Pan, Xiaoying Tang
Furthermore, with only 5% paired data, the proposed DS3-Net achieves performance competitive with state-of-the-art image translation methods utilizing 100% paired data, delivering an average SSIM of 0.8947 and an average PSNR of 23.60.
no code implementations • 9 Mar 2022 • Ziqi Huang, Li Lin, Pujin Cheng, Linkai Peng, Xiaoying Tang
As such, it is clinically meaningful to develop a method for synthesizing unavailable modalities, which can also serve as additional inputs to downstream tasks (e.g., brain tumor segmentation) to enhance performance.
1 code implementation • 7 Mar 2022 • Linkai Peng, Li Lin, Pujin Cheng, Huaqing He, Xiaoying Tang
Afterwards, knowledge distillation is performed to iteratively distill different domain knowledge from teachers to a generic student.
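The teacher-to-student distillation step admits a compact generic sketch: the student is trained to match the averaged softened predictions of several domain-specific teachers via a temperature-scaled KL divergence. The temperature and the simple averaging scheme here are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list, T=4.0):
    """KL divergence between the student's predictions and the averaged soft
    labels produced by several domain-specific teachers."""
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(t / T, dim=-1) for t in teacher_logits_list]).mean(dim=0)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_student, teacher_probs, reduction='batchmean') * (T * T)
```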
1 code implementation • 13 Jan 2022 • Linkai Peng, Li Lin, Pujin Cheng, Ziqi Huang, Xiaoying Tang
The two models use labeled data (together with the corresponding transferred images) for supervised learning and perform collaborative consistency learning on unlabeled data.
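A minimal sketch of combining supervised learning on labeled (and transferred) images with cross-model consistency on unlabeled data; the loss weight and the MSE consistency measure are assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def collaborative_consistency_loss(model_a, model_b, labeled_x, labels, unlabeled_x, lam=0.1):
    """Supervised loss on labeled data plus a cross-model consistency loss on unlabeled data."""
    # supervised term: both models fit the labeled (or modality-transferred) images
    sup = F.cross_entropy(model_a(labeled_x), labels) + F.cross_entropy(model_b(labeled_x), labels)
    # consistency term: the two models should agree on unlabeled images
    pa = F.softmax(model_a(unlabeled_x), dim=1)
    pb = F.softmax(model_b(unlabeled_x), dim=1)
    cons = F.mse_loss(pa, pb)
    return sup + lam * cons
```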
2 code implementations • 27 Oct 2021 • Yijin Huang, Li Lin, Pujin Cheng, Junyan Lyu, Roger Tam, Xiaoying Tang
To identify the key components in a standard deep learning framework (ResNet-50) for DR grading, we systematically analyze the impact of several major components.
1 code implementation • 29 Sep 2021 • Huilin Yang, Junyan Lyu, Pujin Cheng, Roger Tam, Xiaoying Tang
We propose a flexible and consistent cross-annotation face alignment framework, LDDMM-Face, whose key contribution is a deformation layer that naturally embeds facial geometry in a diffeomorphic way.
no code implementations • 2 Aug 2021 • Huilin Yang, Junyan Lyu, Pujin Cheng, Xiaoying Tang
Instead of predicting facial landmarks via heatmap or coordinate regression, we formulate this task in a diffeomorphic registration manner: we predict momenta that uniquely parameterize the deformation between the initial boundary and the true boundary, and then perform large deformation diffeomorphic metric mapping (LDDMM) simultaneously on curves and landmarks to localize the facial landmarks.
Ranked #24 on Face Alignment on WFLW
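For reference, the momentum parameterization mentioned in the LDDMM-Face entries above can be written in its textbook form; the notation below is generic, not the paper's.

```latex
% The deformation \phi_1 is obtained by integrating a time-dependent velocity field v_t,
% itself determined by the momentum m_t through a smoothing kernel K:
\frac{\partial \phi_t}{\partial t} = v_t(\phi_t), \qquad \phi_0 = \mathrm{id},
\qquad v_t = K \star m_t,
\qquad \phi_1 = \mathrm{id} + \int_0^1 v_t(\phi_t)\, dt .
```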
2 code implementations • 17 Jul 2021 • Yijin Huang, Li Lin, Pujin Cheng, Junyan Lyu, Xiaoying Tang
Instead of taking entire images as input, as in common contrastive learning schemes, lesion patches are employed to encourage the feature extractor to learn representations that are highly discriminative for DR grading.
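A generic sketch of contrastive learning on lesion patches rather than whole images, using a standard InfoNCE objective; the patch pairing and temperature are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def patch_contrastive_loss(query_emb, key_emb, tau=0.2):
    """InfoNCE over embeddings of two augmented views of the same lesion patches.

    query_emb, key_emb: (B, D) embeddings of lesion patches (not whole images);
    row i of each tensor comes from the same patch, forming the positive pair.
    """
    q = F.normalize(query_emb, dim=-1)
    k = F.normalize(key_emb, dim=-1)
    logits = q @ k.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(len(q), device=logits.device)
    return F.cross_entropy(logits, targets)       # positives lie on the diagonal
```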
1 code implementation • 10 Jul 2021 • Li Lin, Zhonghua Wang, Jiewei Wu, Yijin Huang, Junyan Lyu, Pujin Cheng, Jiong Wu, Xiaoying Tang
Moreover, both low-level and high-level features from the aforementioned three branches, including shape, size, boundary, and signed directional distance map of FAZ, are fused hierarchically with features from the diagnostic classifier.
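A rough sketch of hierarchically fusing multi-branch features with features from a diagnostic classifier before the final prediction; the projection-and-sum fusion and the channel sizes are illustrative, not the paper's actual design.

```python
import torch
import torch.nn as nn

class HierarchicalFusionHead(nn.Module):
    """Fuse low-/high-level features from several branches with features
    from a diagnostic classifier before the final prediction layer."""
    def __init__(self, branch_dims=(64, 128, 256), cls_dim=512, fused_dim=256, num_classes=2):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, fused_dim) for d in branch_dims)
        self.cls_proj = nn.Linear(cls_dim, fused_dim)
        self.out = nn.Linear(fused_dim, num_classes)

    def forward(self, branch_feats, cls_feat):
        # branch_feats: list of (B, d_i) pooled features from the segmentation branches
        fused = self.cls_proj(cls_feat)
        for proj, f in zip(self.proj, branch_feats):
            fused = fused + proj(f)               # fuse branch features level by level
        return self.out(torch.relu(fused))
```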