Search Results for author: Cheng Bian

Found 17 papers, 3 papers with code

Label-efficient Hybrid-supervised Learning for Medical Image Segmentation

no code implementations • 10 Mar 2022 • Junwen Pan, Qi Bi, Yanzhan Yang, Pengfei Zhu, Cheng Bian

Due to the scarcity of expert annotators for medical images, label-efficient methods for medical image segmentation have become a topic of intense research interest.

Medical Image Segmentation • Semantic Segmentation

Multi-Anchor Active Domain Adaptation for Semantic Segmentation

1 code implementation • ICCV 2021 • Munan Ning, Donghuan Lu, Dong Wei, Cheng Bian, Chenglang Yuan, Shuang Yu, Kai Ma, Yefeng Zheng

Unsupervised domain adaptation has proven to be an effective approach for alleviating the intensive workload of manual annotation by aligning synthetic source-domain data with real-world target-domain samples.

Active Learning • Domain Adaptation +1
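
A hedged sketch of the "multi-anchor active" idea named in the title above, since the snippet only covers the unsupervised-alignment motivation: assume the anchors are k-means centroids of source-domain features and that target samples far from every anchor are queried for annotation. The function and parameter names are illustrative, not the paper's API.

```python
# Hypothetical sketch: multi-anchor active sample selection.
# Assumes precomputed (N, D) feature arrays for both domains.
import numpy as np
from sklearn.cluster import KMeans

def select_for_annotation(source_feats, target_feats, n_anchors=10, budget=100):
    # Multiple anchors: k-means centroids of the source-domain features.
    anchors = KMeans(n_clusters=n_anchors, n_init=10).fit(source_feats).cluster_centers_

    # Distance from each target sample to its nearest source anchor.
    dists = np.linalg.norm(target_feats[:, None, :] - anchors[None, :, :], axis=-1)
    nearest = dists.min(axis=1)

    # Query the target samples least represented by any anchor.
    return np.argsort(-nearest)[:budget]
```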

A New Bidirectional Unsupervised Domain Adaptation Segmentation Framework

no code implementations • 18 Aug 2021 • Munan Ning, Cheng Bian, Dong Wei, Chenglang Yuan, Yaohua Wang, Yang Guo, Kai Ma, Yefeng Zheng

Domain shift commonly arises in cross-domain scenarios because of the wide gap between domains: a deep learning model trained well in one domain usually performs poorly when applied to a different target domain.

Representation Learning • Unsupervised Domain Adaptation

Ensembled ResUnet for Anatomical Brain Barriers Segmentation

no code implementations • 29 Dec 2020 • Munan Ning, Cheng Bian, Chenglang Yuan, Kai Ma, Yefeng Zheng

However, the visual and anatomical differences between modalities make accurate segmentation of brain structures challenging.

TR-GAN: Topology Ranking GAN with Triplet Loss for Retinal Artery/Vein Classification

no code implementations • 29 Jul 2020 • Wenting Chen, Shuang Yu, Junde Wu, Kai Ma, Cheng Bian, Chunyan Chu, Linlin Shen, Yefeng Zheng

A topology ranking discriminator based on ordinal regression is proposed to rank the topological connectivity of the ground-truth mask, the generated A/V mask, and an intentionally shuffled mask.

Classification • General Classification
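
The ranking described above (ground truth ranked above the generated A/V mask, which in turn ranks above the shuffled mask) can be illustrated with a margin-ranking loss over discriminator scores. This is a simplified sketch of the general idea rather than the paper's exact ordinal-regression formulation; `disc` is assumed to map a mask to a scalar topology score.

```python
# Illustrative sketch of a topology-ranking objective (simplified from ordinal regression).
import torch
import torch.nn.functional as F

def topology_ranking_loss(disc, gt_mask, gen_mask, shuffled_mask, margin=1.0):
    s_gt, s_gen, s_shuf = disc(gt_mask), disc(gen_mask), disc(shuffled_mask)
    ones = torch.ones_like(s_gt)
    # Enforce the ordering: score(ground truth) > score(generated) > score(shuffled).
    loss_hi = F.margin_ranking_loss(s_gt, s_gen, ones, margin=margin)
    loss_lo = F.margin_ranking_loss(s_gen, s_shuf, ones, margin=margin)
    return loss_hi + loss_lo
```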

Difficulty-aware Glaucoma Classification with Multi-Rater Consensus Modeling

no code implementations • 29 Jul 2020 • Shuang Yu, Hong-Yu Zhou, Kai Ma, Cheng Bian, Chunyan Chu, Hanruo Liu, Yefeng Zheng

However, during model training only the final ground-truth label is used, while the critical information in the raw multi-rater gradings, namely whether an image is an easy or a hard case, is discarded.

Classification • General Classification
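
One way to keep the easy/hard signal mentioned above is to turn rater agreement into a per-sample weight on the training loss. The sketch below only illustrates that idea under assumed inputs (a tensor of per-rater binary gradings); it is not the consensus model proposed in the paper.

```python
# Illustrative sketch: per-sample difficulty weights derived from multi-rater gradings.
import torch
import torch.nn.functional as F

def difficulty_weighted_loss(logits, rater_grades):
    """logits: (B, 2) model outputs; rater_grades: (B, R) binary gradings from R raters."""
    mean_grade = rater_grades.float().mean(dim=1)
    consensus = (mean_grade > 0.5).long()            # majority-vote label
    agreement = (mean_grade - 0.5).abs() * 2.0       # 1 = unanimous, 0 = evenly split
    per_sample = F.cross_entropy(logits, consensus, reduction="none")
    # Down-weight low-agreement (hard) cases; other weighting schemes are equally plausible.
    return (agreement * per_sample).mean()
```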

A Macro-Micro Weakly-supervised Framework for AS-OCT Tissue Segmentation

no code implementations • 20 Jul 2020 • Munan Ning, Cheng Bian, Donghuan Lu, Hong-Yu Zhou, Shuang Yu, Chenglang Yuan, Yang Guo, Yaohua Wang, Kai Ma, Yefeng Zheng

Primary angle closure glaucoma (PACG) is the leading cause of irreversible blindness among Asian people.

Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs By Comparing Image Representations

1 code implementation • 15 Jul 2020 • Hong-Yu Zhou, Shuang Yu, Cheng Bian, Yifan Hu, Kai Ma, Yefeng Zheng

In the deep learning era, pretrained models play an important role in medical image analysis, where ImageNet pretraining has been widely adopted as the default choice.
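
The "comparing image representations" in the title points to a contrastive pretraining objective. Below is a minimal InfoNCE-style sketch assuming two augmented views per radiograph and a shared encoder; it is not the paper's exact formulation.

```python
# Minimal contrastive-comparison sketch (InfoNCE over two augmented views).
import torch
import torch.nn.functional as F

def contrastive_loss(encoder, view_a, view_b, temperature=0.2):
    za = F.normalize(encoder(view_a), dim=1)   # (B, D) embeddings of view A
    zb = F.normalize(encoder(view_b), dim=1)   # (B, D) embeddings of view B
    logits = za @ zb.t() / temperature         # pairwise similarities across the batch
    targets = torch.arange(za.size(0), device=za.device)
    # Matching views are positives; every other pair in the batch is a negative.
    return F.cross_entropy(logits, targets)
```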

Identification of primary angle-closure on AS-OCT images with Convolutional Neural Networks

no code implementations • 23 Oct 2019 • Chenglang Yuan, Cheng Bian, Hongjian Kang, Shu Liang, Kai Ma, Yefeng Zheng

In this paper, we propose an efficient and accurate end-to-end architecture for angle-closure classification and scleral spur localization.

General Classification
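
An end-to-end architecture that both classifies angle closure and localizes the scleral spur can be sketched as a shared backbone with two heads. The backbone and head designs below are assumptions for illustration, not the architecture reported in the paper.

```python
# Hypothetical two-head network: angle-closure classification + scleral-spur regression.
import torch.nn as nn
import torchvision.models as models

class AngleClosureNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        self.cls_head = nn.Linear(512, n_classes)   # open angle vs. angle closure
        self.loc_head = nn.Linear(512, 2)           # normalized (x, y) of the scleral spur

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.cls_head(f), self.loc_head(f)
```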

Uncertainty-Guided Domain Alignment for Layer Segmentation in OCT Images

no code implementations • 22 Aug 2019 • Jiexiang Wang, Cheng Bian, Meng Li, Xin Yang, Kai Ma, Wenao Ma, Jin Yuan, Xinghao Ding, Yefeng Zheng

Automatic and accurate segmentation of the retinal and choroidal layers in Optical Coherence Tomography (OCT) images is crucial for the detection of various ocular diseases.
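
The title suggests that predictive uncertainty guides the cross-domain alignment. One plausible, heavily simplified reading is to estimate uncertainty with Monte-Carlo dropout and down-weight unreliable target pixels in a consistency term; this is an assumption for illustration, not the paper's stated algorithm.

```python
# Illustrative sketch: MC-dropout uncertainty weighting a target-domain consistency term.
import torch
import torch.nn.functional as F

def uncertainty_weighted_consistency(model, target_images, n_passes=8):
    model.train()  # keep dropout active for Monte-Carlo sampling
    with torch.no_grad():
        probs = torch.stack([model(target_images).softmax(dim=1) for _ in range(n_passes)])
    mean_prob = probs.mean(dim=0)                                   # (B, C, H, W)
    entropy = -(mean_prob * mean_prob.clamp_min(1e-8).log()).sum(dim=1)
    weight = torch.exp(-entropy)                                    # low weight where uncertain
    pseudo_label = mean_prob.argmax(dim=1)                          # (B, H, W)
    loss = F.cross_entropy(model(target_images), pseudo_label, reduction="none")
    return (weight * loss).mean()
```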
