no code implementations • 17 Nov 2022 • Zhongying Deng, Yanqi Cheng, Lihao Liu, Shujun Wang, Rihuan Ke, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
Firstly, TrafficCAM provides both pixel-level and instance-level semantic labelling, covering a wide range of vehicle and pedestrian types.
1 code implementation • 12 Oct 2022 • Fuying Wang, Yuyin Zhou, Shujun Wang, Varut Vardhanabhuti, Lequan Yu
In this paper, we present a novel Multi-Granularity Cross-modal Alignment (MGCA) framework for generalized medical visual representation learning by harnessing the naturally exhibited semantic correspondences between medical images and radiology reports at three different levels, i.e., pathological region-level, instance-level, and disease-level.
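As a rough illustration of the instance-level part of such an alignment, the sketch below implements a symmetric InfoNCE-style loss between paired image and report embeddings; the function name, temperature and embedding size are illustrative assumptions, not the MGCA implementation.

```python
# Minimal sketch of instance-level image-report contrastive alignment
# (one of the three granularities described above). Names and values
# are illustrative, not the authors' implementation.
import torch
import torch.nn.functional as F

def instance_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired image / report embeddings.

    img_emb, txt_emb: (batch, dim) projections from the image and text
    encoders; the i-th image is assumed to match the i-th report.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> report
    loss_t2i = F.cross_entropy(logits.t(), targets)  # report -> image
    return 0.5 * (loss_i2t + loss_t2i)

# Example with random features standing in for encoder outputs.
loss = instance_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```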
no code implementations • 18 Sep 2022 • Yanqi Cheng, Lihao Liu, Shujun Wang, Yueming Jin, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
This is the question that we address in this work.
1 code implementation • 13 Sep 2021 • Yijun Yang, Shujun Wang, Lei Zhu, Pheng-Ann Heng, Lequan Yu
Particularly, for the Extrinsic Consistency, we leverage the knowledge across multiple source domains to enforce data-level consistency.
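One common way to realize such data-level consistency is to perturb a source image with low-frequency (amplitude) statistics borrowed from another source domain and require the predictions to stay stable; the sketch below assumes this Fourier-based mixing and is not necessarily the paper's exact mechanism.

```python
# Hedged sketch: enforce prediction consistency between an image and a
# variant whose amplitude spectrum is mixed with another source domain.
import torch
import torch.nn.functional as F

def amplitude_mix(x_a, x_b, alpha=0.5):
    """Blend the Fourier amplitude of x_a with that of x_b (NCHW tensors)."""
    fa, fb = torch.fft.fft2(x_a), torch.fft.fft2(x_b)
    amp = (1 - alpha) * fa.abs() + alpha * fb.abs()
    mixed = amp * torch.exp(1j * fa.angle())          # keep x_a's phase
    return torch.fft.ifft2(mixed).real

def extrinsic_consistency(model, x_src, x_other):
    p_orig = F.softmax(model(x_src), dim=1)
    p_mixed = F.softmax(model(amplitude_mix(x_src, x_other)), dim=1)
    return F.mse_loss(p_mixed, p_orig.detach())       # original prediction as target
```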
1 code implementation • 7 Jan 2021 • Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
In this way, the dual teacher models would transfer acquired inter- and intra-domain knowledge to the student model for further integration and exploitation.
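A minimal sketch of such a dual-teacher-to-student transfer step is given below; the split into intra- and inter-domain teachers, the KL-based distillation loss and the EMA update are generic assumptions rather than the paper's exact training recipe.

```python
# Minimal sketch of a dual-teacher -> student transfer step. The teacher
# roles and loss weights here are assumptions.
import torch
import torch.nn.functional as F

def distill_step(student, teacher_intra, teacher_inter, x, lam=0.5):
    with torch.no_grad():
        p_intra = F.softmax(teacher_intra(x), dim=1)
        p_inter = F.softmax(teacher_inter(x), dim=1)
    log_p_student = F.log_softmax(student(x), dim=1)
    # The student integrates both teachers' soft predictions.
    loss = lam * F.kl_div(log_p_student, p_intra, reduction="batchmean") + \
           (1 - lam) * F.kl_div(log_p_student, p_inter, reduction="batchmean")
    return loss

def ema_update(teacher, student, momentum=0.99):
    # A teacher can track the student with an exponential moving average.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(momentum).add_(s.data, alpha=1 - momentum)
```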
no code implementations • 13 Oct 2020 • Shujun Wang, Lequan Yu, Kang Li, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains to make the semantic features more discriminative.
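The sketch below shows one generic way to inject learned domain priors into image features, using attention over a small pool of per-domain prototype vectors; the pool size and fusion rule are illustrative, not the exact DoFE module.

```python
# Sketch of enriching image features with a bank of domain prototypes via
# attention. Pool size and fusion rule are illustrative assumptions.
import torch
import torch.nn as nn

class DomainPriorEnrichment(nn.Module):
    def __init__(self, feat_dim=256, num_domains=4):
        super().__init__()
        # One learned prior vector per source domain.
        self.domain_pool = nn.Parameter(torch.randn(num_domains, feat_dim))
        self.query = nn.Linear(feat_dim, feat_dim)

    def forward(self, feat):                      # feat: (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))            # global image descriptor (B, C)
        attn = torch.softmax(self.query(pooled) @ self.domain_pool.t(), dim=-1)
        prior = attn @ self.domain_pool           # (B, C) aggregated domain prior
        return feat + prior[:, :, None, None]     # inject the prior into the features

feats = torch.randn(2, 256, 32, 32)
enriched = DomainPriorEnrichment()(feats)
```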
no code implementations • 13 Oct 2020 • Shujun Wang, Yaxi Zhu, Lequan Yu, Hao Chen, Huangjing Lin, Xiangbo Wan, Xinjuan Fan, Pheng-Ann Heng
Multi-instance learning based on the most discriminative instances can greatly benefit whole-slide gastric image diagnosis.
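A minimal sketch of this idea is to score every patch of a slide, keep only the top-k most discriminative ones, and aggregate their logits into a slide-level prediction; k and the aggregation rule below are illustrative choices, not the paper's exact design.

```python
# Sketch of multi-instance learning that aggregates only the most
# discriminative patches of a whole-slide image.
import torch
import torch.nn as nn

class TopKMIL(nn.Module):
    def __init__(self, feat_dim=512, num_classes=2, k=8):
        super().__init__()
        self.instance_head = nn.Linear(feat_dim, num_classes)
        self.k = k

    def forward(self, bag):                        # bag: (num_patches, feat_dim)
        logits = self.instance_head(bag)           # per-patch class scores
        scores = logits.max(dim=1).values          # how discriminative each patch is
        top_idx = scores.topk(min(self.k, bag.size(0))).indices
        return logits[top_idx].mean(dim=0)         # slide-level prediction

slide_patches = torch.randn(500, 512)              # features of one slide's patches
slide_logits = TopKMIL()(slide_patches)
```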
no code implementations • 4 Oct 2020 • Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng
Considering that multi-modality data with the same anatomical structures are widely available in clinical routine, in this paper we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (the assistant modality) to improve segmentation performance on another modality (the target modality), thereby compensating for annotation scarcity.
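One generic way to transfer such a shape prior is to train a small autoencoder on assistant-modality masks and penalize target-modality predictions that it cannot reconstruct; the sketch below assumes this design and is not the paper's exact method.

```python
# Hedged sketch: a shape prior learned from assistant-modality masks used to
# regularize target-modality predictions. The tiny autoencoder and the loss
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAutoencoder(nn.Module):
    """Trained beforehand on assistant-modality segmentation masks."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, mask):
        return torch.sigmoid(self.dec(self.enc(mask)))

def shape_prior_loss(pred_prob, shape_ae):
    # Predictions conforming to plausible anatomy should be near fixed points
    # of the mask autoencoder; penalize the reconstruction gap.
    with torch.no_grad():
        recon = shape_ae(pred_prob)
    return F.mse_loss(pred_prob, recon)
```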
no code implementations • ECCV 2020 • Shujun Wang, Lequan Yu, Caizi Li, Chi-Wing Fu, Pheng-Ann Heng
To this end, we present a new domain generalization framework that learns how to generalize across domains simultaneously from extrinsic relationship supervision and intrinsic self-supervision for images from multi-source domains.
Ranked #24 on Domain Generalization on PACS
no code implementations • 13 Jul 2020 • Kang Li, Shujun Wang, Lequan Yu, Pheng-Ann Heng
Medical image annotations are prohibitively time-consuming and expensive to obtain.
1 code implementation • 10 Oct 2019 • Haoran Dou, Xin Yang, Jikuan Qian, Wufeng Xue, Hao Qin, Xu Wang, Lequan Yu, Shujun Wang, Yi Xiong, Pheng-Ann Heng, Dong Ni
In this study, we propose a novel reinforcement learning (RL) framework to automatically localize fetal brain standard planes in 3D US.
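At a high level, such an agent repeatedly adjusts a plane's parameters inside the volume until it matches the standard plane; the sketch below shows the agent-environment loop with an illustrative parameterisation, action set and reward, and a random policy standing in for the trained agent.

```python
# Hedged sketch of the agent-environment loop for iterative plane search in a
# 3D ultrasound volume: the plane is parameterised by two normal angles and an
# offset, and the agent nudges these values step by step. The action set,
# reward and random stand-in policy are illustrative assumptions.
import numpy as np

STEP = np.deg2rad(2.0)
ACTIONS = [np.array(a) for a in
           [( STEP, 0, 0), (-STEP, 0, 0),       # tilt the plane normal
            (0,  STEP, 0), (0, -STEP, 0),
            (0, 0,  1.0), (0, 0, -1.0)]]        # shift along the normal (voxels)

def reward(plane, target):
    # Used to train the agent (e.g. a Q-network, not shown): moving closer
    # to the target standard plane yields a higher reward.
    return -np.linalg.norm(plane - target)

def localize(policy, max_steps=50):
    plane = np.zeros(3)                         # start from a canonical plane
    for _ in range(max_steps):
        action = policy(plane)                  # index into ACTIONS
        plane = plane + ACTIONS[action]
    return plane

# Random policy standing in for the trained agent.
found = localize(policy=lambda state: np.random.randint(len(ACTIONS)))
```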
no code implementations • 8 Oct 2019 • José Ignacio Orlando, Huazhu Fu, João Barbosa Breda, Karel van Keer, Deepti R. Bathula, Andrés Diaz-Pinto, Ruogu Fang, Pheng-Ann Heng, Jeyoung Kim, Joonho Lee, Joonseok Lee, Xiaoxiao Li, Peng Liu, Shuai Lu, Balamurali Murugesan, Valery Naranjo, Sai Samarth R. Phaye, Sharath M. Shankaranarayana, Apoorva Sikka, Jaemin Son, Anton Van Den Hengel, Shujun Wang, Junyan Wu, Zifeng Wu, Guanghui Xu, Yongli Xu, Pengshuai Yin, Fei Li, Yanwu Xu, Xiulan Zhang, Hrvoje Bogunović
As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground-truth segmentations and clinical glaucoma labels, currently the largest of its kind.
7 code implementations • 16 Jul 2019 • Lequan Yu, Shujun Wang, Xiaomeng Li, Chi-Wing Fu, Pheng-Ann Heng
We design a novel uncertainty-aware scheme to enable the student model to gradually learn from the meaningful and reliable targets by exploiting the uncertainty information.
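A minimal sketch of such an uncertainty-masked consistency loss is shown below: the teacher's predictive entropy, estimated with Monte Carlo dropout, gates which voxels contribute to the student's loss. The number of MC passes and the entropy threshold are assumptions.

```python
# Sketch of an uncertainty-masked consistency loss: the teacher's predictive
# entropy (estimated with Monte Carlo dropout) gates which pixels/voxels the
# student learns from.
import torch
import torch.nn.functional as F

def uncertainty_masked_consistency(student, teacher, x, n_mc=8, threshold=0.5):
    teacher.train()                               # keep dropout active for MC sampling
    with torch.no_grad():
        probs = torch.stack([F.softmax(teacher(x), dim=1) for _ in range(n_mc)])
        mean_p = probs.mean(dim=0)                # (B, C, H, W) teacher target
        entropy = -(mean_p * torch.log(mean_p + 1e-6)).sum(dim=1, keepdim=True)
    student_p = F.softmax(student(x), dim=1)
    mask = (entropy < threshold).float()          # keep only confident locations
    sq_err = (student_p - mean_p) ** 2
    return (mask * sq_err).sum() / (mask.sum() * sq_err.size(1) + 1e-6)
```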
1 code implementation • 26 Jun 2019 • Shujun Wang, Lequan Yu, Kang Li, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng
The cross-domain discrepancy (domain shift) hinders the generalization of deep neural networks across datasets from different domains. In this work, we present an unsupervised domain adaptation framework, called Boundary and Entropy-driven Adversarial Learning (BEAL), to improve OD and OC segmentation performance, especially on ambiguous boundary regions.
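The entropy-driven part can be sketched as follows: per-pixel entropy maps computed from the segmentation outputs are fed to a discriminator, and the segmenter is trained so that target-domain maps become indistinguishable from source-domain ones. The discriminator architecture below is an assumption, not the paper's network.

```python
# Sketch of entropy-driven output-space adversarial adaptation: a discriminator
# separates source from target entropy maps, and the segmenter is trained to
# fool it on target images.
import torch
import torch.nn as nn
import torch.nn.functional as F

def entropy_map(logits):
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-6)).sum(dim=1, keepdim=True)   # (B, 1, H, W)

discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1))                    # patch-level real/fake

def adversarial_losses(seg_logits_src, seg_logits_tgt):
    ent_src, ent_tgt = entropy_map(seg_logits_src), entropy_map(seg_logits_tgt)
    d_src, d_tgt = discriminator(ent_src.detach()), discriminator(ent_tgt.detach())
    # Discriminator: source maps labelled 1, target maps labelled 0.
    loss_d = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) + \
             F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))
    # Segmenter: make target entropy maps look like source ones.
    loss_adv = F.binary_cross_entropy_with_logits(discriminator(ent_tgt),
                                                  torch.ones_like(d_tgt))
    return loss_d, loss_adv
```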
no code implementations • 20 Feb 2019 • Shujun Wang, Lequan Yu, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng
In this paper, we present a novel patch-based Output Space Adversarial Learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets.
Ranked #2 on Optic Disc Segmentation on REFUGE