1 code implementation • 20 Sep 2023 • Ziyang Zheng, Jiewen Yang, Xinpeng Ding, Xiaowei Xu, Xiaomeng Li
Additionally, a Multi-view Local-based Fusion Module (MLFM) is designed to extract correlations of cardiac structures from different views.
1 code implementation • 20 Sep 2023 • Jiewen Yang, Xinpeng Ding, Ziyang Zheng, Xiaowei Xu, Xiaomeng Li
This paper studies unsupervised domain adaptation (UDA) for echocardiogram video segmentation, where the goal is to generalize the model trained on the source domain to other unlabelled target domains.
no code implementations • 11 Sep 2023 • Xinpeng Ding, Jianhua Han, Hang Xu, Wei zhang, Xiaomeng Li
For the first time, we leverage a single multimodal large language model (MLLM) to consolidate multiple autonomous driving tasks from videos, i.e., the Risk Object Localization and Intention and Suggestion Prediction (ROLISP) task.
1 code implementation • 23 Aug 2023 • Hualiang Wang, Yi Li, Huifeng Yao, Xiaomeng Li
Subsequently, we introduce two loss functions: the image-text binary-opposite loss and the text semantic-opposite loss, which we use to teach CLIPN to associate images with "no" prompts, thereby enabling it to identify unknown samples (a rough sketch of these losses is given below).
Out-of-Distribution Detection • Out of Distribution (OOD) Detection
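A minimal sketch of how such opposing losses could look in PyTorch; the `t_yes`/`t_no` prompt embeddings, the sigmoid formulation, and the temperature are illustrative assumptions rather than the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def text_semantic_opposite_loss(t_yes, t_no):
    # Push each class's "no" prompt embedding away from its "yes" prompt:
    # cosine similarity should approach -1 (both embeddings are L2-normalized).
    return (1.0 + (t_yes * t_no).sum(dim=-1)).mean()

def image_text_binary_opposite_loss(f_img, t_yes, t_no, labels, tau=0.07):
    # f_img: (B, D) image embeddings; t_yes / t_no: (C, D) class prompt embeddings.
    logits_yes = f_img @ t_yes.t() / tau          # (B, C)
    logits_no = f_img @ t_no.t() / tau            # (B, C)
    p_no = torch.sigmoid(logits_no - logits_yes)  # prob. that the "no" prompt wins
    # For the ground-truth class the image should NOT match the "no" prompt.
    target = torch.ones_like(p_no)
    target.scatter_(1, labels.unsqueeze(1), 0.0)
    return F.binary_cross_entropy(p_no, target)
```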
1 code implementation • 15 Aug 2023 • Zheang Huai, Xinpeng Ding, Yi Li, Xiaomeng Li
To this end, we propose a context-aware pseudo-label refinement method for SF-UDA.
1 code implementation • 14 Aug 2023 • Weihang Dai, Xiaomeng Li, Taihui Yu, Di Zhao, Jun Shen, Kwang-Ting Cheng
Furthermore, we ensure complementary information is learned by deep and radiomic features by designing a novel feature de-correlation loss.
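One common way to realize a feature de-correlation loss is to penalize the batch cross-correlation between the two feature sets; the sketch below is an assumption for illustration, not the paper's exact loss:

```python
import torch

def decorrelation_loss(deep_feat, radiomic_feat, eps=1e-6):
    # Standardize each feature dimension within the batch, then penalize the
    # squared cross-correlation between deep and radiomic features so that the
    # two branches carry complementary rather than redundant information.
    d = (deep_feat - deep_feat.mean(0)) / (deep_feat.std(0) + eps)          # (B, D1)
    r = (radiomic_feat - radiomic_feat.mean(0)) / (radiomic_feat.std(0) + eps)  # (B, D2)
    corr = d.t() @ r / deep_feat.shape[0]                                    # (D1, D2)
    return (corr ** 2).mean()
```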
1 code implementation • 1 Aug 2023 • Lehan Wang, Weihang Dai, Mei Jin, Chubin Ou, Xiaomeng Li
Our framework enhances the OCT model during training by utilizing unpaired fundus images and does not require the use of fundus images during testing, which greatly improves the practicality and efficiency of our method for clinical use.
1 code implementation • 27 Jul 2023 • Marawan Elbatel, Hualiang Wang, Robert Martí, Huazhu Fu, Xiaomeng Li
Existing federated methods under highly imbalanced datasets primarily focus on optimizing a global model without incorporating the intra-class variations that can arise in medical imaging due to different populations, findings, and scanners.
1 code implementation • 22 Jul 2023 • Yi Qin, Xiaomeng Li
Specifically, FDG uses the diffusion model's multi-scale semantic features to guide the generation of the deformation field.
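A hedged sketch of the guidance idea: multi-scale features, assumed to come from a pretrained diffusion UNet encoder, are projected, upsampled, and decoded into a dense displacement field. The feature extraction itself and all layer sizes here are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformationHead(nn.Module):
    def __init__(self, feat_dims=(256, 512, 1024), out_dim=3):
        super().__init__()
        # Project each feature scale to a common width, then decode a 3-channel
        # displacement field used to warp the moving image.
        self.proj = nn.ModuleList([nn.Conv3d(d, 64, 1) for d in feat_dims])
        self.decode = nn.Conv3d(64 * len(feat_dims), out_dim, 3, padding=1)

    def forward(self, multi_scale_feats, out_shape):
        ups = [F.interpolate(p(f), size=out_shape, mode='trilinear', align_corners=False)
               for p, f in zip(self.proj, multi_scale_feats)]
        return self.decode(torch.cat(ups, dim=1))  # (B, 3, D, H, W) displacement field
```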
1 code implementation • 22 Jul 2023 • Qixiang Zhang, Yi Li, Cheng Xue, Xiaomeng Li
In this paper, we make a first attempt to explore a deep learning method for unsupervised gland segmentation, where no manual annotations are required.
1 code implementation • 22 Jul 2023 • Haonan Wang, Xiaomeng Li
Aiming to solve this issue, we present a novel Dual-debiased Heterogeneous Co-training (DHC) framework for semi-supervised 3D medical image segmentation.
no code implementations • 27 May 2023 • Marawan Elbatel, Robert Martí, Xiaomeng Li
Through these modules, FoPro-KD achieves significant improvements in performance on long-tailed medical image classification benchmarks, demonstrating the potential of leveraging the learned frequency patterns from pre-trained models to enhance transfer learning and compression of large pre-trained models for feasible deployment.
1 code implementation • 25 May 2023 • Xinyue Xu, Yuhan Hsi, Haonan Wang, Xiaomeng Li
However, manually configuring a generic augmentation combination and parameters for different datasets is non-trivial due to inconsistent acquisition approaches and data distributions.
2 code implementations • 15 Apr 2023 • Huimin Wu, Xiaomeng Li, Yiqun Lin, Kwang-Ting Cheng
This study investigates barely-supervised medical image segmentation, where only a small amount of labeled data, i.e., single-digit cases, is available.
2 code implementations • 12 Apr 2023 • Yi Li, Hualiang Wang, Yiqun Duan, Xiaomeng Li
Contrastive Language-Image Pre-training (CLIP) is a powerful multimodal large vision model that has demonstrated significant benefits for downstream tasks, including many zero-shot learning and text-guided vision tasks.
Ranked #1 on Open Vocabulary Semantic Segmentation on COCO-Stuff-171 (mIoU metric)
Interactive Segmentation • Open Vocabulary Semantic Segmentation • +2
no code implementations • 22 Mar 2023 • Xunguang Wang, Jiawang Bai, Xinyue Xu, Xiaomeng Li
Deep hashing has been extensively applied to massive image retrieval due to its efficiency and effectiveness.
1 code implementation • 13 Mar 2023 • Shuhan LI, Dong Zhang, Xiaomeng Li, Chubin Ou, Lin An, Yanwu Xu, Kwang-Ting Cheng
In this paper, we propose a novel framework, TransPro, that translates 3D Optical Coherence Tomography (OCT) images into exclusive 3D OCTA images using an image translation pattern.
1 code implementation • 12 Mar 2023 • Yiqun Lin, Zhongjin Luo, Wei Zhao, Xiaomeng Li
In this paper, we formulate the CT volume as a continuous intensity field and develop a novel DIF-Net to perform high-quality CBCT reconstruction from extremely sparse (fewer than 10) projection views at an ultrafast speed.
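The continuous intensity field can be illustrated with a small coordinate-based MLP; the feature dimension, depth, and how view-derived features reach each query point are assumptions in this sketch, not the DIF-Net architecture itself:

```python
import torch
import torch.nn as nn

class IntensityField(nn.Module):
    """Implicit field: map a 3D query point plus features gathered from the
    sparse projection views to a scalar CT intensity. Feature extraction from
    the views is omitted and assumed to be provided as `point_feats`."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz, point_feats):
        # xyz: (N, 3) query coordinates; point_feats: (N, feat_dim)
        return self.mlp(torch.cat([xyz, point_feats], dim=-1)).squeeze(-1)

# The CT volume is then reconstructed by querying the field at every voxel centre.
```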
1 code implementation • 15 Feb 2023 • Weihang Dai, Xiaomeng Li, Kwang-Ting Cheng
In this work, we propose a novel approach to semi-supervised regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME), which improves training by generating high-quality pseudo-labels and uncertainty estimates for heteroscedastic regression.
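A rough sketch of the two ingredients, heteroscedastic regression and ensembled pseudo-labels; the exact weighting and variational details of UCVME are not reproduced here:

```python
import torch

def heteroscedastic_nll(pred_mean, pred_logvar, target):
    # Aleatoric-uncertainty regression loss: large predicted variance
    # down-weights the squared error but is itself penalized.
    return (0.5 * torch.exp(-pred_logvar) * (pred_mean - target) ** 2
            + 0.5 * pred_logvar).mean()

def ensemble_pseudo_label(means, logvars):
    # Average the co-trained models' predictions and uncertainty estimates to
    # obtain pseudo-labels (and pseudo-variances) for unlabeled samples.
    pseudo_y = torch.stack(means).mean(0)
    pseudo_logvar = torch.stack(logvars).mean(0)
    return pseudo_y.detach(), pseudo_logvar.detach()
```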
no code implementations • 3 Nov 2022 • An Zeng, Chunbiao Wu, Meiping Huang, Jian Zhuang, Shanshan Bi, Dan Pan, Najeeb Ullah, Kaleem Nawaz Khan, Tianchen Wang, Yiyu Shi, Xiaomeng Li, Guisen Lin, Xiaowei Xu
In this paper, we propose a large-scale dataset for coronary artery segmentation on CTA images.
1 code implementation • 20 Oct 2022 • Weihang Dai, Xiaomeng Li, Xinpeng Ding, Kwang-Ting Cheng
We also introduce teacher-student distillation to distill the information from LV segmentation masks into an end-to-end LVEF regression model that only requires video inputs.
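A minimal sketch of the distillation step, assuming the teacher consumes LV segmentation masks while the student sees only the video; the feature-matching term and its weight are illustrative assumptions:

```python
import torch.nn.functional as F

def distillation_loss(student_pred, teacher_pred, student_feat, teacher_feat, alpha=1.0):
    # Pull the video-only student toward the mask-informed teacher, both at the
    # LVEF regression output and at an intermediate feature level.
    return (F.mse_loss(student_pred, teacher_pred.detach())
            + alpha * F.mse_loss(student_feat, teacher_feat.detach()))
```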
1 code implementation • 4 Oct 2022 • Xiaomeng Li, Hongyu Ren, Huifeng Yao, Ziwei Liu
In this paper, we propose TripleE, and the main idea is to encourage the network to focus on training on subsets (learning with replay) and enlarge the data space in learning on subsets.
no code implementations • 30 Sep 2022 • Wanqin Ma, Huifeng Yao, Yiqun Lin, Jiarong Guo, Xiaomeng Li
Our main goal is to improve the quality of pseudo labels for extreme MRI analysis across various domains.
no code implementations • 27 Sep 2022 • Yi Li, Huifeng Yao, Hualiang Wang, Xiaomeng Li
We call the proposed framework FreeSeg, where the mask is freely available from the raw feature map of the pretrained model.
1 code implementation • 15 Sep 2022 • Yi Li, Hualiang Wang, Yiqun Duan, Hang Xu, Xiaomeng Li
For this problem, we propose the Explainable Contrastive Language-Image Pre-training (ECLIP), which corrects the explainability via the Masked Max Pooling.
no code implementations • 30 Jul 2022 • Xinpeng Ding, Jingweng Yang, Xiaowei Hu, Xiaomeng Li
We further design a new evaluation metric to evaluate the temporal stability of the video shadow detection results.
1 code implementation • 3 Jul 2022 • Shuhan LI, Xiaomeng Li, Xiaowei Xu, Kwang-Ting Cheng
Specifically, SCAN follows a dual-branch framework, where the first branch is to learn class-wise features to distinguish different skin diseases, and the second one aims to learn features which can effectively partition each class into several groups so as to preserve the sub-clustered structure within each class.
1 code implementation • 14 Jun 2022 • Yi Li, Yiduo Yu, Yiwen Zou, Tianqi Xiang, Xiaomeng Li
Existing weakly-supervised semantic segmentation methods in computer vision achieve degenerative results for gland segmentation, since the characteristics and problems of glandular datasets are different from general object datasets.
Weakly-Supervised Semantic Segmentation
1 code implementation • 19 May 2022 • Xinpeng Ding, Ziwei Liu, Xiaomeng Li
Our key insight is to distill knowledge from publicly available models trained on large generic datasets to facilitate the self-supervised learning of surgical videos.
1 code implementation • 7 May 2022 • Yiqun Lin, Huifeng Yao, Zezhong Li, Guoyan Zheng, Xiaomeng Li
Our framework leverages label distribution to encourage the network to put more effort into learning cartilage parts.
1 code implementation • 18 Apr 2022 • Xunguang Wang, Yiqun Lin, Xiaomeng Li
On the one hand, CgAT generates the worst adversarial examples as augmented data by maximizing the Hamming distance between the hash codes of the adversarial examples and the center codes.
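A PGD-style sketch of the worst-case example generation; the step sizes, iteration count, and the tanh relaxation of the Hamming distance are assumptions rather than the paper's exact procedure:

```python
import torch

def cgat_attack(model, x, center_code, eps=8/255, alpha=2/255, steps=10):
    # Maximize the relaxed Hamming distance between the adversarial hash code
    # tanh(model(x_adv)) and the precomputed center code of the original image.
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        code = torch.tanh(model(x_adv))                 # (B, K) relaxed hash code
        # Hamming distance grows as the inner product with the center shrinks.
        loss = -(code * center_code).sum(dim=1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # stay in the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```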
no code implementations • 13 Apr 2022 • Chu Han, Xipeng Pan, Lixu Yan, Huan Lin, Bingbing Li, Su Yao, Shanshan Lv, Zhenwei Shi, Jinhai Mai, Jiatai Lin, Bingchao Zhao, Zeyan Xu, Zhizhen Wang, Yumeng Wang, Yuan Zhang, Huihui Wang, Chao Zhu, Chunhui Lin, Lijian Mao, Min Wu, Luwen Duan, Jingsong Zhu, Dong Hu, Zijie Fang, Yang Chen, Yongbing Zhang, Yi Li, Yiwen Zou, Yiduo Yu, Xiaomeng Li, Haiming Li, Yanfen Cui, Guoqiang Han, Yan Xu, Jun Xu, Huihua Yang, Chunming Li, Zhenbing Liu, Cheng Lu, Xin Chen, Changhong Liang, Qingling Zhang, Zaiyi Liu
According to the technical reports of the top-tier teams, CAM is still the most popular approach in WSSS.
Data Augmentation • Weakly-Supervised Semantic Segmentation • +1
1 code implementation • CVPR 2022 • Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, Xiaomeng Li
In this paper, we present a Random Sampling Consensus Federated learning, namely RSCFed, by considering the uneven reliability among models from fully-labeled clients, fully-unlabeled clients or partially labeled clients.
1 code implementation • 16 Feb 2022 • Xinpeng Ding, Xinjian Yan, Zixun Wang, Wei Zhao, Jian Zhuang, Xiaowei Xu, Xiaomeng Li
Our study uncovers unique insights into surgical phase recognition with timestamp supervision: 1) timestamp annotation reduces annotation time by 74% compared with full annotation, and surgeons tend to place timestamps near the middle of phases; 2) extensive experiments demonstrate that our method achieves competitive results compared with fully supervised methods while reducing manual annotation cost; 3) less is more in surgical phase recognition, i.e., fewer but discriminative pseudo labels outperform full labels containing ambiguous frames; 4) the proposed UATD can be used as a plug-and-play method to clean ambiguous labels near the boundaries between phases and to improve the performance of current surgical phase recognition methods.
1 code implementation • 21 Jan 2022 • Huifeng Yao, Xiaowei Hu, Xiaomeng Li
With these augmentations as perturbations, we feed the input to a confidence-aware cross pseudo supervision network to measure the variance of pseudo labels and regularize the network to learn with more confident pseudo labels.
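A simplified sketch of confidence-aware cross pseudo supervision; here confidence is approximated by the maximum softmax probability, which is an assumption (the paper instead measures the variance of pseudo labels under the augmentation perturbations):

```python
import torch.nn.functional as F

def confidence_aware_cps(logits_a, logits_b, thresh=0.8):
    # Two networks supervise each other with hard pseudo-labels; pixels whose
    # confidence falls below `thresh` are masked out of the cross supervision.
    prob_a, prob_b = logits_a.softmax(1), logits_b.softmax(1)
    conf_a, pl_a = prob_a.max(1)   # pseudo labels and confidences from network A
    conf_b, pl_b = prob_b.max(1)
    loss_b = (F.cross_entropy(logits_b, pl_a, reduction='none') * (conf_a > thresh)).mean()
    loss_a = (F.cross_entropy(logits_a, pl_b, reduction='none') * (conf_b > thresh)).mean()
    return loss_a + loss_b
```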
1 code implementation • 14 Dec 2021 • Yi Li, Yiqun Duan, Zhanghui Kuang, Yimin Chen, Wayne Zhang, Xiaomeng Li
We therefore aim to improve WSSS from the perspective of noise mitigation.
Ranked #13 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
Saliency Detection • Weakly-Supervised Semantic Segmentation • +1
1 code implementation • 6 Dec 2021 • Jiacheng Wang, Xiaomeng Li, Yiming Han, Jing Qin, Liansheng Wang, Zhou Qichao
The SIS is proposed to operate on the image set to rebuild a region set under the guidance of structural information.
1 code implementation • 22 Nov 2021 • Xinpeng Ding, Xiaomeng Li
Automatic surgical phase recognition plays a vital role in robot-assisted surgeries.
1 code implementation • 22 Nov 2021 • Huimin Wu, Xiaomeng Li, Kwang-Ting Cheng
A stage-adaptive contrastive learning method is proposed, containing a boundary-aware contrastive loss that takes advantage of the labeled images in the first stage, as well as a prototype-aware contrastive loss to optimize both labeled and pseudo labeled images in the second stage.
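A minimal sketch of a prototype-aware contrastive term, assuming per-class prototypes maintained as running means of labeled (and pseudo-labeled) features; the boundary-aware term and the stage scheduling are omitted:

```python
import torch.nn.functional as F

def prototype_contrastive_loss(feats, labels, prototypes, tau=0.1):
    # Pull each pixel/voxel feature toward its class prototype and away from
    # the other prototypes via an InfoNCE-style objective.
    feats = F.normalize(feats, dim=-1)            # (N, D)
    prototypes = F.normalize(prototypes, dim=-1)  # (C, D)
    logits = feats @ prototypes.t() / tau         # (N, C)
    return F.cross_entropy(logits, labels)
```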
no code implementations • 12 Oct 2021 • Huifeng Yao, Ziyu Guo, Yatao Zhang, Xiaomeng Li
This paper proposes a landmark detection network for detecting sutures in endoscopic images, which solves the problem of a variable number of suture points in the images.
no code implementations • 28 Sep 2021 • Lequan Yu, Zhicheng Zhang, Xiaomeng Li, Hongyi Ren, Wei Zhao, Lei Xing
We then design a novel FBP reconstruction loss to encourage the network to generate higher-quality completion results, and a residual-learning-based image refinement module to reduce the secondary artifacts in the reconstructed CT images.
no code implementations • ICCV 2021 • Xinpeng Ding, Nannan Wang, Shiwei Zhang, De Cheng, Xiaomeng Li, Ziyuan Huang, Mingqian Tang, Xinbo Gao
The contrastive objective aims to learn effective representations by contrastive learning, while the caption objective can train a powerful video encoder supervised by texts.
no code implementations • 5 Apr 2021 • Cheng Xue, Qiao Deng, Xiaomeng Li, Qi Dou, Pheng Ann Heng
To deal with the high inter-rater variability, the study of imperfect labels has great significance in medical image segmentation tasks.
no code implementations • 5 Apr 2021 • Cheng Xue, Lei Zhu, Huazhu Fu, Xiaowei Hu, Xiaomeng Li, Hai Zhang, Pheng Ann Heng
The BD modules learn an additional breast lesion boundary map to enhance the boundary quality of the refined segmentation result.
no code implementations • 16 Sep 2020 • Lequan Yu, Zhicheng Zhang, Xiaomeng Li, Lei Xing
Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance.
1 code implementation • 21 Jul 2020 • Xiaomeng Li, Mengyu Jia, Md Tauhidul Islam, Lequan Yu, Lei Xing
The automatic diagnosis of various retinal diseases from fundus images is important to support clinical decision-making.
no code implementations • 29 May 2020 • Siyu Huang, Wensha Gou, Hongbo Cai, Xiaomeng Li, Qinghua Chen
In addition, we apply the network to reflect the purity of the trade relations among countries.
no code implementations • 26 May 2020 • Qinghua Chen, Yan Wang, Mengmeng Wang, Xiaomeng Li
In addition, we collected Chinese literature corpora for different historical periods from the Tang Dynasty to the present, and we dismantled the Chinese written language into three kinds of basic particles: characters, strokes and constructive parts.
no code implementations • 5 May 2020 • Huazhu Fu, Fei Li, Xu sun, Xingxing Cao, Jingan Liao, Jose Ignacio Orlando, Xing Tao, Yuexiang Li, Shihao Zhang, Mingkui Tan, Chenglang Yuan, Cheng Bian, Ruitao Xie, Jiongcheng Li, Xiaomeng Li, Jing Wang, Le Geng, Panming Li, Huaying Hao, Jiang Liu, Yan Kong, Yongyong Ren, Hrvoje Bogunovic, Xiulan Zhang, Yanwu Xu
To address this, we organized the Angle closure Glaucoma Evaluation challenge (AGE), held in conjunction with MICCAI 2019.
1 code implementation • 4 Nov 2019 • Xiaomeng Li, Xiao-Wei Hu, Lequan Yu, Lei Zhu, Chi-Wing Fu, Pheng-Ann Heng
In this paper, we present a novel cross-disease attention network (CANet) to jointly grade DR and DME by exploring the internal relationship between the diseases with only image-level supervision.
7 code implementations • 16 Jul 2019 • Lequan Yu, Shujun Wang, Xiaomeng Li, Chi-Wing Fu, Pheng-Ann Heng
We design a novel uncertainty-aware scheme to enable the student model to gradually learn from the meaningful and reliable targets by exploiting the uncertainty information.
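The uncertainty-aware scheme can be sketched as masking the teacher-student consistency loss by the teacher's predictive entropy, estimated from several stochastic forward passes; the threshold and normalization below are assumptions for illustration:

```python
import torch

def uncertainty_masked_consistency(student_prob, teacher_probs, threshold):
    # teacher_probs: (T, B, C, ...) stacks T stochastic (e.g. dropout) passes.
    mean_prob = teacher_probs.mean(0)                                 # (B, C, ...)
    uncertainty = -(mean_prob * torch.log(mean_prob + 1e-6)).sum(1)   # predictive entropy
    mask = (uncertainty < threshold).float().unsqueeze(1)             # keep reliable voxels
    mse = (student_prob - mean_prob.detach()) ** 2
    return (mask * mse).sum() / (mask.sum() * student_prob.shape[1] + 1e-6)
```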
no code implementations • 6 Jul 2019 • Xiaomeng Li, Lequan Yu, Chi-Wing Fu, Meng Fang, Pheng-Ann Heng
However, the importance of feature embedding, i.e., exploring the relationship among training samples, is neglected.
no code implementations • 30 Jun 2019 • Xiaomeng Li, Lequan Yu, Yueming Jin, Chi-Wing Fu, Lei Xing, Pheng-Ann Heng
Rare diseases lie in extremely low-data regimes, unlike common diseases with large amounts of available labeled data.
no code implementations • 28 Feb 2019 • Xiaomeng Li, Lequan Yu, Hao Chen, Chi-Wing Fu, Lei Xing, Pheng-Ann Heng
In this paper, we present a novel semi-supervised method for medical image segmentation, where the network is optimized by the weighted combination of a common supervised loss for labeled inputs only and a regularization loss for both labeled and unlabeled data.
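A minimal sketch of the weighted combination; the Gaussian ramp-up schedule for the regularization weight is a common choice assumed here, not necessarily the paper's exact schedule:

```python
import math

def semi_supervised_loss(sup_loss, reg_loss, epoch, max_epoch, w_max=1.0):
    # Supervised loss on labeled data plus a ramped-up regularization
    # (consistency) loss on both labeled and unlabeled data.
    ramp = w_max * math.exp(-5.0 * (1.0 - min(epoch / max_epoch, 1.0)) ** 2)
    return sup_loss + ramp * reg_loss
```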
6 code implementations • 13 Jan 2019 • Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, Fabian Lohöfer, Julian Walter Holch, Wieland Sommer, Felix Hofmann, Alexandre Hostettler, Naama Lev-Cohain, Michal Drozdzal, Michal Marianne Amitai, Refael Vivantik, Jacob Sosna, Ivan Ezhov, Anjany Sekuboyina, Fernando Navarro, Florian Kofler, Johannes C. Paetzold, Suprosanna Shit, Xiaobin Hu, Jana Lipková, Markus Rempfler, Marie Piraud, Jan Kirschke, Benedikt Wiestler, Zhiheng Zhang, Christian Hülsemeyer, Marcel Beetz, Florian Ettlinger, Michela Antonelli, Woong Bae, Míriam Bellver, Lei Bi, Hao Chen, Grzegorz Chlebus, Erik B. Dam, Qi Dou, Chi-Wing Fu, Bogdan Georgescu, Xavier Giró-i-Nieto, Felix Gruen, Xu Han, Pheng-Ann Heng, Jürgen Hesser, Jan Hendrik Moltz, Christian Igel, Fabian Isensee, Paul Jäger, Fucang Jia, Krishna Chaitanya Kaluva, Mahendra Khened, Ildoo Kim, Jae-Hun Kim, Sungwoong Kim, Simon Kohl, Tomasz Konopczynski, Avinash Kori, Ganapathy Krishnamurthi, Fan Li, Hongchao Li, Junbo Li, Xiaomeng Li, John Lowengrub, Jun Ma, Klaus Maier-Hein, Kevis-Kokitsi Maninis, Hans Meine, Dorit Merhof, Akshay Pai, Mathias Perslev, Jens Petersen, Jordi Pont-Tuset, Jin Qi, Xiaojuan Qi, Oliver Rippel, Karsten Roth, Ignacio Sarasua, Andrea Schenk, Zengming Shen, Jordi Torres, Christian Wachinger, Chunliang Wang, Leon Weninger, Jianrong Wu, Daguang Xu, Xiaoping Yang, Simon Chun-Ho Yu, Yading Yuan, Miao Yu, Liping Zhang, Jorge Cardoso, Spyridon Bakas, Rickmer Braren, Volker Heinemann, Christopher Pal, An Tang, Samuel Kadoury, Luc Soler, Bram van Ginneken, Hayit Greenspan, Leo Joskowicz, Bjoern Menze
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018.
no code implementations • 12 Aug 2018 • Xiaomeng Li, Lequan Yu, Hao Chen, Chi-Wing Fu, Pheng-Ann Heng
In this paper, we present a novel semi-supervised method for skin lesion segmentation, where the network is optimized by the weighted combination of a common supervised loss for labeled inputs only and a regularization loss for both labeled and unlabeled data.
1 code implementation • 8 Jul 2018 • Xiaomeng Li, Lequan Yu, Chi-Wing Fu, Pheng-Ann Heng
Our best model achieves 77.23% (JA) on the test dataset, outperforming the state-of-the-art challenge methods and further demonstrating the effectiveness of our proposed deeply supervised rotation-equivariant segmentation network.
1 code implementation • 21 Sep 2017 • Xiaomeng Li, Hao Chen, Xiaojuan Qi, Qi Dou, Chi-Wing Fu, Pheng Ann Heng
Our method outperformed other state-of-the-art methods on tumor segmentation and achieved very competitive performance for liver segmentation, even with a single model.
Ranked #1 on Liver Segmentation on LiTS2017 (Dice metric)
Automatic Liver And Tumor Segmentation • Image Segmentation • +3