no code implementations • 24 Feb 2025 • ShiJie Lin, Boxiang Yun, Wei Shen, Qingli Li, Anqiang Yang, Yan Wang
Medical Hyperspectral Imaging (MHSI) offers potential for computational pathology and precision medicine.
no code implementations • 4 Dec 2024 • Qing Zhang, Hang Guo, Siyuan Yang, Qingli Li, Yan Wang
Given that multiple cell types exist across various organs, with subtle differences in cell size and shape, multi-organ, multi-class cell segmentation is particularly challenging.
no code implementations • 1 Dec 2024 • Xiaoxiang Han, Yiman Liu, Jiang Shang, Qingli Li, Jiangang Chen, Menghan Hu, Qi Zhang, Yuqi Zhang, Yan Wang
The first step is called reconstruction reflection.
no code implementations • 21 Oct 2024 • Ming Li, Wei Shen, Qingli Li, Yan Wang
The fundamental idea of label filling is to supervise the segmentation model by a subset of pixels with trustworthy labels, meanwhile filling labels of other pixels by mixed supervision.
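The label-filling idea above can be pictured as a masked loss: pixels flagged as trustworthy keep their given labels, the remaining pixels are filled with pseudo-labels, and one cross-entropy is computed over the filled label map. This is a minimal toy sketch, not the paper's actual formulation; the function name, the `eps` constant, and the source of the pseudo-labels are all assumptions.

```python
import numpy as np

def label_filling_loss(probs, labels, pseudo_labels, trust_mask, eps=1e-8):
    """Toy sketch of 'label filling': trusted pixels keep their given labels,
    all other pixels are filled with pseudo-labels, and a single cross-entropy
    is computed over the filled label map.
    probs: (H, W, C) softmax outputs; labels/pseudo_labels: (H, W) ints;
    trust_mask: (H, W) bool."""
    h, w, _ = probs.shape
    filled = np.where(trust_mask, labels, pseudo_labels)  # fill untrusted pixels
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], filled]
    return float(-np.log(picked + eps).mean())
```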
no code implementations • 8 Oct 2024 • Siwei Xia, Xueqi Hu, Li Sun, Qingli Li
In StyleGAN, convolution kernels are shaped by both static parameters shared across images and dynamic modulation factors $w^+\in\mathcal{W}^+$ specific to each image.
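The interplay of static parameters and per-image modulation can be sketched with StyleGAN2-style weight modulation/demodulation, where a style vector scales the shared kernel per input channel and the result is renormalized per output channel. Shapes and the epsilon below are illustrative.

```python
import numpy as np

def modulate_demodulate(weight, style, eps=1e-8):
    """StyleGAN2-style kernel modulation: a shared static kernel
    weight (out_c, in_c, k, k) is scaled per input channel by the per-image
    style vector (in_c,), then demodulated so every output channel of the
    resulting dynamic kernel has (approximately) unit L2 norm."""
    w = weight * style[None, :, None, None]                   # modulation
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)
    return w * demod[:, None, None, None]                     # demodulation
```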
no code implementations • 19 Aug 2024 • Yadong Lu, Shitian Zhao, Boxiang Yun, Dongsheng Jiang, Yin Li, Qingli Li, Yan Wang
Despite recent progress in enhancing the efficacy of Open-Domain Continual Learning (ODCL) in Vision-Language Models (VLMs), existing methods fail to (1) correctly identify the Task-ID of a test image and (2) use only the category set corresponding to that Task-ID while preserving the knowledge related to each domain. As a result, they cannot address the two primary challenges of ODCL: forgetting old knowledge while maintaining zero-shot capabilities, and the confusion caused by category-relatedness between domains.

2 code implementations • 24 Jul 2024 • Xintian Mao, Jiansheng Wang, Xingran Xie, Qingli Li, Yan Wang
Due to the computational complexity of self-attention (SA), prevalent techniques for image deblurring often resort to either adopting localized SA or employing coarse-grained global SA methods, both of which exhibit drawbacks such as compromising global modeling or lacking fine-grained correlation.
Ranked #3 on Image Deblurring on GoPro
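The "localized SA" trade-off mentioned above can be illustrated with windowed attention: scaled dot-product attention is computed independently inside non-overlapping windows, which cuts cost but sacrifices global modeling. A minimal sketch, assuming H and W divisible by the window size and using features directly as queries/keys/values (no learned projections):

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def windowed_self_attention(x, win):
    """Localized SA: scaled dot-product attention computed independently
    inside non-overlapping win x win windows, reducing the cost from
    O((HW)^2) to O(HW * win^2). Assumes H and W are divisible by win."""
    H, W, C = x.shape
    out = np.empty_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            tokens = x[i:i + win, j:j + win].reshape(-1, C)
            attn = _softmax(tokens @ tokens.T / np.sqrt(C))
            out[i:i + win, j:j + win] = (attn @ tokens).reshape(win, win, C)
    return out
```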
no code implementations • 11 Jul 2024 • Shaojie Guo, Haofei Song, Qingli Li, Yan Wang
Unlike existing dataset-free BISR methods that focus on obtaining a degradation kernel for the entire image, we are the first to explicitly design a spatially-variant degradation model for each pixel.
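A spatially-variant degradation model can be sketched as one kernel per pixel instead of one kernel per image; the toy below applies a per-pixel k x k kernel to a zero-padded neighborhood. The function name and padding choice are assumptions, not the paper's model.

```python
import numpy as np

def spatially_variant_degrade(img, kernels):
    """Toy spatially-variant degradation: every output pixel (i, j) is
    produced by its own k x k kernel kernels[i, j] applied to a zero-padded
    neighborhood, instead of a single global kernel for the whole image.
    img: (H, W); kernels: (H, W, k, k)."""
    H, W = img.shape
    k = kernels.shape[-1]
    pad = np.pad(img, k // 2)
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = (pad[i:i + k, j:j + k] * kernels[i, j]).sum()
    return out
```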
no code implementations • 26 Jun 2024 • Ling Zhang, Boxiang Yun, Xingran Xie, Qingli Li, Xinxing Li, Yan Wang
Experimental results on two colorectal cancer datasets show the superiority of our method, achieving 91.49% AUC for MSI classification.
4 code implementations • CVPR 2024 • Xintian Mao, Qingli Li, Yan Wang
Despite the recent progress in enhancing the efficacy of image deblurring, the limited decoding capability constrains the upper limit of State-Of-The-Art (SOTA) methods.
Ranked #1 on Image Deblurring on GoPro
no code implementations • 9 Jun 2024 • Shengjian Wu, Li Sun, Qingli Li
First, the matching relation between object queries and ground truth (GT) boxes in the teacher is employed to guide the student, so queries within the student are not only assigned labels based on their own predictions, but also refer to the matching results from the teacher.
no code implementations • 5 May 2024 • Haofei Song, Xintian Mao, Jing Yu, Qingli Li, Yan Wang
Based on this observation, we propose an Inter-Intra-slice Interpolation Network (I$^3$Net), which fully explores information from high in-plane resolution and compensates for low through-plane resolution.
no code implementations • 25 Oct 2023 • Yuejun Jiao, Song Qiu, Mingsong Chen, Dingding Han, Qingli Li, Yue Lu
Finally, the nodes and similarity adjacency matrices are fed into graph networks to extract more discriminative features for vehicle Re-ID.
no code implementations • 6 Sep 2023 • Ting Jin, Xingran Xie, Renjie Wan, Qingli Li, Yan Wang
Histological analysis of the tumor microenvironment, integrated with genomic assays, is the gold standard for most cancers in modern medicine.
no code implementations • 6 Jun 2023 • Yiman Liu, Qiming Huang, Xiaoxiang Han, Tongtong Liang, Zhifang Zhang, Lijun Chen, Jinfeng Wang, Angelos Stefanidis, Jionglong Su, Jiangang Chen, Qingli Li, Yuqi Zhang
In addition, data from 30 pediatric patients (15 positive and 15 negative) were collected for clinician testing and compared with our model's test results (these 30 samples were not used in model training).
2 code implementations • CVPR 2023 • Yunhao Bai, Duowen Chen, Qingli Li, Wei Shen, Yan Wang
In semi-supervised medical image segmentation, there is an empirical mismatch between the labeled and unlabeled data distributions.
Image Segmentation · Semi-supervised Medical Image Segmentation · +1
no code implementations • 27 Feb 2023 • Yiman Liu, Xiaoxiang Han, Tongtong Liang, Bin Dong, Jiajun Yuan, Menghan Hu, Qiaohong Liu, Jiangang Chen, Qingli Li, Yuqi Zhang
The EDMAE encoder is composed of a teacher and a student encoder.
no code implementations • CVPR 2023 • Ming Li, Qingli Li, Yan Wang
The second key element is that we design class balanced adaptive thresholds via considering the empirical distribution of all training data in local clients, to encourage a balanced training process.
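The paper's exact thresholding rule is not reproduced here; as a purely hypothetical illustration of class-balanced adaptive thresholds, one could scale a base confidence threshold by each class's empirical frequency in the local client and clip it from below, so rare classes still receive pseudo-labels:

```python
import numpy as np

def class_balanced_thresholds(class_counts, base_tau=0.95, floor=0.5):
    """Hypothetical rule: frequent classes keep a high pseudo-label
    threshold, rare classes get a lower one (down to `floor`) so they are
    not starved during training."""
    freq = np.asarray(class_counts, dtype=float)
    freq /= freq.sum()
    return np.maximum(base_tau * freq / freq.max(), floor)
```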
1 code implementation • CVPR 2023 • Duowen Chen, Yunhao Bai, Wei Shen, Qingli Li, Lequan Yu, Yan Wang
Our strategy encourages unlabeled images to learn organ semantics in relative locations from the labeled images (cross-branch) and enhances the learning ability for small organs (within-branch).
2 code implementations • 25 Nov 2022 • Siyuan Li, Li Sun, Qingli Li
The key idea is to fully exploit the cross-modal description ability in CLIP through a set of learnable text tokens for each ID and give them to the text encoder to form ambiguous descriptions.
Ranked #1 on Person Re-Identification on MSMT17
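At a shape level, the per-ID learnable text tokens can be pictured as an embedding table whose rows are spliced into a fixed template (prefix/suffix embeddings) before the text encoder. All dimensions, names, and the prefix/suffix split below are illustrative assumptions, not CLIP's actual tokenization.

```python
import numpy as np

class IDPromptBank:
    """One set of learnable token embeddings per identity; each is spliced
    into a fixed template (prefix/suffix embeddings) to form that identity's
    text-encoder input sequence."""

    def __init__(self, num_ids, num_tokens, dim, seed=0):
        rng = np.random.default_rng(seed)
        # small random init, as is typical for learnable prompt tokens
        self.tokens = rng.normal(scale=0.02, size=(num_ids, num_tokens, dim))

    def prompt(self, identity, prefix, suffix):
        return np.concatenate([prefix, self.tokens[identity], suffix], axis=0)
```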
2 code implementations • 21 Sep 2022 • Jing Zhao, Shengjian Wu, Li Sun, Qingli Li
Without densely tiled anchor boxes or grid points in the image, Sparse R-CNN achieves promising results through a set of object queries and proposal boxes updated in a cascaded training manner.
no code implementations • 19 Sep 2022 • Xingran Xie, Yan Wang, Qingli Li
More concretely, we propose to learn a set of linear coefficients that can be used to represent one band by the remaining bands via masking out these bands.
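As a linear-algebra illustration of "represent one band by the remaining bands" (not the paper's learned masked-modeling objective), the coefficients can be obtained by least squares over pixels:

```python
import numpy as np

def band_coefficients(pixels, b):
    """Least-squares coefficients expressing band b of a hyperspectral cube
    as a linear combination of the remaining bands, treating pixels as
    samples. pixels: (n_pixels, n_bands) -> coefficients, length n_bands-1."""
    rest = np.delete(pixels, b, axis=1)   # mask out the target band
    coef, *_ = np.linalg.lstsq(rest, pixels[:, b], rcond=None)
    return coef
```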
1 code implementation • 1 Aug 2022 • Xinyue Zhou, Mingyu Yin, Xinyuan Chen, Li Sun, Changxin Gao, Qingli Li
In this paper, we propose a cross-attention-based style distribution module that computes attention between the source semantic styles and the target pose for pose transfer.
2 code implementations • CVPR 2022 • Xueqi Hu, Xinyue Zhou, Qiusheng Huang, Zhengyi Shi, Li Sun, Qingli Li
By constraining features from the same location to be closer than those from different ones, it implicitly ensures the result to take content from the source.
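Constraining same-location features to be closer than different-location ones is the shape of a patchwise InfoNCE loss; the sketch below assumes L2-normalized feature rows and may differ from the paper's exact objective.

```python
import numpy as np

def patchwise_contrastive_loss(src, out, tau=0.07):
    """InfoNCE over spatial locations: the output feature at location i is
    pulled toward the source feature at the same location (positive, on the
    diagonal) and pushed away from other locations (negatives).
    src, out: (N, C) with L2-normalized rows."""
    logits = out @ src.T / tau
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_prob).mean())               # positives on diagonal
```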
1 code implementation • CVPR 2022 • Xueqi Hu, Qiusheng Huang, Zhengyi Shi, Siyuan Li, Changxin Gao, Li Sun, Qingli Li
Existing GAN inversion methods fail to provide latent codes for reliable reconstruction and flexible editing simultaneously.
5 code implementations • 23 Nov 2021 • Xintian Mao, Yiming Liu, Fengze Liu, Qingli Li, Wei Shen, Yan Wang
Blur is naturally analyzed in the frequency domain, where deblurring amounts to estimating the latent sharp image and the blur kernel given a blurry image.
Ranked #5 on Deblurring on RealBlur-R (trained on GoPro)
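For context, the classic frequency-domain formulation referenced above is Wiener deconvolution (the textbook baseline, not this paper's network), which assumes a known kernel and circular boundary conditions:

```python
import numpy as np

def wiener_deblur(blurry, kernel, nsr=1e-3):
    """Wiener deconvolution: with blur Y = K * X in the frequency domain,
    estimate X_hat = Y conj(K) / (|K|^2 + NSR), where NSR is the assumed
    noise-to-signal ratio. Assumes circular (FFT) boundary conditions."""
    K = np.fft.fft2(kernel, s=blurry.shape)
    Y = np.fft.fft2(blurry)
    X = Y * np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))
```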
1 code implementation • ICCV 2021 • Qiusheng Huang, Zhilin Zheng, Xueqi Hu, Li Sun, Qingli Li
The two types of synthesis, either label- or reference-based, have substantial differences.
no code implementations • 11 Oct 2021 • Qiusheng Huang, Xueqi Hu, Li Sun, Qingli Li
Image-to-image (I2I) translation is usually carried out among discrete domains.
no code implementations • 27 Jul 2021 • Hang Liu, Menghan Hu, Yuzhen Chen, Qingli Li, Guangtao Zhai, Simon X. Yang, Xiao-Ping Zhang, Xiaokang Yang
This work demonstrates that it is practicable for blind people to feel the world through the brush in their hands.
1 code implementation • 5 Mar 2021 • Boxiang Yun, Yan Wang, Jieneng Chen, Huiyu Wang, Wei Shen, Qingli Li
Hyperspectral imaging (HSI) unlocks huge potential for a wide variety of applications that rely on high-precision pathology image segmentation, such as computational pathology and precision medicine.
2 code implementations • CVPR 2021 • Mingyu Yin, Li Sun, Qingli Li
View synthesis is usually done by an autoencoder, in which the encoder maps a source view image into a latent content code, and the decoder transforms it into a target view image according to the condition.
no code implementations • 20 Oct 2020 • Chang Yao, Jingyu Tang, Menghan Hu, Yue Wu, Wenyi Guo, Qingli Li, Xiao-Ping Zhang
Sturge-Weber syndrome (SWS) is a vascular malformation disease, and it may cause blindness if the patient's condition is severe.
no code implementations • 20 Oct 2020 • Yunlu Wang, Cheng Yang, Menghan Hu, Jian Zhang, Qingli Li, Guangtao Zhai, Xiao-Ping Zhang
This paper presents an unobtrusive solution that can automatically identify deep breathing when a person walks past the global depth camera.
no code implementations • 9 Oct 2020 • Yuzhen Chen, Menghan Hu, Chunjun Hua, Guangtao Zhai, Jian Zhang, Qingli Li, Simon X. Yang
To address the problem that users do not know which service stage a mask belongs to, we propose a detection system based on the mobile phone.
1 code implementation • ECCV 2020 • Mingyu Yin, Li Sun, Qingli Li
Novel view synthesis often needs the paired data from both the source and target views.
no code implementations • 12 Feb 2020 • Yunlu Wang, Menghan Hu, Qingli Li, Xiao-Ping Zhang, Guangtao Zhai, Nan Yao
During the epidemic prevention and control period, our study can be helpful in prognosis, diagnosis, and screening of patients infected with COVID-19 (the novel coronavirus) based on breathing characteristics.
no code implementations • 29 Oct 2019 • Ziye Zhang, Li Sun, Zhilin Zheng, Qingli Li
Depending on whether the label is related with the spatial structure, the output $z_s$ from the condition mapping network is used either as a style code or a spatial structure code.