1 code implementation • 10 Sep 2021 • Wenbin Li, Ziyi Wang, Xuesong Yang, Chuanqi Dong, Pinzhuo Tian, Tiexin Qin, Jing Huo, Yinghuan Shi, Lei Wang, Yang Gao, Jiebo Luo
Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmarks with various backbone architectures, examining common pitfalls and the effects of different training tricks.
1 code implementation • CVPR 2023 • Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version.
Semi-supervised Change Detection, Semi-supervised Medical Image Segmentation, +1
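The weak-to-strong consistency idea described in this entry can be sketched in a few lines. The following is a hedged NumPy illustration of a FixMatch-style loss, not the paper's implementation; the threshold value and toy logits are assumptions for demonstration.

```python
# Minimal sketch of weak-to-strong consistency (FixMatch-style), in NumPy.
# The threshold tau and the toy logits below are illustrative assumptions.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(weak_logits, strong_logits, tau=0.95):
    """Cross-entropy on strongly perturbed views, supervised by confident
    pseudo labels taken from the weakly perturbed views."""
    probs = softmax(weak_logits)
    conf = probs.max(axis=1)        # confidence of each pseudo label
    pseudo = probs.argmax(axis=1)   # hard pseudo label from the weak view
    mask = conf >= tau              # keep only confident samples
    if not mask.any():
        return 0.0
    log_p = np.log(softmax(strong_logits)[mask, pseudo[mask]] + 1e-12)
    return float(-log_p.mean())

weak = np.array([[5.0, 0.0], [0.4, 0.6]])    # only the first sample is confident
strong = np.array([[2.0, 1.0], [0.0, 0.0]])
loss = consistency_loss(weak, strong)
```

In this sketch, the second sample falls below the confidence threshold and contributes nothing to the loss, mirroring how such frameworks discard uncertain predictions.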
1 code implementation • CVPR 2022 • Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao
In this work, we first construct a strong baseline of self-training (namely ST) for semi-supervised semantic segmentation via injecting strong data augmentations (SDA) on unlabeled images to alleviate overfitting noisy labels as well as decouple similar predictions between the teacher and student.
1 code implementation • NeurIPS 2023 • Lihe Yang, Xiaogang Xu, Bingyi Kang, Yinghuan Shi, Hengshuang Zhao
Then, we investigate the role of synthetic images by joint training with real images, or pre-training for real images.
1 code implementation • CVPR 2022 • Ziqi Zhou, Lei Qi, Xin Yang, Dong Ni, Yinghuan Shi
For medical image segmentation, imagine a model trained only on MR images from a source domain: how well would it perform when directly segmenting CT images in a target domain?
3 code implementations • 27 Mar 2022 • Yue Duan, Zhen Zhao, Lei Qi, Lei Wang, Luping Zhou, Yinghuan Shi, Yang Gao
The core issue in semi-supervised learning (SSL) lies in how to effectively leverage unlabeled data, yet most existing methods place great emphasis on utilizing high-confidence samples while seldom fully exploring low-confidence ones.
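The high-/low-confidence distinction this entry draws can be made concrete with a small sketch; the threshold and toy probabilities below are illustrative assumptions, not values from the paper.

```python
# Illustrative split of unlabeled predictions into high- and low-confidence
# groups; tau and the toy probability rows are assumptions for demonstration.
import numpy as np

def split_by_confidence(probs, tau=0.9):
    """Return indices of samples whose top class probability is at least tau
    (high confidence) and of the remaining samples (low confidence)."""
    probs = np.asarray(probs, dtype=float)
    conf = probs.max(axis=1)
    high = np.flatnonzero(conf >= tau)
    low = np.flatnonzero(conf < tau)
    return high, low

probs = np.array([[0.95, 0.05], [0.6, 0.4], [0.2, 0.8]])
high, low = split_by_confidence(probs)
```

Methods that "put a great emphasis on high-confidence samples" typically train only on the `high` indices; the entry's point is that the `low` group also carries usable signal.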
3 code implementations • 9 Aug 2022 • Yue Duan, Lei Qi, Lei Wang, Luping Zhou, Yinghuan Shi
In this work, we propose Reciprocal Distribution Alignment (RDA) to address semi-supervised learning (SSL): a hyperparameter-free framework that is independent of confidence thresholds and works with both matched (conventional) and mismatched class distributions.
2 code implementations • ICCV 2023 • Yue Duan, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
Semi-supervised learning (SSL) tackles the missing-label problem by enabling the effective usage of unlabeled data.
1 code implementation • ICCV 2021 • Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao
Our method aims to alleviate this problem and enhance the feature embedding on latent novel classes.
Ranked #41 on Few-Shot Semantic Segmentation on PASCAL-5i (5-Shot)
1 code implementation • CVPR 2023 • Heng Cai, Shumeng Li, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
Subsequently, by introducing unlabeled volumes, we propose a dual-network paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo labels in early stage and sparse labels in later stage and meanwhile forces consistent output of two networks.
1 code implementation • ICCV 2023 • Lihe Yang, Zhen Zhao, Lei Qi, Yu Qiao, Yinghuan Shi, Hengshuang Zhao
To mitigate potentially incorrect pseudo labels, recent frameworks mostly set a fixed confidence threshold to discard uncertain samples.
1 code implementation • ICCV 2023 • Zekun Li, Lei Qi, Yinghuan Shi, Yang Gao
Semi-supervised learning (SSL) aims to leverage massive unlabeled data when labels are expensive to obtain.
1 code implementation • ICCV 2021 • Jing Huo, Shiyin Jin, Wenbin Li, Jing Wu, Yu-Kun Lai, Yinghuan Shi, Yang Gao
In this paper, we make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
1 code implementation • CVPR 2023 • Jintao Guo, Na Wang, Lei Qi, Yinghuan Shi
However, the local operation of the convolution kernel makes the model focus too much on local representations (e.g., texture), which inherently makes the model more prone to overfitting to the source domains and hampers its generalization ability.
1 code implementation • ICCV 2023 • Jintao Guo, Lei Qi, Yinghuan Shi
Deep Neural Networks have exhibited considerable success in various visual tasks.
1 code implementation • 17 Oct 2021 • Yinghuan Shi, Jian Zhang, Tong Ling, Jiwen Lu, Yefeng Zheng, Qian Yu, Lei Qi, Yang Gao
In semi-supervised medical image segmentation, most previous works draw on the common assumption that higher entropy means higher uncertainty.
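The entropy-as-uncertainty assumption this entry revisits can be stated in a few lines of NumPy; the probability vectors below are toy values, not from the paper.

```python
# Shannon entropy of a per-pixel class distribution, the common uncertainty
# proxy this work questions. The example distributions are illustrative.
import numpy as np

def entropy(probs, eps=1e-12):
    """Higher entropy = more uncertain, under the conventional assumption."""
    probs = np.asarray(probs, dtype=float)
    return float(-(probs * np.log(probs + eps)).sum())

confident = entropy([0.98, 0.01, 0.01])   # sharply peaked prediction
uncertain = entropy([0.34, 0.33, 0.33])   # near-uniform prediction
```

Under this assumption the near-uniform prediction is treated as less trustworthy; the entry argues this common equation of entropy with uncertainty does not always hold in semi-supervised medical image segmentation.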
1 code implementation • ICCV 2023 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
To deal with the domain shift between training and test samples, current methods have primarily focused on learning generalizable features during training and ignore the specificity of unseen samples that are also critical during the test.
1 code implementation • 8 Aug 2022 • Ziqi Zhou, Lei Qi, Yinghuan Shi
We demonstrate the performance of our method on two public generalizable segmentation benchmarks for medical images, validating that our method achieves state-of-the-art performance.
1 code implementation • 27 Mar 2020 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Semantic segmentation in a supervised learning manner has achieved significant progress in recent years.
1 code implementation • 13 Apr 2020 • Tiexin Qin, Wenbin Li, Yinghuan Shi, Yang Gao
Importantly, we highlight the value and importance of the distribution diversity in the augmentation-based pretext few-shot tasks, which can effectively alleviate the overfitting problem and make the few-shot model learn more robust feature representations.
Data Augmentation, Unsupervised Few-Shot Image Classification, +1
1 code implementation • 27 Apr 2018 • Jinquan Sun, Yinghuan Shi, Yang Gao, Lei Wang, Luping Zhou, Wanqi Yang, Dinggang Shen
In this paper, we present a novel method for interactive medical image segmentation with the following merits.
1 code implementation • 30 Jul 2023 • Heng Cai, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
1 code implementation • ICCV 2023 • Xiran Wang, Jian Zhang, Lei Qi, Yinghuan Shi
Domain generalization (DG) is proposed to deal with the issue of domain shift, which occurs when statistical differences exist between source and target domains.
1 code implementation • ICCV 2023 • Guan Gui, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
Sample adaptive augmentation (SAA) is proposed for this stated purpose and consists of two modules: 1) sample selection module; 2) sample augmentation module.
1 code implementation • 24 Jul 2021 • Qian Yu, Lei Qi, Luping Zhou, Lei Wang, Yilong Yin, Yinghuan Shi, Wuzhang Wang, Yang Gao
Together, the above two schemes give rise to a novel double-branch encoder segmentation framework for medical image segmentation, namely Crosslink-Net.
1 code implementation • 23 Dec 2021 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Beyond the training stage, overfitting could also cause unstable prediction in the test stage.
1 code implementation • 18 Oct 2019 • Yinghuan Shi, Tiexin Qin, Yong Liu, Jiwen Lu, Yang Gao, Dinggang Shen
By introducing a unified optimization goal, DeepAugNet combines data augmentation and deep model training in an end-to-end manner, realized by simultaneously training a hybrid architecture of a dueling deep Q-learning algorithm and a surrogate deep model.
1 code implementation • 6 Apr 2020 • Feng Shi, Jun Wang, Jun Shi, Ziyan Wu, Qian Wang, Zhenyu Tang, Kelei He, Yinghuan Shi, Dinggang Shen
In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up.
1 code implementation • IEEE Transactions on Medical Imaging 2022 • Shumeng Li, Heng Cai, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
In this paper, by introducing an extremely sparse annotation way of labeling only one slice per 3D image, we investigate a novel barely-supervised segmentation setting with only a few sparsely-labeled images along with a large amount of unlabeled images.
1 code implementation • 30 Nov 2021 • Lei Qi, Jiaqi Liu, Lei Wang, Yinghuan Shi, Xin Geng
The significance of our work lies in showing the potential of unsupervised domain generalization for person ReID and setting a strong baseline for further research on this topic.
1 code implementation • 19 Dec 2023 • Yue Duan, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
While semi-supervised learning (SSL) has yielded promising results, a more realistic SSL scenario remains to be explored, in which the unlabeled data exhibit extremely high recognition difficulty, e.g., fine-grained visual classification in the context of SSL (SS-FGVC).
1 code implementation • 7 Dec 2021 • Jintao Guo, Lei Qi, Yinghuan Shi, Yang Gao
Particularly, the proposed method can generate a variety of data variants to better deal with the overfitting issue.
1 code implementation • 17 Mar 2024 • Shumeng Li, Lei Qi, Qian Yu, Jing Huo, Yinghuan Shi, Yang Gao
Segment Anything Model (SAM) fine-tuning has shown remarkable performance in medical image segmentation in a fully supervised manner, but requires precise annotations.
1 code implementation • 18 Mar 2024 • Jintao Guo, Lei Qi, Yinghuan Shi, Yang Gao
In this paper, we study the impact of prior CNN-based augmentation methods on token-based models, revealing their performance is suboptimal due to the lack of incentivizing the model to learn holistic shape information.
1 code implementation • 13 Apr 2024 • Qinghe Ma, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
To fully utilize the information within the intermediate domain, we propose a symmetric Guidance training strategy (SymGD), which additionally offers direct guidance to unlabeled data by merging pseudo labels from intermediate samples.
1 code implementation • 15 May 2018 • Wenbin Li, Yanfang Liu, Jing Huo, Yinghuan Shi, Yang Gao, Lei Wang, Jiebo Luo
Furthermore, by learning progressively and nonlinearly, MLOML has a stronger learning ability than traditional online metric learning when available training data are limited.
1 code implementation • 28 Dec 2023 • Taicai Chen, Yue Duan, Dong Li, Lei Qi, Yinghuan Shi, Yang Gao
Based on this technique, we assign appropriate training weights to unlabeled data to enhance the construction of a discriminative latent space.
1 code implementation • 11 Jan 2024 • Na Wang, Lei Qi, Jintao Guo, Yinghuan Shi, Yang Gao
2) From the feature perspective, the simple Tail Interaction module implicitly enhances potential correlations among all samples from all source domains, facilitating the acquisition of domain-invariant representations across multiple domains for the model.
no code implementations • 27 Apr 2018 • Qian Yu, Yinghuan Shi, Jinquan Sun, Yang Gao, Yakang Dai, Jianbing Zhu
Due to the irregular motion, similar appearance and diverse shape, accurate segmentation of kidney tumor in CT images is a difficult and challenging task.
no code implementations • 11 Apr 2018 • Lei Qi, Jing Huo, Lei Wang, Yinghuan Shi, Yang Gao
Lastly, considering person retrieval is a special image retrieval task, we propose a novel ranking loss to optimize the whole network.
no code implementations • 18 Mar 2018 • Juanying Xie, Qi Hou, Yinghuan Shi, Lv Peng, Liping Jing, Fuzhen Zhuang, Junping Zhang, Xiaoyang Tang, Shengquan Xu
We remove species with only one living-environment image from the dataset, then partition the remaining living-environment images into two subsets: one used as the test subset, and the other as the training subset, combined either with all standard-pattern butterfly images or with the standard-pattern butterfly images of the same species as the living-environment images.
no code implementations • 9 Mar 2017 • Jing Huo, Wenbin Li, Yinghuan Shi, Yang Gao, Hujun Yin
In this paper, a new caricature dataset is built, with the objective to facilitate research in caricature recognition.
no code implementations • 29 Sep 2016 • Wenbin Li, Yang Gao, Lei Wang, Luping Zhou, Jing Huo, Yinghuan Shi
To achieve a low computational cost when performing online metric learning for large-scale data, we present a one-pass closed-form solution namely OPML in this paper.
no code implementations • CVPR 2013 • Yinghuan Shi, Shu Liao, Yaozong Gao, Daoqiang Zhang, Yang Gao, Dinggang Shen
Specifically, to segment the prostate in the current treatment image, the physician first takes a few seconds to manually specify the first and last slices of the prostate in the image space.
no code implementations • CVPR 2014 • Yinghuan Shi, Heung-Il Suk, Yang Gao, Dinggang Shen
Therefore, it is natural to hypothesize that the low-level features extracted from neuroimaging data are related to each other in some ways.
no code implementations • CVPR 2017 • Luping Zhou, Lei Wang, Jianjia Zhang, Yinghuan Shi, Yang Gao
The proposed method has been tested on multiple SPD-based visual representation data sets used in the literature, and the results demonstrate its interesting properties and attractive performance.
no code implementations • ICCV 2019 • Lei Qi, Lei Wang, Jing Huo, Luping Zhou, Yinghuan Shi, Yang Gao
For the first issue, we highlight the presence of camera-level sub-domains as a unique characteristic of person Re-ID, and develop camera-aware domain adaptation to reduce the discrepancy not only between source and target domains but also across these sub-domains.
Ranked #19 on Unsupervised Domain Adaptation on Market to Duke
no code implementations • 2 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Xin Geng, Yang Gao
To achieve the camera alignment, we develop a Multi-Camera Adversarial Learning (MCAL) to map images of different cameras into a shared subspace.
no code implementations • 14 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao
Moreover, in the training process, we adopt the joint learning scheme to simultaneously train each branch by the independent loss function, which can enhance the generalization ability of each branch.
no code implementations • 15 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao
In this paper, we focus on the semi-supervised person re-identification (Re-ID) case, which only has the intra-camera (within-camera) labels but not inter-camera (cross-camera) labels.
no code implementations • 23 Nov 2019 • Pinzhuo Tian, Zhangkai Wu, Lei Qi, Lei Wang, Yinghuan Shi, Yang Gao
To address the annotation scarcity issue in some cases of semantic segmentation, there have been a few attempts to develop the segmentation model in the few-shot learning paradigm.
no code implementations • 1 Feb 2020 • Wenbin Li, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao, Jiebo Luo
Given the natural asymmetric relation between a query image and a support class, we argue that an asymmetric measure is more suitable for metric-based few-shot learning.
no code implementations • 22 Feb 2020 • Tiexin Qin, Ziyuan Wang, Kelei He, Yinghuan Shi, Yang Gao, Dinggang Shen
Conventional data augmentation, realized by performing simple pre-processing operations (e.g., rotation, crop, etc.), has been validated for its advantage in enhancing performance for medical image segmentation.
no code implementations • 3 Apr 2020 • Qian Yu, Yinghuan Shi, Yefeng Zheng, Yang Gao, Jianbing Zhu, Yakang Dai
Robust segmentation for non-elongated tissues in medical images is hard to realize due to the large variation of the shape, size, and appearance of these tissues in different patients.
no code implementations • 20 Apr 2020 • Wanqi Yang, Tong Ling, Chengmei Yang, Lei Wang, Yinghuan Shi, Luping Zhou, Ming Yang
To address this issue, we propose a novel approach called Conditional ADversarial Image Translation (CADIT) to explicitly align the class distributions given samples between the two domains.
no code implementations • 31 Jul 2020 • Yinghuan Shi, Wanqi Yang, Kim-Han Thung, Hao Wang, Yang Gao, Yang Pan, Li Zhang, Dinggang Shen
Then, we build a novel computer-aided prescription model by learning the relation between observed symptoms and prescription drug.
no code implementations • 18 Jan 2021 • Xiaoting Han, Lei Qi, Qian Yu, Ziqi Zhou, Yefeng Zheng, Yinghuan Shi, Yang Gao
These typical methods usually utilize a translation network to transform images from the source domain to the target domain, or train the pixel-level classifier using only translated source images and original target images.
no code implementations • 7 Feb 2021 • Zekun Li, Wei Zhao, Feng Shi, Lei Qi, Xingzhi Xie, Ying Wei, Zhongxiang Ding, Yang Gao, Shangjie Wu, Jun Liu, Yinghuan Shi, Dinggang Shen
How to fast and accurately assess the severity level of COVID-19 is an essential problem, when millions of people are suffering from the pandemic around the world.
no code implementations • 6 Jun 2021 • Yue Wang, Lei Qi, Yinghuan Shi, Yang Gao
As a recent noticeable topic, domain generalization (DG) aims to first learn a generic model on multiple source domains and then directly generalize to an arbitrary unseen target domain without any additional adaption.
no code implementations • 10 Oct 2021 • Ruiqi Wang, Lei Qi, Yinghuan Shi, Yang Gao
Also, considering the inconsistent goals of generalization and pseudo-labeling (the former prevents overfitting on all source domains, while the latter might overfit the unlabeled source domains for high accuracy), we employ a dual-classifier to independently perform pseudo-labeling and domain generalization in the training process.
no code implementations • 24 Jan 2022 • Lei Qi, Lei Wang, Yinghuan Shi, Xin Geng
Different from conventional data augmentation, the proposed domain-aware mix-normalization enhances the diversity of features during training from the normalization view of the neural network, which can effectively alleviate model overfitting to the source domains and thus boost the generalization capability of the model in unseen domains.
no code implementations • 12 Apr 2022 • Lei Qi, Jiaying Shen, Jiaqi Liu, Yinghuan Shi, Xin Geng
Besides, for the label distribution of each class, we further revise it to give more and equal attention to the other domains that the class does not belong to, which can effectively reduce the domain gap across different domains and obtain the domain-invariant feature.
no code implementations • CVPR 2022 • Zhen Zhao, Luping Zhou, Yue Duan, Lei Wang, Lei Qi, Yinghuan Shi
Consistency-based Semi-supervised learning (SSL) has achieved promising performance recently.
no code implementations • 11 Aug 2022 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
To address the task, we first analyze the theory of multi-domain learning, which highlights that 1) mitigating the impact of the domain gap and 2) exploiting all samples to train the model can effectively reduce the generalization error in each source domain, thereby improving the quality of pseudo-labels.
no code implementations • 6 Apr 2023 • Lei Qi, Dongjia Zhao, Yinghuan Shi, Xin Geng
By exploiting the differences between local patches of an image, our proposed PBN can effectively enhance the robustness of the model's parameters.
no code implementations • 21 Jun 2023 • Lei Qi, Ziang Liu, Yinghuan Shi, Xin Geng
Additionally, we introduce the Dropout-based Perturbation (DP) module to enhance the generalization capability of the metric network by enriching the sample-pair diversity.
no code implementations • 25 Jul 2023 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
Our method includes two paths: the main path and the auxiliary (augmented) path.
no code implementations • 2 Aug 2023 • Dongjia Zhao, Lei Qi, Xiao Shi, Yinghuan Shi, Xin Geng
Horizontally, it applies image-level and feature-level perturbations to enhance the diversity of the training data, mitigating the issue of limited diversity in single-source domains.
no code implementations • 6 Sep 2023 • Ze Peng, Lei Qi, Yinghuan Shi, Yang Gao
Although activation sparsity has been attributed to training dynamics, existing theoretical explanations of it are restricted to shallow networks, small training steps, and special training regimes, despite its emergence in deep models standardly trained for a large number of steps.
no code implementations • 12 Sep 2023 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Instead, we observe that leveraging a large learning rate can simultaneously promote weight diversity and facilitate the identification of flat regions in the loss landscape.