no code implementations • 26 Jul 2024 • Ning Xu, Zhaoyang Zhang, Lei Qi, Wensuo Wang, Chao Zhang, Zihao Ren, Huaiyuan Zhang, Xin Cheng, Yanqi Zhang, Zhichao Liu, Qingwen Wei, Shiyang Wu, Lanlan Yang, Qianfeng Lu, Yiqun Ma, Mengyao Zhao, Junbo Liu, Yufan Song, Xin Geng, Jun Yang
Finally, to mitigate the hallucinations of ChipExpert, we have developed a Retrieval-Augmented Generation (RAG) system, based on the IC design knowledge base.
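The retrieve-then-prompt step of such a RAG system can be sketched minimally; the toy documents, bag-of-words embedding, and `retrieve` helper below are hypothetical stand-ins for ChipExpert's actual knowledge base and encoder:

```python
import numpy as np

def embed(text, vocab):
    """Bag-of-words embedding over a fixed vocabulary (stand-in for a real encoder)."""
    tokens = text.lower().split()
    return np.array([tokens.count(w) for w in vocab], dtype=float)

def retrieve(query, docs, vocab, k=1):
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed(query, vocab)
    sims = []
    for d in docs:
        v = embed(d, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        sims.append(q @ v / denom if denom > 0 else 0.0)
    order = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in order]

# Hypothetical IC-design snippets standing in for the knowledge base.
docs = [
    "setup timing closure requires fixing negative slack paths",
    "layout parasitic extraction feeds post-layout simulation",
]
vocab = sorted(set(" ".join(docs).lower().split()))
context = retrieve("how to fix negative slack in timing closure", docs, vocab)
# Retrieved context is prepended to the prompt to ground the LLM's answer.
prompt = f"Context: {context[0]}\nQuestion: how to fix negative slack?"
```

Grounding the generation in retrieved passages is what lets the system answer from the knowledge base rather than from the model's parametric memory alone.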
no code implementations • 21 Jul 2024 • Jiajun Hu, Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
To address the above issue, we propose Parameter-Efficient Group with Orthogonal regularization (PEGO) for vision transformers, which effectively preserves the generalization ability of the pre-trained network and learns more diverse knowledge compared with conventional PEFT.
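An orthogonality regularizer of the kind used by PEGO-style methods can be sketched as a Frobenius-norm penalty on each adapter's Gram matrix; treating the parameter group as a list of row-matrices is an assumption for illustration, not the paper's exact construction:

```python
import numpy as np

def orthogonal_penalty(weights):
    """Sum of ||W W^T - I||_F^2 over a group of adapter matrices,
    encouraging the rows of each matrix to stay mutually orthonormal."""
    total = 0.0
    for W in weights:
        G = W @ W.T  # Gram matrix of the rows
        total += np.sum((G - np.eye(G.shape[0])) ** 2)
    return total

rng = np.random.default_rng(0)
random_W = rng.normal(size=(4, 16))                     # unconstrained adapter
ortho_W = np.linalg.qr(rng.normal(size=(16, 4)))[0].T   # 4 orthonormal rows
```

Adding such a penalty to the training loss pushes the adapters in a group toward learning non-redundant, diverse directions.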
1 code implementation • 16 Jul 2024 • Muyang Qiu, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
Despite the recent success of domain generalization in medical image segmentation, voxel-wise annotation for all source domains remains a huge burden.
no code implementations • 15 Jul 2024 • Yang Zhao, Di Huang, Chongxiao Li, Pengwei Jin, Ziyuan Nan, TianYun Ma, Lei Qi, Yansong Pan, Zhenxing Zhang, Rui Zhang, Xishan Zhang, Zidong Du, Qi Guo, Xing Hu, Yunji Chen
Instruction-tuned large language models (LLMs) have demonstrated remarkable performance in automatically generating code for general-purpose programming languages like Python.
no code implementations • 18 Jun 2024 • Hongpeng Pan, Shifeng Yi, Shouwei Yang, Lei Qi, Bing Hu, Yi Xu, Yang Yang
This misalignment hinders the zero-shot performance of VLM and the application of fine-tuning methods based on pseudo-labels.
1 code implementation • CVPR 2024 • Qinghe Ma, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
To fully utilize the information within the intermediate domain, we propose a symmetric Guidance training strategy (SymGD), which additionally offers direct guidance to unlabeled data by merging pseudo labels from intermediate samples.
1 code implementation • 25 Mar 2024 • Yunlong Tang, Yuxuan Wan, Lei Qi, Xin Geng
Moreover, since the Style Generation module, responsible for generating style word vectors using random sampling or style mixing, makes the model sensitive to input text prompts, we introduce a model ensemble method to mitigate this sensitivity.
1 code implementation • 18 Mar 2024 • Jintao Guo, Lei Qi, Yinghuan Shi, Yang Gao
In this paper, we study the impact of prior CNN-based augmentation methods on token-based models, revealing their performance is suboptimal due to the lack of incentivizing the model to learn holistic shape information.
1 code implementation • 17 Mar 2024 • Shumeng Li, Lei Qi, Qian Yu, Jing Huo, Yinghuan Shi, Yang Gao
Segment Anything Model (SAM) fine-tuning has shown remarkable performance in medical image segmentation in a fully supervised manner, but requires precise annotations.
1 code implementation • 29 Feb 2024 • Chenghao Li, Lei Qi, Xin Geng
In this paper, considering these two critical factors, we propose a SAM-guided Two-stream Lightweight Model for unsupervised anomaly detection (STLM) that not only aligns with the two practical application requirements but also harnesses the robust generalization capabilities of SAM.
1 code implementation • 21 Feb 2024 • Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, Jie Liu, Lei Qi, Zhiyuan Liu, Maosong Sun
Notably, the best-performing model, GPT-4V, attains an average score of 17.97% on OlympiadBench, with a mere 10.74% in physics, highlighting the benchmark's rigor and the intricacy of physical reasoning.
1 code implementation • 11 Jan 2024 • Na Wang, Lei Qi, Jintao Guo, Yinghuan Shi, Yang Gao
2) From the feature perspective, the simple Tail Interaction module implicitly enhances potential correlations among all samples from all source domains, facilitating the acquisition of domain-invariant representations across multiple domains for the model.
1 code implementation • 28 Dec 2023 • Taicai Chen, Yue Duan, Dong Li, Lei Qi, Yinghuan Shi, Yang Gao
Based on this technique, we assign appropriate training weights to unlabeled data to enhance the construction of a discriminative latent space.
2 code implementations • 19 Dec 2023 • Yue Duan, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
While semi-supervised learning (SSL) has yielded promising results, the more realistic SSL scenario remains to be explored, in which the unlabeled data exhibits extremely high recognition difficulty, e.g., fine-grained visual classification in the context of SSL (SS-FGVC).
Fine-Grained Image Classification • Semi-Supervised Image Classification
1 code implementation • 28 Nov 2023 • Xingyu Zhao, Yuexuan An, Lei Qi, Xin Geng
Most existing MLC methods are based on the assumption that the correlation of two labels in each label pair is symmetric, which is violated in many real-world scenarios.
no code implementations • 22 Nov 2023 • Lei Qi, Peng Dong, Tan Xiong, Hui Xue, Xin Geng
In this paper, we aim to solve the single-domain generalizable object detection task in urban scenarios, meaning that a model trained on images from one weather condition should be able to perform well on images from any other weather conditions.
no code implementations • 12 Sep 2023 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Instead, we observe that leveraging a large learning rate can simultaneously promote weight diversity and facilitate the identification of flat regions in the loss landscape.
1 code implementation • ICCV 2023 • Guan Gui, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
Sample adaptive augmentation (SAA) is proposed for this stated purpose and consists of two modules: 1) sample selection module; 2) sample augmentation module.
no code implementations • 6 Sep 2023 • Ze Peng, Lei Qi, Yinghuan Shi, Yang Gao
Although prior work has attributed it to training dynamics, existing theoretical explanations of activation sparsity are restricted to shallow networks, small training steps, and special training regimes, despite its emergence in deep models standardly trained for a large number of steps.
1 code implementation • ICCV 2023 • Zekun Li, Lei Qi, Yinghuan Shi, Yang Gao
Semi-supervised learning (SSL) aims to leverage massive unlabeled data when labels are expensive to obtain.
1 code implementation • ICCV 2023 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
To deal with the domain shift between training and test samples, current methods have primarily focused on learning generalizable features during training and ignore the specificity of unseen samples that are also critical during the test.
1 code implementation • ICCV 2023 • Jintao Guo, Lei Qi, Yinghuan Shi
Deep Neural Networks have exhibited considerable success in various visual tasks.
1 code implementation • ICCV 2023 • Xiran Wang, Jian Zhang, Lei Qi, Yinghuan Shi
Domain generalization (DG) is proposed to deal with the issue of domain shift, which occurs when statistical differences exist between source and target domains.
2 code implementations • ICCV 2023 • Yue Duan, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
Semi-supervised learning (SSL) tackles the label missing problem by enabling the effective usage of unlabeled data.
1 code implementation • ICCV 2023 • Lihe Yang, Zhen Zhao, Lei Qi, Yu Qiao, Yinghuan Shi, Hengshuang Zhao
To mitigate potentially incorrect pseudo labels, recent frameworks mostly set a fixed confidence threshold to discard uncertain samples.
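The fixed-threshold baseline these frameworks rely on (and which this paper improves upon) can be sketched in a few lines; the helper name and example probabilities are illustrative:

```python
import numpy as np

def filter_pseudo_labels(probs, threshold=0.95):
    """Keep only predictions whose top class probability clears a fixed threshold."""
    labels = probs.argmax(axis=1)           # pseudo label = most likely class
    mask = probs.max(axis=1) >= threshold   # confident samples only
    return labels[mask], mask

probs = np.array([[0.98, 0.02],
                  [0.60, 0.40],
                  [0.10, 0.90]])
labels, mask = filter_pseudo_labels(probs, threshold=0.85)
# Rows 0 and 2 survive; the uncertain row 1 is discarded.
```

The fixed threshold is exactly the design choice that discards hard-but-informative samples, motivating adaptive alternatives.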
no code implementations • 2 Aug 2023 • Dongjia Zhao, Lei Qi, Xiao Shi, Yinghuan Shi, Xin Geng
Horizontally, it applies image-level and feature-level perturbations to enhance the diversity of the training data, mitigating the issue of limited diversity in single-source domains.
1 code implementation • 30 Jul 2023 • Heng Cai, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods.
no code implementations • 25 Jul 2023 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
Our method includes two paths: the main path and the auxiliary (augmented) path.
no code implementations • 21 Jun 2023 • Lei Qi, Ziang Liu, Yinghuan Shi, Xin Geng
Additionally, we introduce the Dropout-based Perturbation (DP) module to enhance the generalization capability of the metric network by enriching the sample-pair diversity.
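A minimal sketch of a dropout-style perturbation on feature embeddings follows; the rate and the use of inverted dropout are illustrative assumptions, not the DP module's exact design:

```python
import numpy as np

def dropout_perturb(feat, rate=0.3, rng=None):
    """Inverted dropout on an embedding: zero random dimensions and rescale
    the survivors, yielding perturbed views of the same sample."""
    rng = rng or np.random.default_rng()
    keep = rng.random(feat.shape) >= rate
    return feat * keep / (1.0 - rate)

rng = np.random.default_rng(0)
feat = rng.normal(size=(6,))
view_a = dropout_perturb(feat, rate=0.3, rng=rng)
view_b = dropout_perturb(feat, rate=0.3, rng=rng)
```

Pairing such randomly perturbed views of the same embedding is one cheap way to enrich sample-pair diversity for a metric network.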
no code implementations • 6 Apr 2023 • Lei Qi, Dongjia Zhao, Yinghuan Shi, Xin Geng
By exploiting the differences between local patches of an image, our proposed PBN can effectively enhance the robustness of the model's parameters.
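A toy version of patch-wise normalization might look like the following; splitting along a single axis and the patch count are illustrative choices, not the paper's PBN design:

```python
import numpy as np

def patch_normalize(fmap, n_patches=2, eps=1e-5):
    """Split a feature map along height and normalize each patch
    with its own mean and standard deviation."""
    parts = np.array_split(fmap, n_patches, axis=0)
    out = [(p - p.mean()) / (p.std() + eps) for p in parts]
    return np.concatenate(out, axis=0)

fmap = np.arange(16.0).reshape(4, 4)
normed = patch_normalize(fmap, n_patches=2)
```

Because each patch is standardized independently, local statistical differences within an image are exposed rather than averaged away, which is the intuition behind exploiting patch-level variation.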
1 code implementation • CVPR 2023 • Heng Cai, Shumeng Li, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
Subsequently, by introducing unlabeled volumes, we propose a dual-network paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo labels in early stage and sparse labels in later stage and meanwhile forces consistent output of two networks.
1 code implementation • CVPR 2023 • Jintao Guo, Na Wang, Lei Qi, Yinghuan Shi
However, the local operation of the convolution kernel makes the model focus too much on local representations (e.g., texture), which inherently makes the model more prone to overfit to the source domains and hampers its generalization ability.
1 code implementation • 1 Jan 2023 • Zhangkai Wu, Longbing Cao, Lei Qi
VAEs still suffer from an uncertain learning tradeoff. We propose a novel evolutionary variational autoencoder (eVAE) building on variational information bottleneck (VIB) theory and integrative evolutionary neural learning.
1 code implementation • CVPR 2023 • Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version.
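The weak-to-strong consistency loss can be illustrated in a few lines of NumPy; this is a didactic sketch of a FixMatch-style objective, not the paper's implementation:

```python
import numpy as np

def weak_to_strong_loss(weak_probs, strong_logits, threshold=0.95):
    """The weak view's confident argmax serves as a pseudo label
    for the strong view's cross-entropy."""
    pseudo = weak_probs.argmax(axis=1)
    mask = weak_probs.max(axis=1) >= threshold
    # Numerically stable log-softmax of the strong-view logits.
    z = strong_logits - strong_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(pseudo)), pseudo]
    return float((ce * mask).sum() / max(mask.sum(), 1))

weak_probs = np.array([[0.99, 0.01]])
aligned = weak_to_strong_loss(weak_probs, np.array([[5.0, 0.0]]))     # low loss
misaligned = weak_to_strong_loss(weak_probs, np.array([[0.0, 5.0]]))  # high loss
```

The loss is small when the strongly perturbed prediction agrees with the weak view's confident label and large otherwise, which is the supervision signal the framework extracts from unlabeled images.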
Semi-supervised Change Detection • Semi-supervised Medical Image Segmentation +1
1 code implementation • 13 Aug 2022 • Ming Dai, Enhui Zheng, Jiahao Chen, Lei Qi, ZhenHua Feng, Wankou Yang
However, IR-based methods face several challenges: 1) Pre- and post-processing incur significant computational and storage overhead; 2) The lack of interaction between dual-source features impairs precise spatial perception.
no code implementations • 11 Aug 2022 • Lei Qi, Hongpeng Yang, Yinghuan Shi, Xin Geng
To address the task, we first analyze the theory of multi-domain learning, which highlights that 1) mitigating the impact of the domain gap and 2) exploiting all samples to train the model can effectively reduce the generalization error in each source domain, thereby improving the quality of pseudo-labels.
1 code implementation • 9 Aug 2022 • Yue Duan, Lei Qi, Lei Wang, Luping Zhou, Yinghuan Shi
In this work, we propose Reciprocal Distribution Alignment (RDA) to address semi-supervised learning (SSL), which is a hyperparameter-free framework that is independent of confidence threshold and works with both the matched (conventionally) and the mismatched class distributions.
1 code implementation • 8 Aug 2022 • Ziqi Zhou, Lei Qi, Yinghuan Shi
We demonstrate the performance of our method on two public generalizable segmentation benchmarks in medical imaging, which validates that our method achieves state-of-the-art performance.
no code implementations • 12 Apr 2022 • Lei Qi, Jiaying Shen, Jiaqi Liu, Yinghuan Shi, Xin Geng
Besides, we further revise the label distribution of each class to give more, and equal, attention to the other domains to which the class does not belong, which can effectively reduce the domain gap across different domains and yield domain-invariant features.
1 code implementation • 27 Mar 2022 • Yue Duan, Zhen Zhao, Lei Qi, Lei Wang, Luping Zhou, Yinghuan Shi, Yang Gao
The core issue in semi-supervised learning (SSL) lies in how to effectively leverage unlabeled data, whereas most existing methods tend to put a great emphasis on the utilization of high-confidence samples yet seldom fully explore the usage of low-confidence samples.
no code implementations • 24 Jan 2022 • Lei Qi, Lei Wang, Yinghuan Shi, Xin Geng
Different from conventional data augmentation, the proposed domain-aware mix-normalization enhances the diversity of features during training from the normalization view of the neural network, which can effectively alleviate the model's overfitting to the source domains and thus boost its generalization capability in the unseen domain.
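As a rough illustration of normalization-level mixing, here is a MixStyle-like sketch in NumPy; the function name, the per-feature statistics, and the fixed mixing coefficient `lam` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def mix_normalize(x, x_other, lam=0.7, eps=1e-5):
    """Normalize a batch of features with statistics interpolated between
    two domains, diversifying feature 'styles' during training."""
    mu, sig = x.mean(axis=0), x.std(axis=0) + eps
    mu2, sig2 = x_other.mean(axis=0), x_other.std(axis=0) + eps
    mix_mu = lam * mu + (1 - lam) * mu2
    mix_sig = lam * sig + (1 - lam) * sig2
    return (x - mu) / sig * mix_sig + mix_mu

rng = np.random.default_rng(1)
x_src = rng.normal(0.0, 1.0, size=(32, 8))    # features from one domain
x_other = rng.normal(3.0, 2.0, size=(32, 8))  # features from another domain
mixed = mix_normalize(x_src, x_other, lam=0.7)
```

Because only the first- and second-order statistics are altered, the content of the features is preserved while their domain-specific style is perturbed.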
no code implementations • CVPR 2022 • Zhen Zhao, Luping Zhou, Yue Duan, Lei Wang, Lei Qi, Yinghuan Shi
Consistency-based Semi-supervised learning (SSL) has achieved promising performance recently.
1 code implementation • 23 Dec 2021 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Beyond the training stage, overfitting could also cause unstable prediction in the test stage.
1 code implementation • CVPR 2022 • Ziqi Zhou, Lei Qi, Xin Yang, Dong Ni, Yinghuan Shi
For medical image segmentation, imagine if a model was only trained using MR images in source domain, how about its performance to directly segment CT images in target domain?
1 code implementation • 7 Dec 2021 • Jintao Guo, Lei Qi, Yinghuan Shi, Yang Gao
Particularly, the proposed method can generate a variety of data variants to better deal with the overfitting issue.
1 code implementation • 30 Nov 2021 • Lei Qi, Jiaqi Liu, Lei Wang, Yinghuan Shi, Xin Geng
A significance of our work lies in that it shows the potential of unsupervised domain generalization for person ReID and sets a strong baseline for the further research on this topic.
1 code implementation • 17 Oct 2021 • Yinghuan Shi, Jian Zhang, Tong Ling, Jiwen Lu, Yefeng Zheng, Qian Yu, Lei Qi, Yang Gao
In semi-supervised medical image segmentation, most previous works draw on the common assumption that higher entropy means higher uncertainty.
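The entropy proxy that this common assumption rests on is simple to compute; a minimal NumPy version:

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of per-voxel class probabilities: the common
    (and, per the paper, questionable) proxy for uncertainty."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)
```

A uniform prediction yields the maximum entropy log(K), while a confident one yields entropy near zero; the paper's point is that this mapping to uncertainty is not always reliable.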
no code implementations • 10 Oct 2021 • Ruiqi Wang, Lei Qi, Yinghuan Shi, Yang Gao
Also, considering the inconsistent goals of generalization and pseudo-labeling (the former prevents overfitting on all source domains, while the latter might overfit the unlabeled source domains for high accuracy), we employ a dual classifier to perform pseudo-labeling and domain generalization independently during training.
1 code implementation • 24 Jul 2021 • Qian Yu, Lei Qi, Luping Zhou, Lei Wang, Yilong Yin, Yinghuan Shi, Wuzhang Wang, Yang Gao
Together, the above two schemes give rise to a novel double-branch encoder segmentation framework for medical image segmentation, namely Crosslink-Net.
1 code implementation • 12 Jun 2021 • Qiufeng Wang, Xin Geng, Shuxia Lin, Shiyu Xia, Lei Qi, Ning Xu
Moreover, the learngene, i.e., the gene for learning initialization rules of the target model, is proposed to inherit the meta-knowledge from the collective model and reconstruct a lightweight individual model on the target task.
1 code implementation • CVPR 2022 • Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao
In this work, we first construct a strong baseline of self-training (namely ST) for semi-supervised semantic segmentation via injecting strong data augmentations (SDA) on unlabeled images to alleviate overfitting noisy labels as well as decouple similar predictions between the teacher and student.
no code implementations • 6 Jun 2021 • Yue Wang, Lei Qi, Yinghuan Shi, Yang Gao
As a recent noticeable topic, domain generalization (DG) aims to first learn a generic model on multiple source domains and then directly generalize to an arbitrary unseen target domain without any additional adaption.
1 code implementation • ICCV 2021 • Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao
Our method aims to alleviate this problem and enhance the feature embedding on latent novel classes.
Ranked #41 on Few-Shot Semantic Segmentation on PASCAL-5i (5-Shot)
no code implementations • 7 Feb 2021 • Zekun Li, Wei Zhao, Feng Shi, Lei Qi, Xingzhi Xie, Ying Wei, Zhongxiang Ding, Yang Gao, Shangjie Wu, Jun Liu, Yinghuan Shi, Dinggang Shen
How to fast and accurately assess the severity level of COVID-19 is an essential problem, when millions of people are suffering from the pandemic around the world.
no code implementations • 18 Jan 2021 • Xiaoting Han, Lei Qi, Qian Yu, Ziqi Zhou, Yefeng Zheng, Yinghuan Shi, Yang Gao
These typical methods usually utilize a translation network to transform images from the source domain to the target domain, or train the pixel-level classifier using only translated source images and original target images.
1 code implementation • 27 Mar 2020 • Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
Semantic segmentation in a supervised learning manner has achieved significant progress in recent years.
no code implementations • 23 Nov 2019 • Pinzhuo Tian, Zhangkai Wu, Lei Qi, Lei Wang, Yinghuan Shi, Yang Gao
To address the annotation scarcity issue in some cases of semantic segmentation, there have been a few attempts to develop the segmentation model in the few-shot learning paradigm.
1 code implementation • 16 Nov 2019 • Wenbin Li, Lei Wang, Xingxing Zhang, Lei Qi, Jing Huo, Yang Gao, Jiebo Luo
(2) how to narrow the distribution gap between clean and adversarial examples under the few-shot setting?
no code implementations • 15 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao
In this paper, we focus on the semi-supervised person re-identification (Re-ID) case, which only has the intra-camera (within-camera) labels but not inter-camera (cross-camera) labels.
no code implementations • 14 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Yang Gao
Moreover, in the training process, we adopt the joint learning scheme to simultaneously train each branch by the independent loss function, which can enhance the generalization ability of each branch.
no code implementations • 2 Aug 2019 • Lei Qi, Lei Wang, Jing Huo, Yinghuan Shi, Xin Geng, Yang Gao
To achieve the camera alignment, we develop a Multi-Camera Adversarial Learning (MCAL) to map images of different cameras into a shared subspace.
no code implementations • ICCV 2019 • Lei Qi, Lei Wang, Jing Huo, Luping Zhou, Yinghuan Shi, Yang Gao
For the first issue, we highlight the presence of camera-level sub-domains as a unique characteristic of person Re-ID, and develop camera-aware domain adaptation to reduce the discrepancy not only between source and target domains but also across these sub-domains.
Ranked #20 on Unsupervised Domain Adaptation on Market to Duke
no code implementations • 11 Apr 2018 • Lei Qi, Jing Huo, Lei Wang, Yinghuan Shi, Yang Gao
Lastly, considering person retrieval is a special image retrieval task, we propose a novel ranking loss to optimize the whole network.