no code implementations • 21 Jul 2024 • Yunyi Xuan, WeiJie Chen, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang
In this paper, we discuss the extension of DFKD to Vision-Language Foundation Models without access to the billion-level image-text datasets.
no code implementations • 25 Oct 2023 • WeiJie Chen, Haoyu Wang, Shicai Yang, Lei Zhang, Wei Wei, Yanning Zhang, Luojun Lin, Di Xie, Yueting Zhuang
Such a one-for-all adaptation paradigm allows us to adapt anything in the world using only one text-to-image generator as well as the corresponding unlabeled target data.
no code implementations • 12 Jan 2023 • Wei Zhao, Binbin Chen, WeiJie Chen, Shicai Yang, Di Xie, ShiLiang Pu, Yueting Zhuang
The domain adaptation part follows a Source-Free Domain Adaptation paradigm, which uses only the pre-trained model and the unlabeled target data to further optimize the model in a self-supervised manner.
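A minimal sketch of the source-free idea described here, assuming a simple confident-pseudo-label self-training loop in PyTorch; the function name `adapt_source_free` and the thresholding rule are illustrative, not the paper's implementation.

```python
# Sketch: adapt a source-pretrained classifier using only unlabeled target data
# by self-training on its own confident predictions (illustrative only).
import torch
import torch.nn.functional as F

def adapt_source_free(model, target_loader, epochs=1, lr=1e-4, conf_thresh=0.9):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images in target_loader:            # no labels in the target domain
            logits = model(images)
            probs = F.softmax(logits, dim=1)
            conf, pseudo = probs.max(dim=1)     # model's own predictions as labels
            mask = conf > conf_thresh           # keep only confident samples
            if mask.any():
                loss = F.cross_entropy(logits[mask], pseudo[mask])
                opt.zero_grad()
                loss.backward()
                opt.step()
    return model
```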
no code implementations • 12 Jan 2023 • Yilu Guo, Xingyue Shi, WeiJie Chen, Shicai Yang, Di Xie, ShiLiang Pu, Yueting Zhuang
In the test-time training stage, we use the pre-trained model to assign noisy labels to the unlabeled target data, and propose a Label-Periodically-Updated DivideMix method for noisy label learning.
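A hedged sketch of the periodic-update idea: pseudo labels are re-assigned by the model every few epochs, with the noisy-label learner abstracted behind a placeholder `noisy_label_epoch` callable (DivideMix itself is not reproduced here, and the loader is assumed to iterate in a fixed order).

```python
# Sketch: periodically refresh pseudo labels during test-time training,
# training with a noisy-label method in between refreshes (illustrative only).
import torch

@torch.no_grad()
def assign_pseudo_labels(model, target_loader):
    model.eval()
    labels = []
    for images in target_loader:                 # assumes shuffle=False
        labels.append(model(images).argmax(dim=1))
    return torch.cat(labels)

def test_time_training(model, target_loader, noisy_label_epoch,
                       epochs=30, update_period=5):
    # noisy_label_epoch(model, loader, pseudo) stands in for one epoch of a
    # noisy-label method such as DivideMix.
    pseudo = assign_pseudo_labels(model, target_loader)
    for epoch in range(epochs):
        if epoch > 0 and epoch % update_period == 0:
            pseudo = assign_pseudo_labels(model, target_loader)  # periodic update
        noisy_label_epoch(model, target_loader, pseudo)
    return model
```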
no code implementations • ICCV 2023 • Weizhen He, WeiJie Chen, Binbin Chen, Shicai Yang, Di Xie, Luojun Lin, Donglian Qi, Yueting Zhuang
In this paper, we delve into this problem and propose an Unsupervised Prompt Tuning framework for text-driven object detection, which is composed of two novel mean teaching mechanisms.
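For reference, a generic mean-teacher building block of the kind such frameworks rely on: the teacher is an exponential moving average (EMA) of the student and supplies pseudo targets on unlabeled data. This is a standard pattern, not the paper's specific mechanism.

```python
# Sketch: EMA teacher for mean teaching (generic, illustrative).
import copy
import torch

def make_teacher(student):
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)                  # teacher is never trained directly
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```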
1 code implementation • 9 Oct 2022 • Rang Meng, Xianfeng Li, WeiJie Chen, Shicai Yang, Jie Song, Xinchao Wang, Lei Zhang, Mingli Song, Di Xie, ShiLiang Pu
Under this guidance, a novel Attention Diversification framework is proposed, in which Intra-Model and Inter-Model Attention Diversification Regularization are collaborated to reassign appropriate attention to diverse task-related features.
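A minimal sketch of what an attention diversification penalty can look like, assuming per-image spatial attention maps and a pairwise cosine-similarity penalty; the exact Intra-Model and Inter-Model regularizers in the paper may differ.

```python
# Sketch: penalize overlap between K spatial attention maps so they attend to
# diverse task-related regions (illustrative, assumes K > 1).
import torch
import torch.nn.functional as F

def attention_diversification_loss(attn_maps):
    # attn_maps: (B, K, H, W), K attention maps per image
    b, k, h, w = attn_maps.shape
    flat = F.normalize(attn_maps.reshape(b, k, h * w), dim=-1)
    sim = torch.bmm(flat, flat.transpose(1, 2))           # (B, K, K) pairwise similarity
    off_diag = sim - torch.eye(k, device=sim.device)       # drop self-similarity
    return off_diag.abs().sum(dim=(1, 2)).mean() / (k * (k - 1))
```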
1 code implementation • CVPR 2022 • Rang Meng, WeiJie Chen, Shicai Yang, Jie Song, Luojun Lin, Di Xie, ShiLiang Pu, Xinchao Wang, Mingli Song, Yueting Zhuang
In this paper, we introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank, from which models of different capacities can be sampled to accommodate different accuracy-efficiency trade-offs.
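A sketch of how a weight-sharing model bank can be realized, following the standard slimmable-network pattern in which narrower sub-networks reuse the leading channels of the full-width weights; this illustrates the weight-sharing idea only and is not the paper's architecture.

```python
# Sketch: a slimmable convolution whose sub-networks of different widths share
# one bank of weights (illustrative, assumes groups=1).
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__(in_ch, out_ch, kernel_size, **kwargs)
        self.width_mult = 1.0                      # set externally before forward

    def forward(self, x):
        in_ch = x.shape[1]
        out_ch = max(1, int(self.out_channels * self.width_mult))
        weight = self.weight[:out_ch, :in_ch]      # slice the shared weight bank
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding, self.dilation)
```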
3 code implementations • CVPR 2022 • Binbin Chen, WeiJie Chen, Shicai Yang, Yunyi Xuan, Jie Song, Di Xie, ShiLiang Pu, Mingli Song, Yueting Zhuang
To remedy this issue, we present a novel label assignment mechanism for the self-training framework, namely proposal self-assignment, which injects the proposals from the student into the teacher and generates accurate pseudo labels to match each proposal in the student model accordingly.
no code implementations • 13 Jun 2022 • Yilu Guo, Shicai Yang, WeiJie Chen, Liang Ma, Di Xie, ShiLiang Pu
Therefore, it is crucial to study how to learn more discriminative representations while avoiding over-fitting.
2 code implementations • 13 Jun 2022 • Meilin Chen, WeiJie Chen, Shicai Yang, Jie Song, Xinchao Wang, Lei Zhang, Yunfeng Yan, Donglian Qi, Yueting Zhuang, Di Xie, ShiLiang Pu
In addition, we conduct anchor adaptation in parallel with localization adaptation, since the anchor can be regarded as a learnable parameter.
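One way to treat anchors as learnable parameters, shown here only as an assumed sketch: store anchor widths and heights in log space so they remain positive while being optimized jointly with the detector.

```python
# Sketch: anchors as learnable parameters (illustrative only).
import torch
import torch.nn as nn

class LearnableAnchors(nn.Module):
    def __init__(self, base_sizes=((32, 32), (64, 64), (128, 128))):
        super().__init__()
        # store log widths/heights so optimized anchors stay positive
        self.log_wh = nn.Parameter(torch.log(torch.tensor(base_sizes, dtype=torch.float32)))

    def forward(self):
        return self.log_wh.exp()   # (num_anchors, 2) widths and heights
```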
no code implementations • 13 Jun 2022 • Junchu Huang, WeiJie Chen, Shicai Yang, Di Xie, ShiLiang Pu, Yueting Zhuang
This framework can effectively reduce the impact of noisy labels from the CLIP model by combining both techniques.
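For context, a sketch of where such noisy labels come from: zero-shot pseudo labels produced with the public OpenAI `clip` package. The class names and prompt template below are placeholders, and this is not the paper's pipeline.

```python
# Sketch: zero-shot pseudo labels from CLIP; these are typically noisy and
# motivate noise-robust training (illustrative only).
import torch
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
class_names = ["dog", "cat", "car"]                  # placeholder label set
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

@torch.no_grad()
def clip_pseudo_labels(images):                      # images: preprocessed batch
    image_feat = model.encode_image(images.to(device))
    text_feat = model.encode_text(text)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_feat @ text_feat.T
    return logits.argmax(dim=-1)                     # noisy zero-shot labels
```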
1 code implementation • 27 May 2022 • Zhishu Sun, Zhifeng Shen, Luojun Lin, Yuanlong Yu, Zhifeng Yang, Shicai Yang, WeiJie Chen
Specifically, we leverage a meta-adjuster to twist the network parameters based on the static model with respect to different data from different domains.
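A rough, assumed sketch of the meta-adjuster idea: a small network predicts a residual adjustment to a static layer's weights, conditioned on a per-sample summary feature, so the effective parameters can change across domains. The module below is illustrative and not the paper's design.

```python
# Sketch: a layer whose static weights are "twisted" by a conditioning vector
# (illustrative only).
import torch
import torch.nn as nn

class MetaAdjustedLinear(nn.Module):
    def __init__(self, in_features, out_features, cond_dim):
        super().__init__()
        self.static = nn.Linear(in_features, out_features)            # shared static weights
        self.adjuster = nn.Linear(cond_dim, in_features * out_features)
        self.in_features, self.out_features = in_features, out_features

    def forward(self, x, cond):
        # x: (B, in_features); cond: (B, cond_dim) summary of the input/domain
        delta = self.adjuster(cond).view(-1, self.out_features, self.in_features)
        weight = self.static.weight.unsqueeze(0) + 0.01 * delta        # per-sample twist
        return torch.bmm(weight, x.unsqueeze(-1)).squeeze(-1) + self.static.bias
```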
Ranked #26 on Domain Generalization on DomainNet
no code implementations • ICCV 2021 • Jing Hao, Zhixin Zhang, Shicai Yang, Di Xie, ShiLiang Pu
Nowadays, advanced image editing tools and technical skills produce increasingly realistic tampered images, which can easily evade image forensic systems and make authenticity verification of images more difficult.
no code implementations • 23 Feb 2021 • WeiJie Chen, Luojun Lin, Shicai Yang, Di Xie, ShiLiang Pu, Yueting Zhuang, Wenqi Ren
Usually, the given source-domain pre-trained model is expected to be optimized with only unlabeled target data, which is termed source-free unsupervised domain adaptation.
no code implementations • 1 Feb 2021 • WeiJie Chen, Yilu Guo, Shicai Yang, Zhaoyang Li, Zhenxin Ma, Binbin Chen, Long Zhao, Di Xie, ShiLiang Pu, Yueting Zhuang
Therefore, we turn our attention to suppressing false positives in each target domain in an unsupervised way.
no code implementations • 10 Dec 2020 • Xianfeng Li, WeiJie Chen, Di Xie, Shicai Yang, Peng Yuan, ShiLiang Pu, Yueting Zhuang
However, it is difficult to evaluate the quality of pseudo labels, since no labels are available in the target domain.
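A commonly used label-free proxy, given only as an assumed sketch rather than the paper's metric: gauge pseudo-label quality through prediction confidence and the agreement between two augmented views of the same images.

```python
# Sketch: unsupervised proxies for pseudo-label quality (illustrative only).
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label_quality(model, view1, view2):
    # view1, view2: two augmentations of the same unlabeled batch
    p1 = F.softmax(model(view1), dim=1)
    p2 = F.softmax(model(view2), dim=1)
    confidence = p1.max(dim=1).values.mean()                          # higher is better
    agreement = (p1.argmax(dim=1) == p2.argmax(dim=1)).float().mean()  # view consistency
    return confidence.item(), agreement.item()
```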
1 code implementation • 20 Jun 2020 • Wei-Jie Chen, ShiLiang Pu, Di Xie, Shicai Yang, Yilu Guo, Luojun Lin
Extensive experiments on the ImageNet dataset have been conducted to demonstrate the effectiveness of our method.
no code implementations • 21 Nov 2019 • Jiaxu Chen, Jing Hao, Kai Chen, Di Xie, Shicai Yang, ShiLiang Pu
This paper introduces an end-to-end audio classification system based on raw waveforms and mix-training strategy.
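A small sketch of mixing raw waveforms for training, written as a generic mixup-style augmentation; the paper's specific mix-training strategy may differ in its details.

```python
# Sketch: mixup-style mixing of raw waveforms and their labels (illustrative).
import torch

def mix_waveforms(wave_a, wave_b, label_a, label_b, alpha=0.5):
    # wave_*: (B, num_samples) raw audio; label_*: (B, num_classes) one/multi-hot
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed_wave = lam * wave_a + (1.0 - lam) * wave_b
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed_wave, mixed_label
```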
no code implementations • 30 Oct 2017 • Qiaoyong Zhong, Chao Li, Yingying Zhang, Di Xie, Shicai Yang, ShiLiang Pu
A deep region-based object detector consists of a region proposal step and a deep object recognition step.