no code implementations • 22 Jan 2025 • Wenhao Gu, Li Gu, Ziqiang Wang, Ching Yee Suen, Yang Wang
During testing, we adapt the visual representation parameters using a self-supervised MAE loss.
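The idea of adapting representation parameters at test time by descending a self-supervised masked-reconstruction (MAE-style) loss can be sketched as below. This is a minimal illustration, not the paper's model: a linear encoder/decoder stands in for the ViT backbone, and all names and shapes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy test batch: n "patches" of dimension d; linear encoder/decoder
# play the role of the backbone whose parameters are adapted at test time.
n, d, k = 32, 8, 4
x = rng.normal(size=(n, d))
enc = rng.normal(scale=0.5, size=(d, k))    # representation parameters to adapt
dec = rng.normal(scale=0.5, size=(k, d))

mask = rng.random((n, d)) < 0.75            # hide 75% of entries from the model
visible = np.where(mask, 0.0, x)

def mae_loss_and_grads(enc, dec):
    recon = visible @ enc @ dec
    err = np.where(mask, recon - x, 0.0)    # MAE scores masked entries only
    m = mask.sum()
    loss = (err ** 2).sum() / m
    g_enc = (2.0 / m) * visible.T @ (err @ dec.T)
    g_dec = (2.0 / m) * (visible @ enc).T @ err
    return loss, g_enc, g_dec

# One test-time adaptation step: no labels needed, only the test input itself.
lr = 1e-2
loss_before, g_enc, g_dec = mae_loss_and_grads(enc, dec)
loss_after, _, _ = mae_loss_and_grads(enc - lr * g_enc, dec - lr * g_dec)
print(f"loss before: {loss_before:.4f}  after: {loss_after:.4f}")
```

The key property this illustrates is that the adaptation signal is self-supervised: the gradient step reduces the reconstruction loss using the unlabeled test sample alone.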
1 code implementation • 16 Jul 2024 • Ziqiang Wang, Zhixiang Chi, Yanan Wu, Li Gu, Zhi Liu, Konstantinos Plataniotis, Yang Wang
Given a model trained on source data, Test-Time Adaptation (TTA) enables adaptation and inference on test data streams that exhibit domain shifts from the source.
1 code implementation • 5 May 2024 • Zhixiang Chi, Li Gu, Tao Zhong, Huan Liu, Yuanhao Yu, Konstantinos N Plataniotis, Yang Wang
In this work, we propose an approach built on top of the pre-computed features of the foundation model.
Ranked #2 on Domain Generalization on DomainNet
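The pattern the abstract describes — keep the foundation model frozen, cache its features once, and fit only a lightweight head — can be sketched generically as below. This is not the paper's method; the ridge-regression head, shapes, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pre-computed features from a frozen foundation model (cached once, reused).
n, d, c = 200, 16, 5
feats = rng.normal(size=(n, d))        # stand-in for cached backbone embeddings
labels = rng.integers(0, c, size=n)
onehot = np.eye(c)[labels]

# Fit only a ridge-regression head on top: the backbone is never touched,
# so "training" is a single closed-form solve over the cached features.
lam = 1e-2
head = np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ onehot)

# Inference is just cached features @ head -- no backbone forward pass.
pred = (feats @ head).argmax(axis=1)
print("head shape:", head.shape, "accuracy:", (pred == labels).mean())
```

Working from pre-computed features keeps adaptation cheap: the expensive backbone runs once per image, and everything learned afterwards is a small head over those fixed embeddings.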
1 code implementation • 8 Oct 2022 • Tao Zhong, Zhixiang Chi, Li Gu, Yang Wang, Yuanhao Yu, Jin Tang
Most existing methods perform training on multiple source domains using a single model, and the same trained model is used on all unseen target domains.
Ranked #31 on Domain Generalization on DomainNet
4 code implementations • 1 Oct 2022 • Li Gu, Zhixiang Chi, Huan Liu, Yuanhao Yu, Yang Wang
In this work, we present the winning solution for ORBIT Few-Shot Video Object Recognition Challenge 2022.
1 code implementation • 22 Jul 2022 • Huan Liu, Li Gu, Zhixiang Chi, Yang Wang, Yuanhao Yu, Jun Chen, Jin Tang
In this paper, we show through empirical results that adopting data replay is surprisingly favorable.
class-incremental learning • Few-Shot Class-Incremental Learning • +2
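Data replay — keeping a few exemplars of old classes and mixing them into later training sessions — can be sketched with a generic buffer like the one below. This is an illustrative sketch, not the paper's exact replay strategy; the class name and cap are assumptions.

```python
import random
from collections import defaultdict

class ReplayBuffer:
    """Keep a few exemplars per old class; mix them into new-session batches."""

    def __init__(self, per_class=5):
        self.per_class = per_class
        self.store = defaultdict(list)      # class label -> stored exemplars

    def add(self, example, label):
        bucket = self.store[label]
        if len(bucket) < self.per_class:    # memory is capped per class
            bucket.append(example)

    def replay_batch(self, k=8):
        # Old-class samples to train on alongside the new session's data.
        pool = [(x, y) for y, xs in self.store.items() for x in xs]
        random.shuffle(pool)
        return pool[:k]

buf = ReplayBuffer(per_class=2)
for label in (0, 0, 0, 1, 1):               # the third "0" exceeds the cap
    buf.add(f"img_{label}", label)
print(len(buf.store[0]), len(buf.store[1]), len(buf.replay_batch(k=10)))  # → 2 2 4
```

Mixing such replayed exemplars into each incremental session is a simple way to counter catastrophic forgetting of earlier classes.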
no code implementations • CVPR 2022 • Zhixiang Chi, Li Gu, Huan Liu, Yang Wang, Yuanhao Yu, Jin Tang
The learning objective of these methods is often hand-engineered and not directly tied to the goal at test time (i.e., incrementally learning new classes).
class-incremental learning • Few-Shot Class-Incremental Learning • +2
1 code implementation • ICCV 2019 • Xiaohui Zeng, Renjie Liao, Li Gu, Yuwen Xiong, Sanja Fidler, Raquel Urtasun
In practice, it performs similarly to the Hungarian algorithm during inference.
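For reference, the Hungarian algorithm the abstract compares against solves the minimum-cost assignment problem. On a tiny matrix the same optimum can be found by brute force, as in this illustrative stand-in (not the paper's learned matcher, and not an efficient Hungarian implementation):

```python
from itertools import permutations

def optimal_assignment(cost):
    """Exact minimum-cost assignment (what the Hungarian algorithm computes),
    found by brute force -- fine for the tiny matrix in this illustration."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(optimal_assignment(cost))  # → ([1, 0, 2], 5)
```

Row 0 is matched to column 1, row 1 to column 0, and row 2 to column 2, for a total cost of 1 + 2 + 2 = 5; a production system would use a polynomial-time Hungarian solver rather than this factorial-time search.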
no code implementations • ICML 2018 • Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, Richard Zemel
We propose a framework, Adversarial Posterior Distillation, to distill the SGLD samples using a Generative Adversarial Network (GAN).