Search Results for author: Zhongqi Yue

Found 11 papers, 11 papers with code

Few-shot Learner Parameterization by Diffusion Time-steps

1 code implementation · 5 Mar 2024 · Zhongqi Yue, Pan Zhou, Richang Hong, Hanwang Zhang, Qianru Sun

To this end, we find an inductive bias that the time-steps of a Diffusion Model (DM) can isolate the nuanced class attributes, i.e., as the forward diffusion adds noise to an image at each time-step, nuanced attributes are usually lost at an earlier time-step than the spurious attributes that are visually prominent.

Few-Shot Learning · Inductive Bias
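The forward-diffusion intuition in the entry above can be sketched as a DDPM-style noising step. The linear beta schedule and parameter values below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def forward_diffuse(x0, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Sample x_t ~ q(x_t | x_0) under a DDPM-style linear beta schedule.

    As t grows, the signal coefficient sqrt(alpha_bar_t) shrinks, so
    fine-grained (nuanced) attributes are destroyed at smaller t than
    visually prominent structure -- the inductive bias the entry describes.
    """
    rng = np.random.default_rng(rng)
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]  # cumulative product up to step t
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
```

With this schedule, nearly all of the original signal is gone by the final time-step, while early time-steps keep the image almost intact.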

Exploring Diffusion Time-steps for Unsupervised Representation Learning

1 code implementation · 21 Jan 2024 · Zhongqi Yue, Jiankun Wang, Qianru Sun, Lei Ji, Eric I-Chao Chang, Hanwang Zhang

Representation learning is all about discovering the hidden modular attributes that generate the data faithfully.

Attribute · counterfactual +3

Proposal-Level Unsupervised Domain Adaptation for Open World Unbiased Detector

1 code implementation · 4 Nov 2023 · Xuanyi Liu, Zhongqi Yue, Xian-Sheng Hua

This is because the predictor is inevitably biased to the known categories, and fails under the shift in the appearance of the unseen categories.

Incremental Learning · Object +4

Invariant Feature Regularization for Fair Face Recognition

2 code implementations · ICCV 2023 · Jiali Ma, Zhongqi Yue, Kagaya Tomoyuki, Suzuki Tomoki, Karlekar Jayashree, Sugiri Pranata, Hanwang Zhang

Unfortunately, face datasets inevitably capture the imbalanced demographic attributes that are ubiquitous in real-world observations, and the model learns biased features that generalize poorly in the minority group.

Face Recognition

Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation

1 code implementation · NeurIPS 2023 · Zhongqi Yue, Hanwang Zhang, Qianru Sun

Domain Adaptation (DA) is always challenged by the spurious correlation between domain-invariant features (e.g., class identity) and domain-specific features (e.g., environment) that does not generalize to the target domain.

Unsupervised Domain Adaptation

Random Boxes Are Open-world Object Detectors

1 code implementation · ICCV 2023 · Yanghao Wang, Zhongqi Yue, Xian-Sheng Hua, Hanwang Zhang

First, as the randomization is independent of the distribution of the limited known objects, the random proposals become the instrumental variable that prevents the training from being confounded by the known objects.

Object · object-detection +1

Unbiased Multiple Instance Learning for Weakly Supervised Video Anomaly Detection

1 code implementation · CVPR 2023 · Hui Lv, Zhongqi Yue, Qianru Sun, Bin Luo, Zhen Cui, Hanwang Zhang

At each MIL training iteration, we use the current detector to divide the samples into two groups with different context biases: the most confident abnormal/normal snippets and the rest ambiguous ones.

Anomaly Detection · Multiple Instance Learning +1
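The grouping step described in the entry above can be sketched as follows. The top-k/bottom-k selection rule and the `k` parameter are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def split_by_confidence(scores, k):
    """Split per-snippet anomaly scores into confident and ambiguous groups.

    `scores` are anomaly scores from the current detector at this MIL
    iteration; the k highest-scoring snippets form the confident-abnormal
    group, the k lowest form the confident-normal group, and everything
    in between is treated as ambiguous.
    """
    order = np.argsort(scores)             # indices sorted by ascending score
    confident_normal = order[:k]           # lowest scores
    confident_abnormal = order[-k:]        # highest scores
    ambiguous = order[k:-k]                # the rest
    return confident_abnormal, confident_normal, ambiguous
```

At each training iteration the detector is updated and the split is recomputed, so the group assignment is not fixed in advance.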

Self-Supervised Learning Disentangled Group Representation as Feature

1 code implementation · NeurIPS 2021 · Tan Wang, Zhongqi Yue, Jianqiang Huang, Qianru Sun, Hanwang Zhang

A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics).

Colorization · Contrastive Learning +1

Transporting Causal Mechanisms for Unsupervised Domain Adaptation

1 code implementation · ICCV 2021 · Zhongqi Yue, Qianru Sun, Xian-Sheng Hua, Hanwang Zhang

However, the theoretical solution provided by transportability is far from practical for UDA, because it requires the stratification and representation of the unobserved confounder that is the cause of the domain gap.

Unsupervised Domain Adaptation

Counterfactual Zero-Shot and Open-Set Visual Recognition

1 code implementation · CVPR 2021 · Zhongqi Yue, Tan Wang, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua

We show that the key reason is that the generation is not Counterfactual Faithful, and thus we propose a faithful one, whose generation is from the sample-specific counterfactual question: What would the sample look like, if we set its class attribute to a certain class, while keeping its sample attribute unchanged?

Attribute · Binary Classification +3

Interventional Few-Shot Learning

1 code implementation · NeurIPS 2020 · Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua

Specifically, we develop three effective IFSL algorithmic implementations based on the backdoor adjustment, which is essentially a causal intervention towards the SCM of many-shot learning: the upper-bound of FSL in a causal view.

Few-Shot Learning
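The backdoor adjustment mentioned in the entry above is the standard causal-inference formula: the effect of $X$ on $Y$ under a confounder $D$ (which, in this paper's SCM, roughly corresponds to the pre-trained knowledge) is obtained by stratifying over $D$ and averaging:

```latex
P\bigl(Y \mid \mathrm{do}(X)\bigr) \;=\; \sum_{d} P(Y \mid X, D = d)\, P(D = d)
```

Intuitively, conditioning on each stratum $d$ blocks the backdoor path $X \leftarrow D \rightarrow Y$, so the weighted sum estimates the interventional rather than the observational distribution.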
