1 code implementation • 5 Mar 2024 • Zhongqi Yue, Pan Zhou, Richang Hong, Hanwang Zhang, Qianru Sun
To this end, we find an inductive bias: the time-steps of a Diffusion Model (DM) can isolate the nuanced class attributes, i.e., as the forward diffusion adds noise to an image at each time-step, nuanced attributes are usually lost at an earlier time-step than the visually prominent spurious attributes.
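The forward process referred to above can be sketched with the standard DDPM closed form, where the signal coefficient shrinks as the time-step grows (a minimal illustration of the generic noising schedule, not the paper's implementation; the linear beta schedule and toy image are assumptions):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """Standard DDPM forward process: sample x_t ~ q(x_t | x_0).

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise.
    alpha_bar_t shrinks toward 0 as t grows, so fine-grained (nuanced)
    attributes are drowned out by noise earlier than coarse, visually
    prominent ones.
    """
    rng = rng or np.random.default_rng(0)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Illustrative linear beta schedule with T = 1000 steps.
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones((8, 8))                       # toy "image"
x_early = forward_diffuse(x0, 50, betas)   # mostly signal
x_late = forward_diffuse(x0, 900, betas)   # mostly noise
```

At t=50 most of the original signal survives, while at t=900 the sample is nearly pure noise, which is the asymmetry the inductive bias exploits.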
1 code implementation • 21 Jan 2024 • Zhongqi Yue, Jiankun Wang, Qianru Sun, Lei Ji, Eric I-Chao Chang, Hanwang Zhang
Representation learning is all about discovering the hidden modular attributes that generate the data faithfully.
1 code implementation • 4 Nov 2023 • Xuanyi Liu, Zhongqi Yue, Xian-Sheng Hua
This is because the predictor is inevitably biased toward the known categories and fails under shifts in the appearance of the unseen categories.
2 code implementations • ICCV 2023 • Jiali Ma, Zhongqi Yue, Kagaya Tomoyuki, Suzuki Tomoki, Karlekar Jayashree, Sugiri Pranata, Hanwang Zhang
Unfortunately, face datasets inevitably capture the imbalanced demographic attributes that are ubiquitous in real-world observations, and the model learns biased features that generalize poorly to the minority groups.
1 code implementation • NeurIPS 2023 • Zhongqi Yue, Hanwang Zhang, Qianru Sun
Domain Adaptation (DA) is always challenged by the spurious correlation between domain-invariant features (e.g., class identity) and domain-specific features (e.g., environment), which does not generalize to the target domain.
1 code implementation • ICCV 2023 • Yanghao Wang, Zhongqi Yue, Xian-Sheng Hua, Hanwang Zhang
First, as the randomization is independent of the distribution of the limited known objects, the random proposals become the instrumental variable that prevents the training from being confounded by the known objects.
1 code implementation • CVPR 2023 • Hui Lv, Zhongqi Yue, Qianru Sun, Bin Luo, Zhen Cui, Hanwang Zhang
At each MIL training iteration, we use the current detector to divide the samples into two groups with different context biases: the most confident abnormal/normal snippets and the rest ambiguous ones.
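The grouping step described above can be sketched as a simple threshold split on the current detector's anomaly scores (the threshold values and function name here are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def split_by_confidence(scores, hi=0.8, lo=0.2):
    """Partition per-snippet anomaly scores into the two groups used in
    one MIL iteration: confident abnormal/normal snippets vs. the rest,
    which stay ambiguous. Thresholds hi/lo are illustrative.
    """
    confident_abnormal = np.where(scores >= hi)[0]
    confident_normal = np.where(scores <= lo)[0]
    ambiguous = np.where((scores > lo) & (scores < hi))[0]
    return confident_abnormal, confident_normal, ambiguous

scores = np.array([0.95, 0.10, 0.50, 0.85, 0.30])
abn, nrm, amb = split_by_confidence(scores)
```

Here snippets 0 and 3 land in the confident-abnormal group, snippet 1 in the confident-normal group, and the mid-range scores remain ambiguous for the next iteration.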
1 code implementation • NeurIPS 2021 • Tan Wang, Zhongqi Yue, Jianqiang Huang, Qianru Sun, Hanwang Zhang
A good visual representation is an inference map from observations (images) to features (vectors) that faithfully reflects the hidden modularized generative factors (semantics).
1 code implementation • ICCV 2021 • Zhongqi Yue, Qianru Sun, Xian-Sheng Hua, Hanwang Zhang
However, the theoretical solution provided by transportability is far from practical for UDA, because it requires the stratification and representation of the unobserved confounder that is the cause of the domain gap.
1 code implementation • CVPR 2021 • Zhongqi Yue, Tan Wang, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
We show that the key reason is that the generation is not Counterfactual Faithful, and thus we propose a faithful one, whose generation is from the sample-specific counterfactual question: what would the sample look like if we set its class attribute to a certain class while keeping its sample attribute unchanged?
1 code implementation • NeurIPS 2020 • Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
Specifically, we develop three effective IFSL algorithmic implementations based on the backdoor adjustment, which is essentially a causal intervention towards the SCM of many-shot learning: the upper-bound of FSL in a causal view.
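The backdoor adjustment mentioned above has a standard closed form, P(Y | do(X=x)) = Σ_d P(Y | x, d) P(d), which stratifies over the confounder d instead of conditioning on it. A minimal discrete sketch (the array layout is an assumption for illustration, not one of the paper's three IFSL implementations):

```python
import numpy as np

def backdoor_adjust(p_y_given_x_d, p_d):
    """Backdoor adjustment for a discrete confounder D:

        P(Y | do(X=x)) = sum_d P(Y | x, d) * P(d)

    p_y_given_x_d: array of shape [X, D, Y] with conditionals P(Y | x, d)
    p_d:           array of shape [D] with the confounder prior P(d)
    Returns an array of shape [X, Y] with interventional distributions.
    """
    return np.einsum('xdy,d->xy', p_y_given_x_d, p_d)

# Toy example: one action X, binary confounder D, binary outcome Y.
p_y_given_x_d = np.array([[[0.9, 0.1],
                           [0.2, 0.8]]])
p_d = np.array([0.5, 0.5])
p_do = backdoor_adjust(p_y_given_x_d, p_d)  # [[0.55, 0.45]]
```

Averaging over P(d) rather than P(d | x) is exactly what removes the confounding path, which is the intervention applied to the SCM of many-shot learning.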