Search Results for author: Xinzhe Han

Found 8 papers, 6 papers with code

Greedy Gradient Ensemble for Robust Visual Question Answering

1 code implementation • ICCV 2021 • Xinzhe Han, Shuhui Wang, Chi Su, Qingming Huang, Qi Tian

Language bias is a critical issue in Visual Question Answering (VQA), where models often exploit dataset biases for the final decision without considering the image information.

Question Answering · Visual Question Answering

Edge-featured Graph Neural Architecture Search

no code implementations • 3 Sep 2021 • Shaofei Cai, Liang Li, Xinzhe Han, Zheng-Jun Zha, Qingming Huang

Recently, researchers have studied neural architecture search (NAS) to reduce the dependence on human expertise and explore better GNN architectures, but they over-emphasize entity features and ignore the latent relation information concealed in the edges.

Neural Architecture Search

General Greedy De-bias Learning

1 code implementation • 20 Dec 2021 • Xinzhe Han, Shuhui Wang, Chi Su, Qingming Huang, Qi Tian

Existing de-bias learning frameworks try to capture specific dataset biases using annotations, but they fail to handle complicated OOD scenarios.

Image Classification · Question Answering +1

Automatic Relation-aware Graph Network Proliferation

1 code implementation • CVPR 2022 • Shaofei Cai, Liang Li, Xinzhe Han, Jiebo Luo, Zheng-Jun Zha, Qingming Huang

However, the currently used graph search space overemphasizes learning node features and neglects mining hierarchical relational information.

Graph Classification · Graph Learning +5

Stable Attribute Group Editing for Reliable Few-shot Image Generation

1 code implementation • 1 Feb 2023 • Guanqi Ding, Xinzhe Han, Shuhui Wang, Xin Jin, Dandan Tu, Qingming Huang

SAGE makes use of all the given few-shot images and estimates a class-center embedding based on the category-relevant attribute dictionary.

Attribute · Classification +1

Open-Set Knowledge-Based Visual Question Answering with Inference Paths

1 code implementation • 12 Oct 2023 • Jingru Gan, Xinzhe Han, Shuhui Wang, Qingming Huang

Given an image and an associated textual question, the purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.

Knowledge Graphs · Multi-class Classification +2

Interpretable Visual Reasoning via Probabilistic Formulation under Natural Supervision

no code implementations • ECCV 2020 • Xinzhe Han, Shuhui Wang, Chi Su, Weigang Zhang, Qingming Huang, Qi Tian

In this paper, we rethink the implicit reasoning process in VQA and propose a new formulation that maximizes the log-likelihood of the joint distribution of the observed question and the predicted answer.

Question Answering · Visual Question Answering +1
