Search Results for author: Chengxiang Yin

Found 6 papers, 0 papers with code

Multi-Clue Reasoning with Memory Augmentation for Knowledge-based Visual Question Answering

no code implementations • 20 Dec 2023 • Chengxiang Yin, Zhengping Che, Kun Wu, Zhiyuan Xu, Jian Tang

Visual Question Answering (VQA) has emerged as one of the most challenging tasks in artificial intelligence due to its multi-modal nature.

Question Answering • Visual Question Answering

Cross-Modal Reasoning with Event Correlation for Video Question Answering

no code implementations • 20 Dec 2023 • Chengxiang Yin, Zhengping Che, Kun Wu, Zhiyuan Xu, Qinru Qiu, Jian Tang

Video Question Answering (VideoQA) is an attractive and challenging research direction that aims to understand the complex semantics of heterogeneous data from two domains, i.e., the spatio-temporal video content and the word sequence of the question (a generic two-stream sketch follows this entry).

Question Answering • Video Question Answering
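
The two domains named above suggest a standard two-stream pattern: encode the video frames and the question words separately, fuse, and classify over an answer vocabulary. Below is a minimal, generic VideoQA skeleton in PyTorch, not the paper's cross-modal event-correlation model; all module choices and dimensions (2048-d frame features, GRU encoders, simple late fusion) are illustrative assumptions.

```python
# Generic VideoQA skeleton: two encoders (video, question), late fusion,
# answer classification. Illustrative only; NOT the paper's model.
import torch
import torch.nn as nn

class VideoQASketch(nn.Module):
    def __init__(self, vocab_size=10000, num_answers=1000, dim=256):
        super().__init__()
        self.frame_proj = nn.Linear(2048, dim)                # per-frame CNN features -> dim
        self.video_rnn = nn.GRU(dim, dim, batch_first=True)   # temporal context over frames
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.question_rnn = nn.GRU(dim, dim, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_answers)

    def forward(self, frame_feats, question_ids):
        # frame_feats: (batch, num_frames, 2048); question_ids: (batch, num_words)
        _, v = self.video_rnn(self.frame_proj(frame_feats))    # final video state
        _, q = self.question_rnn(self.word_emb(question_ids))  # final question state
        fused = torch.cat([v[-1], q[-1]], dim=-1)              # simple late fusion
        return self.classifier(fused)                          # answer logits

model = VideoQASketch()
logits = model(torch.randn(2, 16, 2048), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 1000])
```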

Continual Few-Shot Learning with Adversarial Class Storage

no code implementations • 10 Jul 2022 • Kun Wu, Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang, Dejun Yang

In this paper, we define a new problem called continual few-shot learning, in which tasks arrive sequentially and each task is associated with only a few training samples (the protocol is sketched after this entry).

continual few-shot learning • Few-Shot Learning +1
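
As a runnable toy of the protocol just defined: tasks arrive one at a time, each contributing only a few labeled samples, and the learner is evaluated on every task seen so far, which is what would expose forgetting in a parametric model. The nearest-class-mean learner and the `make_task` helper below are hypothetical scaffolding to make the protocol concrete; they are NOT the paper's adversarial class-storage method.

```python
# Toy continual few-shot protocol: sequential tasks, few shots per task,
# evaluation over ALL tasks seen so far.
import random

def make_task(task_id, n_shots=5):
    # Each task: two 1-D classes centered at task-specific points.
    data = []
    for label in (0, 1):
        cx = task_id + label * 0.5
        data += [((cx + random.gauss(0, 0.1),), label) for _ in range(n_shots)]
    return data

class NearestMeanLearner:
    def __init__(self):
        self.means = {}                       # (task_id, label) -> class mean

    def fit_task(self, task_id, support):
        for label in (0, 1):
            xs = [x[0] for x, y in support if y == label]
            self.means[(task_id, label)] = sum(xs) / len(xs)

    def accuracy(self, task_id, data):
        correct = 0
        for (x,), y in data:
            pred = min((0, 1), key=lambda l: abs(x - self.means[(task_id, l)]))
            correct += pred == y
        return correct / len(data)

learner = NearestMeanLearner()
tasks = {}
for task_id in range(4):                      # tasks arrive sequentially
    tasks[task_id] = make_task(task_id)
    learner.fit_task(task_id, tasks[task_id])
    # Continual evaluation: all tasks seen so far, not just the newest one.
    print({t: round(learner.accuracy(t, d), 2) for t, d in tasks.items()})
```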

Human Pose Transfer with Augmented Disentangled Feature Consistency

no code implementations • 23 Jul 2021 • Kun Wu, Chengxiang Yin, Zhengping Che, Bo Jiang, Jian Tang, Zheng Guan, Gangyi Ding

Deep generative models have made great progress in synthesizing images with arbitrary human poses and transferring poses of one person to others.

Data Augmentation • Pose Transfer

Hierarchical Graph Attention Network for Few-Shot Visual-Semantic Learning

no code implementations • ICCV 2021 • Chengxiang Yin, Kun Wu, Zhengping Che, Bo Jiang, Zhiyuan Xu, Jian Tang

Deep learning has achieved tremendous success in computer vision, natural language processing, and even visual-semantic learning, but it requires a huge amount of labeled training data.

Graph Attention • Image Captioning +2

Adversarial Meta-Learning

no code implementations • 8 Jun 2018 • Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang

Meta-learning enables a model to learn from very limited data to undertake a new task (a minimal inner/outer-loop sketch follows this entry).

Meta-Learning
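
For context, the sentence above describes the standard inner/outer-loop structure of optimization-based meta-learning: the outer loop learns an initialization, the inner loop adapts it to each task from a handful of support samples. The sketch below is a toy MAML-style loop on linear regression; it deliberately omits the adversarial component this paper adds, and all hyperparameters and task definitions are assumptions.

```python
# Toy MAML-style meta-learning on linear regression tasks.
# Generic sketch; NOT the paper's adversarial meta-learning method.
import torch

def loss_fn(w, x, y):
    return ((x @ w - y) ** 2).mean()          # per-task regression loss

w = torch.zeros(3, requires_grad=True)        # meta-learned initialization
meta_opt = torch.optim.SGD([w], lr=0.1)

for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):                        # a batch of tasks
        a = torch.randn(3)                    # each task: its own linear map
        x = torch.randn(10, 3)
        y = x @ a
        # Inner loop: one gradient step on the task's few support samples.
        g = torch.autograd.grad(loss_fn(w, x[:5], y[:5]), w, create_graph=True)[0]
        w_task = w - 0.01 * g
        # Outer loop: query loss after adaptation drives the meta-update.
        loss_fn(w_task, x[5:], y[5:]).backward()
    meta_opt.step()
```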
