Search Results for author: Fangxiang Feng

Found 11 papers, 6 papers with code

Enhanced Multi-Channel Graph Convolutional Network for Aspect Sentiment Triplet Extraction

1 code implementation ACL 2022 Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, Xiaojie Wang

Specifically, we first define ten types of relations for the ASTE task, and then adopt a biaffine attention module to embed these relations as an adjacency tensor between words in a sentence.

Aspect Sentiment Triplet Extraction
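The abstract's idea of scoring a relation type for every word pair with biaffine attention can be sketched as follows. This is a minimal numpy illustration of the general biaffine mechanism, not the paper's implementation; all names and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d, n_rel = 5, 8, 10        # sentence length, hidden size, ten relation types

H = rng.normal(size=(n_words, d))    # word representations from some encoder (illustrative)
U = rng.normal(size=(n_rel, d, d))   # bilinear weight per relation type
W = rng.normal(size=(n_rel, 2 * d))  # linear weight on the concatenated word pair
b = rng.normal(size=n_rel)           # bias per relation type

# Biaffine score for every word pair (i, j) and relation r:
#   s[i, j, r] = H[i] @ U[r] @ H[j] + W[r] @ [H[i]; H[j]] + b[r]
bilinear = np.einsum('id,rde,je->ijr', H, U, H)
pair = np.concatenate(
    [np.repeat(H[:, None, :], n_words, axis=1),
     np.repeat(H[None, :, :], n_words, axis=0)], axis=-1)  # (n, n, 2d)
linear = np.einsum('ijd,rd->ijr', pair, W)

adj_tensor = bilinear + linear + b   # adjacency tensor, shape (n_words, n_words, n_rel)
print(adj_tensor.shape)              # (5, 5, 10)
```

Each slice `adj_tensor[:, :, r]` can then be read as a (soft) adjacency matrix for relation type `r` between the words of the sentence.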

GR-GAN: Gradual Refinement Text-to-image Generation

1 code implementation 23 May 2022 Bo Yang, Fangxiang Feng, Xiaojie Wang

We also introduce a new metric Cross-Model Distance (CMD) for simultaneously evaluating image quality and image-text consistency.

Text Matching · Text to Image Generation +1

Question-Driven Graph Fusion Network For Visual Question Answering

no code implementations 3 Apr 2022 Yuxi Qian, Yuncong Hu, Ruonan Wang, Fangxiang Feng, Xiaojie Wang

It first models semantic, spatial, and implicit visual relations in images with three graph attention networks; question information is then utilized to guide the aggregation of the three graphs; further, QD-GFN adopts an object-filtering mechanism to remove question-irrelevant objects from the image.

Graph Attention · object-detection +4
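The described pipeline (three relation graphs, question-guided aggregation, then object filtering) can be sketched at a high level. This is a hedged numpy sketch of one plausible reading, not the paper's QD-GFN code; graph outputs, the question vector, and the filtering rule are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n_obj, d = 6, 16

# Hypothetical outputs of the three relation graphs (semantic, spatial, implicit)
graphs = [rng.normal(size=(n_obj, d)) for _ in range(3)]
q = rng.normal(size=d)  # question representation (illustrative)

# Question-guided aggregation: weight each graph by its affinity with the question
graph_summaries = np.stack([g.mean(axis=0) for g in graphs])  # (3, d)
weights = softmax(graph_summaries @ q)                        # (3,)
fused = sum(w * g for w, g in zip(weights, graphs))           # (n_obj, d)

# Object filtering: keep objects whose question relevance is above average
relevance = softmax(fused @ q)
keep = relevance >= relevance.mean()
filtered = fused[keep]
print(fused.shape, filtered.shape)
```

The key design choice illustrated here is that the question drives both steps: it decides how much each relation graph contributes, and which objects survive the filter.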

Co-VQA: Answering by Interactive Sub Question Sequence

no code implementations Findings (ACL) 2022 Ruonan Wang, Yuxi Qian, Fangxiang Feng, Xiaojie Wang, Huixing Jiang

Most existing approaches to Visual Question Answering (VQA) answer questions directly; however, people usually decompose a complex question into a sequence of simple sub-questions and obtain the answer to the original question only after answering the sub-question sequence (SQS).

Question Answering · Visual Question Answering +2

Spot the Difference: A Cooperative Object-Referring Game in Non-Perfectly Co-Observable Scene

no code implementations 16 Mar 2022 Duo Zheng, Fandong Meng, Qingyi Si, Hairun Fan, Zipeng Xu, Jie zhou, Fangxiang Feng, Xiaojie Wang

Visual dialog has witnessed great progress after introducing various vision-oriented goals into the conversation, notably GuessWhich and GuessWhat, where the single image is visible to only one of, or to both of, the questioner and the answerer, respectively.

Visual Dialog

Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis

1 code implementation ACL 2021 Ruifan Li, Hao Chen, Fangxiang Feng, Zhanyu Ma, Xiaojie Wang, Eduard Hovy

To overcome these challenges, in this paper, we propose a dual graph convolutional network (DualGCN) model that simultaneously considers the complementarity of syntax structures and semantic correlations.

Aspect-Based Sentiment Analysis · Dependency Parsing
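The dual-branch idea (one GCN over a syntax graph, one over a semantic graph, with their outputs combined) can be sketched minimally. This is an illustrative numpy sketch of the general dual-GCN pattern under assumed shapes and a toy dependency parse, not the DualGCN implementation.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN layer with self-loops and row normalisation: ReLU(D^-1 (A+I) H W)
    A_hat = A + np.eye(A.shape[0])
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(D_inv * (A_hat @ H) @ W, 0.0)

rng = np.random.default_rng(2)
n, d = 5, 8
H = rng.normal(size=(n, d))  # word features (illustrative)

# Syntax branch: binary symmetric adjacency from a toy dependency parse
A_syn = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A_syn[i, j] = A_syn[j, i] = 1.0

# Semantic branch: dense adjacency from self-attention scores
scores = H @ H.T / np.sqrt(d)
A_sem = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

W_syn = rng.normal(size=(d, d))
W_sem = rng.normal(size=(d, d))
H_syn = gcn_layer(A_syn, H, W_syn)
H_sem = gcn_layer(A_sem, H, W_sem)

H_out = np.concatenate([H_syn, H_sem], axis=-1)  # fused representation, (n, 2d)
print(H_out.shape)                               # (5, 16)
```

The two branches see the same words through different graphs, so concatenating their outputs lets downstream layers exploit both the parse structure and the learned semantic correlations.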

Multi-stage Pre-training over Simplified Multimodal Pre-training Models

1 code implementation ACL 2021 Tongtong Liu, Fangxiang Feng, Xiaojie Wang

Experimental results show that our method achieves performance comparable to the original LXMERT model on all downstream tasks, and even outperforms the original model on the Image-Text Retrieval task.

Answer-Driven Visual State Estimator for Goal-Oriented Visual Dialogue

1 code implementation 1 Oct 2020 Zipeng Xu, Fangxiang Feng, Xiaojie Wang, Yushu Yang, Huixing Jiang, Zhongyuan Wang

In this paper, we propose an Answer-Driven Visual State Estimator (ADVSE) to impose the effects of different answers on visual states.

Question Generation · Visual Dialog
