Search Results for author: Junfeng Tian

Found 13 papers, 5 papers with code

mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections

1 code implementation 24 May 2022 Chenliang Li, Haiyang Xu, Junfeng Tian, Wei Wang, Ming Yan, Bin Bi, Jiabo Ye, Hehong Chen, Guohai Xu, Zheng Cao, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou, Luo Si

Large-scale pretrained foundation models have been an emerging paradigm for building artificial intelligence (AI) systems, which can be quickly adapted to a wide range of downstream tasks.

Image Captioning · Question Answering · +5

WikiDiverse: A Multimodal Entity Linking Dataset with Diversified Contextual Topics and Entity Types

1 code implementation ACL 2022 Xuwu Wang, Junfeng Tian, Min Gui, Zhixu Li, Rui Wang, Ming Yan, Lihan Chen, Yanghua Xiao

In this paper, we present WikiDiverse, a high-quality human-annotated MEL dataset with diversified contextual topics and entity types from Wikinews, which uses Wikipedia as the corresponding knowledge base.

Entity Linking

Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding

1 code implementation CVPR 2022 Jiabo Ye, Junfeng Tian, Ming Yan, Xiaoshan Yang, Xuwu Wang, Ji Zhang, Liang He, Xin Lin

Moreover, since the backbones are query-agnostic, it is difficult to completely avoid the inconsistency issue by training the visual backbone end-to-end in the visual grounding framework.

Visual Grounding

Grid-VLP: Revisiting Grid Features for Vision-Language Pre-training

no code implementations 21 Aug 2021 Ming Yan, Haiyang Xu, Chenliang Li, Bin Bi, Junfeng Tian, Min Gui, Wei Wang

Existing approaches to vision-language pre-training (VLP) heavily rely on an object detector based on bounding boxes (regions), where salient objects are first detected from images and then a Transformer-based model is used for cross-modal fusion.

Object Detection

SentiX: A Sentiment-Aware Pre-Trained Model for Cross-Domain Sentiment Analysis

1 code implementation COLING 2020 Jie Zhou, Junfeng Tian, Rui Wang, Yuanbin Wu, Wenming Xiao, Liang He

However, due to the variety of users' emotional expressions across domains, fine-tuning the pre-trained models on the source domain tends to overfit, leading to inferior results on the target domain.

Language Modelling · Sentiment Analysis

Multi-Domain Dialogue Acts and Response Co-Generation

1 code implementation ACL 2020 Kai Wang, Junfeng Tian, Rui Wang, Xiaojun Quan, Jianxing Yu

Unlike those pipeline approaches, our act generation module preserves the semantic structures of multi-domain dialogue acts and our response generation module dynamically attends to different acts as needed.

Response Generation · Task-Oriented Dialogue Systems

Attention Optimization for Abstractive Document Summarization

no code implementations IJCNLP 2019 Min Gui, Junfeng Tian, Rui Wang, Zhenglu Yang

Attention plays a key role in the improvement of sequence-to-sequence-based document summarization models.

Document Summarization
