Search Results for author: Peng Fu

Found 20 papers, 12 papers with code

Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

1 code implementation • 10 Oct 2022 • Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets (see the sketch after this entry).

Question Answering · Visual Question Answering
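
The entry above mentions constructing distribution shifts for OOD test sets. A toy, purely illustrative sketch of one such shift (skewing the answer distribution per question type; the field names "question_type" and "answer" are assumptions, not the dataset's schema) might look like this:

```python
from collections import Counter, defaultdict

def split_by_answer_shift(samples, head_ratio=0.5):
    """Toy sketch: within each question type, route the most frequent (head)
    answers to an in-distribution split and the remaining answers to an OOD
    split, producing an answer-distribution shift between the two splits."""
    by_type = defaultdict(list)
    for sample in samples:
        by_type[sample["question_type"]].append(sample)

    in_dist, ood = [], []
    for group in by_type.values():
        counts = Counter(sample["answer"] for sample in group)
        n_head = max(1, int(len(counts) * head_ratio))
        head_answers = {ans for ans, _ in counts.most_common(n_head)}
        for sample in group:
            (in_dist if sample["answer"] in head_answers else ood).append(sample)
    return in_dist, ood
```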

Check It Again: Progressive Visual Question Answering via Visual Entailment

1 code implementation • 8 Jun 2021 • Qingyi Si, Zheng Lin, Mingyu Zheng, Peng Fu, Weiping Wang

Moreover, they only explore the interaction between image and question, ignoring the semantics of the candidate answers.

Question Answering · Visual Entailment · +1

A Hierarchical Transformer with Speaker Modeling for Emotion Recognition in Conversation

1 code implementation • 29 Dec 2020 • Jiangnan Li, Zheng Lin, Peng Fu, Qingyi Si, Weiping Wang

It can be regarded as a personalized and interactive emotion recognition task, which must consider not only the semantic information of the text but also the influence of the speakers.

Emotion Recognition in Conversation

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

1 code implementation • 11 Oct 2022 • Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting performance.

Natural Language Understanding

Learning Class-Transductive Intent Representations for Zero-shot Intent Detection

1 code implementation • 3 Dec 2020 • Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, Weiping Wang

A critical problem behind these limitations is that the representations of unseen intents cannot be learned in the training stage.

Intent Detection · Multi-Task Learning · +1

Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training

1 code implementation • NAACL 2022 • Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

Firstly, we discover that the success of magnitude pruning can be attributed to the preserved pre-training performance, which correlates with the downstream transferability (see the sketch after this entry).

Transfer Learning
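
The entry above refers to magnitude pruning. As a point of reference, a minimal one-shot magnitude-pruning sketch in PyTorch could look as follows; the default sparsity level and per-tensor thresholding are illustrative assumptions, not the paper's task-agnostic mask-training procedure.

```python
import torch

def magnitude_prune_masks(model: torch.nn.Module, sparsity: float = 0.7):
    """Return {parameter name: 0/1 mask} keeping the largest-magnitude weights.

    Illustrative only: prunes each weight matrix independently; the paper
    trains task-agnostic masks rather than using this one-shot recipe.
    """
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                      # skip biases, LayerNorm, etc.
            continue
        flat = param.detach().abs().flatten()
        keep = max(1, int(flat.numel() * (1.0 - sparsity)))
        threshold = torch.topk(flat, keep).values.min()
        masks[name] = (param.detach().abs() >= threshold).float()
    return masks

def apply_masks(model: torch.nn.Module, masks: dict):
    """Zero out pruned weights in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```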

Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning

1 code implementation • 10 Oct 2022 • Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

However, these models reveal a trade-off: the improvements on OOD data come at a severe cost to performance on the in-distribution (ID) data (which is dominated by the biased samples).

Contrastive Learning · Question Answering · +1

Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering

1 code implementation • 26 Oct 2022 • Qingyi Si, Yuanxin Liu, Zheng Lin, Peng Fu, Weiping Wang

To this end, we systematically study the design of a training and compression pipeline to search for the subnetworks, as well as the assignment of sparsity to different modality-specific modules (see the sketch after this entry).

Question Answering · Visual Question Answering
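
The entry above mentions assigning sparsity to different modality-specific modules. A toy sketch of such an assignment, with hypothetical module names and ratios (the paper searches these assignments rather than fixing them), could feed a pruning helper like the one sketched earlier:

```python
# Hypothetical per-module sparsity budget for a vision-language model;
# module names and ratios are assumptions for illustration only.
SPARSITY_BY_MODULE = {
    "vision_encoder": 0.5,
    "text_encoder": 0.7,
    "cross_modal_fusion": 0.3,
}

def sparsity_for(param_name: str, default: float = 0.6) -> float:
    """Pick a sparsity ratio based on which modality-specific module a
    parameter belongs to (matched by its name prefix)."""
    for prefix, ratio in SPARSITY_BY_MODULE.items():
        if param_name.startswith(prefix):
            return ratio
    return default
```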

Target Really Matters: Target-aware Contrastive Learning and Consistency Regularization for Few-shot Stance Detection

1 code implementation • COLING 2022 • Rui Liu, Zheng Lin, Huishan Ji, Jiangnan Li, Peng Fu, Weiping Wang

Despite the significant progress on this task, it is extremely time-consuming and costly to collect sufficient high-quality labeled data for every new target under fully-supervised learning, whereas unlabeled data can be collected far more easily.

Contrastive Learning · Few-Shot Stance Detection

Hyperspectral Image Classification Method Based on 2D–3D CNN and Multibranch Feature Fusion

no code implementations • 18 Sep 2020 • Zixian Ge, Guo Cao, Xuesong Li, Peng Fu

Then, a multibranch neural network extracts three kinds of features, from shallow to deep, and fuses them along the spectral dimension (see the sketch after this entry).

Classification · General Classification · +1
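
The entry above describes extracting shallow-to-deep features in multiple branches and fusing them. A minimal, purely illustrative PyTorch sketch of that fusion pattern (not the paper's exact 2D-3D CNN; kernel sizes and channel counts are assumptions) might look like this:

```python
import torch
import torch.nn as nn

class MultiBranchFusion(nn.Module):
    """Toy sketch of multibranch feature fusion: shallow, intermediate, and
    deep feature maps from a small 3-D CNN are pooled and concatenated along
    the channel dimension before classification."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())
        self.block3 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.classifier = nn.Linear(8 + 16 + 32, n_classes)

    def forward(self, x):                  # x: (batch, 1, bands, height, width)
        f1 = self.block1(x)                # shallow features
        f2 = self.block2(f1)               # intermediate features
        f3 = self.block3(f2)               # deep features
        fused = torch.cat([self.pool(f).flatten(1) for f in (f1, f2, f3)], dim=1)
        return self.classifier(fused)
```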

Modeling Intra and Inter-modality Incongruity for Multi-Modal Sarcasm Detection

no code implementations • Findings of the Association for Computational Linguistics 2020 • Hongliang Pan, Zheng Lin, Peng Fu, Yatao Qi, Weiping Wang

Inspired by this, we propose a BERT-based model that concentrates on both intra- and inter-modality incongruity for multi-modal sarcasm detection (see the sketch after this entry).

Sarcasm Detection
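
The entry above mentions modeling intra- and inter-modality incongruity. A rough, hedged sketch of that idea (self-attention within each modality, cross-attention between text and image; feature dimensions and head counts are assumptions, not the paper's architecture) could be:

```python
import torch
import torch.nn as nn

class IncongruityScorer(nn.Module):
    """Rough sketch: self-attention inside each modality approximates
    intra-modality incongruity, cross-attention from text to image captures
    inter-modality incongruity, and the pooled result feeds a binary
    sarcasm classifier."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.intra_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)

    def forward(self, text_feats, image_feats):
        t, _ = self.intra_text(text_feats, text_feats, text_feats)
        v, _ = self.intra_image(image_feats, image_feats, image_feats)
        x, _ = self.inter(t, v, v)         # text tokens attend to image regions
        pooled = torch.cat([x.mean(dim=1), t.mean(dim=1)], dim=-1)
        return self.classifier(pooled)
```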

Question-Interlocutor Scope Realized Graph Modeling over Key Utterances for Dialogue Reading Comprehension

no code implementations • 26 Oct 2022 • Jiangnan Li, Mo Yu, Fandong Meng, Zheng Lin, Peng Fu, Weiping Wang, Jie Zhou

Although these tasks are effective, pressing problems remain: (1) randomly masking speakers regardless of the question cannot map the speaker mentioned in the question to the corresponding speaker in the dialogue, and it ignores the speaker-centric nature of utterances.

Reading Comprehension

Revisiting the Knowledge Injection Frameworks

no code implementations • 2 Nov 2023 • Peng Fu, Yiming Zhang, Haobo Wang, Weikang Qiu, Junbo Zhao

In brief, the core of this technique lies in pruning and purifying the external knowledge base before it is injected into LLMs (see the sketch after this entry).
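
The entry above emphasizes pruning and purifying the external knowledge base before injection. A very rough, purely illustrative sketch of such filtering over knowledge triples (the relevance scorer and threshold are assumptions, not the paper's method) might be:

```python
def prune_knowledge_triples(triples, query_entities, scorer=None, min_score=0.5):
    """Toy sketch: keep only (head, relation, tail) triples that mention an
    entity from the query and, if a relevance scorer is supplied, clear a
    score threshold, before the remaining triples are injected into an LLM
    prompt. `scorer` is a hypothetical relevance model."""
    kept = []
    for head, relation, tail in triples:
        if head not in query_entities and tail not in query_entities:
            continue
        if scorer is not None and scorer(head, relation, tail) < min_score:
            continue
        kept.append((head, relation, tail))
    return kept
```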

Object Attribute Matters in Visual Question Answering

no code implementations • 20 Dec 2023 • Peize Li, Qingyi Si, Peng Fu, Zheng Lin, Yan Wang

In this paper, we propose a novel VQA approach from the perspective of utilizing object attributes, aiming to achieve better object-level vision-language alignment and multimodal scene understanding.

Attribute · Knowledge Distillation · +5

Are Large Language Models Table-based Fact-Checkers?

no code implementations • 4 Feb 2024 • Hangwen Zhang, Qingyi Si, Peng Fu, Zheng Lin, Weiping Wang

Finally, we analyze some possible directions for improving the accuracy of TFV via LLMs, which will benefit further research on table reasoning.

Fact Verification · In-Context Learning · +2
