Search Results for author: Yuanxin Liu

Found 15 papers, 10 papers with code

COST-EFF: Collaborative Optimization of Spatial and Temporal Efficiency with Slenderized Multi-exit Language Models

1 code implementation · 27 Oct 2022 · Bowen Shen, Zheng Lin, Yuanxin Liu, Zhengxiao Liu, Lei Wang, Weiping Wang

Motivated by such considerations, we propose a collaborative optimization for PLMs that integrates static model compression and dynamic inference acceleration.
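The "dynamic inference acceleration" half of this idea is typically realized with early exits: intermediate classifiers that let confident inputs skip the remaining layers. A minimal sketch of that mechanism (illustrative only — the function names, confidence gate, and toy layers below are assumptions, not COST-EFF's actual design):

```python
import numpy as np

def multi_exit_forward(x, layers, classifiers, threshold=0.9):
    """Run layers sequentially; stop at the first intermediate
    classifier whose top probability clears the confidence gate.
    Returns the prediction and the index of the exit taken."""
    for i, (layer, clf) in enumerate(zip(layers, classifiers)):
        x = layer(x)
        probs = clf(x)
        if probs.max() >= threshold:  # confident enough: exit early
            return probs, i
    return probs, len(layers) - 1     # fell through to the final exit

# Toy demo: identity "layers" and classifiers with rising confidence.
layers = [lambda x: x] * 3
def make_clf(conf):
    return lambda x: np.array([conf, 1.0 - conf])
classifiers = [make_clf(0.5), make_clf(0.95), make_clf(0.99)]

probs, exit_idx = multi_exit_forward(np.zeros(4), layers, classifiers)
print(exit_idx)  # exits at layer index 1, skipping the last layer
```

Combining such exits with a statically compressed ("slenderized") backbone is what makes the optimization collaborative: the static part shrinks the per-layer cost, the dynamic part shrinks the number of layers executed.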

Model Compression

Compressing And Debiasing Vision-Language Pre-Trained Models for Visual Question Answering

no code implementations · 26 Oct 2022 · Qingyi Si, Yuanxin Liu, Zheng Lin, Peng Fu, Weiping Wang

To facilitate the application of VLP to VQA tasks, it is imperative to jointly study VLP compression and OOD robustness, which, however, has not yet been explored.

Question Answering · Visual Question Answering (VQA)

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

1 code implementation · 11 Oct 2022 · Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance.

Natural Language Understanding

Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

1 code implementation · 10 Oct 2022 · Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets.

Question Answering · Visual Question Answering (VQA)

Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning

1 code implementation · 10 Oct 2022 · Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

However, these models exhibit a trade-off: the improvements on OOD data come at a severe cost to performance on the in-distribution (ID) data, which is dominated by the biased samples.

Contrastive Learning · Question Answering +1

Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training

1 code implementation · NAACL 2022 · Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

Firstly, we discover that the success of magnitude pruning can be attributed to the preserved pre-training performance, which correlates with the downstream transferability.
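Magnitude pruning, referenced here, keeps the largest-magnitude weights and zeroes the rest, yielding the mask that defines a "lottery ticket" subnetwork. A generic sketch of extracting such a mask (the function and threshold rule are illustrative, not the paper's exact procedure):

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a 0/1 mask that zeroes the `sparsity` fraction of
    weights with the smallest absolute values."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                    # weights to drop
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    return (np.abs(weights) >= threshold).astype(weights.dtype)

w = np.array([[0.1, -2.0], [0.05, 1.5]])
mask = magnitude_prune_mask(w, sparsity=0.5)
print(mask)      # only the two largest-magnitude weights survive
print(w * mask)  # the sparse subnetwork's weights
```

The paper's task-agnostic mask *training* replaces this one-shot magnitude criterion with learned masks, but the magnitude baseline above is the reference point its analysis starts from.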

Transfer Learning

Marginal Utility Diminishes: Exploring the Minimum Knowledge for BERT Knowledge Distillation

1 code implementation · ACL 2021 · Yuanxin Liu, Fandong Meng, Zheng Lin, Weiping Wang, Jie Zhou

In this paper, however, we observe that although distilling the teacher's hidden state knowledge (HSK) is helpful, the performance gain (marginal utility) diminishes quickly as more HSK is distilled.
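Hidden-state distillation is usually implemented as a regression loss between teacher and student hidden states, and "how much HSK is distilled" can be varied by selecting how many states to match. A hedged sketch of that setup (layer selection, loss form, and all names below are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def hsk_distill_loss(teacher_states, student_states, num_layers):
    """Mean-squared error between the first `num_layers` teacher
    hidden states and the corresponding student states; `num_layers`
    controls the amount of hidden-state knowledge distilled."""
    loss = 0.0
    for t, s in zip(teacher_states[:num_layers], student_states[:num_layers]):
        loss += np.mean((t - s) ** 2)
    return loss / max(num_layers, 1)

rng = np.random.default_rng(0)
teacher = [rng.normal(size=(4, 8)) for _ in range(6)]
student = [t + 0.1 * rng.normal(size=t.shape) for t in teacher]

loss_small = hsk_distill_loss(teacher, student, num_layers=2)
loss_full = hsk_distill_loss(teacher, student, num_layers=6)
print(loss_small, loss_full)
```

The paper's diminishing-marginal-utility observation concerns the downstream gain from increasing the distilled HSK, not this training loss itself; the sketch only shows the knob being varied.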

Knowledge Distillation

ROSITA: Refined BERT cOmpreSsion with InTegrAted techniques

1 code implementation · 21 Mar 2021 · Yuanxin Liu, Zheng Lin, Fengcheng Yuan

Based on the empirical findings, our best compressed model, dubbed Refined BERT cOmpreSsion with InTegrAted techniques (ROSITA), is $7.5\times$ smaller than BERT while maintaining $98.5\%$ of the performance on five tasks of the GLUE benchmark, outperforming previous BERT compression methods with a similar parameter budget.

Knowledge Distillation

Learning Class-Transductive Intent Representations for Zero-shot Intent Detection

1 code implementation · 3 Dec 2020 · Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, Weiping Wang

A critical problem behind these limitations is that the representations of unseen intents cannot be learned in the training stage.

Intent Detection · Multi-Task Learning +1

Exploring and Distilling Cross-Modal Information for Image Captioning

no code implementations · 28 Feb 2020 · Fenglin Liu, Xuancheng Ren, Yuanxin Liu, Kai Lei, Xu Sun

Recently, attention-based encoder-decoder models have been used extensively in image captioning.

Image Captioning

Unsupervised Pre-training for Natural Language Generation: A Literature Review

no code implementations · 13 Nov 2019 · Yuanxin Liu, Zheng Lin

They are classified into architecture-based methods and strategy-based methods, according to how they handle the above obstacle.

Natural Language Understanding · Text Generation +1

Self-Adaptive Scaling for Learnable Residual Structure

no code implementations · CoNLL 2019 · Fenglin Liu, Meng Gao, Yuanxin Liu, Kai Lei

Residual connections have been widely applied to build deep neural networks with enhanced feature propagation and improved accuracy.

Image Captioning · Image Classification +2
