Search Results for author: Fuli Luo

Found 29 papers, 14 papers with code

S^4-Tuning: A Simple Cross-lingual Sub-network Tuning Method

no code implementations • ACL 2022 • Runxin Xu, Fuli Luo, Baobao Chang, Songfang Huang, Fei Huang

The emergence of multilingual pre-trained language models makes it possible to adapt to target languages with only a few labeled examples. However, vanilla fine-tuning tends to achieve degenerated and unstable results, owing to Language Interference among different languages and Parameter Overload under few-sample transfer learning scenarios. To address the two problems elegantly, we propose S^4-Tuning, a Simple Cross-lingual Sub-network Tuning method.

Transfer Learning
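
The snippet names the method but not its mechanics; the core move of sub-network tuning can still be sketched: freeze the pre-trained model and update only a small subset of its parameters for the target language. Below is a minimal PyTorch sketch in which a magnitude-based heuristic stands in for the paper's actual sub-network selection:

```python
# Minimal sketch of sub-network tuning: freeze every parameter tensor of a
# model except a chosen subset, then fine-tune as usual. The selection
# heuristic (largest average magnitude) is a stand-in, not S^4-Tuning's rule.
import torch
from torch import nn

def select_subnetwork(model: nn.Module, keep_ratio: float = 0.25) -> None:
    params = list(model.parameters())
    order = sorted(range(len(params)),
                   key=lambda i: params[i].abs().mean().item(), reverse=True)
    keep = set(order[:max(1, int(len(params) * keep_ratio))])
    for i, p in enumerate(params):
        p.requires_grad_(i in keep)  # only the sub-network stays trainable

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 2))
select_subnetwork(model, keep_ratio=0.25)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # fine-tunes the sub-network only
```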

Rethinking Denoised Auto-Encoding in Language Pre-Training

no code implementations • EMNLP 2021 • Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun, Songfang Huang, Fei Huang

Pre-trained self-supervised models such as BERT have achieved striking success in learning sequence representations, especially for natural language processing.

Natural Language Understanding

Parameter-Efficient Sparsity for Large Language Models Fine-Tuning

2 code implementations • 23 May 2022 • Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai

With the dramatically increased number of parameters in language models, sparsity methods have attracted growing research attention as a way to compress and accelerate the models.

Towards Unified Prompt Tuning for Few-shot Text Classification

1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.

Classification • Few-Shot Learning • +6
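
As background on what "prompt-based fine-tuning" means here: wrap the input in a natural-language template and classify by comparing verbalizer words at the [MASK] slot of a masked LM. The template and verbalizer below are toy choices, not the paper's unified prompts:

```python
# Illustrative prompt-based classification with a masked LM: read logits at
# the [MASK] position and score each label's verbalizer word.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The movie was a waste of two hours."
prompt = f"{text} It was {tokenizer.mask_token}."           # task-specific template
verbalizer = {"positive": "great", "negative": "terrible"}  # label -> word

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))  # predicted label
```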

On Effectively Learning of Knowledge in Continual Pre-training

no code implementations • 17 Apr 2022 • Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang

Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks.

Self-Supervised Learning

Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency

no code implementations • ACL 2022 • Yanyang Li, Fuli Luo, Runxin Xu, Songfang Huang, Fei Huang, Liwei Wang

Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts.

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

1 code implementation • 1 Apr 2022 • Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang

Pre-trained Language Models (PLMs) have achieved remarkable performance on various language understanding tasks in IR systems, which require fine-tuning on labeled training data.

Contrastive Learning

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

2 code implementations • 14 Dec 2021 • Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang

Unified in contrastive learning, CAP (ContrAstive Pruning) enables the pruned model to learn from the pre-trained model for task-agnostic knowledge, and from the fine-tuned model for task-specific knowledge.

Contrastive Learning • Language Modelling • +1
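
A minimal sketch of that dual-teacher idea, assuming an InfoNCE-style loss that pulls each pruned-model representation toward its counterpart from both the pre-trained and the fine-tuned teacher (the exact CAP objective differs in its details):

```python
# InfoNCE-style loss: positives sit on the diagonal (same example from
# student and teacher); other examples in the batch act as negatives.
import torch
import torch.nn.functional as F

def contrastive_to_teacher(student: torch.Tensor,
                           teacher: torch.Tensor,
                           tau: float = 0.1) -> torch.Tensor:
    s = F.normalize(student, dim=-1)
    t = F.normalize(teacher, dim=-1)
    logits = s @ t.T / tau                 # (batch, batch) similarities
    targets = torch.arange(s.size(0))      # matching pairs on the diagonal
    return F.cross_entropy(logits, targets)

batch = 8
pruned = torch.randn(batch, 768, requires_grad=True)  # pruned-model outputs
pretrained = torch.randn(batch, 768)                  # task-agnostic teacher
finetuned = torch.randn(batch, 768)                   # task-specific teacher
loss = contrastive_to_teacher(pruned, pretrained) \
     + contrastive_to_teacher(pruned, finetuned)
loss.backward()
```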

SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels

no code implementations • 14 Mar 2021 • Chenliang Li, Ming Yan, Haiyang Xu, Fuli Luo, Wei Wang, Bin Bi, Songfang Huang

Vision-language pre-training (VLP) on large-scale image-text pairs has recently witnessed rapid progress for learning cross-modal representations.

VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation

1 code implementation • ACL 2021 • Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si

Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.

Language Modelling • Question Answering • +2

CAPT: Contrastive Pre-Training for Learning Denoised Sequence Representations

no code implementations • 13 Oct 2020 • Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun

Pre-trained self-supervised models such as BERT have achieved striking success in learning sequence representations, especially for natural language processing.

Natural Language Understanding

VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation

no code implementations • 28 Sep 2020 • Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si

Recent studies about learning multilingual representations have achieved significant performance gains across a wide range of downstream cross-lingual tasks.

Language Modelling • Masked Language Modeling • +3

Pun-GAN: Generative Adversarial Network for Pun Generation

1 code implementation • IJCNLP 2019 • Fuli Luo, Shunyao Li, Pengcheng Yang, Lei Li, Baobao Chang, Zhifang Sui, Xu Sun

Pun-GAN consists of a generator that produces pun sentences and a discriminator that distinguishes the generated pun sentences from real sentences with specific word senses.
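
Since text is discrete, GANs for generation typically feed the discriminator's score back to the generator as a reinforcement-learning reward; the toy loop below keeps everything continuous purely to show the generator/discriminator interplay in a few lines:

```python
# Toy generator/discriminator loop over sentence-embedding stand-ins.
# Pun-GAN itself works on discrete text with a policy-gradient reward.
import torch
from torch import nn

dim = 64
G = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
D = nn.Sequential(nn.Linear(dim, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, dim)          # stand-in for real pun sentences
    fake = G(torch.randn(32, dim))       # "generated" sentences
    # Discriminator: tell real sentences from generated ones.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator (the reward in the RL variant).
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```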

Towards Comprehensive Description Generation from Factual Attribute-value Tables

no code implementations • ACL 2019 • Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, Zhifang Sui

To relieve these problems, we first propose a force attention (FA) method that encourages the generator to pay more attention to uncovered attributes, so as to avoid missing potential key attributes.
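
One plausible way to realize "pay more attention to uncovered attributes" is a coverage-style penalty on the attention distribution; the snippet below is an assumption-laden stand-in, not the paper's FA formulation:

```python
# Track cumulative attention mass per table attribute across decoding
# steps and penalize attributes whose total coverage stays low.
import torch

attn = torch.softmax(torch.randn(12, 6), dim=-1)      # (decode steps, attributes)
coverage = attn.sum(dim=0)                            # total mass per attribute
uncovered_penalty = torch.relu(1.0 - coverage).sum()  # push coverage toward >= 1
```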

Learning to Control the Fine-grained Sentiment for Story Ending Generation

no code implementations • ACL 2019 • Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, Xu Sun

Therefore, we propose a generic and novel framework consisting of a sentiment analyzer and a sentimental generator, which respectively address the two challenges.

Text Generation
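
A toy sketch of that two-module shape, with untrained stand-ins for the analyzer and the generator (all names and dimensions are illustrative):

```python
# The analyzer maps text to a sentiment intensity; the generator is
# conditioned on a desired intensity for the ending.
import torch
from torch import nn

analyzer = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())  # text vec -> intensity
gen_in = nn.Linear(128 + 1, 128)                           # context + target intensity

context_vec = torch.randn(1, 128)          # encoded story context
target_intensity = torch.tensor([[0.9]])   # ask for a strongly positive ending
hidden = torch.tanh(gen_in(torch.cat([context_vec, target_intensity], dim=-1)))
achieved = analyzer(hidden)                # analyzer scores the generated ending
```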

A Deep Reinforced Sequence-to-Set Model for Multi-Label Classification

1 code implementation • ACL 2019 • Pengcheng Yang, Fuli Luo, Shuming Ma, Junyang Lin, Xu Sun

In this way, we can reduce the dependence of the model on the label order, as well as capture high-order correlations between labels.

General Classification • Multi-Label Classification
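
The order-robustness comes from rewarding the predicted *set* of labels rather than one particular sequence; a set-level F1 such as the one below is a natural order-invariant reward for the reinforced model (the paper's exact reward may differ):

```python
def set_f1(predicted: list[str], gold: list[str]) -> float:
    """Order-invariant F1 between a predicted and a gold label set."""
    p, g = set(predicted), set(gold)
    overlap = len(p & g)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

# Any permutation of the same labels earns the same reward.
assert set_f1(["sports", "politics"], ["politics", "sports"]) == 1.0
```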

MAAM: A Morphology-Aware Alignment Model for Unsupervised Bilingual Lexicon Induction

no code implementations • ACL 2019 • Pengcheng Yang, Fuli Luo, Peng Chen, Tianyu Liu, Xu Sun

The task of unsupervised bilingual lexicon induction (UBLI) aims to induce word translations from monolingual corpora in two languages.

Bilingual Lexicon Induction • Denoising • +2

Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information

1 code implementation • ACL 2019 • Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun

Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms.

Enhancing Topic-to-Essay Generation with External Commonsense Knowledge

no code implementations • ACL 2019 • Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, Xu Sun

Experiments show that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent than those of existing methods under both automatic and human evaluation.

Concept-To-Text Generation

A Hierarchical Reinforced Sequence Operation Method for Unsupervised Text Style Transfer

1 code implementation • ACL 2019 • Chen Wu, Xuancheng Ren, Fuli Luo, Xu Sun

Unsupervised text style transfer aims to alter text styles while preserving the content, without aligned data for supervision.

Style Transfer • Text Style Transfer • +1

A Dual Reinforcement Learning Framework for Unsupervised Text Style Transfer

2 code implementations • 24 May 2019 • Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu Sun

Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.

Reinforcement Learning • Text Style Transfer • +1
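
Because there is no aligned supervision, the one-step mapping has to be trained from rewards. A sketch of one reasonable dual-reward shape follows; `transfer`, `back_transfer`, and `style_classifier` are hypothetical stand-ins for the learned models, not the paper's API:

```python
def dual_reward(src: str, target_style: str,
                transfer, back_transfer, style_classifier) -> float:
    """Combine a style reward with a content (reconstruction) reward."""
    hyp = transfer(src, target_style)
    style_r = style_classifier(hyp, target_style)  # in [0, 1]
    content_r = float(back_transfer(hyp) == src)   # toy reconstruction check
    if style_r + content_r == 0:
        return 0.0
    # Harmonic mean keeps either reward from dominating.
    return 2 * style_r * content_r / (style_r + content_r)

# Toy demo with trivial stand-ins.
identity = lambda s, *args: s
print(dual_reward("hello", "positive", identity, lambda s: s, lambda s, st: 1.0))
```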

Learning Unsupervised Word Mapping by Maximizing Mean Discrepancy

no code implementations • 1 Nov 2018 • Pengcheng Yang, Fuli Luo, Shuangzhi Wu, Jingjing Xu, Dongdong Zhang, Xu Sun

In order to avoid such sophisticated alternate optimization, we propose to learn unsupervised word mapping by directly maximizing the mean discrepancy between the distribution of transferred embedding and target embedding.

Cross-Lingual Word Embeddings • Density Estimation • +4
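
Assuming the "mean discrepancy" here is the kernel Maximum Mean Discrepancy (MMD) statistic between the mapped source embeddings and the target embeddings, the quantity itself is easy to write down; the paper's kernel choice and optimization details are not reproduced:

```python
import numpy as np

def mmd2(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Squared MMD between two samples under an RBF kernel."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # source word embeddings
W = rng.normal(size=(50, 50))    # linear mapping to be learned
Y = rng.normal(size=(100, 50))   # target word embeddings
print(mmd2(X @ W, Y))            # the mapping W is trained against this quantity
```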

Incorporating Glosses into Neural Word Sense Disambiguation

1 code implementation • ACL 2018 • Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, Zhifang Sui

GAS (a gloss-augmented WSD framework) models the semantic relationship between the context and the gloss in an improved memory network framework, breaking down the barrier between previous supervised methods and knowledge-based methods.

Word Sense Disambiguation
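
The context-gloss matching intuition can be shown with plain embedding similarity, which GAS refines inside a memory network; everything below (toy vocabulary, random vectors) is illustrative only:

```python
# Score each candidate sense by how well its gloss matches the context.
import numpy as np

rng = np.random.default_rng(0)
words = ("he sat on the bank of the river "
         "financial institution money slope land beside water").split()
table = {w: rng.normal(size=8) for w in words}

def embed(text: str) -> np.ndarray:
    vecs = [table[w] for w in text.lower().split() if w in table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

context = embed("he sat on the bank of the river")
glosses = {"bank%river": "slope of land beside water",
           "bank%money": "financial institution for money"}
scores = {sense: cosine(context, embed(g)) for sense, g in glosses.items()}
print(max(scores, key=scores.get))  # sense whose gloss best matches the context
```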
