no code implementations • EMNLP 2021 • Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun, Songfang Huang, Fei Huang
Pre-trained self-supervised models such as BERT have achieved striking success in learning sequence representations, especially for natural language processing.
no code implementations • ACL 2022 • Runxin Xu, Fuli Luo, Baobao Chang, Songfang Huang, Fei Huang
The emergence of multilingual pre-trained language models makes it possible to adapt to target languages with only a few labeled examples. However, vanilla fine-tuning tends to yield degenerate and unstable results, owing to Language Interference among different languages and Parameter Overload under few-sample transfer learning scenarios. To address both problems elegantly, we propose S^4-Tuning, a Simple Cross-lingual Sub-network Tuning method.
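A minimal sketch of the general sub-network-tuning idea (the mask criterion below is only a placeholder; S^4-Tuning derives its sub-network per target language as described in the paper): a binary mask selects which parameters may change, and gradients of all other parameters are zeroed before each update.

```python
# Generic sub-network tuning sketch (illustrative, not the exact S^4-Tuning
# recipe). A binary mask selects the parameters allowed to change; gradients of
# all other parameters are zeroed before each optimizer step.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 3))

# Placeholder mask: keep the top 20% of parameters per tensor by magnitude.
masks = {}
for name, p in model.named_parameters():
    k = max(1, int(0.2 * p.numel()))
    threshold = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
    masks[name] = (p.detach().abs() >= threshold).float()

# Weight decay off so that un-selected parameters stay exactly fixed.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.0)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 768)        # toy batch of target-language features
y = torch.randint(0, 3, (8,))  # toy labels

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
for name, p in model.named_parameters():
    if p.grad is not None:
        p.grad.mul_(masks[name])  # update only the selected sub-network
optimizer.step()
```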
2 code implementations • 23 May 2022 • Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai
With the dramatic increase in the number of parameters in language models, sparsity methods have received ever-increasing research attention as a way to compress and accelerate these models.
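For orientation only, the simplest member of this family is unstructured magnitude pruning, where the smallest-magnitude weights are zeroed out; the generic sketch below is not this paper's method.

```python
# Generic unstructured magnitude pruning (an illustration of sparsity methods
# in general, not the approach of this paper): zero out the smallest weights so
# the matrix can be stored and multiplied sparsely.
import torch
import torch.nn as nn

def magnitude_prune_(linear: nn.Linear, sparsity: float = 0.9) -> None:
    """Zero the `sparsity` fraction of weights with the smallest magnitude."""
    w = linear.weight.data
    k = int(sparsity * w.numel())
    if k == 0:
        return
    threshold = w.abs().flatten().kthvalue(k).values
    w.mul_((w.abs() > threshold).float())

layer = nn.Linear(1024, 1024)
magnitude_prune_(layer, sparsity=0.9)
density = layer.weight.count_nonzero().item() / layer.weight.numel()
print(f"remaining density: {density:.2%}")  # roughly 10% of weights survive
```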
1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.
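As a rough, generic illustration of prompt-based fine-tuning (not the specific contribution of this paper), an input is wrapped in a template containing a mask slot, and a verbalizer maps label words scored by the MLM head back to classes; the template and verbalizer below are hypothetical.

```python
# Generic prompt-based few-shot classification sketch: wrap the input in a
# template with a mask slot and score label words with the MLM head.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical template and verbalizer for binary sentiment.
template = "{text} It was [MASK]."
verbalizer = {"positive": "great", "negative": "terrible"}

text = "The movie was a pleasant surprise."
prompt = template.format(text=text).replace("[MASK]", tokenizer.mask_token)
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
label_ids = {label: tokenizer.convert_tokens_to_ids(word)
             for label, word in verbalizer.items()}
scores = {label: logits[0, mask_index, idx].item() for label, idx in label_ids.items()}
print(max(scores, key=scores.get))  # predicted label from label-word logits
```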
no code implementations • 17 Apr 2022 • Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang
Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks.
no code implementations • ACL 2022 • Yanyang Li, Fuli Luo, Runxin Xu, Songfang Huang, Fei Huang, Liwei Wang
Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts.
1 code implementation • 1 Apr 2022 • Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang
Pre-trained Language Models (PLMs) have achieved remarkable performance on various language understanding tasks in IR systems, which require a fine-tuning process based on labeled training data.
2 code implementations • 14 Dec 2021 • Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
Unified under a contrastive learning framework, CAP enables the pruned model to learn task-agnostic knowledge from the pre-trained model and task-specific knowledge from the fine-tuned model.
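Read loosely, this is the standard contrastive pattern: pull the pruned model's representation of an example toward the pre-trained and fine-tuned models' representations of the same example, and push it away from other examples in the batch. The sketch below shows only that generic InfoNCE pattern, not the exact CAP objective.

```python
# Generic contrastive alignment sketch: the pruned model's representation of
# each example is aligned with the two teachers' representations of the same
# example (positives) against other examples in the batch (negatives).
import torch
import torch.nn.functional as F

def contrastive_alignment(pruned_repr, teacher_repr, temperature=0.1):
    """InfoNCE loss between matching rows of two [batch, dim] tensors."""
    z1 = F.normalize(pruned_repr, dim=-1)
    z2 = F.normalize(teacher_repr, dim=-1)
    logits = z1 @ z2.t() / temperature   # [batch, batch] pairwise similarities
    targets = torch.arange(z1.size(0))   # i-th row should match i-th column
    return F.cross_entropy(logits, targets)

batch, dim = 16, 768
pruned = torch.randn(batch, dim, requires_grad=True)  # from the pruned model
pretrained = torch.randn(batch, dim)                   # task-agnostic teacher
finetuned = torch.randn(batch, dim)                    # task-specific teacher

loss = contrastive_alignment(pruned, pretrained) + contrastive_alignment(pruned, finetuned)
loss.backward()
```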
3 code implementations • EMNLP 2021 • Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
Recent pre-trained language models have grown from millions to billions of parameters.
no code implementations • 14 Mar 2021 • Chenliang Li, Ming Yan, Haiyang Xu, Fuli Luo, Wei Wang, Bin Bi, Songfang Huang
Vision-language pre-training (VLP) on large-scale image-text pairs has recently witnessed rapid progress for learning cross-modal representations.
1 code implementation • ACL 2021 • Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si
Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.
no code implementations • 13 Oct 2020 • Fuli Luo, Pengcheng Yang, Shicheng Li, Xuancheng Ren, Xu Sun
Pre-trained self-supervised models such as BERT have achieved striking success in learning sequence representations, especially for natural language processing.
no code implementations • 28 Sep 2020 • Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si
Recent studies about learning multilingual representations have achieved significant performance gains across a wide range of downstream cross-lingual tasks.
1 code implementation • ACL (RepL4NLP) 2021 • Damai Dai, Hua Zheng, Fuli Luo, Pengcheng Yang, Baobao Chang, Zhifang Sui
Conventional Knowledge Graph Completion (KGC) assumes that all test entities appear during training.
1 code implementation • IJCNLP 2019 • Fuli Luo, Shunyao Li, Pengcheng Yang, Lei Li, Baobao Chang, Zhifang Sui, Xu Sun
It consists of a generator to produce pun sentences, and a discriminator to distinguish between the generated pun sentences and the real sentences with specific word senses.
no code implementations • ACL 2019 • Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, Zhifang Sui
To alleviate these problems, we first propose a force-attention (FA) method that encourages the generator to pay more attention to uncovered attributes, so that key attributes are not omitted.
no code implementations • ACL 2019 • Pengcheng Yang, Lei Li, Fuli Luo, Tianyu Liu, Xu Sun
Experiments show that with external commonsense knowledge and adversarial training, the generated essays are more novel, diverse, and topic-consistent than those produced by existing methods, in terms of both automatic and human evaluation.
no code implementations • ACL 2019 • Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, Xu Sun
Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges.
1 code implementation • ACL 2019 • Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun
Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms.
1 code implementation • ACL 2019 • Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, Xu Sun
In this paper, we focus on the task of fine-grained text sentiment transfer (FGST).
no code implementations • ACL 2019 • Pengcheng Yang, Fuli Luo, Peng Chen, Tianyu Liu, Xu Sun
The task of unsupervised bilingual lexicon induction (UBLI) aims to induce word translations from monolingual corpora in two languages.
1 code implementation • ACL 2019 • Pengcheng Yang, Fuli Luo, Shuming Ma, Junyang Lin, Xu Sun
In this way, we can reduce the dependence of the model on the label order, as well as capture high-order correlations between labels.
1 code implementation • ACL 2019 • Chen Wu, Xuancheng Ren, Fuli Luo, Xu Sun
Unsupervised text style transfer aims to alter text styles while preserving the content, without aligned data for supervision.
2 code implementations • 24 May 2019 • Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu Sun
Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.
Ranked #1 on Unsupervised Text Style Transfer on GYAFC
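A very loose sketch of the reward pattern behind such a dual reinforcement learning setup (all quantities below are stand-ins; the framework in the paper is richer): a sampled transfer earns a style reward from a classifier and a content reward from the backward model reconstructing the input, and their harmonic mean drives a REINFORCE-style update.

```python
# Stand-in sketch of the dual-RL reward pattern, not the paper's exact
# framework: a sampled transfer y_hat = f(x) earns a style reward from a
# classifier and a content reward from the backward model g reconstructing x;
# the harmonic mean of the two is the scalar reward for a REINFORCE update of f.
import torch

def harmonic_mean(a, b, eps=1e-8):
    return 2 * a * b / (a + b + eps)

# Hypothetical per-example quantities the real models would produce:
style_reward = torch.tensor([0.9, 0.4, 0.7])          # P(target style | y_hat) from a classifier
content_reward = torch.tensor([0.6, 0.8, 0.5])        # normalized likelihood of g reconstructing x
logprob_sampled = torch.randn(3, requires_grad=True)  # sum of log p_f(y_hat | x) over tokens

reward = harmonic_mean(style_reward, content_reward)
baseline = reward.mean()                               # simple variance-reduction baseline
policy_loss = -((reward - baseline).detach() * logprob_sampled).mean()
policy_loss.backward()                                 # gradient ascent on expected reward
```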
1 code implementation • IJCAI 2019 • Pengcheng Yang, Fuli Luo, Peng Chen, Lei Li, Zhiyi Yin, Xiaodong He, Xu Sun
The visual storytelling (VST) task aims at generating a reasonable and coherent paragraph-level story with the image stream as input.
Ranked #21 on Visual Storytelling on VIST
no code implementations • 1 Nov 2018 • Pengcheng Yang, Fuli Luo, Shuangzhi Wu, Jingjing Xu, Dongdong Zhang, Xu Sun
In order to avoid such sophisticated alternate optimization, we propose to learn unsupervised word mapping by directly maximizing the mean discrepancy between the distribution of transferred embedding and target embedding.
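The quantity referred to here is the (maximum) mean discrepancy between two embedding distributions; the sketch below only shows how it can be computed with a Gaussian kernel for a linear word mapping W, not the paper's training objective or procedure.

```python
# Generic sketch of the quantity involved: the (maximum) mean discrepancy
# between the distribution of mapped source embeddings W·x and the target
# embeddings y, here with a Gaussian kernel. The paper's actual objective and
# training procedure may differ.
import torch

def gaussian_kernel(a, b, sigma=1.0):
    """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)) for all pairs."""
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between samples x and y."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

dim = 300
src = torch.randn(512, dim)              # toy source word embeddings
tgt = torch.randn(512, dim)              # toy target word embeddings
W = torch.nn.Parameter(torch.eye(dim))   # linear word mapping to be learned

discrepancy = mmd2(src @ W, tgt)
discrepancy.backward()                   # gradients w.r.t. W for whatever objective uses it
```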
no code implementations • EMNLP 2018 • Fuli Luo, Tianyu Liu, Zexue He, Qiaolin Xia, Zhifang Sui, Baobao Chang
The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in a particular context.
1 code implementation • ACL 2018 • Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, Zhifang Sui
GAS models the semantic relationship between the context and the gloss in an improved memory network framework, which breaks the barriers of the previous supervised methods and knowledge-based methods.
Ranked #3 on Word Sense Disambiguation on SemEval 2015 Task 13