Search Results for author: Yiren Chen

Found 9 papers, 5 papers with code

Parameter-efficient Continual Learning Framework in Industrial Real-time Text Classification System

no code implementations • NAACL (ACL) 2022 • Tao Zhu, Zhe Zhao, Weijie Liu, Jiachi Liu, Yiren Chen, Weiquan Mao, Haoyan Liu, Kunbo Ding, Yudong Li, Xuefeng Yang

Catastrophic forgetting is a challenge for model deployment in industrial real-time systems, which require a model to quickly master a new task without forgetting the old ones.

Continual Learning text-classification +1

Create and Find Flatness: Building Flat Training Spaces in Advance for Continual Learning

1 code implementation • 20 Sep 2023 • Wenhang Shi, Yiren Chen, Zhe Zhao, Wei Lu, Kimmo Yan, Xiaoyong Du

Therefore, we shift attention to the current task's learning stage, presenting a novel framework, C&F (Create and Find Flatness), which builds a flat training space for each task in advance.

Continual Learning

TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities

3 code implementations • 13 Dec 2022 • Zhe Zhao, Yudong Li, Cheng Hou, Jing Zhao, Rong Tian, Weijie Liu, Yiren Chen, Ningyuan Sun, Haoyan Liu, Weiquan Mao, Han Guo, Weigang Guo, Taiqiang Wu, Tao Zhu, Wenhang Shi, Chen Chen, Shan Huang, Sihong Chen, Liqun Liu, Feifei Li, Xiaoshuai Chen, Xingwu Sun, Zhanhui Kang, Xiaoyong Du, Linlin Shen, Kimmo Yan

Recently proposed pre-training models of different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework.

A Simple and Effective Method to Improve Zero-Shot Cross-Lingual Transfer Learning

1 code implementation • COLING 2022 • Kunbo Ding, Weijie Liu, Yuejian Fang, Weiquan Mao, Zhe Zhao, Tao Zhu, Haoyan Liu, Rong Tian, Yiren Chen

Existing zero-shot cross-lingual transfer methods rely on parallel corpora or bilingual dictionaries, which are expensive and impractical for low-resource languages.

Text Classification +3

Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding

no code implementations • 13 Apr 2021 • Di Wu, Yiren Chen, Liang Ding, DaCheng Tao

A spoken language understanding (SLU) system usually consists of various pipeline components, where each component heavily relies on the results of its upstream ones.

Automatic Speech Recognition (ASR) +7

Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees

1 code implementation • EACL 2021 • Jiangang Bai, Yujing Wang, Yiren Chen, Yaming Yang, Jing Bai, Jing Yu, Yunhai Tong

Pre-trained language models like BERT achieve superior performance in various NLP tasks without explicit consideration of syntactic information.

Natural Language Understanding

Improving BERT with Self-Supervised Attention

1 code implementation • 8 Apr 2020 • Yiren Chen, Xiaoyu Kou, Jiangang Bai, Yunhai Tong

One of the most popular paradigms for applying a large pre-trained NLP model such as BERT is to fine-tune it on a smaller dataset.

Sentence
