Search Results for author: Linlin Li

Found 15 papers, 3 papers with code

DyLex: Incorporating Dynamic Lexicons into BERT for Sequence Labeling

no code implementations • 18 Sep 2021 • Baojun Wang, Zhao Zhang, Kun Xu, Guang-Yuan Hao, Yuyang Zhang, Lifeng Shang, Linlin Li, Xiao Chen, Xin Jiang, Qun Liu

Incorporating lexical knowledge into deep learning models has proven very effective for sequence labeling tasks.

Denoising

LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation

no code implementations • 11 Mar 2021 • Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

The multilingual pre-trained language models (e.g., mBERT, XLM and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks.

Natural Language Understanding

Application of the unified control and detection framework to detecting stealthy integrity cyber-attacks on feedback control systems

no code implementations • 27 Feb 2021 • Steven X. Ding, Linlin Li, Dong Zhao, Chris Louen, Tianyu Liu

It is demonstrated, in the unified framework of control and detection, that all kernel attacks can be structurally detected when not only the observer-based residual but also the control-signal-based residual is generated and used for detection.
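The excerpt refers to combining an observer-based residual with a control-signal-based residual. As a loose illustration of those two quantities (not the paper's specific detector), here is a minimal sketch for a hypothetical discrete-time plant monitored by a Luenberger observer; all matrices and gains are made up for the example.

```python
import numpy as np

# Hypothetical discrete-time plant x_{k+1} = A x_k + B u_k, y_k = C x_k,
# monitored by a Luenberger observer with gain L; all values are illustrative.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.3]])
K = np.array([[0.8, 0.4]])  # nominal state-feedback gain used by the controller

def step_residuals(x_hat, y_meas, u_applied):
    """One time step: return the updated state estimate and the two residuals."""
    r_y = y_meas - C @ x_hat        # observer-based (output) residual
    u_expected = -K @ x_hat         # input the nominal controller would apply
    r_u = u_applied - u_expected    # control-signal-based residual
    x_hat_next = A @ x_hat + B @ u_applied + L @ r_y   # observer update
    return x_hat_next, r_y, r_u
```

An attack that keeps the output residual small can still perturb the applied input away from what the nominal controller would produce, which is why monitoring both residuals is of interest.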

Improving Task-Agnostic BERT Distillation with Layer Mapping Search

no code implementations • 11 Dec 2020 • Xiaoqi Jiao, Huating Chang, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

Comprehensive experiments on the evaluation benchmarks demonstrate that 1) the layer mapping strategy has a significant effect on task-agnostic BERT distillation, and different layer mappings can result in quite different performance; 2) the optimal layer mapping strategy found by the proposed search process consistently outperforms the heuristic ones; 3) with the optimal layer mapping, our student model achieves state-of-the-art performance on the GLUE tasks.

Knowledge Distillation
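As a hedged sketch of what a student-to-teacher layer mapping does during distillation (the paper's search procedure itself is not reproduced here), the following assumes a hidden-state MSE objective, hypothetical layer counts, and a learned projection to reconcile hidden sizes.

```python
import torch.nn.functional as F

def hidden_state_distill_loss(student_hiddens, teacher_hiddens, mapping, proj):
    """MSE between student hidden states and the mapped teacher hidden states.

    student_hiddens: list of [batch, seq, d_s] tensors, one per student layer
    teacher_hiddens: list of [batch, seq, d_t] tensors, one per teacher layer
    mapping: mapping[i] = index of the teacher layer distilled into student layer i
    proj: torch.nn.Linear projecting d_s -> d_t so the states are comparable
    """
    loss = 0.0
    for i, j in enumerate(mapping):
        loss = loss + F.mse_loss(proj(student_hiddens[i]), teacher_hiddens[j])
    return loss

# A common uniform heuristic for a 4-layer student and a 12-layer teacher pairs
# each student layer with every third teacher layer (0-based indices below);
# a searched mapping may choose different pairs.
uniform_mapping = [2, 5, 8, 11]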

TinyBERT: Distilling BERT for Natural Language Understanding

5 code implementations • Findings of the Association for Computational Linguistics 2020 • Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models.

Knowledge Distillation • Language Modelling • +6
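A simplified sketch of the kind of Transformer distillation objective the abstract points to, matching attention matrices between paired layers and distilling the prediction layer with a soft cross-entropy; the layer pairing, loss weights, and temperature are assumptions here, not the paper's exact recipe.

```python
import torch.nn.functional as F

def attention_distill_loss(student_attn, teacher_attn):
    """MSE between student and teacher self-attention score matrices
    ([batch, heads, seq, seq]) for one matched layer pair."""
    return F.mse_loss(student_attn, teacher_attn)

def prediction_distill_loss(student_logits, teacher_logits, temperature=1.0):
    """Soft cross-entropy between the teacher's and student's output distributions."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()
```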

Neural Chinese Address Parsing

no code implementations • NAACL 2019 • Hao Li, Wei Lu, Pengjun Xie, Linlin Li

This paper introduces a new task, Chinese address parsing: mapping Chinese addresses into semantically meaningful chunks.

Structured Prediction

Better Modeling of Incomplete Annotations for Named Entity Recognition

no code implementations • NAACL 2019 • Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, Linlin Li

Supervised approaches to named entity recognition (NER) are largely developed based on the assumption that the training data is fully annotated with named entity information.

Named Entity Recognition • NER

A Hybrid System for Chinese Grammatical Error Diagnosis and Correction

no code implementations • WS 2018 • Chen Li, Junpei Zhou, Zuyi Bao, Hengyou Liu, Guangwei Xu, Linlin Li

In the correction stage, candidates were generated by the three GEC models and then merged to output the final corrections for M and S types.

Grammatical Error Correction
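The excerpt says candidate corrections from the three GEC models are merged; one plausible, purely illustrative merging scheme is span-level voting over proposed edits, sketched below with hypothetical edit tuples (the paper's actual merging rules are not given in the excerpt).

```python
from collections import Counter

def merge_corrections(candidate_lists, min_votes=2):
    """Merge per-model correction candidates by simple span-level voting.

    candidate_lists: one list per model of (start, end, replacement) edits.
    Keeps an edit only if at least `min_votes` models propose it.
    """
    votes = Counter()
    for edits in candidate_lists:
        for edit in set(edits):
            votes[edit] += 1
    return sorted(edit for edit, n in votes.items() if n >= min_votes)

# Example with made-up edits proposed by three models on the same sentence.
model_a = [(3, 4, "的"), (7, 7, "了")]
model_b = [(3, 4, "的")]
model_c = [(3, 4, "地"), (7, 7, "了")]
print(merge_corrections([model_a, model_b, model_c]))  # [(3, 4, '的'), (7, 7, '了')]
```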
