Search Results for author: Lemao Liu

Found 50 papers, 9 papers with code

Learning from Sibling Mentions with Scalable Graph Inference in Fine-Grained Entity Typing

no code implementations ACL 2022 Yi Chen, Jiayang Cheng, Haiyun Jiang, Lemao Liu, Haisong Zhang, Shuming Shi, Ruifeng Xu

In this paper, we first empirically find that existing models struggle to handle hard mentions due to their insufficient contexts, which consequently limits their overall typing performance.

Entity Typing

Fine-grained Entity Typing without Knowledge Base

1 code implementation EMNLP 2021 Jing Qian, Yibin Liu, Lemao Liu, Yangming Li, Haiyun Jiang, Haisong Zhang, Shuming Shi

Existing work on Fine-grained Entity Typing (FET) typically trains automatic models on the datasets obtained by using Knowledge Bases (KB) as distant supervision.

Entity Typing Named Entity Recognition +1

An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing

no code implementations EMNLP 2021 Yi Chen, Haiyun Jiang, Lemao Liu, Shuming Shi, Chuang Fan, Min Yang, Ruifeng Xu

Auxiliary information from multiple sources has been demonstrated to be effective in zero-shot fine-grained entity typing (ZFET).

Entity Typing

On the Relationship between Neural Machine Translation and Word Alignment

no code implementations Xintong Li, Lemao Liu, Guanlin Li, Max Meng, Shuming Shi

We find that although NMT models struggle to capture word alignment for CFT words, these words do not degrade translation quality significantly, which explains why NMT is more successful at translation yet worse at word alignment compared to statistical machine translation.

Machine Translation Translation +1

Efficient Sub-structured Knowledge Distillation

1 code implementation 9 Mar 2022 Wenye Lin, Yangming Li, Lemao Liu, Shuming Shi, Hai-Tao Zheng

Specifically, we transfer the knowledge from a teacher model to its student model by locally matching their predictions on all sub-structures, instead of the whole output space.

Knowledge Distillation Structured Prediction
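As a rough sketch (not the paper's implementation), distilling by locally matching per-position distributions rather than whole-sequence outputs might look like the following; the function name and the choice of token positions as the sub-structures are illustrative assumptions:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert a logit vector into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def substructure_kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Distill by matching teacher and student predictions on each
    sub-structure (here: each token position) instead of on the
    exponentially large space of whole output sequences.

    Both arguments: list of per-token logit vectors, shape (seq_len, num_labels).
    Returns the mean per-position KL(teacher || student), temperature-scaled.
    """
    total = 0.0
    for t_tok, s_tok in zip(teacher_logits, student_logits):
        p = softmax(t_tok, temperature)  # teacher distribution (target)
        q = softmax(s_tok, temperature)  # student distribution
        total += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return total / len(teacher_logits) * temperature ** 2
```

The loss is zero when the two models agree position-by-position and grows as their local predictions diverge, which is what makes the matching tractable compared to comparing full structured outputs.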

Revisiting the Evaluation Metrics of Paraphrase Generation

no code implementations 17 Feb 2022 Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

(2) reference-free metrics outperform reference-based metrics, indicating that standard references are unnecessary for evaluating paraphrase quality.

Machine Translation Paraphrase Generation

A Survey on Retrieval-Augmented Text Generation

no code implementations 2 Feb 2022 Huayang Li, Yixuan Su, Deng Cai, Yan Wang, Lemao Liu

Recently, retrieval-augmented text generation has attracted increasing attention from the computational linguistics community.

Machine Translation Response Generation +2

Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing

no code implementations 9 Jan 2022 Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

It has been shown that natural language processing (NLP) models are vulnerable to a kind of security threat called the Backdoor Attack, which utilizes a "backdoor trigger" paradigm to mislead the models.

Backdoor Attack Text Classification

Injecting Numerical Reasoning Skills into Knowledge Base Question Answering Models

1 code implementation 12 Dec 2021 Yu Feng, Jing Zhang, Xiaokang Zhang, Lemao Liu, Cuiping Li, Hong Chen

Embedding-based methods are popular for Knowledge Base Question Answering (KBQA), but few current models have numerical reasoning skills and thus struggle to answer ordinal constrained questions.

Data Augmentation Knowledge Base Question Answering

GWLAN: General Word-Level AutocompletioN for Computer-Aided Translation

no code implementations ACL 2021 Huayang Li, Lemao Liu, Guoping Huang, Shuming Shi

In this paper, we propose the task of general word-level autocompletion (GWLAN) from a real-world CAT scenario, and construct the first public benchmark to facilitate research on this topic.

Translation

Assessing Dialogue Systems with Distribution Distances

1 code implementation Findings (ACL) 2021 Jiannan Xiang, Yahui Liu, Deng Cai, Huayang Li, Defu Lian, Lemao Liu

An important aspect of developing dialogue systems is how to evaluate and compare the performance of different systems.

Dialogue Evaluation

TexSmart: A Text Understanding System for Fine-Grained NER and Enhanced Semantic Analysis

no code implementations 31 Dec 2020 Haisong Zhang, Lemao Liu, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, Xiao Feng, Tao Chen, Tao Yang, Dong Yu, Feng Zhang, Zhanhui Kang, Shuming Shi

This technical report introduces TexSmart, a text understanding system that supports fine-grained named entity recognition (NER) and enhanced semantic analysis functionalities.

Named Entity Recognition NER

Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition

1 code implementation ICLR 2021 Yangming Li, Lemao Liu, Shuming Shi

Experiments on synthetic datasets and real-world datasets show that our model is robust to the unlabeled entity problem and surpasses prior baselines.

Named Entity Recognition NER

On the Branching Bias of Syntax Extracted from Pre-trained Language Models

no code implementations Findings (EMNLP) 2020 Huayang Li, Lemao Liu, Guoping Huang, Shuming Shi

Many efforts have been devoted to extracting constituency trees from pre-trained language models, often proceeding in two stages: feature definition and parsing.

Evaluating Explanation Methods for Neural Machine Translation

no code implementations ACL 2020 Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, Shuming Shi

Recently, many efforts have been devoted to interpreting black-box NMT models, but little progress has been made on metrics for evaluating explanation methods.

Machine Translation Translation +1

Understanding Learning Dynamics for Neural Machine Translation

no code implementations 5 Apr 2020 Conghui Zhu, Guanlin Li, Lemao Liu, Tiejun Zhao, Shuming Shi

Despite the great success of NMT, a severe challenge remains: it is hard to interpret the internal dynamics of its training process.

Machine Translation Translation

Regularized Context Gates on Transformer for Machine Translation

no code implementations ACL 2020 Xintong Li, Lemao Liu, Rui Wang, Guoping Huang, Max Meng

This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in the Transformer.

Machine Translation Translation
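As a minimal sketch of the general idea (not the paper's Transformer-specific formulation), a scalar context gate interpolates between a source context vector and a target context vector; the function name and weight shapes here are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def context_gate(source_ctx, target_ctx, w_s, w_t, b=0.0):
    """Compute a scalar gate g from both contexts, then mix them:
    g controls how much the source context versus the previously
    generated target context contributes to the decoder state.
    A regularizer (not shown) could pull g toward a prior value.
    """
    g = sigmoid(sum(ws * s for ws, s in zip(w_s, source_ctx)) +
                sum(wt * t for wt, t in zip(w_t, target_ctx)) + b)
    mixed = [g * s + (1 - g) * t for s, t in zip(source_ctx, target_ctx)]
    return mixed, g
```

Because g lies in (0, 1), each output dimension is a convex combination of the corresponding source and target entries, so neither context can be entirely ignored unless the gate saturates.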

Neural Machine Translation with Noisy Lexical Constraints

no code implementations 13 Aug 2019 Huayang Li, Guoping Huang, Deng Cai, Lemao Liu

Experiments show that our approach can indeed improve the translation quality with the automatically generated constraints.

Machine Translation Translation

On the Word Alignment from Neural Machine Translation

no code implementations ACL 2019 Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi

Prior research suggests that neural machine translation (NMT) captures word alignment through its attention mechanism; however, this paper finds that attention may largely fail to capture word alignment for some NMT models.

Machine Translation Translation +1
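The standard way to probe this claim is to induce a hard alignment from the attention weights by linking each target word to its highest-weight source word. A minimal sketch (the function name and threshold parameter are illustrative assumptions):

```python
def alignment_from_attention(attention, threshold=0.0):
    """Extract a hard word alignment from a target-by-source attention
    matrix by linking each target position to its argmax source position.

    attention: list of rows, one per target word, each a distribution
    over source words. Returns (source_index, target_index) links.
    """
    links = []
    for tgt_idx, row in enumerate(attention):
        src_idx = max(range(len(row)), key=row.__getitem__)
        if row[src_idx] > threshold:  # optionally drop low-confidence links
            links.append((src_idx, tgt_idx))
    return links
```

Comparing such induced links against gold alignments (e.g. with alignment error rate) is how one measures whether attention actually behaves like an aligner.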

Target Foresight Based Attention for Neural Machine Translation

no code implementations NAACL 2018 Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, Max Meng

In neural machine translation, an attention model identifies the aligned source words for a target word (the target foresight word) in order to select translation context, but it makes no use of any information about this target foresight word.

Language Modelling Machine Translation +1

Neural Machine Translation with Supervised Attention

no code implementations COLING 2016 Lemao Liu, Masao Utiyama, Andrew Finch, Eiichiro Sumita

The attention mechanism is appealing for neural machine translation, since it can dynamically encode a source sentence by generating an alignment between a target word and source words.

Machine Translation Translation
