Search Results for author: Haiyun Jiang

Found 13 papers, 2 papers with code

Learning from Sibling Mentions with Scalable Graph Inference in Fine-Grained Entity Typing

no code implementations • ACL 2022 • Yi Chen, Jiayang Cheng, Haiyun Jiang, Lemao Liu, Haisong Zhang, Shuming Shi, Ruifeng Xu

In this paper, we first empirically find that existing models struggle to handle hard mentions due to their insufficient contexts, which in turn limits their overall typing performance.

Entity Typing

Fine-grained Entity Typing without Knowledge Base

1 code implementation • EMNLP 2021 • Jing Qian, Yibin Liu, Lemao Liu, Yangming Li, Haiyun Jiang, Haisong Zhang, Shuming Shi

Existing work on Fine-grained Entity Typing (FET) typically trains automatic models on datasets obtained by using Knowledge Bases (KBs) as distant supervision.

Entity Typing • Named Entity Recognition +1

An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing

no code implementations • EMNLP 2021 • Yi Chen, Haiyun Jiang, Lemao Liu, Shuming Shi, Chuang Fan, Min Yang, Ruifeng Xu

Auxiliary information from multiple sources has been demonstrated to be effective in zero-shot fine-grained entity typing (ZFET).

Entity Typing

Context Enhanced Short Text Matching using Clickthrough Data

no code implementations • 3 Mar 2022 • Mao Yan Chen, Haiyun Jiang, Yujiu Yang

The short text matching task employs a model to determine whether two short texts have the same semantic meaning or intent.

Text Matching

Revisiting the Evaluation Metrics of Paraphrase Generation

no code implementations • 17 Feb 2022 • Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

(2) reference-free metrics outperform reference-based metrics, indicating that standard references are unnecessary for evaluating paraphrase quality.

Machine Translation • Paraphrase Generation

Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing

no code implementations • 9 Jan 2022 • Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

It has been shown that natural language processing (NLP) models are vulnerable to a security threat called the Backdoor Attack, which uses a 'backdoor trigger' paradigm to mislead the models.

Backdoor Attack • Text Classification

A Question-answering Based Framework for Relation Extraction Validation

no code implementations • 7 Apr 2021 • Jiayang Cheng, Haiyun Jiang, Deqing Yang, Yanghua Xiao

However, few works have focused on how to validate and correct the results generated by existing relation extraction models.

Question Answering • Relation Extraction

TexSmart: A Text Understanding System for Fine-Grained NER and Enhanced Semantic Analysis

no code implementations • 31 Dec 2020 • Haisong Zhang, Lemao Liu, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, Xiao Feng, Tao Chen, Tao Yang, Dong Yu, Feng Zhang, Zhanhui Kang, Shuming Shi

This technical report introduces TexSmart, a text understanding system that supports fine-grained named entity recognition (NER) and enhanced semantic analysis functionalities.

Named Entity Recognition • NER

Complex Relation Extraction: Challenges and Opportunities

no code implementations • 9 Dec 2020 • Haiyun Jiang, Qiaoben Bao, Qiao Cheng, Deqing Yang, Li Wang, Yanghua Xiao

In recent years, many complex relation extraction tasks, i.e., variants of simple binary relation extraction, have been proposed to meet the needs of complex applications in practice.

Binary Relation Extraction

Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation

no code implementations • ACL 2019 • Jiangjie Chen, Ao Wang, Haiyun Jiang, Suo Feng, Chenguang Li, Yanghua Xiao

A type description is a succinct noun compound that helps humans and machines quickly grasp the informative and distinctive information about an entity.

Knowledge Graphs

Deep Short Text Classification with Knowledge Powered Attention

1 code implementation • 21 Feb 2019 • Jindong Chen, Yizhou Hu, Jingping Liu, Yanghua Xiao, Haiyun Jiang

To measure the importance of knowledge, we introduce attention mechanisms and propose deep Short Text Classification with Knowledge powered Attention (STCKA).

Classification • General Classification +1
