Search Results for author: Haiyun Jiang

Found 29 papers, 8 papers with code

An Empirical Study on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing

no code implementations EMNLP 2021 Yi Chen, Haiyun Jiang, Lemao Liu, Shuming Shi, Chuang Fan, Min Yang, Ruifeng Xu

Auxiliary information from multiple sources has been demonstrated to be effective in zero-shot fine-grained entity typing (ZFET).

Entity Typing

Learning from Sibling Mentions with Scalable Graph Inference in Fine-Grained Entity Typing

no code implementations ACL 2022 Yi Chen, Jiayang Cheng, Haiyun Jiang, Lemao Liu, Haisong Zhang, Shuming Shi, Ruifeng Xu

In this paper, we first empirically find that existing models struggle to handle hard mentions due to their insufficient contexts, which consequently limits their overall typing performance.

Entity Typing

Fine-grained Entity Typing without Knowledge Base

1 code implementation EMNLP 2021 Jing Qian, Yibin Liu, Lemao Liu, Yangming Li, Haiyun Jiang, Haisong Zhang, Shuming Shi

Existing work on Fine-grained Entity Typing (FET) typically trains automatic models on the datasets obtained by using Knowledge Bases (KB) as distant supervision.

Entity Typing named-entity-recognition +2

StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving

no code implementations 15 Nov 2023 Chang Gao, Haiyun Jiang, Deng Cai, Shuming Shi, Wai Lam

Most existing chain-of-thought (CoT) prompting methods suffer from the issues of generalizability and consistency, as they often rely on instance-specific solutions that may not be applicable to other cases and lack task-level consistency in their reasoning steps.

Hint-enhanced In-Context Learning Wakes Large Language Models up for Knowledge-intensive Tasks

no code implementations 3 Nov 2023 Yifan Wang, Qingyan Guo, Xinzhe Ni, Chufan Shi, Lemao Liu, Haiyun Jiang, Yujiu Yang

In-context learning (ICL) ability has emerged with the increasing scale of large language models (LLMs), enabling them to learn input-label mappings from demonstrations and perform well on downstream tasks.

Open-Domain Question Answering

A Benchmark for Text Expansion: Datasets, Metrics, and Baselines

no code implementations 17 Sep 2023 Yi Chen, Haiyun Jiang, Wei Bi, Rui Wang, Longyue Wang, Shuming Shi, Ruifeng Xu

This work presents a new task of Text Expansion (TE), which aims to insert fine-grained modifiers into proper locations of the plain text to concretize or vivify human writings.

Informativeness Text Infilling

Towards Visual Taxonomy Expansion

1 code implementation 12 Sep 2023 Tinghui Zhu, Jingping Liu, Jiaqing Liang, Haiyun Jiang, Yanghua Xiao, ZongYu Wang, Rui Xie, Yunsen Xian

Specifically, on the Chinese taxonomy dataset, our method significantly improves accuracy by 8.75%.

Taxonomy Expansion

Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling

no code implementations 16 Jul 2023 Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu

Modeling discourse, the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP).

Language Modelling

Sen2Pro: A Probabilistic Perspective to Sentence Embedding from Pre-trained Language Model

no code implementations 4 Jun 2023 Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

Sentence embedding is one of the most fundamental tasks in Natural Language Processing and plays an important role in many downstream tasks.

Language Modelling Sentence Embedding +1

A Simple and Plug-and-play Method for Unsupervised Sentence Representation Enhancement

no code implementations 13 May 2023 Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

Generating proper embeddings of sentences in an unsupervised way is beneficial to semantic matching and retrieval problems in real-world scenarios.

Retrieval Sentence Embedding +1

Frequency-aware Dimension Selection for Static Word Embedding by Mixed Product Distance

no code implementations 13 May 2023 Lingfeng Shen, Haiyun Jiang, Lemao Liu, Ying Chen

Static word embeddings remain useful, particularly for context-unavailable tasks, because when no context is available, pre-trained language models often perform worse than static word embeddings.

Word Embeddings

Zero-Shot Rumor Detection with Propagation Structure via Prompt Learning

1 code implementation 2 Dec 2022 Hongzhan Lin, Pengyao Yi, Jing Ma, Haiyun Jiang, Ziyang Luo, Shuming Shi, Ruifang Liu

The spread of rumors alongside breaking events seriously hinders the truth in the era of social media.

Domain Adaptation

Effidit: Your AI Writing Assistant

no code implementations 3 Aug 2022 Shuming Shi, Enbo Zhao, Duyu Tang, Yan Wang, Piji Li, Wei Bi, Haiyun Jiang, Guoping Huang, Leyang Cui, Xinting Huang, Cong Zhou, Yong Dai, Dongyang Ma

In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).

Keywords to Sentences Retrieval +2

FL-Tuning: Layer Tuning for Feed-Forward Network in Transformer

1 code implementation 30 Jun 2022 Jingping Liu, Yuqiu Song, Kui Xue, Hongli Sun, Chao Wang, Lihan Chen, Haiyun Jiang, Jiaqing Liang, Tong Ruan

Specifically, we focus on layer tuning for the feed-forward network in the Transformer, namely FL-tuning.

Model Optimization

Context Enhanced Short Text Matching using Clickthrough Data

no code implementations 3 Mar 2022 Mao Yan Chen, Haiyun Jiang, Yujiu Yang

The short text matching task employs a model to determine whether two short texts have the same semantic meaning or intent.

Text Matching

On the Evaluation Metrics for Paraphrase Generation

1 code implementation 17 Feb 2022 Lingfeng Shen, Lemao Liu, Haiyun Jiang, Shuming Shi

In this paper, we revisit automatic metrics for paraphrase evaluation and obtain two findings that disobey conventional wisdom: (1) reference-free metrics achieve better performance than their reference-based counterparts.

Machine Translation Paraphrase Generation

Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing

no code implementations 9 Jan 2022 Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi

It has been shown that natural language processing (NLP) models are vulnerable to a kind of security threat called the backdoor attack, which uses a 'backdoor trigger' paradigm to mislead the models.

Backdoor Attack Text Classification

A Question-answering Based Framework for Relation Extraction Validation

no code implementations 7 Apr 2021 Jiayang Cheng, Haiyun Jiang, Deqing Yang, Yanghua Xiao

However, few works have focused on how to validate and correct the results generated by the existing relation extraction models.

Question Answering Relation Extraction

Complex Relation Extraction: Challenges and Opportunities

no code implementations 9 Dec 2020 Haiyun Jiang, Qiaoben Bao, Qiao Cheng, Deqing Yang, Li Wang, Yanghua Xiao

In recent years, many complex relation extraction tasks, i.e., variants of simple binary relation extraction, have been proposed to meet the complex demands of practical applications.

Binary Relation Extraction

Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation

no code implementations ACL 2019 Jiangjie Chen, Ao Wang, Haiyun Jiang, Suo Feng, Chenguang Li, Yanghua Xiao

A type description is a succinct noun compound that helps humans and machines quickly grasp the informative and distinctive information of an entity.

Knowledge Graphs

Deep Short Text Classification with Knowledge Powered Attention

1 code implementation 21 Feb 2019 Jindong Chen, Yizhou Hu, Jingping Liu, Yanghua Xiao, Haiyun Jiang

To measure the importance of knowledge, we introduce attention mechanisms and propose deep Short Text Classification with Knowledge powered Attention (STCKA).

General Classification text-classification +1
