Search Results for author: Piji Li

Found 72 papers, 31 papers with code

融合提示学习的故事生成方法 (A Story Generation Method Incorporating Prompt Learning)

no code implementations CCL 2022 Xuanfan Ni, Piji Li

“Open-ended automatic story generation takes the beginning, outline, or storyline of a story as input and produces a story that is consistent, coherent, and logical. To improve the quality of generated stories, existing methods often require large amounts of training data and models with more parameters. To address these problems, this paper proposes a story generation method that exploits the advantages of prompt learning in zero-shot and few-shot settings, together with external commonsense reasoning knowledge. The method divides story generation into three stages: given the beginning of the story, a commonsense reasoning model generates possible events; according to their types, the events are filled into question templates to construct questions that guide the model to generate reasonable answers; a question-answering model then produces answers to these questions, and the answer with the lowest perplexity is selected as the next part of the story. This process is repeated until a complete story is generated. Automatic and human evaluation metrics show that, compared with baseline models, the proposed method generates more coherent, specific, and logical stories.”
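The three-stage loop this abstract describes (event proposal, question templating, perplexity-based answer selection) can be sketched as below. The model calls `commonsense_events` and `answer_and_perplexity` are hypothetical stand-ins for the paper's commonsense-reasoning and question-answering models, not its actual interfaces:

```python
# Illustrative sketch of the three-stage story-generation loop.
QUESTION_TEMPLATES = {
    "reaction": "How does the character react when {event}?",
    "consequence": "What happens after {event}?",
}

def commonsense_events(context):
    # Stand-in: a real system would query a commonsense reasoning model.
    return [("consequence", f"{context.split('.')[0]} continues"),
            ("reaction", f"someone notices '{context[-20:]}'")]

def answer_and_perplexity(question):
    # Stand-in: a real QA model returns an answer and its perplexity.
    return f"Answer to: {question}", float(len(question))

def generate_story(opening, steps=3):
    story = opening
    for _ in range(steps):
        candidates = []
        for etype, event in commonsense_events(story):
            # Stage 2: fill the event into a type-specific question template.
            question = QUESTION_TEMPLATES[etype].format(event=event)
            # Stage 3: answer the question and record its perplexity.
            answer, ppl = answer_and_perplexity(question)
            candidates.append((ppl, answer))
        # Keep the continuation with the lowest perplexity.
        _, best = min(candidates)
        story += " " + best
    return story
```

With real models plugged in, repeating the loop yields the full story described in the abstract.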

Story Generation

Slot Dependency Modeling for Zero-Shot Cross-Domain Dialogue State Tracking

no code implementations COLING 2022 Qingyue Wang, Yanan Cao, Piji Li, Yanhe Fu, Zheng Lin, Li Guo

Zero-shot learning for Dialogue State Tracking (DST) focuses on generalizing to an unseen domain without the expense of collecting in-domain data.

Dialogue State Tracking Zero-Shot Learning

LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models

no code implementations28 Jun 2024 Renzhi Wang, Piji Li

Large language models (LLMs) require continual knowledge updates to stay abreast of ever-changing world facts, prompting the formulation of the lifelong model editing task.

Model Editing

MEMoE: Enhancing Model Editing with Mixture of Experts Adaptors

no code implementations29 May 2024 Renzhi Wang, Piji Li

Model editing aims to efficiently alter the behavior of Large Language Models (LLMs) within a desired scope, while ensuring no adverse impact on other inputs.

Model Editing

Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning

no code implementations28 May 2024 Renzhi Wang, Piji Li

Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of Large Language Models (LLMs) to various downstream applications.

Language Modelling Large Language Model

5W1H Extraction With Large Language Models

no code implementations25 May 2024 Yang Cao, Yangsong Lan, Feiyan Zhai, Piji Li

The extraction of essential news elements through the 5W1H framework (What, When, Where, Why, Who, and How) is critical for event extraction and text summarization.

Domain Adaptation Event Extraction +1

Language Reconstruction with Brain Predictive Coding from fMRI Data

no code implementations19 May 2024 Congchi Yin, Ziyi Ye, Piji Li

It consists of a main decoding network for language reconstruction and a side network for predictive coding.

A Systematic Evaluation of Large Language Models for Natural Language Generation Tasks

no code implementations16 May 2024 Xuanfan Ni, Piji Li

Recent efforts have evaluated large language models (LLMs) in areas such as commonsense reasoning, mathematical reasoning, and code generation.

Code Generation Dialogue Generation +2

XL²Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies

no code implementations8 Apr 2024 Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Piji Li

However, prior benchmarks create datasets that ostensibly cater to long-text comprehension by expanding the input of traditional tasks, which falls short of exhibiting the unique characteristics of long-text understanding, including long-dependency tasks and longer text lengths compatible with modern LLMs' context window sizes.

Long-Context Understanding Reading Comprehension

Characteristic AI Agents via Large Language Models

1 code implementation19 Mar 2024 Xi Wang, Hongliang Dai, Shen Gao, Piji Li

In response to this research gap, we create a benchmark for the characteristic AI agents task, including dataset, techniques, and evaluation metrics.

Chatbot

DECIDER: A Dual-System Rule-Controllable Decoding Framework for Language Generation

no code implementations4 Mar 2024 Chen Xu, Tian Lan, Changlong Yu, Wei Wang, Jun Gao, Yu Ji, Qunxi Dong, Kun Qian, Piji Li, Wei Bi, Bin Hu

Constrained decoding approaches aim to control the meaning or style of text generated by a Pre-trained Language Model (PLM) using specific target words during inference.

Language Modelling Text Generation

An Empirical Investigation of Domain Adaptation Ability for Chinese Spelling Check Models

no code implementations26 Jan 2024 Xi Wang, Ruoqing Zhao, Hongliang Dai, Piji Li

Chinese Spelling Check (CSC) is a meaningful task in the area of Natural Language Processing (NLP) that aims to detect and then correct spelling errors in Chinese texts.

Domain Adaptation Language Modelling +1

Medical Report Generation based on Segment-Enhanced Contrastive Representation Learning

no code implementations26 Dec 2023 Ruoqing Zhao, Xi Wang, Hongliang Dai, Pan Gao, Piji Li

Automated radiology report generation has the potential to improve radiology reporting and alleviate the workload of radiologists.

Contrastive Learning Image Segmentation +4

Punctuation Matters! Stealthy Backdoor Attack for Language Models

no code implementations26 Dec 2023 Xuan Sheng, Zhicheng Li, Zhaoyang Han, Xiangmao Chang, Piji Li

Meanwhile, we conduct automatic evaluation and human inspection, which indicate that the proposed method achieves good stealthiness without introducing grammatical issues or altering the meaning of sentences.

Backdoor Attack

Cross-Subject Data Splitting for Brain-to-Text Decoding

no code implementations18 Dec 2023 Congchi Yin, Qian Yu, Zhiwei Fang, Jie He, Changping Peng, Zhangang Lin, Jingping Shao, Piji Li

Such a splitting method poses challenges to the utilization efficiency of the dataset as well as the generalization of models.

Decoder EEG

Topic-Guided Self-Introduction Generation for Social Media Users

1 code implementation24 May 2023 Chunpu Xu, Jing Li, Piji Li, Min Yang

To allow users to better showcase themselves and network with others, we explore the auto-generation of social media self-introduction, a short sentence outlining a user's personal interests.

Decoder Sentence

Unified Text Structuralization with Instruction-tuned Language Models

no code implementations27 Mar 2023 Xuanfan Ni, Piji Li, Huayang Li

Text structuralization is an important field of natural language processing (NLP), consisting of information extraction (IE) and structure formalization.

Language Modelling Large Language Model

Ancient Chinese Word Segmentation and Part-of-Speech Tagging Using Distant Supervision

1 code implementation3 Mar 2023 Shuo Feng, Piji Li

To address this problem, we take advantage of the memorization effects of deep neural networks and a small amount of annotated data to obtain a model with much knowledge and little noise, and then use this model to relabel the ancient Chinese sentences in the parallel corpus.

Chinese Word Segmentation Memorization +3

CTRLStruct: Dialogue Structure Learning for Open-Domain Response Generation

1 code implementation2 Mar 2023 Congchi Yin, Piji Li, Zhaochun Ren

Then we perform clustering on utterance-level representations to form topic-level clusters, which can be considered vertices in the dialogue structure graph.

Contrastive Learning Dialogue Generation +4

Understanding Social Media Cross-Modality Discourse in Linguistic Space

1 code implementation26 Feb 2023 Chunpu Xu, Hanzhuo Tan, Jing Li, Piji Li

To fill in the gap, we present a novel concept of cross-modality discourse, reflecting how human readers couple image and text understandings.


Feature-Level Debiased Natural Language Understanding

1 code implementation11 Dec 2022 Yougang Lyu, Piji Li, Yechang Yang, Maarten de Rijke, Pengjie Ren, Yukun Zhao, Dawei Yin, Zhaochun Ren

We also propose a dynamic negative sampling strategy to capture the dynamic influence of biases by employing a bias-only model to dynamically select the most similar biased negative samples.
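The selection step in such a dynamic negative sampling strategy can be sketched as follows, assuming the bias-only model exposes a feature vector per example and "most similar" means highest cosine similarity; the `select_biased_negatives` helper and its interface are illustrative, not the paper's code:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def select_biased_negatives(anchor_feat, candidates, k=2):
    """anchor_feat: bias-only feature vector of the anchor example.
    candidates: list of (feature_vector, example_id) pairs scored by the
    bias-only model. Returns the k most similar biased negatives."""
    ranked = sorted(candidates,
                    key=lambda c: cosine(anchor_feat, c[0]),
                    reverse=True)
    return [ex_id for _, ex_id in ranked[:k]]
```

Because the bias-only model's features change over training, re-running the selection each step makes the negatives "dynamic" in the sense the abstract describes.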

Contrastive Learning Natural Language Understanding

A Survey on Backdoor Attack and Defense in Natural Language Processing

no code implementations22 Nov 2022 Xuan Sheng, Zhaoyang Han, Piji Li, Xiangmao Chang

Deep learning is becoming increasingly popular in real-life applications, especially in natural language processing (NLP).

Backdoor Attack

uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers

no code implementations COLING 2022 Piji Li

The task of Chinese Spelling Check (CSC) aims to detect and correct spelling errors found in text.

Language Modelling Sentence

PromptAttack: Prompt-based Attack for Language Models via Gradient Search

no code implementations5 Sep 2022 Yundi Shi, Piji Li, Changchun Yin, Zhaoyang Han, Lu Zhou, Zhe Liu

Therefore, in this paper, we propose a malicious prompt template construction method (PromptAttack) to probe the security performance of PLMs.

Effidit: Your AI Writing Assistant

no code implementations3 Aug 2022 Shuming Shi, Enbo Zhao, Duyu Tang, Yan Wang, Piji Li, Wei Bi, Haiyun Jiang, Guoping Huang, Leyang Cui, Xinting Huang, Cong Zhou, Yong Dai, Dongyang Ma

In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).

Keywords to Sentences Retrieval +3

COSPLAY: Concept Set Guided Personalized Dialogue Generation Across Both Party Personas

1 code implementation2 May 2022 Chen Xu, Piji Li, Wei Wang, Haoran Yang, Siyun Wang, Chuangbai Xiao

In this work, we propose COSPLAY (COncept Set guided PersonaLized dialogue generation Across both partY personas), which considers both parties as a "team": expressing self-persona while keeping curiosity toward the partner, leading responses around mutual personas, and finding the common ground.

Dialogue Generation

Event Transition Planning for Open-ended Text Generation

1 code implementation Findings (ACL) 2022 Qintong Li, Piji Li, Wei Bi, Zhaochun Ren, Yuxuan Lai, Lingpeng Kong

Open-ended text generation tasks, such as dialogue generation and story completion, require models to generate a coherent continuation given limited preceding context.

Dialogue Generation Diversity +1

Parameter-Efficient Tuning by Manipulating Hidden States of Pretrained Language Models For Classification Tasks

no code implementations10 Apr 2022 Haoran Yang, Piji Li, Wai Lam

Continuous prompt tuning which prepends a few trainable vectors to the embeddings of input is one of these methods and has drawn much attention due to its effectiveness and efficiency.

Contrastive Representation Learning for Exemplar-Guided Paraphrase Generation

1 code implementation Findings (EMNLP) 2021 Haoran Yang, Wai Lam, Piji Li

Exemplar-Guided Paraphrase Generation (EGPG) aims to generate a target sentence which conforms to the style of the given exemplar while encapsulating the content information of the source sentence.

Contrastive Learning Decoder +5

Sentence Semantic Regression for Text Generation

no code implementations6 Aug 2021 Wei Wang, Piji Li, Hai-Tao Zheng

In the phase of surface realization, a mixed-granularity sentence decoder is designed to generate text with better consistency by jointly incorporating the predicted sentence-level main idea as well as the preceding contextual token-level information.

Decoder Dialogue Generation +3

Dialogue Summarization with Supporting Utterance Flow Modeling and Fact Regularization

1 code implementation3 Aug 2021 Wang Chen, Piji Li, Hou Pong Chan, Irwin King

The supporting utterance flow modeling helps to generate a coherent summary by smoothly shifting the focus from the former utterances to the later ones.

CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding

1 code implementation ACL 2021 Dong Wang, Ning Ding, Piji Li, Hai-Tao Zheng

Recent works aimed to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics.

Contrastive Learning Natural Language Understanding +3

Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction

1 code implementation ACL 2021 Piji Li, Shuming Shi

We investigate the problem of Chinese Grammatical Error Correction (CGEC) and present a new framework named Tail-to-Tail (TtT) non-autoregressive sequence prediction to address the deep issues hidden in CGEC.

Grammatical Error Correction Sentence

Generating Diversified Comments via Reader-Aware Topic Modeling and Saliency Detection

no code implementations13 Feb 2021 Wei Wang, Piji Li, Hai-Tao Zheng

Automatic comment generation is a special and challenging task for verifying a model's ability in news content comprehension and language generation.

Clustering Comment Generation +4

Abstractive Opinion Tagging

1 code implementation18 Jan 2021 Qintong Li, Piji Li, Xinyi Li, Zhaochun Ren, Zhumin Chen, Maarten de Rijke

In this paper, we propose the abstractive opinion tagging task, where systems have to automatically generate a ranked list of opinion tags that are based on, but need not occur in, a given set of user-generated reviews.

Sentence

Predicting Events in MOBA Games: Prediction, Attribution, and Evaluation

no code implementations17 Dec 2020 Zelong Yang, Yan Wang, Piji Li, Shaobin Lin, Shuming Shi, Shao-Lun Huang, Wei Bi

The multiplayer online battle arena (MOBA) games have become increasingly popular in recent years.

Consistency and Coherency Enhanced Story Generation

no code implementations17 Oct 2020 Wei Wang, Piji Li, Hai-Tao Zheng

In terms of consistency, on one hand, GPT-2 cannot explicitly guarantee the consistency of the plots.

Language Modelling Story Generation

Knowledge Bridging for Empathetic Dialogue Generation

1 code implementation21 Sep 2020 Qintong Li, Piji Li, Zhaochun Ren, Pengjie Ren, Zhumin Chen

Finally, to generate the empathetic response, we propose an emotional cross-attention mechanism to learn the emotional dependencies from the emotional context graph.

Dialogue Generation

Enhancing Dialogue Generation via Multi-Level Contrastive Learning

no code implementations19 Sep 2020 Xin Li, Piji Li, Yan Wang, Xiaojiang Liu, Wai Lam

Most of the existing works for dialogue generation are data-driven models trained directly on corpora crawled from websites.

Contrastive Learning Dialogue Generation +1

Exclusive Hierarchical Decoding for Deep Keyphrase Generation

1 code implementation ACL 2020 Wang Chen, Hou Pong Chan, Piji Li, Irwin King

A new setting is recently introduced into this problem, in which, given a document, the model needs to predict a set of keyphrases and simultaneously determine the appropriate number of keyphrases to produce.

Diversity Keyphrase Generation

Salience Estimation with Multi-Attention Learning for Abstractive Text Summarization

no code implementations7 Apr 2020 Piji Li, Lidong Bing, Zhongyu Wei, Wai Lam

Different from neural machine translation, in the task of text summarization, salience estimation for words, phrases or sentences is a critical component, since the output summary is a distillation of the input text.

Abstractive Text Summarization Decoder +2

Storytelling from an Image Stream Using Scene Graphs

no code implementations The Thirty-Fourth AAAI Conference on Artificial Intelligence 2020 Ruize Wang, Zhongyu Wei, Piji Li, Qi Zhang, Xuanjing Huang

In particular, on the within-image level, we employ a Graph Convolution Network (GCN) to enrich local fine-grained region representations of objects on scene graphs.

Visual Storytelling

An Empirical Investigation of Pre-Trained Transformer Language Models for Open-Domain Dialogue Generation

1 code implementation9 Mar 2020 Piji Li

A weighted joint prediction paradigm for both context and response is designed to evaluate the performance of models with or without the loss term for context prediction.

Dialogue Generation Diversity

A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation

2 code implementations6 Feb 2020 Minghong Xu, Piji Li, Haoran Yang, Pengjie Ren, Zhaochun Ren, Zhumin Chen, Jun Ma

To address this, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona description with semantically correlated content before utilizing them to generate dialogue responses.

Descriptive Dialogue Generation

Relevance-Promoting Language Model for Short-Text Conversation

no code implementations26 Nov 2019 Xin Li, Piji Li, Wei Bi, Xiaojiang Liu, Wai Lam

In this paper, we propose to formulate the STC task as a language modeling problem and tailor-make a training strategy to adapt a language model for response generation.

Diversity Language Modelling +2

Semi-supervised Text Style Transfer: Cross Projection in Latent Space

no code implementations IJCNLP 2019 Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, Rui Yan

Text style transfer task requires the model to transfer a sentence of one style to another style while retaining its original content meaning, which is a challenging problem that has long suffered from the shortage of parallel data.

Sentence Style Transfer +1

Tackling Long-Tailed Relations and Uncommon Entities in Knowledge Graph Completion

1 code implementation IJCNLP 2019 Zihao Wang, Kwun Ping Lai, Piji Li, Lidong Bing, Wai Lam

Therefore, we propose a meta-learning framework that aims at handling infrequent relations with few-shot learning and uncommon entities by using textual descriptions.

Few-Shot Learning

How to Write Summaries with Patterns? Learning towards Abstractive Summarization through Prototype Editing

1 code implementation IJCNLP 2019 Shen Gao, Xiuying Chen, Piji Li, Zhangming Chan, Dongyan Zhao, Rui Yan

There are two main challenges in this task: (1) the model needs to incorporate learned patterns from the prototype, but (2) should avoid copying contents other than the patternized words---such as irrelevant facts---into the generated summaries.

Abstractive Text Summarization

Interconnected Question Generation with Coreference Alignment and Conversation Flow Modeling

1 code implementation ACL 2019 Yifan Gao, Piji Li, Irwin King, Michael R. Lyu

The coreference alignment modeling explicitly aligns coreferent mentions in conversation history with corresponding pronominal references in generated questions, which makes generated questions interconnected to conversation history.

Question Answering Question Generation +2

An Integrated Approach for Keyphrase Generation via Exploring the Power of Retrieval and Extraction

1 code implementation NAACL 2019 Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, Irwin King

For further exploiting the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases.

Keyphrase Generation Multi-Task Learning +1

Persona-Aware Tips Generation

no code implementations6 Mar 2019 Piji Li, ZiHao Wang, Lidong Bing, Wai Lam

In order to exploit the persona information, we propose a framework based on adversarial variational auto-encoders (aVAE) for persona modeling from the historical tips and reviews of users and items.

Abstractive Text Summarization by Incorporating Reader Comments

no code implementations13 Dec 2018 Shen Gao, Xiuying Chen, Piji Li, Zhaochun Ren, Lidong Bing, Dongyan Zhao, Rui Yan

To tackle this problem, we propose the task of reader-aware abstractive summary generation, which utilizes reader comments to help the model produce a better summary of the main aspect.

Reader-Aware Summarization

QuaSE: Sequence Editing under Quantifiable Guidance

1 code implementation EMNLP 2018 Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, Tong Zhang

For example, an input sequence could be a word sequence, such as review sentence and advertisement text.

Disentanglement Sentence +1

Generating Distractors for Reading Comprehension Questions from Real Examinations

2 code implementations8 Sep 2018 Yifan Gao, Lidong Bing, Piji Li, Irwin King, Michael R. Lyu

We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations.

Decoder Distractor Generation +3

Aspect Term Extraction with History Attention and Selective Transformation

1 code implementation2 May 2018 Xin Li, Lidong Bing, Piji Li, Wai Lam, Zhimou Yang

Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +2

Actor-Critic based Training Framework for Abstractive Summarization

no code implementations28 Mar 2018 Piji Li, Lidong Bing, Wai Lam

For the critic, we combine the maximum likelihood estimator with a well-designed global summary quality estimator, a neural-network-based binary classifier that aims to make the generated summaries indistinguishable from human-written ones.
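A critic of this shape can be sketched as a weighted combination of an average log-likelihood term with a discriminator-style quality score. The function below is an illustrative guess at such an objective under those assumptions, not the paper's implementation; `weight` and the interface are hypothetical:

```python
import math

def critic_score(token_logprobs, p_human, weight=0.5):
    """token_logprobs: per-token log-probabilities of the generated summary
    under the language model (the maximum-likelihood term).
    p_human: the quality estimator's probability that the summary is
    human-written (the binary classifier's output)."""
    mle = sum(token_logprobs) / len(token_logprobs)  # average log-likelihood
    # Blend the two signals; higher is better for the generator.
    return (1 - weight) * mle + weight * math.log(max(p_human, 1e-9))
```

Under this sketch, summaries the classifier judges more human-like receive higher critic scores, pushing the generator toward indistinguishability.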

Abstractive Text Summarization

Deep Recurrent Generative Decoder for Abstractive Text Summarization

1 code implementation EMNLP 2017 Piji Li, Wai Lam, Lidong Bing, ZiHao Wang

We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN).

Abstractive Text Summarization Decoder +1

Neural Rating Regression with Abstractive Tips Generation for Recommendation

no code implementations1 Aug 2017 Piji Li, ZiHao Wang, Zhaochun Ren, Lidong Bing, Wai Lam

In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings.

regression Sentence

Abstractive Multi-Document Summarization via Phrase Selection and Merging

no code implementations IJCNLP 2015 Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, Rebecca J. Passonneau

We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases.

Document Summarization Multi-Document Summarization +1
