Search Results for author: Pei Ke

Found 30 papers, 26 papers with code

A Large-Scale Chinese Short-Text Conversation Dataset

2 code implementations · 10 Aug 2020 · Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, Minlie Huang

The cleaned dataset and the pre-training models will facilitate research on short-text conversation modeling.

Dialogue Generation · Short-Text Conversation

CPM-2: Large-scale Cost-effective Pre-trained Language Models

2 code implementations · 20 Jun 2021 · Zhengyan Zhang, Yuxian Gu, Xu Han, Shengqi Chen, Chaojun Xiao, Zhenbo Sun, Yuan YAO, Fanchao Qi, Jian Guan, Pei Ke, Yanzheng Cai, Guoyang Zeng, Zhixing Tan, Zhiyuan Liu, Minlie Huang, Wentao Han, Yang Liu, Xiaoyan Zhu, Maosong Sun

We present a suite of cost-effective techniques for the use of PLMs to deal with the efficiency issues of pre-training, fine-tuning, and inference.

EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative Pre-Training

2 code implementations · 3 Aug 2021 · Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang

Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.

Black-Box Prompt Optimization: Aligning Large Language Models without Model Training

1 code implementation · 7 Nov 2023 · Jiale Cheng, Xiao Liu, Kehan Zheng, Pei Ke, Hongning Wang, Yuxiao Dong, Jie Tang, Minlie Huang

However, these models are often not well aligned with human intents, which calls for additional treatment; this is known as the alignment problem.

CritiqueLLM: Scaling LLM-as-Critic for Effective and Explainable Evaluation of Large Language Model Generation

2 code implementations · 30 Nov 2023 · Pei Ke, Bosi Wen, Zhuoer Feng, Xiao Liu, Xuanyu Lei, Jiale Cheng, Shengyuan Wang, Aohan Zeng, Yuxiao Dong, Hongning Wang, Jie Tang, Minlie Huang

Since the natural language processing (NLP) community started to use large language models (LLMs), such as GPT-4, as critics to evaluate the quality of generated texts, most existing work has only trained critique generation models of a specific scale on specific datasets.

Language Modelling · Large Language Model

CoTK: An Open-Source Toolkit for Fast Development and Fair Evaluation of Text Generation

1 code implementation · 3 Feb 2020 · Fei Huang, Dazhen Wan, Zhihong Shao, Pei Ke, Jian Guan, Yilin Niu, Xiaoyan Zhu, Minlie Huang

In text generation evaluation, many practical issues, such as inconsistent experimental settings and metric implementations, are often ignored but lead to unfair evaluation and untenable conclusions.

Text Generation

Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph

1 code implementation · EMNLP 2020 · Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, Minlie Huang

Despite the success of generative pre-trained language models on a series of text generation tasks, they still suffer in cases where reasoning over underlying commonsense knowledge is required during generation.

Text Generation

Directed Acyclic Transformer Pre-training for High-quality Non-autoregressive Text Generation

1 code implementation · 24 Apr 2023 · Fei Huang, Pei Ke, Minlie Huang

Non-AutoRegressive (NAR) text generation models have drawn much attention because of their significantly faster decoding speed and good generation quality in machine translation.

Machine Translation · Text Generation

SentiLARE: Sentiment-Aware Language Representation Learning with Linguistic Knowledge

1 code implementation · EMNLP 2020 · Pei Ke, Haozhe Ji, Siyang Liu, Xiaoyan Zhu, Minlie Huang

To benefit downstream tasks in sentiment analysis, we propose a novel language representation model called SentiLARE, which introduces word-level linguistic knowledge, including part-of-speech tags and sentiment polarity (inferred from SentiWordNet), into pre-trained models.

Data Augmentation · Language Modelling · +3
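As a rough illustration of the word-level annotation described above, the sketch below attaches a part-of-speech tag and a SentiWordNet-derived polarity to each token using standard NLTK calls. The label scheme and the use of the first synset are simplifying assumptions, not the SentiLARE implementation.

    # Sketch only: word-level POS + sentiment polarity labels of the kind SentiLARE
    # feeds into a pre-trained model. Requires nltk.download() of
    # 'averaged_perceptron_tagger', 'wordnet', and 'sentiwordnet'.
    import nltk
    from nltk.corpus import sentiwordnet as swn

    def annotate(tokens):
        annotated = []
        for word, pos in nltk.pos_tag(tokens):
            synsets = list(swn.senti_synsets(word))
            if synsets:
                score = synsets[0].pos_score() - synsets[0].neg_score()
                polarity = "pos" if score > 0 else ("neg" if score < 0 else "neu")
            else:
                polarity = "neu"
            annotated.append((word, pos, polarity))
        return annotated

    print(annotate(["the", "movie", "was", "wonderful"]))
    # e.g. [('the', 'DT', 'neu'), ('movie', 'NN', 'neu'), ('was', 'VBD', 'neu'), ('wonderful', 'JJ', 'pos')]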

JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs

1 code implementation · Findings (ACL) 2021 · Pei Ke, Haozhe Ji, Yu Ran, Xin Cui, LiWei Wang, Linfeng Song, Xiaoyan Zhu, Minlie Huang

Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets, which largely ignore the graph structure during encoding and lack elaborate pre-training tasks to explicitly model graph-text alignments.

Graph Reconstruction · KG-to-Text Generation · +3
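For context, the baseline criticized above linearizes the knowledge graph into a plain token sequence and fine-tunes BART or T5 on it. The sketch below uses one common linearization convention; the <H>/<R>/<T> markers are an assumption for illustration, not necessarily the format used in the paper.

    # Hypothetical KG-to-text baseline: flatten triples into a string and feed it to a
    # text-to-text model, which discards the graph structure JointGT tries to preserve.
    def linearize(triples):
        return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

    kg = [("Alan Turing", "field", "computer science"),
          ("Alan Turing", "born in", "London")]
    print(linearize(kg))
    # <H> Alan Turing <R> field <T> computer science <H> Alan Turing <R> born in <T> London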

ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors

1 code implementation · 26 Feb 2024 · Zhexin Zhang, Yida Lu, Jingyuan Ma, Di Zhang, Rui Li, Pei Ke, Hao Sun, Lei Sha, Zhifang Sui, Hongning Wang, Minlie Huang

The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but a comprehensive approach for detecting safety issues within LLMs' responses in an aligned, customizable, and explainable manner is still lacking.

ARAML: A Stable Adversarial Training Framework for Text Generation

1 code implementation · IJCNLP 2019 · Pei Ke, Fei Huang, Minlie Huang, Xiaoyan Zhu

The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient.

reinforcement-learning · Reinforcement Learning (RL) · +1
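A minimal sketch of the reward-augmented maximum likelihood idea described above, assuming PyTorch-style tensors: sampled sequences are weighted by exponentiated discriminator rewards and the generator minimizes a weighted negative log-likelihood, with no policy-gradient update. The shapes and the temperature are illustrative assumptions, not the paper's exact formulation.

    # Sketch only: MLE weighted by discriminator rewards instead of policy gradient.
    import torch
    import torch.nn.functional as F

    def reward_augmented_mle(logits, targets, rewards, temperature=1.0):
        # logits: (batch, seq_len, vocab); targets: (batch, seq_len); rewards: (batch,)
        token_nll = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
        seq_nll = token_nll.sum(dim=1)                          # per-sequence NLL
        weights = torch.softmax(rewards / temperature, dim=0)   # reward-derived weights
        return (weights * seq_nll).sum()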

Generating Informative Responses with Controlled Sentence Function

1 code implementation · ACL 2018 · Pei Ke, Jian Guan, Minlie Huang, Xiaoyan Zhu

Experiments show that our model outperforms state-of-the-art baselines, and it has the ability to generate responses with both controlled sentence function and informative content.

Position · Sentence · +2

Towards Efficient and Exact Optimization of Language Model Alignment

1 code implementation · 1 Feb 2024 · Haozhe Ji, Cheng Lu, Yilin Niu, Pei Ke, Hongning Wang, Jun Zhu, Jie Tang, Minlie Huang

We prove that EXO is guaranteed to optimize in the same direction as the RL algorithms asymptotically for arbitrary parametrizations of the policy, while enabling efficient optimization by circumventing the complexities associated with RL algorithms.

Language Modelling · Reinforcement Learning (RL)

Tailoring Language Generation Models under Total Variation Distance

1 code implementation · 26 Feb 2023 · Haozhe Ji, Pei Ke, Zhipeng Hu, Rongsheng Zhang, Minlie Huang

The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method.

Text Generation
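For reference, the two quantities at play here are standard textbook definitions rather than anything specific to this paper: the MLE objective mentioned above and the total variation distance named in the title.

    \mathcal{L}_{\mathrm{MLE}}(\theta) = -\sum_{t=1}^{|y|} \log p_\theta(y_t \mid y_{<t}, x)

    \mathrm{TVD}(p_{\mathrm{data}}, p_\theta) = \frac{1}{2} \sum_{y} \bigl| p_{\mathrm{data}}(y) - p_\theta(y) \bigr|

MLE is equivalent to minimizing the forward KL divergence from the data distribution to the model; the paper instead studies training guided by the total variation distance.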

Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning

1 code implementation · 6 Jun 2023 · Chujie Zheng, Pei Ke, Zheng Zhang, Minlie Huang

It has always been an important yet challenging problem to control language models to avoid generating texts with undesirable attributes, such as toxic language and unnatural repetition.

Contrastive Learning · Text Generation

Learning Instructions with Unlabeled Data for Zero-Shot Cross-Task Generalization

1 code implementation · 17 Oct 2022 · Yuxian Gu, Pei Ke, Xiaoyan Zhu, Minlie Huang

Recently, instruction tuning (IT), which fine-tunes a pre-trained language model on a massive collection of tasks described via human-crafted instructions, has been shown to be effective for instruction learning on unseen tasks.

Language Modelling
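To make the instruction-tuning setup mentioned above concrete, a training instance is typically a task verbalized as a natural-language instruction plus an input, paired with the target output. The template below is an illustrative assumption, not the exact format used in the paper.

    # Hypothetical instruction-formatted example of the kind used in instruction tuning (IT).
    example = {
        "instruction": "Classify the sentiment of the following review as positive or negative.",
        "input": "The plot was predictable and the acting was flat.",
        "output": "negative",
    }
    prompt = example["instruction"] + "\n\n" + example["input"]
    # (prompt, output) pairs are used for standard supervised fine-tuning; zero-shot
    # cross-task generalization is then measured on tasks unseen during tuning.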

Unveiling the Implicit Toxicity in Large Language Models

1 code implementation · 29 Nov 2023 · Jiaxin Wen, Pei Ke, Hao Sun, Zhexin Zhang, Chengfei Li, Jinfeng Bai, Minlie Huang

While recent studies primarily focus on probing toxic outputs that can be easily detected with existing toxicity classifiers, we show that LLMs can generate diverse implicit toxic outputs that are exceptionally difficult to detect via simple zero-shot prompting.

Language Modelling · Reinforcement Learning (RL)

DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering

1 code implementation · 13 Jul 2023 · Pei Ke, Fei Huang, Fei Mi, Yasheng Wang, Qun Liu, Xiaoyan Zhu, Minlie Huang

Existing evaluation metrics for natural language generation (NLG) tasks face challenges in generalization ability and interpretability.

Dialogue Generation · nlg evaluation · +3

Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization

1 code implementation · 15 Nov 2023 · Zhexin Zhang, Junxiao Yang, Pei Ke, Minlie Huang

We hope our work could contribute to the comprehension of jailbreaking attacks and defenses, and shed light on the relationship between LLMs' capability and safety.

Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation

1 code implementation · 6 Jun 2022 · Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang

Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, generation performance is largely restricted by the amount of labeled data available in downstream tasks, particularly in data-to-text generation.

Data-to-Text Generation · Unsupervised Pre-training

Rethinking and Refining the Distinct Metric

1 code implementation · ACL 2022 · Siyang Liu, Sahand Sabour, Yinhe Zheng, Pei Ke, Xiaoyan Zhu, Minlie Huang

We provide both empirical and theoretical evidence to show that our method effectively removes the biases existing in the original distinct score.

Text Generation
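For context, the original distinct-n score being refined above (Li et al., 2016) is the ratio of unique n-grams to total n-grams in generated text, and its value systematically shrinks as texts grow longer, which is the kind of bias the paper targets. The sketch below computes the original metric only; the refined, bias-corrected formula is not reproduced here.

    # Original distinct-n diversity metric; illustrative only.
    def distinct_n(tokens, n=2):
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        return len(set(ngrams)) / max(len(ngrams), 1)

    print(distinct_n("the cat sat on the mat".split(), n=1))  # 5 unique / 6 total ≈ 0.83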

Generating Commonsense Explanation by Extracting Bridge Concepts from Reasoning Paths

no code implementations · Asian Chapter of the Association for Computational Linguistics 2020 · Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Minlie Huang

Commonsense explanation generation aims to empower a machine's sense-making capability by generating plausible explanations for statements that contradict commonsense.

Explanation Generation

A Text GAN for Language Generation with Non-Autoregressive Generator

no code implementations · 1 Jan 2021 · Fei Huang, Jian Guan, Pei Ke, Qihan Guo, Xiaoyan Zhu, Minlie Huang

Despite the great success of Generative Adversarial Networks (GANs) in generating high-quality images, GANs for text generation still face two major challenges: first, most text GANs are unstable in training mainly due to ineffective optimization of the generator, and they heavily rely on maximum likelihood pretraining; second, most text GANs adopt autoregressive generators without latent variables, which largely limits the ability to learn latent representations for natural language text.

Decipherment · Representation Learning · +2

Semantic-Enhanced Explainable Finetuning for Open-Domain Dialogues

no code implementations · 6 Jun 2021 · Yinhe Zheng, Yida Wang, Pei Ke, Zhenyu Yang, Minlie Huang

This paper proposes to combine pretrained language models with the modular dialogue paradigm for open-domain dialogue modeling.

Informativeness · Language Modelling · +1

Language Model Decoding as Direct Metrics Optimization

no code implementations · 2 Oct 2023 · Haozhe Ji, Pei Ke, Hongning Wang, Minlie Huang

Most importantly, we prove that this induced distribution is guaranteed to improve the perplexity on human texts, which suggests a better approximation to the underlying distribution of human texts.

Language Modelling
