Search Results for author: Tianxing He

Found 19 papers, 11 papers with code

Can Language Models Solve Graph Problems in Natural Language?

1 code implementation 17 May 2023 Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov

We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.

Knowledge Probing · Multi-hop Question Answering +1
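
Below is a minimal sketch of what an instruction-based prompt in the spirit of Build-a-Graph Prompting could look like; the wording and the build_a_graph_prompt helper are illustrative assumptions, not the exact prompts used in the paper.

```python
# Hypothetical sketch of a Build-a-Graph-style instruction prompt: ask the LLM
# to make the graph explicit before answering. Wording is illustrative only.
def build_a_graph_prompt(problem_text: str) -> str:
    return (
        "Let's first construct the graph described in the problem: "
        "list every node and every edge explicitly, then answer the question.\n\n"
        f"Problem: {problem_text}\n"
        "Graph:"
    )

print(build_a_graph_prompt(
    "There is an edge between A and B, and an edge between B and C. "
    "Is there a path from A to C?"
))
```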

On the Blind Spots of Model-Based Evaluation Metrics for Text Generation

1 code implementation 20 Dec 2022 Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov

In this work, we explore a useful but often neglected methodology for robustness analysis of text generation evaluation metrics: stress tests with synthetic data.

Text Generation
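
As a rough illustration of such a stress test, the snippet below applies a synthetic perturbation (truncation) to a candidate and checks whether a metric penalizes it; the token-level F1 used here is only a stand-in for the model-based metrics examined in the paper.

```python
# Toy stress test: a robust metric should score an information-losing
# (truncated) candidate lower than the intact one. token_f1 is a stand-in
# for a model-based metric, not one of the metrics studied in the paper.
def token_f1(reference: str, candidate: str) -> float:
    ref, cand = set(reference.lower().split()), set(candidate.lower().split())
    overlap = len(ref & cand)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

reference = "the cat sat on the mat near the door"
candidate = "the cat sat on the mat near the door"
truncated = " ".join(candidate.split()[:4])        # synthetic perturbation

assert token_f1(reference, candidate) > token_f1(reference, truncated), \
    "blind spot: the metric does not penalize truncation"
```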

PCFG-based Natural Language Interface Improves Generalization for Controlled Text Generation

1 code implementation 14 Oct 2022 Jingyu Zhang, James Glass, Tianxing He

Existing work on controlled text generation (CTG) assumes a control interface of categorical attributes.

Text Generation
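
As a toy illustration of a PCFG whose derivations read as natural-language control commands, the snippet below samples commands from a hand-written grammar; the rules, weights, and wording are invented for this sketch and are not the grammar from the paper.

```python
import random

# Toy PCFG whose derivations are natural-language control commands.
# Rules, weights, and wording are illustrative only.
GRAMMAR = {
    "CMD":   [(["write a", "LEN", "TOPIC", "review"], 1.0)],
    "LEN":   [(["short"], 0.5), (["detailed"], 0.5)],
    "TOPIC": [(["positive movie"], 0.5), (["negative restaurant"], 0.5)],
}

def expand(symbol: str) -> str:
    if symbol not in GRAMMAR:
        return symbol                                   # terminal string
    rules = GRAMMAR[symbol]
    rhs = random.choices([r for r, _ in rules], [w for _, w in rules])[0]
    return " ".join(expand(s) for s in rhs)

print(expand("CMD"))   # e.g. "write a short positive movie review"
```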

Controlling the Focus of Pretrained Language Generation Models

1 code implementation Findings (ACL) 2022 Jiabao Ji, Yoon Kim, James Glass, Tianxing He

This work aims to develop a control mechanism by which a user can select spans of context as "highlights" for the model to focus on, and generate relevant output.

Abstractive Text Summarization · Response Generation +1
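
One simple way to expose such user-selected highlights to a generation model is to mark the chosen spans with special tokens before encoding; the sketch below does exactly that, and the <hl> markers are an assumption of this sketch rather than the paper's actual control mechanism.

```python
# Wrap user-selected spans in placeholder <hl> ... </hl> markers so a seq2seq
# model can be trained to attend to them. Marker tokens are illustrative only.
def mark_highlights(context: str, spans: list[tuple[int, int]]) -> str:
    out, prev = [], 0
    for start, end in sorted(spans):
        out.append(context[prev:start])
        out.append("<hl> " + context[start:end] + " </hl>")
        prev = end
    out.append(context[prev:])
    return "".join(out)

ctx = "The storm closed two highways and flooded the riverside district."
print(mark_highlights(ctx, [(ctx.index("flooded"), len(ctx) - 1)]))
```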

Revisiting Latent-Space Interpolation via a Quantitative Evaluation Framework

1 code implementation 13 Oct 2021 Lu Mi, Tianxing He, Core Francisco Park, Hao Wang, Yue Wang, Nir Shavit

In this work, we show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space interpolation algorithms for variational autoencoders.
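
A bare-bones version of such an evaluation might linearly interpolate between two latent codes and check whether a labeled, semantically continuous attribute varies smoothly along the path; in the sketch below, attribute_of is a placeholder for a decoder plus attribute predictor, not the paper's evaluation pipeline.

```python
import numpy as np

# Linear interpolation between two latent codes, followed by a monotonicity
# check on a (placeholder) semantically continuous attribute.
def attribute_of(z: np.ndarray) -> float:
    return float(z[0])          # placeholder: pretend dimension 0 is the attribute

z0, z1 = np.array([0.0, 1.0]), np.array([1.0, -1.0])
alphas = np.linspace(0.0, 1.0, 11)
values = [attribute_of((1 - a) * z0 + a * z1) for a in alphas]
diffs = np.diff(values)
print("monotone along the path:", bool(np.all(diffs >= 0) or np.all(diffs <= 0)))
```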

An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models

1 code implementation 6 Sep 2021 Tianxing He, Kyunghyun Cho, James Glass

Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models.

Knowledge Probing · Prompt Engineering
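
For context, a standard cloze-style probe of a 1-hop relation can be run with the Hugging Face fill-mask pipeline as shown below; the model choice and prompt wording are illustrative and not necessarily those used in the paper's few-shot setup.

```python
from transformers import pipeline

# Cloze-style probe for a 1-hop relation (capital-of). Model and prompt
# wording are illustrative choices.
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Paris is the capital of [MASK].", top_k=3):
    print(f"{pred['token_str']:>10s}  {pred['score']:.3f}")
```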

Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models

1 code implementation EACL 2021 Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl

In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks.

Language Modelling · Natural Language Understanding
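
One common way to attach an energy function to a classifier (in the spirit of joint energy-based modeling) is to take the negative logsumexp of the class logits as the energy; the sketch below shows that computation next to the usual cross-entropy loss. This pairing is an illustration of the general idea, not necessarily the paper's exact objective or energy parameterization.

```python
import torch
import torch.nn.functional as F

# JEM-style view of a classifier: E(x) = -logsumexp over class logits, so lower
# energy means the encoder assigns more total unnormalized mass to x.
def energy_and_ce(logits: torch.Tensor, labels: torch.Tensor):
    energy = -torch.logsumexp(logits, dim=-1)   # shape: (batch,)
    ce = F.cross_entropy(logits, labels)        # standard classification loss
    return energy, ce

logits = torch.randn(4, 3)                      # e.g. a RoBERTa classification head
labels = torch.tensor([0, 2, 1, 1])
energy, ce = energy_and_ce(logits, labels)
print(energy, ce)
```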

Quantifying Exposure Bias for Open-ended Language Generation

no code implementations 28 Sep 2020 Tianxing He, Jingzhao Zhang, Zhiming Zhou, James R. Glass

The exposure bias problem refers to the incrementally distorted generation induced by the training-generation discrepancy in teacher-forcing training of auto-regressive neural network language models (LMs).

Text Generation
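
To make that training-generation discrepancy concrete, the sketch below contrasts teacher forcing (every step conditioned on the gold prefix) with free-running generation (every step conditioned on the model's own samples); lm is a dummy stand-in for a real language model.

```python
import torch

# Dummy stand-in for a language model: maps a prefix to next-token logits.
def lm(prefix: torch.Tensor) -> torch.Tensor:
    torch.manual_seed(int(prefix.sum()))
    return torch.randn(100)                      # vocabulary of 100 tokens

gold = torch.randint(0, 100, (10,))

# Teacher forcing: every step is conditioned on the *gold* prefix.
tf_logp = sum(torch.log_softmax(lm(gold[:t]), -1)[gold[t]] for t in range(1, 10))

# Free running: each step is conditioned on the model's *own* samples, a
# distribution of prefixes it never saw during teacher-forcing training.
gen = gold[:1].clone()
for _ in range(9):
    nxt = torch.distributions.Categorical(logits=lm(gen)).sample()
    gen = torch.cat([gen, nxt.view(1)])

print("teacher-forced log-prob:", float(tf_logp))
print("free-running sample:", gen.tolist())
```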

Analyzing the Forgetting Problem in the Pretrain-Finetuning of Dialogue Response Models

no code implementations 16 Oct 2019 Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng

We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.

Response Generation · Text Generation +1
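
As a rough sketch of the mix-review idea, namely mixing a portion of the pretraining data back into each finetuning epoch, the snippet below builds a mixed batch pool; the mix ratio and decay schedule are illustrative assumptions, not the paper's settings.

```python
import random

# Mix-review-style data mixing (sketch): each finetuning epoch also revisits a
# decaying sample of the pretraining corpus. Ratio and decay are illustrative.
def mixed_epoch(finetune_data, pretrain_data, epoch, mix_ratio=0.5, decay=0.7):
    k = int(len(finetune_data) * mix_ratio * (decay ** epoch))
    reviewed = random.sample(pretrain_data, min(k, len(pretrain_data)))
    batch_pool = finetune_data + reviewed
    random.shuffle(batch_pool)
    return batch_pool

finetune = [f"dialogue_{i}" for i in range(8)]
pretrain = [f"web_text_{i}" for i in range(1000)]
for epoch in range(3):
    pool = mixed_epoch(finetune, pretrain, epoch)
    print(epoch, len(pool))
```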

Negative Training for Neural Dialogue Response Generation

1 code implementation ACL 2020 Tianxing He, James Glass

Although deep learning models have brought tremendous advancements to the field of open-domain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (boring) responses.

Response Generation
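
A minimal sketch of the negative-training idea is to keep the usual likelihood loss for desirable responses while pushing down the likelihood of responses flagged as undesirable; the loss weighting and the per-example flag below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Sketch: standard token-level NLL for good responses, negated (and
# down-weighted) NLL for responses flagged as undesirable, so their
# probability is pushed down. The 0.1 weight is an illustrative choice.
def negative_training_loss(logits, targets, is_negative, neg_weight=0.1):
    nll = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none").mean(-1)
    sign = torch.where(is_negative,
                       torch.full_like(nll, -neg_weight),
                       torch.ones_like(nll))
    return (sign * nll).mean()

logits = torch.randn(4, 6, 50)                 # (batch, seq_len, vocab)
targets = torch.randint(0, 50, (4, 6))
is_negative = torch.tensor([False, False, True, False])
print(negative_training_loss(logits, targets, is_negative))
```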

Detecting egregious responses in neural sequence-to-sequence models

no code implementations ICLR 2019 Tianxing He, James Glass

We adopt an empirical methodology, in which we first create lists of egregious output sequences, and then design a discrete optimization algorithm to find input sequences that will cause the model to generate them.

Response Generation
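
As a very rough sketch of searching the input space for triggers of a fixed target output, the snippet below performs greedy hill climbing over single-token substitutions; target_logprob is a dummy placeholder for a real seq2seq model's log-probability of the target sequence, and the paper's discrete optimization algorithm differs in its details.

```python
import random

VOCAB = ["hi", "please", "help", "now", "report", "account", "close"]

def target_logprob(input_tokens):
    # Dummy stand-in for log p_model(egregious_target | input), deterministic per input.
    return random.Random(" ".join(input_tokens)).random()

def greedy_search(input_tokens, iters=200):
    best, best_score = list(input_tokens), target_logprob(input_tokens)
    for _ in range(iters):
        i, w = random.randrange(len(best)), random.choice(VOCAB)
        cand = best[:i] + [w] + best[i + 1:]
        score = target_logprob(cand)
        if score > best_score:                 # keep any improving substitution
            best, best_score = cand, score
    return best, best_score

print(greedy_search(["hi", "help", "now", "please"]))
```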

On Training Bi-directional Neural Network Language Model with Noise Contrastive Estimation

1 code implementation 19 Feb 2016 Tianxing He, Yu Zhang, Jasha Droppo, Kai Yu

We propose to train a bi-directional neural network language model (NNLM) with noise contrastive estimation (NCE).

Language Modelling
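
For reference, the per-token NCE objective turns density estimation into logistic classification between the observed word and k noise words drawn from a known distribution q; the sketch below computes that loss from unnormalized model scores, using random placeholder values and an illustrative constant log q.

```python
import math
import torch
import torch.nn.functional as F

# Per-token NCE loss: classify the observed word against k noise samples.
# s_* are unnormalized model log-scores; logq_* are log-probs under the noise
# distribution q. All values below are random/constant placeholders.
def nce_loss(s_data, s_noise, logq_data, logq_noise, k):
    pos = F.logsigmoid(s_data - (math.log(k) + logq_data))          # data word
    neg = F.logsigmoid(-(s_noise - (math.log(k) + logq_noise)))     # noise words
    return -(pos + neg.sum(dim=-1)).mean()

k = 5
s_data = torch.randn(8)                  # one score per token position
s_noise = torch.randn(8, k)              # k noise scores per position
logq_data = torch.full((8,), -6.0)
logq_noise = torch.full((8, k), -6.0)
print(nce_loss(s_data, s_noise, logq_data, logq_noise, k))
```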
