Search Results for author: Luke Dai

Found 5 papers, 1 paper with code

Improving Open-Domain Dialogue Evaluation with a Causal Inference Model

no code implementations • 31 Jan 2023 • Cat P. Le, Luke Dai, Michael Johnston, Yang Liu, Marilyn Walker, Reza Ghanadan

We project these features to the dialogue level and train a dialogue-level MLP regression model, a dialogue-level LSTM, and a novel causal inference model called counterfactual-LSTM (CF-LSTM) to predict ratings.
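The pipeline above pools turn-level features up to the dialogue level and fits a regression model on dialogue ratings. A minimal sketch of that setup, using mean pooling and a least-squares linear head as stand-ins (the paper's actual projection, MLP/LSTM architectures, and CF-LSTM are not shown here):

```python
import numpy as np

def pool_turn_features(turn_feats):
    """Project turn-level features to one dialogue-level vector.
    Mean pooling is an assumption for illustration; the paper's
    exact projection may differ."""
    return np.mean(turn_feats, axis=0)

# Toy data: 3 dialogues, each an array of (num_turns, feat_dim)
# turn features, paired with a human quality rating.
rng = np.random.default_rng(0)
dialogues = [rng.normal(size=(n, 4)) for n in (5, 8, 6)]
ratings = np.array([3.0, 4.5, 2.0])

# Dialogue-level design matrix with a bias column.
X = np.stack([pool_turn_features(d) for d in dialogues])
X = np.hstack([X, np.ones((len(X), 1))])

# Least-squares fit as a simple stand-in for the MLP regression head.
w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
preds = X @ w
```

With more parameters than training dialogues, this toy fit interpolates the ratings exactly; a real evaluation model would of course be trained and validated on held-out dialogues.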

Tasks: Causal Inference, counterfactual, +1

Can Transformer Models Measure Coherence In Text? Re-Thinking the Shuffle Test

1 code implementation • ACL 2021 • Philippe Laban, Luke Dai, Lucas Bandarkar, Marti A. Hearst

The Shuffle Test is the most common task to evaluate whether NLP models can measure coherence in text.
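The Shuffle Test pairs a text with a randomly reordered copy of its sentences; a model that measures coherence should prefer the original order. A small sketch of how such a test instance is constructed (the pairing scheme here is an illustrative assumption, not the paper's exact protocol, and no transformer scorer is included):

```python
import random

def shuffle_test_pair(sentences, seed=0):
    """Build one Shuffle Test instance: the original sentence order
    vs. a random permutation of the same sentences. A coherence
    model should score the original higher than the shuffled copy."""
    shuffled = sentences[:]
    rng = random.Random(seed)
    while shuffled == sentences:  # ensure the copy is actually reordered
        rng.shuffle(shuffled)
    return sentences, shuffled

doc = [
    "Anna bought a bike.",
    "She rode it to work.",
    "The ride took ten minutes.",
]
original, shuffled = shuffle_test_pair(doc)
```

Evaluation then reduces to checking, over many such pairs, how often the model's coherence score for `original` exceeds its score for `shuffled`.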
