Search Results for author: Ilia Kulikov

Found 6 papers, 5 papers with code

Mode recovery in neural autoregressive sequence modeling

1 code implementation · 10 Jun 2021 · Ilia Kulikov, Sean Welleck, Kyunghyun Cho

We propose to study these phenomena by investigating how the modes, or local maxima, of a distribution are maintained throughout the full learning chain (the ground-truth, empirical, learned, and decoding-induced distributions), using the newly proposed mode recovery cost.

Consistency of a Recurrent Language Model With Respect to Incomplete Decoding

1 code implementation · EMNLP 2020 · Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, Kyunghyun Cho

Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition.

Language Modelling

Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

no code implementations · ACL 2020 · Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, Jason Weston

Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address.

Neural Text Generation with Unlikelihood Training

1 code implementation · ICLR 2020 · Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, Jason Weston

Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core.

Text Generation
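The unlikelihood objective named in the title augments standard maximum-likelihood training with a term that pushes down the probability of "negative candidate" tokens (for example, tokens that would produce degenerate repetition). A minimal token-level sketch, with an illustrative function name and toy probability table (not the paper's exact implementation):

```python
import math

def unlikelihood_loss(probs, target, negative_candidates, alpha=1.0):
    """Token-level loss sketch: negative log-likelihood of the target
    plus a penalty, weighted by alpha, that lowers the probability of
    each negative candidate. `probs` maps tokens to model probabilities."""
    eps = 1e-10  # numerical safety for log(0)
    nll = -math.log(probs[target] + eps)
    ul = -sum(math.log(1.0 - probs[c] + eps) for c in negative_candidates)
    return nll + alpha * ul
```

With an empty candidate set the loss reduces to the usual maximum-likelihood objective, which is why the penalty can be mixed into standard training.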

Multi-Turn Beam Search for Neural Dialogue Modeling

1 code implementation1 Jun 2019 Ilia Kulikov, Jason Lee, Kyunghyun Cho

We propose a novel approach for conversation-level inference by explicitly modeling the dialogue partner and running beam search across multiple conversation turns.
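The paper builds on standard beam search, extending it across conversation turns by also simulating the dialogue partner. As background, a minimal single-sequence beam search sketch (illustrative only; `step_fn` and the toy setup are assumptions, not the paper's multi-turn algorithm):

```python
import math

def beam_search(step_fn, start, beam_size=2, max_len=3):
    """Keep the `beam_size` highest-scoring partial sequences, expanding
    each with candidates from `step_fn(seq)`, which returns a list of
    (token, probability) pairs; an empty list marks a finished sequence."""
    beams = [(0.0, [start])]  # (cumulative log-probability, sequence)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            expansions = step_fn(seq)
            if not expansions:  # finished: carry the hypothesis forward
                candidates.append((score, seq))
                continue
            for tok, p in expansions:
                candidates.append((score + math.log(p), seq + [tok]))
        beams = sorted(candidates, key=lambda x: x[0], reverse=True)[:beam_size]
    return beams
```

The multi-turn idea in the abstract would interleave such search steps with a model of the partner's likely responses, scoring hypotheses over several turns rather than a single utterance.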
