Search Results for author: Kris Cao

Found 10 papers, 5 papers with code

Towards Coherent and Consistent Use of Entities in Narrative Generation

no code implementations • 3 Feb 2022 • Pinelopi Papalampidi, Kris Cao, Tomas Kocisky

Large pre-trained language models (LMs) have demonstrated impressive capabilities in generating long, fluent text; however, there is little to no analysis of their ability to maintain entity coherence and consistency.

Control Prefixes for Parameter-Efficient Text Generation

2 code implementations • 15 Oct 2021 • Jordan Clive, Kris Cao, Marek Rei

Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to a downstream application.

Abstractive Text Summarization • Data-to-Text Generation • +2
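The core idea behind prefix-tuning, which Control Prefixes builds on, can be sketched in a few lines: the pre-trained model stays frozen, and only small learned prefix vectors, prepended to the input representations, are updated for the downstream task. The sketch below adds an attribute-specific prefix alongside a shared task prefix; all names and dimensions here are illustrative, not the paper's actual implementation.

```python
import random

random.seed(0)

def rand_vecs(n, d):
    # Hypothetical random initialisation for illustration.
    return [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

d_model, plen, seq_len = 16, 4, 10

# One shared task-level prefix plus one learned prefix per control
# attribute; in prefix-tuning these are the only trained parameters.
task_prefix = rand_vecs(plen, d_model)
control_prefixes = {
    "formal": rand_vecs(plen, d_model),
    "informal": rand_vecs(plen, d_model),
}
frozen_inputs = rand_vecs(seq_len, d_model)  # from the frozen LM; not updated

def build_input(attribute):
    # Prepend the attribute-specific prefix and the shared task prefix
    # to the frozen input representations.
    return control_prefixes[attribute] + task_prefix + frozen_inputs

assert len(build_input("formal")) == 2 * plen + seq_len
```

The point of the construction is parameter efficiency: switching the generation attribute only swaps a small prefix, while the large pre-trained model is reused unchanged across all attributes and tasks.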

You should evaluate your language model on marginal likelihood over tokenisations

no code implementations • EMNLP 2021 • Kris Cao, Laura Rimell

We suggest that this approach is unsatisfactory and may bottleneck our evaluation of language model performance.

Language Modelling
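The evaluation the title advocates can be illustrated concretely: since a string may have several valid tokenisations, its marginal likelihood is the sum of the probabilities the model assigns to each one, computed stably in log space via log-sum-exp. The numbers below are made up for illustration; this is a minimal sketch of the quantity, not the paper's evaluation code.

```python
import math

def marginal_log_likelihood(log_probs):
    """Combine per-tokenisation log-likelihoods of the same string via
    log-sum-exp, giving log p(string) marginalised over tokenisations."""
    m = max(log_probs)
    return m + math.log(sum(math.exp(lp - m) for lp in log_probs))

# Hypothetical log-likelihoods for three tokenisations of one sentence:
scores = [-42.1, -45.3, -48.0]
marginal = marginal_log_likelihood(scores)

# Marginalising can only raise the score relative to the single
# canonical tokenisation, since it adds probability mass.
assert marginal >= max(scores)
```

Scoring only one canonical tokenisation therefore underestimates the probability the model assigns to the string, which is why evaluating on a single tokenisation can bottleneck the measured performance.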

Mind the Gap: Assessing Temporal Generalization in Neural Language Models

1 code implementation • NeurIPS 2021 • Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, Phil Blunsom

Hence, given the compilation of ever-larger language modelling datasets, combined with the growing list of language-model-based NLP applications that require up-to-date factual knowledge about the world, we argue that now is the right time to rethink the static way in which we currently train and evaluate our language models, and develop adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world.

Language Modelling

Modelling Latent Skills for Multitask Language Generation

no code implementations • 21 Feb 2020 • Kris Cao, Dani Yogatama

We show that our latent task variable model outperforms other sequence-to-sequence baselines on average across tasks in the multitask setting.

Few-Shot Learning • Text Generation

Factorising AMR generation through syntax

no code implementations • NAACL 2019 • Kris Cao, Stephen Clark

Generating from Abstract Meaning Representation (AMR) is an underspecified problem, as many syntactic decisions are not constrained by the semantic graph.

Emergent Communication through Negotiation

1 code implementation • ICLR 2018 • Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z. Leibo, Karl Tuyls, Stephen Clark

We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.

Multi-agent Reinforcement Learning

Latent Variable Dialogue Models and their Diversity

1 code implementation • EACL 2017 • Kris Cao, Stephen Clark

We present a dialogue generation model that directly captures the variability in possible responses to a given input, which reduces the 'boring output' issue of deterministic dialogue models.

Dialogue Generation

A Joint Model for Word Embedding and Word Morphology

no code implementations • WS 2016 • Kris Cao, Marek Rei

This paper presents a joint model for performing unsupervised morphological analysis on words, and learning a character-level composition function from morphemes to word embeddings.

Morphological Analysis • Word Embeddings
