Search Results for author: Andy Coenen

Found 9 papers, 2 papers with code

The Case for a Single Model that can Both Generate Continuations and Fill-in-the-Blank

no code implementations • Findings (NAACL) 2022 • Daphne Ippolito, Liam Dugan, Emily Reif, Ann Yuan, Andy Coenen, Chris Callison-Burch

While previous work has tackled this problem with models trained specifically for fill-in-the-blank, a more useful model is one that can effectively perform _both_ FitB and continuation tasks.

Position • Text Generation

Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers

no code implementations • 9 Nov 2022 • Daphne Ippolito, Ann Yuan, Andy Coenen, Sehmon Burnam

Recent developments in natural language generation (NLG) using neural language models have brought us closer than ever to the goal of building AI-powered creative writing tools.

Text Generation

The Case for a Single Model that can Both Generate Continuations and Fill in the Blank

no code implementations • 9 Jun 2022 • Daphne Ippolito, Liam Dugan, Emily Reif, Ann Yuan, Andy Coenen, Chris Callison-Burch

The task of inserting text into a specified position in a passage, known as fill-in-the-blank (FitB), is useful for a variety of applications where writers interact with a natural language generation (NLG) system to craft text.

Position • Text Generation
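
A minimal sketch of the FitB task described in this abstract, using an off-the-shelf span-infilling model. The T5 checkpoint and its sentinel-token prompt format are illustrative assumptions here, not the single-model approach the paper proposes.

```python
# Fill-in-the-blank (FitB) sketch: generate text for a marked position in a
# passage. T5's <extra_id_0> sentinel is used purely for illustration; the
# paper itself argues for one model that handles both FitB and continuation,
# which this off-the-shelf setup does not demonstrate.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The sentinel token marks the blank the model should fill.
passage = "The hikers finally reached the summit <extra_id_0> just before sunset."
inputs = tokenizer(passage, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```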

SynthBio: A Case Study in Human-AI Collaborative Curation of Text Datasets

no code implementations • 11 Nov 2021 • Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, Sebastian Gehrmann

We use our method to curate SynthBio, a new evaluation set for WikiBio, composed of structured attribute lists describing fictional individuals, mapped to natural language biographies.

Attribute • Language Modelling +2

Wordcraft: a Human-AI Collaborative Editor for Story Writing

no code implementations • 15 Jul 2021 • Andy Coenen, Luke Davis, Daphne Ippolito, Emily Reif, Ann Yuan

As neural language models grow in effectiveness, they are increasingly being applied in real-world settings.

Few-Shot Learning

An Interpretability Illusion for BERT

no code implementations • 14 Apr 2021 • Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg

We describe an "interpretability illusion" that arises when analyzing the BERT model.
