Search Results for author: Abigail See

Found 9 papers, 5 papers with code

Understanding and predicting user dissatisfaction in a neural generative chatbot

no code implementations SIGDIAL (ACL) 2021 Abigail See, Christopher Manning

We find that unclear user utterances are a major source of generative errors such as ignoring, hallucination, unclearness and repetition.

Chatbot Hallucination

Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations

no code implementations 27 Aug 2020 Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning

At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0, a median conversation duration of 2 minutes 16 seconds, and a 90th percentile duration of over 12 minutes.

World Knowledge

Do Massively Pretrained Language Models Make Better Storytellers?

1 code implementation CoNLL 2019 Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning

Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks.

Natural Language Understanding Story Generation

What makes a good conversation? How controllable attributes affect human judgments

2 code implementations NAACL 2019 Abigail See, Stephen Roller, Douwe Kiela, Jason Weston

A good conversation requires balance -- between simplicity and detail; staying on topic and changing it; asking questions and answering them.

Specificity Text Generation

Get To The Point: Summarization with Pointer-Generator Networks

39 code implementations ACL 2017 Abigail See, Peter J. Liu, Christopher D. Manning

Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).

Abstractive Text Summarization Document Summarization +1
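The core idea of a pointer-generator network is to mix a generation distribution over the vocabulary with a copy distribution given by the attention weights over the source, gated by a scalar p_gen. A minimal numerical sketch of that mixing step (function name and shapes are my own illustration, not the paper's code):

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, p_gen, src_ids):
    """Mix generation and copy distributions, as in a pointer-generator network.

    p_vocab:   generation distribution over the vocabulary, shape (vocab_size,)
    attention: attention weights over source tokens, shape (src_len,)
    p_gen:     scalar in [0, 1] -- probability of generating vs. copying
    src_ids:   vocabulary ids of the source tokens, shape (src_len,)
    """
    final = p_gen * np.asarray(p_vocab, dtype=float)
    # Scatter-add the copy probability mass onto each source token's vocab id
    # (np.add.at handles repeated source tokens correctly).
    np.add.at(final, np.asarray(src_ids), (1.0 - p_gen) * np.asarray(attention, dtype=float))
    return final

# Tiny example: vocabulary of 5 words, source sentence of 3 tokens.
p_vocab = np.array([0.1, 0.4, 0.2, 0.2, 0.1])
attn = np.array([0.5, 0.3, 0.2])
dist = pointer_generator_dist(p_vocab, attn, p_gen=0.7, src_ids=[2, 4, 2])
print(dist.sum())  # the mixture is still a valid distribution: sums to 1.0
```

Because both inputs are probability distributions and the gate is convex, the output always sums to 1; repeated source tokens (id 2 above) accumulate copy mass.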

Compression of Neural Machine Translation Models via Pruning

1 code implementation CoNLL 2016 Abigail See, Minh-Thang Luong, Christopher D. Manning

Neural Machine Translation (NMT), like many other deep learning domains, typically suffers from over-parameterization, resulting in large storage sizes.

Machine Translation NMT +1
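Magnitude pruning of the kind this paper studies removes the smallest-magnitude fraction of a weight matrix, exploiting the over-parameterization noted above. A minimal sketch of pruning a single matrix to a target sparsity (my own illustrative function, not the paper's implementation):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of `weights`."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.9, -0.01, 0.3],
              [-0.5, 0.02, -0.7]])
pruned = magnitude_prune(w, sparsity=0.5)
print((pruned == 0).sum())  # 3 of the 6 weights are zeroed
```

Applying one threshold across the whole parameter set (rather than per matrix) corresponds to the "class-blind" variant discussed in the paper; the sketch above prunes a single matrix for simplicity.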
