Search Results for author: Kurt Shuster

Found 18 papers, 8 papers with code

Internet-Augmented Dialogue Generation

no code implementations • 15 Jul 2021 • Mojtaba Komeili, Kurt Shuster, Jason Weston

The largest store of continually updating knowledge on our planet can be accessed via internet search.

Dialogue Generation

Retrieval Augmentation Reduces Hallucination in Conversation

no code implementations • 15 Apr 2021 • Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, Jason Weston

Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue models often suffer from factual incorrectness and hallucination of knowledge (Roller et al., 2020).
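The entry's core idea is conditioning a dialogue model on retrieved evidence rather than parametric memory alone. As a purely illustrative sketch (not the paper's actual neural retriever), the retrieval step can be approximated with simple term-overlap scoring over a passage store; the top passages would then be concatenated into the model's context before generation:

```python
def retrieve(query, passages, k=1):
    """Toy retrieval by lexical overlap -- a stand-in for the learned
    retrievers studied in the paper, used here only for illustration."""
    q_terms = set(query.lower().split())
    # Score each passage by how many query terms it shares.
    scored = sorted(
        passages,
        key=lambda p: len(q_terms & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

# A retrieval-augmented model would prepend retrieve(...) results to the
# dialogue history, grounding the generated response in external text.
```

In the paper's setting, grounding responses in retrieved knowledge is what reduces hallucinated facts relative to a purely generative baseline.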

Multi-Modal Open-Domain Dialogue

no code implementations • 2 Oct 2020 • Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston

Recent work in open-domain conversational agents has demonstrated that significant improvements in model engagingness and humanness metrics can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020).

Visual Dialog

Deploying Lifelong Open-Domain Dialogue Learning

no code implementations • 18 Aug 2020 • Kurt Shuster, Jack Urbanek, Emily Dinan, Arthur Szlam, Jason Weston

As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013).

Image-Chat: Engaging Grounded Conversations

no code implementations • ACL 2020 • Kurt Shuster, Samuel Humeau, Antoine Bordes, Jason Weston

To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019).

Open-Domain Conversational Agents: Current Progress, Open Problems, and Future Directions

no code implementations • 22 Jun 2020 • Stephen Roller, Y-Lan Boureau, Jason Weston, Antoine Bordes, Emily Dinan, Angela Fan, David Gunning, Da Ju, Margaret Li, Spencer Poff, Pratik Ringshia, Kurt Shuster, Eric Michael Smith, Arthur Szlam, Jack Urbanek, Mary Williamson

We present our view of what is necessary to build an engaging open-domain conversational agent: covering the qualities of such an agent, the pieces of the puzzle that have been built so far, and the gaping holes we have not filled yet.

Continual Learning

Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring

2 code implementations • ICLR 2020 • Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, Jason Weston

The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018).
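The Poly-encoder's key design point is scoring candidates with a small set of learned "codes" that attend over the context, so candidate embeddings can be pre-computed and the final comparison stays a cheap dot product. A minimal NumPy sketch of that scoring step (simplified; the real model uses trained transformer encoders and learned code vectors):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_score(ctx_vecs, cand_vec, codes):
    """Score one (context, candidate) pair, poly-encoder style.

    ctx_vecs: (T, d) per-token context outputs (here: random stand-ins
              for transformer outputs)
    cand_vec: (d,)  aggregated candidate embedding (cacheable offline)
    codes:    (m, d) the m learned query codes
    """
    # 1) Each code attends over the context tokens -> m global features.
    attn = softmax(codes @ ctx_vecs.T, axis=-1)   # (m, T)
    global_feats = attn @ ctx_vecs                # (m, d)
    # 2) The candidate attends over those m features -> one context vector.
    w = softmax(global_feats @ cand_vec)          # (m,)
    ctx_emb = w @ global_feats                    # (d,)
    # 3) Final score is a dot product, so ranking many candidates is fast.
    return float(ctx_emb @ cand_vec)
```

Because only step 2 depends on the candidate, this sits between the fast Bi-encoder (no context-candidate interaction) and the slow Cross-encoder (full joint encoding), which is the trade-off the paper studies.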

All-in-One Image-Grounded Conversational Agents

no code implementations • 28 Dec 2019 • Da Ju, Kurt Shuster, Y-Lan Boureau, Jason Weston

As single-task accuracy on individual language and image tasks has improved substantially in the last few years, the long-term goal of a generally skilled agent that can both see and talk becomes more feasible to explore.

The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents

no code implementations • ACL 2020 • Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y-Lan Boureau, Jason Weston

We introduce dodecaDialogue: a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy, ask questions, answer questions by utilizing knowledge resources, discuss topics and situations, and perceive and converse about images.

Wizard of Wikipedia: Knowledge-Powered Conversational agents

2 code implementations • ICLR 2019 • Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, Jason Weston

In open-domain dialogue, intelligent agents should exhibit the use of knowledge; however, there are few convincing demonstrations of this to date.

Dialogue Generation

Image Chat: Engaging Grounded Conversations

3 code implementations • 2 Nov 2018 • Kurt Shuster, Samuel Humeau, Antoine Bordes, Jason Weston

To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019).

Engaging Image Captioning Via Personality

no code implementations • CVPR 2019 • Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, Jason Weston

While such tasks are useful to verify that a machine understands the content of an image, they are not engaging to humans as captions.

Image Captioning

Talk the Walk: Navigating New York City through Grounded Dialogue

1 code implementation • 9 Jul 2018 • Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, Douwe Kiela

We introduce "Talk The Walk", the first large-scale dialogue dataset grounded in action and perception.
