The prevalent approach to sequence-to-sequence learning maps an input sequence to a variable-length output sequence via recurrent neural networks.
Bilinear models provide rich representations compared with linear models.
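To make the contrast concrete, here is a minimal sketch (not taken from the paper; all names and values are illustrative) of a linear score versus a bilinear score: the bilinear form assigns a separate weight to every pairwise feature interaction, which is the source of its richer representations.

```python
def linear_score(w, x, y):
    """Linear model: features of x and y contribute independently."""
    return sum(wi * v for wi, v in zip(w, x + y))

def bilinear_score(W, x, y):
    """Bilinear model: every interaction x_i * y_j gets its own weight W[i][j]."""
    return sum(W[i][j] * x[i] * y[j]
               for i in range(len(x)) for j in range(len(y)))

# Toy inputs (hypothetical, for illustration only).
x = [1.0, 2.0]
y = [3.0, 0.5]
W = [[0.1, 0.0],
     [0.0, 0.2]]
print(round(bilinear_score(W, x, y), 6))  # 0.1*1*3 + 0.2*2*0.5 = 0.5
```

With a diagonal W this reduces to a weighted elementwise product; a full W additionally captures cross-feature interactions that no linear model over the concatenated features can express.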
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
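The first stage of such a system retrieves candidate articles before a reader extracts the answer span. A toy sketch of TF-IDF retrieval over a two-article "Wikipedia" is shown below; this is a hand-rolled illustration of the general retrieve-then-read idea, not the paper's actual pipeline, and the article texts are made up.

```python
import math
from collections import Counter

# Hypothetical miniature corpus standing in for Wikipedia.
articles = {
    "Paris": "paris is the capital of france",
    "Berlin": "berlin is the capital of germany",
}

def tfidf_vector(text, df, n_docs):
    """Map text to {term: tf * idf} using smoothed inverse document frequency."""
    tf = Counter(text.split())
    return {t: c * math.log((1 + n_docs) / (1 + df[t]))
            for t, c in tf.items()}

def retrieve(question):
    """Return the article name whose TF-IDF vector best matches the question."""
    df = Counter(t for doc in articles.values() for t in set(doc.split()))
    n = len(articles)
    q = tfidf_vector(question, df, n)
    def score(doc):
        d = tfidf_vector(doc, df, n)
        return sum(q.get(t, 0.0) * w for t, w in d.items())
    return max(articles, key=lambda name: score(articles[name]))

print(retrieve("what is the capital of france"))  # Paris
```

Terms shared by every article (here "is", "the", "capital", "of") get zero IDF weight, so retrieval is driven by the discriminative term "france"; a reader model would then pick the answer span within the retrieved article.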
We introduce ParlAI (pronounced "par-lay"), an open-source software platform for dialog research implemented in Python, available at http://parl.ai.
We show that, when using SeeKeR as a dialogue model, it outperforms the state-of-the-art model BlenderBot 2 (Chen et al., 2021) on open-domain knowledge-grounded conversations for the same number of parameters, in terms of consistency, knowledge and per-turn engagingness.
Current language models achieve low perplexity but their resulting generations still suffer from toxic responses, repetitiveness and contradictions.
Open-domain conversation models have become good at generating natural-sounding dialogue, using very large architectures with billions of trainable parameters.
A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.
Teaching machines to read natural language documents remains an elusive challenge.