Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU

WS 2019 RasaHQ/rasa

We present a new neural architecture for wide-coverage Natural Language Understanding in Spoken Dialogue Systems.

SPOKEN DIALOGUE SYSTEMS

Collaborative Multi-Agent Dialogue Model Training Via Reinforcement Learning

WS 2019 uber-research/plato-research-dialogue-system

Using DSTC2 as seed data, we trained natural language understanding (NLU) and generation (NLG) networks for each agent and let the agents interact online.

Structured Fusion Networks for Dialog

WS 2019 Shikib/structured_fusion_networks

Neural dialog models have exhibited strong performance; however, their end-to-end nature leaves them without a representation of the explicit structure of dialog.

Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References

WS 2019 prakharguptaz/multirefeval

The aim of this paper is to mitigate the shortcomings of automatic evaluation of open-domain dialog systems through multi-reference evaluation.
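The core idea of multi-reference evaluation can be sketched in a few lines: score a system response against each human reference and keep the best match, so that any of several equally valid answers is rewarded. The sketch below uses a simple unigram-overlap F1 as the per-reference metric; this is an illustrative stand-in, not the paper's exact metric suite.

```python
def token_f1(hyp, ref):
    """Unigram-overlap F1 between a hypothesis and a single reference."""
    hyp_tokens, ref_tokens = hyp.lower().split(), ref.lower().split()
    ref_counts = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in hyp_tokens:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(hyp_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def multi_reference_score(hyp, references):
    """Take the best score over all references, so a response only needs
    to match one of the acceptable answers to be judged good."""
    return max(token_f1(hyp, r) for r in references)

references = ["i am doing well thanks", "pretty good how about you"]
print(multi_reference_score("i am doing well", references))  # 0.888... (8/9)
```

With a single reference, the second (equally valid) phrasing would be penalized; taking the max over multiple references is what mitigates that shortcoming.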

Flexibly-Structured Model for Task-Oriented Dialogues

WS 2019 uber-research/FSDM

The proposed model is based on a simple, practical, and yet very effective sequence-to-sequence approach, in which the language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot.
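The multi-label decoder idea can be illustrated concretely: for each slot, every candidate value receives an independent sigmoid score, and all values above a threshold are kept, allowing zero, one, or several values per slot. This is a minimal sketch with hand-picked logits standing in for the learned decoder's outputs; the slot name and values are hypothetical examples, not from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical candidate values for one slot of a task-oriented dialogue state.
FOOD_VALUES = ["italian", "chinese", "thai", "none"]

def decode_slot(logits, values, threshold=0.5):
    """Multi-label decoding for one slot: each value is scored independently
    with a sigmoid, and every value above the threshold is kept (so a slot
    can take multiple values, unlike softmax single-label decoding)."""
    return [v for v, z in zip(values, logits) if sigmoid(z) >= threshold]

# Stand-in logits, as if produced by the encoder for the current turn.
print(decode_slot([2.1, -1.3, 0.4, -3.0], FOOD_VALUES))  # ['italian', 'thai']
```

Using an independent sigmoid per value, rather than one softmax over all values, is what makes the decoding multi-label.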

TASK-ORIENTED DIALOGUE SYSTEMS · TEXT GENERATION

Is Multilingual BERT Fluent in Language Generation?

WS 2019 TurkuNLP/bert-eval

The multilingual BERT model is trained on 104 languages and is meant to serve as a universal language model and a tool for encoding sentences.

LANGUAGE MODELLING · TEXT GENERATION