no code implementations • EMNLP (NLP4ConvAI) 2021 • Peiyao Wang, Joyce Fang, Julia Reinspach
Large-scale pretrained transformer models have demonstrated state-of-the-art performance on a variety of NLP tasks.
no code implementations • COLING 2022 • Lahari Poddar, Peiyao Wang, Julia Reinspach
In this paper, we propose a framework that incorporates augmented versions of a dialogue context into the learning objective.
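The abstract does not specify how the augmented contexts enter the objective. As an illustration only, one common pattern is to average a task loss over the original dialogue context and several augmented versions of it; the augmentation here (randomly dropping earlier turns) and all function names are hypothetical, not the paper's method.

```python
import random

def augment_context(turns, drop_prob=0.2, seed=0):
    # Hypothetical augmentation: randomly drop earlier turns,
    # always keeping the final (most recent) turn of the dialogue.
    rng = random.Random(seed)
    return [t for i, t in enumerate(turns)
            if i == len(turns) - 1 or rng.random() > drop_prob]

def augmented_loss(loss_fn, turns, n_augments=3):
    # Average a task loss over the original context and n_augments
    # augmented versions, so the model sees perturbed histories too.
    contexts = [turns] + [augment_context(turns, seed=s)
                          for s in range(n_augments)]
    return sum(loss_fn(c) for c in contexts) / len(contexts)
```

In a real training loop, `loss_fn` would encode the context with the model and return the task loss; here it is left abstract to keep the sketch self-contained.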