Findings of ACL 2022 • Chia-Chien Hung, Anne Lauscher, Simone Ponzetto, Goran Glavaš
Recent work has shown that self-supervised dialog-specific pretraining on large conversational datasets yields substantial gains over traditional language modeling (LM) pretraining on downstream task-oriented dialog (TOD) tasks.
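To make the idea of dialog-specific self-supervised pretraining concrete, the sketch below shows one common way of preparing conversational data for a masked-LM-style objective: utterances are concatenated with speaker tags and a fraction of tokens is replaced by a mask symbol. This is a minimal illustration, not the authors' exact pretraining objective; the speaker tags (`[USR]`, `[SYS]`), the masking rate, and the whitespace tokenizer are all simplifying assumptions.

```python
import random

def mask_dialog_tokens(utterances, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Concatenate utterances with alternating speaker tags and randomly
    mask ordinary tokens. Returns (masked_tokens, labels), where labels hold
    the original token at masked positions and None elsewhere -- the shape
    of a masked-LM training example over conversational text.
    Hypothetical helper for illustration; not from the paper's codebase."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    tokens = []
    for i, utt in enumerate(utterances):
        # assume user and system turns simply alternate
        tokens.append("[USR]" if i % 2 == 0 else "[SYS]")
        tokens.extend(utt.split())  # naive whitespace tokenization
    masked, labels = [], []
    for tok in tokens:
        # never mask speaker tags; mask ordinary tokens with prob. mask_prob
        if not tok.startswith("[") and rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

dialog = ["i need a cheap hotel", "sure , which part of town ?"]
masked, labels = mask_dialog_tokens(dialog)
```

In an actual pretraining run, the masked sequence would be fed to a transformer encoder and the model trained to recover the tokens stored in `labels`.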