no code implementations • EMNLP (NLP4ConvAI) 2021 • Xinxian Huang, Huang He, Siqi Bao, Fan Wang, Hua Wu, Haifeng Wang
Large-scale conversation models increasingly leverage external knowledge to improve the factual accuracy of their generated responses.
no code implementations • 23 Dec 2021 • Xin Tian, Xinxian Huang, Dongfeng He, Yingzhan Lin, Siqi Bao, Huang He, Liankai Huang, Qiang Ju, Xiyuan Zhang, Jian Xie, Shuqi Sun, Fan Wang, Hua Wu, Haifeng Wang
Task-oriented dialogue systems have long been hampered by the difficulty of obtaining large-scale, high-quality annotated conversations.
3 code implementations • 20 Sep 2021 • Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhihua Wu, Zhen Guo, Hua Lu, Xinxian Huang, Xin Tian, Xinchao Xu, Yingzhan Lin, Zheng-Yu Niu
To explore the limit of dialogue generation pre-training, we present the models of PLATO-XL with up to 11 billion parameters, trained on both Chinese and English social media conversations.