Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing, but no previous work has explored whether it can also serve to boost an NLP system's performance on other important dialogue comprehension tasks.
Using human evaluation and automatic faithfulness metrics, we show that our model significantly reduces all kinds of factual errors on the SAMSum dialogue summarization corpus.
The primary focus of recent work with large-scale transformers has been on optimizing the amount of information packed into the model's parameters.
Current pre-trained models applied to summarization are prone to factual inconsistencies that either misrepresent the source text or introduce extraneous information.
While online conversations can cover a vast amount of information in many different formats, abstractive text summarization has primarily focused on modeling news articles alone.
Modeling and predicting human motion dynamics has long been a challenging problem in computer vision, and most existing methods rely on end-to-end supervised training of various recurrent neural network architectures.
Ranked #2 on Human Pose Forecasting on Human3.6M (MAR, walking, 1,000ms metric)
In this paper, we propose a new action-agnostic method for short- and long-term human pose forecasting.
Ranked #5 on Human Pose Forecasting on Human3.6M (MAR, walking, 1,000ms metric)
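To make the pose-forecasting task concrete, the following minimal sketch implements two classic non-learned baselines against which methods on benchmarks like Human3.6M are commonly compared: zero-velocity (repeat the last pose) and constant-velocity (linear extrapolation). The pose representation, data, and function names here are invented for illustration; they do not reproduce any specific paper's method.

```python
# Hypothetical sketch: simple baselines for human pose forecasting.
# A pose is a flat list of joint coordinates; the data is a toy example.

def zero_velocity_forecast(history, horizon):
    """Repeat the last observed pose for `horizon` future frames."""
    last = history[-1]
    return [list(last) for _ in range(horizon)]

def constant_velocity_forecast(history, horizon):
    """Extrapolate linearly from the last two observed poses."""
    prev, last = history[-2], history[-1]
    vel = [l - p for l, p in zip(last, prev)]
    return [[l + (t + 1) * v for l, v in zip(last, vel)]
            for t in range(horizon)]

def mean_abs_error(pred, truth):
    """Mean absolute error over all frames and coordinates."""
    n = sum(len(frame) for frame in truth)
    return sum(abs(p - t) for pf, tf in zip(pred, truth)
               for p, t in zip(pf, tf)) / n

# Toy sequence: one joint moving at constant speed along x.
history = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
truth = [[3.0, 0.0], [4.0, 0.0]]

zv = zero_velocity_forecast(history, 2)
cv = constant_velocity_forecast(history, 2)
```

On this toy motion the constant-velocity baseline is exact while zero-velocity accumulates error at longer horizons, which is why long-term metrics (e.g. 1,000ms) are reported separately from short-term ones.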
Learning general latent-variable probabilistic graphical models is a key theoretical challenge in machine learning and artificial intelligence.
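The textbook instance of learning a latent-variable graphical model is fitting a Gaussian mixture with expectation-maximization, where the latent variable is the unobserved component assignment. The sketch below, with invented data and a crude initialization, shows the E-step (compute posterior responsibilities) and M-step (re-estimate parameters) for a two-component 1-D mixture; it is an illustration of the general problem, not the method of any particular paper.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a 1-D Gaussian with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (toy sketch)."""
    mu = [min(data), max(data)]          # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        resp = []
        for x in data:
            w = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
            pi[k] = nk / len(data)
    return mu, var, pi

# Invented data: two well-separated clusters around -2 and +2.
data = [-2.1, -1.9, -2.0, -2.2, 2.0, 1.8, 2.1, 2.2]
mu, var, pi = em_gmm(data)
```

On this well-separated toy data EM recovers the cluster means near -2.05 and 2.025; the theoretical difficulty the abstract refers to arises for general models, where EM-style local search offers no global guarantees.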