We hope that this first survey of dialogue summarization provides the community with quick access to and a general picture of the task, and motivates future research.
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
Recently, various neural encoder-decoder models, pioneered by the Seq2Seq framework, have been proposed to generate more abstractive summaries by learning to map input text to output text.
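To make the encoder-decoder abstraction concrete, here is a minimal structural sketch: encode the input into a state, then generate the output token by token conditioned on that state and the tokens produced so far. The `encode` and `decode_step` stubs are my own illustrative placeholders; real systems learn these functions from data.

```python
# Schematic of the Seq2Seq encode-then-decode loop (stub model, for illustration).

EOS = "<eos>"

def encode(src_tokens):
    # A real encoder returns learned hidden states; a tuple is our stand-in state.
    return tuple(src_tokens)

def decode_step(state, prefix):
    # Stub decoding policy: echo the source, then emit <eos>.
    return state[len(prefix)] if len(prefix) < len(state) else EOS

def greedy_decode(src_tokens, max_len=20):
    """Generate output left to right, one token per step, until <eos>."""
    state = encode(src_tokens)
    out = []
    for _ in range(max_len):
        tok = decode_step(state, out)
        if tok == EOS:
            break
        out.append(tok)
    return out
```

Swapping the stubs for trained neural components (and `decode_step` for a beam search) recovers the standard abstractive-summarization setup.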
First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) that explicitly models the interactions between utterances in a meeting through their discourse relations.
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from insufficient learning when training data are limited.
Neural table-to-text models, which select and order salient data and verbalize them fluently via surface realization, have achieved promising progress.
In detail, we treat utterances and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) to model both types of information.
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Recent neural models for data-to-text generation rely on massive parallel pairs of data and text to learn the writing knowledge.
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.
To address the aforementioned problems, we not only model each table cell with respect to the other records in the same row, but also enrich the table's representation by modeling each cell in the context of the other cells in the same column and of historical (time-dimension) data, respectively.
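The three context signals named above can be sketched in a few lines. This is my own illustration, not the paper's model: cells are scalar values and contexts are plain means, where a real system would use vector embeddings combined by attention or self-attention.

```python
# Sketch: enrich a cell representation with row, column, and historical context.
# `tables` is a list of same-shaped tables, one per time step (illustrative setup).

def cell_representation(tables, t, r, c):
    """Represent cell (r, c) of the table at time step t by combining:
    - the cell value itself,
    - the mean of the other cells in its row (record context),
    - the mean of the other cells in its column,
    - the mean of the same cell in earlier tables (time dimension)."""
    table = tables[t]
    row = [v for j, v in enumerate(table[r]) if j != c]
    col = [tables[t][i][c] for i in range(len(table)) if i != r]
    hist = [tables[k][r][c] for k in range(t)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return (table[r][c], mean(row), mean(col), mean(hist))
```

The returned tuple plays the role of the enriched cell embedding that a downstream text generator would attend over.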
Although end-to-end neural machine translation (NMT) has achieved remarkable progress in recent years, the idea of adopting a multi-pass decoding mechanism in conventional NMT is not well explored.
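A minimal sketch of what multi-pass decoding means structurally: a first decoder produces a draft translation, and a second decoder re-decodes while conditioning on both the source and the draft. The lexicon and the "refinement" step below are toy stand-ins of my own; in a real system both passes are learned decoders.

```python
# Two-pass (deliberation-style) decoding skeleton with stub components.

def first_pass(src_tokens):
    """Draft decoder: here a toy word-by-word lexicon lookup (illustrative)."""
    lexicon = {"bonjour": "hello", "monde": "world"}
    return [lexicon.get(tok, tok) for tok in src_tokens]

def second_pass(src_tokens, draft):
    """Refinement decoder: re-decodes attending to both the source and the
    first-pass draft; here a trivial polish step stands in for that model."""
    return [tok.capitalize() if i == 0 else tok for i, tok in enumerate(draft)]

def multi_pass_decode(src_tokens):
    draft = first_pass(src_tokens)
    return second_pass(src_tokens, draft)
```

The point of the second pass is that it sees a complete hypothesis, so it can fix errors a strictly left-to-right first pass cannot anticipate.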
Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.
Results show that our knowledge-aware model outperforms the state-of-the-art approaches.
We present a generative model to map natural language questions into SQL queries.
Knowledge bases (KBs) such as Freebase play an important role in many natural language processing tasks.
Instead of relying directly on word-alignment results, this framework combines the advantages of rule-based and deep learning methods in two steps: first, it generates a high-confidence entity annotation set on the IL side using strict search methods; second, it uses this high-confidence set to weakly supervise model training.
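The two-step pipeline above can be sketched as follows. The gazetteer, the exact-substring matcher, and the ENT/O tagging scheme are hypothetical simplifications of mine: the point is only the shape of the pipeline, where a high-precision first step produces the labels that weakly supervise the second.

```python
# Sketch of strict annotation followed by weak-label projection.

def strict_annotate(sentences, gazetteer):
    """Step 1: build a high-confidence entity set via strict (exact) matching."""
    confident = set()
    for sent in sentences:
        for entity in gazetteer:
            if entity in sent:          # strict match only -> high precision
                confident.add(entity)
    return confident

def weak_labels(sentences, confident):
    """Step 2: project the high-confidence set back onto the corpus as
    token-level labels that weakly supervise model training."""
    labeled = []
    for sent in sentences:
        tokens = sent.split()
        tags = ["ENT" if tok in confident else "O" for tok in tokens]
        labeled.append(list(zip(tokens, tags)))
    return labeled

sentences = ["obama visited paris", "the talks in paris ended"]
gazetteer = {"obama", "paris", "berlin"}
confident = strict_annotate(sentences, gazetteer)
data = weak_labels(sentences, confident)
```

A trained tagger would then be fit on `data`, trading some recall in step 1 for clean supervision in step 2.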
Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence.