Mining Discourse Markers for Unsupervised Sentence Representation Learning

Current state-of-the-art systems in NLP rely heavily on manually annotated datasets, which are expensive to construct. Very little work adequately exploits unannotated data -- such as discourse markers between sentences -- mainly because of data sparseness and ineffective extraction methods. In the present work, we propose a method to automatically discover sentence pairs with relevant discourse markers, and apply it to massive amounts of data. The resulting dataset contains 174 discourse markers with at least 10k examples each, even for rare markers such as "coincidentally" or "amazingly". We use this data as supervision for learning transferable sentence embeddings. In addition, we show that although sentence representation learning through prediction of discourse markers yields state-of-the-art results across different transfer tasks, it is not clear that our models made use of the semantic relation between sentences, leaving room for further improvement. Our datasets are publicly available.
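The extraction idea described above can be sketched in a few lines: scan adjacent sentence pairs and keep those where the second sentence opens with a known discourse marker, stripping the marker so that predicting it becomes the training signal. This is a minimal illustration only; the marker list, function name, and cleanup rules here are assumptions, not the authors' exact pipeline.

```python
# Illustrative subset of connectives; the paper's full inventory has 174
# markers. This list and the heuristics below are assumptions for the sketch.
MARKERS = {"however", "therefore", "instead", "coincidentally", "amazingly"}

def extract_pairs(sentences):
    """Yield (s1, s2, marker) for adjacent sentences where s2 starts
    with a discourse marker; the marker is removed from s2 so that a
    classifier can be trained to predict it from the pair."""
    for s1, s2 in zip(sentences, sentences[1:]):
        first, _, rest = s2.partition(" ")
        word = first.lower().strip(",")
        if word in MARKERS and rest:
            yield s1, rest.lstrip(", ").capitalize(), word

corpus = [
    "The forecast called for rain.",
    "However, the sky stayed clear all day.",
    "We left the umbrellas at home.",
]
pairs = list(extract_pairs(corpus))
# One pair is kept, labeled with the marker "however".
```

In the paper's setup, the resulting (sentence pair, marker) examples supervise a sentence encoder via marker prediction; the sketch only shows the mining step.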

NAACL 2019


Introduced in the Paper:

Discovery Dataset
Task: Relation Classification
Dataset: Discovery Dataset
Model: BERT
Metric: 1:1 Accuracy
Metric Value: 20.6
Global Rank: #1

