Chinese Reading Comprehension
5 papers with code • 10 benchmarks • 8 datasets
Most implemented papers
XLNet: Generalized Autoregressive Pretraining for Language Understanding
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.
ERNIE 2.0: A Continual Pre-training Framework for Language Understanding
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
Dataset for the First Evaluation on Chinese Machine Reading Comprehension
Machine Reading Comprehension (MRC) has become enormously popular in recent years and has attracted considerable attention.
ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion
The ReCO release consists of 300k questions, which, to our knowledge, makes it the largest Chinese reading comprehension dataset.
Natural Response Generation for Chinese Reading Comprehension
To this end, we construct a new dataset called Penguin to promote MRC research, providing a training and test bed for natural response generation in real-world scenarios.