DocBERT: BERT for Document Classification

17 Apr 2019  ·  Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy Lin ·

We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. The primary contribution of our paper is improved baselines that can provide the foundation for future work.
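The distillation step described above transfers knowledge from BERT-large into a small BiLSTM by training the student against the teacher's outputs. A minimal sketch of a logit-matching distillation objective is below; the blending weight `alpha` and the exact loss formulation are assumptions for illustration, not necessarily the paper's configuration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label, alpha=0.5):
    """Blend hard-label cross-entropy with mean-squared error between
    teacher and student logits (logit-matching distillation).
    `alpha` is a hypothetical interpolation weight."""
    probs = softmax(student_logits)
    ce = -math.log(probs[true_label])  # cross-entropy on the gold label
    mse = sum((s - t) ** 2 for s, t in zip(student_logits, teacher_logits)) \
        / len(student_logits)          # match the teacher's logits
    return alpha * ce + (1 - alpha) * mse
```

When the student reproduces the teacher's logits exactly, the MSE term vanishes and only the hard-label cross-entropy remains, so the loss reduces to ordinary supervised training.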


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Document Classification | AAPD | KD-LSTMreg | F1 | 72.9 | #1 |
| Text Classification | arXiv-10 | DocBERT | Accuracy | 0.764 | #3 |
| Document Classification | Reuters-21578 | KD-LSTMreg | F1 | 88.9 | #3 |
| Document Classification | Yelp-14 | KD-LSTMreg | Accuracy | 69.4 | #1 |

Results from Other Papers

| Task | Dataset | Model | Metric | Value | Rank | Source Paper |
|---|---|---|---|---|---|---|
| Clinical Note Phenotyping | I2B2 2006: Smoking | DocBERT | Micro F1 | 80.2 | #2 | Adhikari et al. (2019) |
| Clinical Note Phenotyping | I2B2 2008: Obesity | DocBERT | Micro F1 | 67.6 | #3 | Adhikari et al. (2019) |