Neural Speed Reading Audited

Several approaches to neural speed reading have been presented at major NLP and machine learning conferences in 2017–20; i.e., "human-inspired" recurrent network architectures that learn to "read" text faster by skipping irrelevant words, typically optimizing the joint objective of minimizing classification error rate and FLOPs used at inference time. This paper reflects on the meaningfulness of the speed reading task, showing that (a) better and faster approaches to, say, document classification already exist, which also learn to ignore part of the input (I give an example with 7% error reduction and a 136x speed-up over the state of the art in neural speed reading); and that (b) any claims that neural speed reading is "human-inspired" are ill-founded.
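
To make the setup concrete, below is a minimal sketch of the kind of skip-reading recurrent classifier and joint objective the abstract describes: a gate decides per token whether to update the recurrent state, and the loss combines classification error with a FLOPs proxy (the fraction of tokens actually processed). The soft-gating scheme, hyperparameters, and names (`SkipReader`, `lambda_flops`) are illustrative assumptions, not the architecture of any specific speed-reading paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipReader(nn.Module):
    """Generic skip-reading RNN classifier (illustrative sketch only)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.cell = nn.GRUCell(embed_dim, hidden_dim)
        self.skip_gate = nn.Linear(hidden_dim + embed_dim, 1)  # read vs. skip decision
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len) token ids
        batch, seq_len = tokens.shape
        h = tokens.new_zeros(batch, self.cell.hidden_size, dtype=torch.float)
        read_probs = []
        for t in range(seq_len):
            x = self.embed(tokens[:, t])
            # Probability of "reading" this token given the current state.
            p_read = torch.sigmoid(self.skip_gate(torch.cat([h, x], dim=-1)))
            h_new = self.cell(x, h)
            # Soft skip: interpolate between updating the state and keeping it.
            h = p_read * h_new + (1 - p_read) * h
            read_probs.append(p_read)
        # Fraction of tokens read, used as a differentiable FLOPs proxy in [0, 1].
        read_frac = torch.cat(read_probs, dim=-1).mean()
        return self.classifier(h), read_frac

def joint_loss(logits, labels, read_frac, lambda_flops=0.1):
    # Joint objective: classification error plus a penalty on computation used.
    return F.cross_entropy(logits, labels) + lambda_flops * read_frac
```

A hard (sampled) skip decision trained with policy gradients or a straight-through estimator is the more common choice in the published systems; the soft gate above is used here only to keep the sketch short and end-to-end differentiable.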
