Efficient softmax approximation for GPUs

ICML 2017 Edouard Grave • Armand Joulin • Moustapha Cissé • David Grangier • Hervé Jégou

We propose an approximate strategy to efficiently train neural network-based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational time by exploiting the specificities of modern architectures and matrix-matrix and matrix-vector operations, making it particularly suited for graphics processing units.
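The clustered softmax the abstract describes is available in PyTorch as `torch.nn.AdaptiveLogSoftmaxWithLoss`, whose documentation cites this paper. The minimal sketch below shows how frequency-ordered cutoffs split a large vocabulary into a head and progressively smaller tail clusters; the vocabulary size, hidden dimension, and cutoff values are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

vocab_size = 100_000  # hypothetical, very large vocabulary
hidden_dim = 512      # hypothetical model dimension

# Words are assumed sorted by decreasing frequency. The head covers the
# most frequent words plus one entry per tail cluster; rarer words fall
# into tail clusters whose projections shrink by div_value, which is how
# the expected computation time per token is reduced.
adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_dim,
    n_classes=vocab_size,
    cutoffs=[2_000, 10_000, 50_000],  # hypothetical frequency cutoffs
    div_value=4.0,                    # each tail projection is 4x smaller
)

hidden = torch.randn(32, hidden_dim)           # e.g. RNN hidden states
targets = torch.randint(0, vocab_size, (32,))  # next-word indices

out = adaptive_softmax(hidden, targets)  # out.output: per-token target log-probs
out.loss.backward()                      # out.loss: mean negative log-likelihood

# The full distribution is still available when needed, e.g. for perplexity:
log_probs = adaptive_softmax.log_prob(hidden)  # shape (32, vocab_size)
```

During training only the head and the one tail cluster containing each target are evaluated, so the cost is dominated by the frequent words in the head rather than by the full vocabulary.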
