Syntactically Supervised Transformers for Faster Neural Machine Translation

ACL 2019 · Nader Akoury, Kalpesh Krishna, Mohit Iyyer

Standard decoders for neural machine translation autoregressively generate a single target token per time step, which slows inference, especially for long outputs. While architectural advances such as the Transformer fully parallelize decoder computation at training time, inference still proceeds sequentially...
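The sequential bottleneck the abstract describes is easy to see in code. Below is a minimal sketch of greedy autoregressive decoding in PyTorch; the `model(src, tgt)` interface, the `bos_id`/`eos_id` arguments, and the function name are illustrative assumptions, not the paper's actual code. Each output token requires a full decoder forward pass, so inference latency grows linearly with target length.

```python
import torch

def greedy_decode(model, src, bos_id, eos_id, max_len=64):
    """Greedy autoregressive decoding: one target token per step.

    Assumes a hypothetical `model(src, tgt)` that returns next-token
    logits of shape (batch, tgt_len, vocab).
    """
    # Start every sequence with the beginning-of-sentence token.
    tgt = torch.full((src.size(0), 1), bos_id,
                     dtype=torch.long, device=src.device)
    for _ in range(max_len):
        logits = model(src, tgt)                      # full decoder pass per step
        next_tok = logits[:, -1].argmax(-1, keepdim=True)
        tgt = torch.cat([tgt, next_tok], dim=1)       # append the new token
        if (next_tok == eos_id).all():                # stop once every sequence ends
            break
    return tgt
```

The loop cannot be parallelized across time steps because each token depends on all previously generated ones; this is the dependency the paper's syntactic supervision is designed to relax.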
