Context-Free Transductions with Neural Stacks

WS 2018 · Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, Simon Mendelsohn

This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation...
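To make the transduction tasks concrete, here is a minimal sketch of how targets for two of the tasks named above could be generated; the function names and binary-sequence encoding are illustrative assumptions, not the paper's implementation.

```python
def cumulative_xor(bits):
    """Cumulative XOR task: the output at step t is the XOR of all
    input bits up to and including position t (illustrative target)."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def string_reversal(seq):
    """String reversal task: the target is the input read backwards."""
    return list(reversed(seq))

example = [1, 0, 1, 1, 0]
print(cumulative_xor(example))   # [1, 1, 0, 1, 1]
print(string_reversal(example))  # [0, 1, 1, 0, 1]
```

String reversal is a natural probe for a stack, since a pushdown transducer can solve it by pushing the input and emitting symbols as it pops, whereas cumulative XOR needs only a single bit of state.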
