Learning to Execute
13 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Universal Transformers
Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times.
Learning to Execute
Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units are widely used because they are expressive and easy to train.
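The Learning to Execute task trains sequence models on synthetic (program text, printed output) pairs. Below is a minimal sketch of how such pairs can be generated; the program templates, value ranges, and difficulty parameters here are illustrative assumptions, not the paper's actual grammar.

```python
import random

def make_program(rng: random.Random) -> tuple[str, str]:
    """Generate one toy (program text, printed output) training pair.

    Mimics the flavour of learning-to-execute data (arithmetic,
    assignment, a conditional branch); the real task uses a richer
    grammar with loops and controllable length/nesting.
    """
    a, b, c = (rng.randint(1, 99) for _ in range(3))
    src = rng.choice([
        f"a={a};b={b};print(a+b)",
        f"a={a};b={b};print(a*b)",
        f"a={a};print(a+{b} if a>{c} else a-{b})",
    ])
    # Execute the snippet with a captured print to get the target string
    # the model must learn to emit character by character.
    out: list[str] = []
    exec(src, {"print": lambda x: out.append(str(x))})
    return src, out[0]

rng = random.Random(0)
pairs = [make_program(rng) for _ in range(3)]
```

Each pair is consumed as a character-level sequence-to-sequence example: the model reads `src` and must produce the target output string.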
Neural Execution Engines: Learning to Execute Subroutines
A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms.
Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks
More practically, we evaluate these models on the task of learning to execute partial programs, as might arise when the model is used as a heuristic function in program synthesis.
ProTo: Program-Guided Transformer for Program-Guided Tasks
Furthermore, we propose the Program-guided Transformer (ProTo), which integrates both semantic and structural guidance of a program by leveraging cross-attention and masked self-attention to pass messages between the specification and routines in the program.
Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics
Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand.
Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions
This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible?
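Supervision for this "static" setting can be derived dynamically: run each program once, record which runtime error class (if any) it raises, and train a static model to predict that label from source alone. The sketch below shows the labeling idea only; the paper's actual setup targets competition-style programs with external resource descriptions, which this toy helper does not model.

```python
def runtime_error_label(src: str) -> str:
    """Execute a snippet and return the runtime error class it raises,
    or 'no_error'. Labels built this way can supervise a static model
    that must predict the error *without* running the program."""
    try:
        exec(src, {})
    except Exception as e:
        return type(e).__name__
    return "no_error"

examples = {
    "x = 1 / 0": "ZeroDivisionError",
    "d = {}; d['missing']": "KeyError",
    "xs = [1, 2]; xs[5]": "IndexError",
    "y = 1 + 1": "no_error",
}
```

In practice the execution step happens offline during dataset construction; at evaluation time the model sees only the source text.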
Learning to Execute Actions or Ask Clarification Questions
In this paper, we extend the Minecraft Corpus Dataset by classifying all builder utterances into eight types, including clarification questions, and propose a new builder agent model capable of determining when to ask a clarification question and when to execute an instruction.
The CLRS Algorithmic Reasoning Benchmark
Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms.
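A distinctive feature of CLRS-style benchmarks is that models are supervised not only on a classical algorithm's input/output pairs but also on its intermediate states ("hints"). As an assumed illustration, not the benchmark's own encoding, here is a trace generator for insertion sort whose per-step snapshots play the role of hints:

```python
def insertion_sort_trace(xs):
    """Insertion-sort a copy of xs and return every intermediate
    array state. CLRS-style benchmarks expose such per-step
    trajectories alongside inputs/outputs (the real benchmark
    encodes them as graph-structured features, not raw lists)."""
    a = list(xs)
    trace = [list(a)]          # initial state
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]    # shift larger elements right
            j -= 1
        a[j + 1] = key
        trace.append(list(a))  # state after inserting a[i]
    return trace
```

Supervising on the full trajectory forces a network to track the algorithm's state step by step rather than shortcut to the sorted answer.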
Unveiling Transformers with LEGO: a synthetic reasoning task
We study how the trained models eventually succeed at the task; in particular, we are able to interpret some of the attention heads and trace how information flows through the network.