Learning to Execute
13 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Learning to Execute.
Latest papers
EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning
On average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002 by 5% in numerical tasks and 13% in reading comprehension tasks.
Latent Space Representations of Neural Algorithmic Reasoners
Neural Algorithmic Reasoning (NAR) is a research area focused on designing neural architectures that can reliably capture classical computation, usually by learning to execute algorithms.
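Learning to execute an algorithm is typically supervised not just by the final answer but by the algorithm's intermediate states. As a minimal sketch (the function name and trace format here are illustrative, not taken from any listed paper), the full state after every step of insertion sort can serve as such a supervision trajectory:

```python
def insertion_sort_trace(xs):
    # Record the full intermediate state after each swap; trajectories
    # like this supervise a model step-by-step rather than only on the
    # final sorted output.
    xs = list(xs)
    trace = [list(xs)]
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
            trace.append(list(xs))
    return trace
```

A reasoner trained on such traces is asked to predict each successive state, which tends to generalise better out of distribution than predicting the output in one shot.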
A Generalist Neural Algorithmic Learner
The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution.
Unveiling Transformers with LEGO: a synthetic reasoning task
We study how the trained models eventually succeed at the task and, in particular, interpret several of the attention heads as well as how information flows through the network.
The CLRS Algorithmic Reasoning Benchmark
Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms.
Learning to Execute Actions or Ask Clarification Questions
In this paper, we extend the Minecraft Corpus Dataset by annotating all builder utterances into eight types, including clarification questions, and we propose a new builder agent model that determines when to ask a clarification question and when to execute an instruction.
Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions
This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible?
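Ground truth for such a task can be obtained by running each program once at dataset-creation time, while the model itself only ever sees the source text. A minimal sketch of that labelling step (the helper `runtime_error_label` is hypothetical, not the paper's pipeline):

```python
def runtime_error_label(src: str) -> str:
    # Execute the program once to record which exception, if any, it
    # raises; the resulting label supervises a purely static predictor
    # that never runs the code.
    try:
        exec(compile(src, "<prog>", "exec"), {})
    except Exception as exc:
        return type(exc).__name__
    return "NoError"
```

For example, `runtime_error_label("x = 1 / 0")` yields `"ZeroDivisionError"`, giving a classification target for the static model.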
Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics
Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand.
ProTo: Program-Guided Transformer for Program-Guided Tasks
Furthermore, we propose the Program-guided Transformer (ProTo), which integrates both semantic and structural guidance of a program by leveraging cross-attention and masked self-attention to pass messages between the specification and routines in the program.
Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks
More practically, we evaluate these models on the task of learning to execute partial programs, as might arise if using the model as a heuristic function in program synthesis.
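The learning-to-execute setup behind many of these papers pairs small synthetic programs with their outputs and trains a sequence model to map one to the other. A minimal sketch of that data generation, under the assumption of a toy straight-line grammar (the names `gen_program` and `make_dataset` are illustrative only):

```python
import contextlib
import io
import random

def gen_program(rng):
    # Sample a tiny straight-line program and compute its true output.
    a, b, c = rng.randint(1, 99), rng.randint(1, 99), rng.randint(1, 9)
    src = f"a = {a}\nb = {b}\nprint((a + b) * {c})"
    return src, str((a + b) * c)

def make_dataset(n, seed=0):
    # Each example pairs a source string with its output string; a
    # sequence model is then trained to map the former to the latter.
    rng = random.Random(seed)
    return [gen_program(rng) for _ in range(n)]

for src, out in make_dataset(3):
    # Sanity check: actually executing the program reproduces the label.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src, {})
    assert buf.getvalue().strip() == out
```

Partial-program variants of the task truncate `src` and ask the model to predict properties of the remaining execution, which is what makes it usable as a heuristic inside program synthesis.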