Program Synthesis
137 papers with code • 3 benchmarks • 5 datasets
Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
Program synthesis often involves the use of advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that meet the given constraints. This process can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.
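The search process described above can be illustrated with a minimal enumerative synthesizer: a bottom-up search over a tiny hypothetical arithmetic DSL, returning the first expression consistent with the given input-output examples. This is a simplified sketch of the general technique, not the implementation used by any particular paper or library below.

```python
# Minimal enumerative program synthesis sketch over a hypothetical DSL
# {x, small constants, +, *}. Candidates are built bottom-up by size and
# scanned until one matches all input-output examples.

def synthesize(examples, max_size=4):
    """Return a string for the first expression that fits all examples, else None."""
    # Each candidate is a (source_string, evaluator_function) pair.
    terminals = [("x", lambda x: x)] + [(str(c), lambda x, c=c: c) for c in range(4)]
    ops = [("+", lambda a, b: a + b), ("*", lambda a, b: a * b)]

    layers = [terminals]  # layers[k] holds all expressions using k+1 terminals
    for size in range(1, max_size):
        new_layer = []
        for left_size in range(size):
            right_size = size - 1 - left_size
            for ld, lf in layers[left_size]:
                for rd, rf in layers[right_size]:
                    for od, of in ops:
                        # Bind lf/rf/of via default args to avoid late-binding bugs.
                        fn = lambda x, lf=lf, rf=rf, of=of: of(lf(x), rf(x))
                        new_layer.append((f"({ld} {od} {rd})", fn))
        layers.append(new_layer)

    # Scan smallest candidates first, so the result is a minimal-size program.
    for layer in layers:
        for desc, fn in layer:
            if all(fn(x) == y for x, y in examples):
                return desc
    return None

# Specification as input-output examples: f(1)=3, f(2)=5, f(3)=7.
print(synthesize([(1, 3), (2, 5), (3, 7)]))  # prints an expression consistent with the examples
```

Real systems prune this exponential search with the guidance techniques mentioned above (constraint solving, learned models, genetic operators) rather than enumerating exhaustively.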
Libraries
Use these libraries to find Program Synthesis models and implementations.
Latest papers with no code
Synapse: Learning Preferential Concepts from Visual Demonstrations
This paper addresses the problem of preference learning, which aims to learn user-specific preferences (e.g., "good parking spot", "convenient drop-off location") from visual input.
Guiding Enumerative Program Synthesis with Large Language Models
In this paper, we evaluate the abilities of LLMs to solve formal synthesis benchmarks by carefully crafting a library of prompts for the domain.
Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models
Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).
Enforcing Temporal Constraints on Generative Agent Behavior with Reactive Synthesis
Our approach uses Temporal Stream Logic (TSL) to generate an automaton that enforces a temporal structure on an agent and leaves the details of each action for a moment in time to an LLM.
Origami: (un)folding the abstraction of recursion schemes for program synthesis
Program synthesis with Genetic Programming searches for a correct program that satisfies the input specification, which is usually provided as input-output examples.
LTL learning on GPUs
Linear temporal logic (LTL) is widely used in industrial verification.
WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment
We give a model-based agent that builds a Python program representing its knowledge of the world based on its interactions with the environment.
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay
Our method iterates between 1) program sampling and hindsight relabeling, and 2) learning from prioritized experience replay.
Open-Universe Indoor Scene Generation using LLM Program Synthesis and Uncurated Object Databases
Unlike most prior work on indoor scene generation, our system does not require a large training dataset of existing 3D scenes.
Runtime phylogenetic analysis enables extreme subsampling for test-based problems
We introduce phylogeny-informed subsampling, a new class of subsampling methods that exploit runtime phylogenetic analyses for solving test-based problems.