16 papers with code • 0 benchmarks • 1 dataset
Generating program code for domain-specific tasks
DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.
Solving algebraic word problems requires executing a series of arithmetic operations (a program) to obtain a final answer.
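To make the "operations as a program" view concrete, here is a toy sketch (a hypothetical word problem and solver, not taken from any of the papers above) where the answer is produced by executing a short sequence of arithmetic steps rather than by pattern-matching the text:

```python
# Hypothetical word problem: "A store sells pens in packs of 4 at $3 per pack.
# How much do 24 pens cost?" The answer is the result of executing a small
# arithmetic program derived from the problem statement.

def solve_pen_problem(total_pens: int, pens_per_pack: int, price_per_pack: int) -> int:
    packs = total_pens // pens_per_pack   # step 1: 24 // 4 = 6 packs
    cost = packs * price_per_pack         # step 2: 6 * 3 = 18 dollars
    return cost

print(solve_pen_problem(24, 4, 3))  # -> 18
```

A learned solver would have to map the natural-language statement to such a program; the sketch only shows what the target program looks like.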
Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.
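The contrast between the two approaches can be sketched with a toy example (enumerative search over a hand-written DSL standing in for the synthesis network, and a nearest-example lookup standing in for the induction network; neither reflects any paper's actual method):

```python
from typing import Callable, List, Tuple

Example = Tuple[int, int]  # an (input, output) pair

# (1) Program synthesis: search for an explicit program, here drawn from a
# tiny DSL of unary integer functions, that is consistent with all I/O examples.
DSL: List[Tuple[str, Callable[[int], int]]] = [
    ("x + 1", lambda x: x + 1),
    ("x * 2", lambda x: x * 2),
    ("x * x", lambda x: x * x),
]

def synthesize(examples: List[Example]) -> str:
    for name, fn in DSL:
        if all(fn(x) == y for x, y in examples):
            return name  # an inspectable program is returned
    raise ValueError("no consistent program in the DSL")

# (2) Program induction: no explicit program is produced; a model (here a
# nearest-example lookup in place of a neural network) maps new inputs
# directly to outputs from its internal representation of the task.
def induce(examples: List[Example]) -> Callable[[int], int]:
    def predict(x: int) -> int:
        nearest = min(examples, key=lambda ex: abs(ex[0] - x))
        return nearest[1]  # output directly, with no program to inspect
    return predict

examples = [(1, 2), (3, 6), (5, 10)]
print(synthesize(examples))   # -> x * 2
print(induce(examples)(3))    # -> 6
```

The practical trade-off mirrors the sketch: synthesis yields an interpretable program that generalizes wherever the DSL does, while induction answers queries directly but exposes no program to inspect.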
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates.
Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule.
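The clustering idea can be illustrated with a minimal sketch (a hypothetical rule class and toy demonstrations; the paper's actual rule representation and clustering objective are not reproduced here): score each candidate simple rule by how many demonstration steps it explains exactly, and keep the largest consistent subset.

```python
from typing import Callable, List, Tuple

Demo = Tuple[float, int]  # one (observation, action) step from a demonstration

# Candidate simple decision rules (hypothetical thresholds for illustration).
RULES: List[Tuple[str, Callable[[float], int]]] = [
    ("act if obs > 0.5", lambda obs: int(obs > 0.5)),
    ("act if obs > 0.0", lambda obs: int(obs > 0.0)),
]

def largest_consistent_cluster(demos: List[Demo]) -> Tuple[str, List[Demo]]:
    """Return the simple rule that exactly explains the most demonstration
    steps, together with that cluster of steps."""
    return max(
        ((name, [d for d in demos if rule(d[0]) == d[1]]) for name, rule in RULES),
        key=lambda pair: len(pair[1]),
    )

demos = [(0.9, 1), (0.2, 0), (0.6, 1), (-0.3, 0), (0.1, 0)]
name, cluster = largest_consistent_cluster(demos)
print(name, len(cluster))  # -> act if obs > 0.5 5
```

The subset that a simple rule explains well can then be imitated directly, while the remaining demonstrations are handled separately.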