Program induction

22 papers with code • 0 benchmarks • 1 dataset

Generating program code for domain-specific tasks
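In its simplest form, program induction means searching a space of candidate programs for one consistent with a handful of input-output examples. A minimal sketch (the DSL of three primitives and the enumerative search below are illustrative assumptions, not any listed paper's method):

```python
from itertools import product

# Hypothetical toy DSL: each primitive maps int -> int.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def induce(examples, max_len=3):
    """Return the shortest primitive sequence consistent with all examples."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# (x, y) pairs generated by y = 2x + 1, i.e. double then inc
examples = [(1, 3), (2, 5), (5, 11)]
print(induce(examples))  # -> ('double', 'inc')
```

Enumerating shortest-first is a blunt instrument; the papers below replace it with neural guidance, learned priors, or transferred supervision, but the problem statement is the same.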

Most implemented papers

Knowledge Refactoring for Inductive Program Synthesis

sebdumancic/knorf_aaai21 21 Apr 2020

We introduce the knowledge refactoring problem, where the goal is to restructure a learner's knowledge base to reduce its size and minimise redundancy.
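One way to picture refactoring a knowledge base: repeatedly extract a repeated fragment into a new named rule so the total size shrinks. A minimal sketch, with a hypothetical knowledge base of rules represented as tuples of primitive calls (this is an illustration of the general idea, not the paper's algorithm):

```python
from collections import Counter

# Hypothetical knowledge base: each rule body is a tuple of primitive calls.
kb = [
    ("unpack", "filter", "sort", "emit"),
    ("unpack", "filter", "count"),
    ("load", "unpack", "filter", "sort"),
]

def kb_size(rules):
    """Total number of calls across all rule bodies."""
    return sum(len(r) for r in rules)

def refactor_once(rules):
    """Extract the most frequent adjacent pair of calls into a new rule.

    Returns (rewritten rules, {new_name: definition}), or (rules, None)
    when no pair repeats.
    """
    pairs = Counter((r[i], r[i + 1]) for r in rules for i in range(len(r) - 1))
    if not pairs:
        return rules, None
    pair, freq = pairs.most_common(1)[0]
    if freq < 2:
        return rules, None
    name = "aux_" + "_".join(pair)
    rewritten = []
    for r in rules:
        body, i = [], 0
        while i < len(r):
            if i + 1 < len(r) and (r[i], r[i + 1]) == pair:
                body.append(name)  # fold the repeated pair into one call
                i += 2
            else:
                body.append(r[i])
                i += 1
        rewritten.append(tuple(body))
    return rewritten, {name: pair}

new_kb, definition = refactor_once(kb)
# Count the new rule's definition against the refactored size.
print(kb_size(kb), "->", kb_size(new_kb) + sum(len(b) for b in definition.values()))
# 11 -> 10
```

Here `("unpack", "filter")` appears three times, so naming it once and calling it three times is a net saving; real refactoring systems search over many such extractions jointly.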

Automatic Discovery of Interpretable Planning Strategies

RationalityEnhancement/InterpretableStrategyDiscovery 24 May 2020

Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule.

Strong Generalization and Efficiency in Neural Programs

hardbyte/sorting-gym 7 Jul 2020

We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.

Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning

DevinJake/MRL-CQA EMNLP 2020

Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.

Learning a Deep Generative Model like a Program: the Free Category Prior

esennesh/categorical_bpl 22 Nov 2020

Humans surpass the cognitive abilities of most other animals in our ability to "chunk" concepts into words, and then combine the words to combine the concepts.

Program Transfer for Answering Complex Questions over Knowledge Bases

thu-keg/programtransfer ACL 2022

In this paper, we propose the approach of program transfer, which aims to leverage the valuable program annotations on the rich-resourced KBs as external supervision signals to aid program induction for the low-resourced KBs that lack program annotations.

Think Big, Teach Small: Do Language Models Distil Occam’s Razor?

gonzalojaimovitch/think-big-teach-small NeurIPS 2021

Large language models have recently shown a remarkable ability for few-shot learning, including patterns of algorithmic nature.

ArcaneQA: Dynamic Program Induction and Contextualized Encoding for Knowledge Base Question Answering

dki-lab/arcaneqa COLING 2022

Question answering on knowledge bases (KBQA) poses a unique challenge for semantic parsing research due to two intertwined issues: a large search space and ambiguity in schema linking.

Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

sreejank/language_and_programs 23 May 2022

Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.