Search Results for author: Catherine Wong

Found 9 papers, 5 papers with code

Identifying concept libraries from language about object structure

1 code implementation • 11 May 2022 • Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan

Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.

Machine Translation +2

Leveraging Language to Learn Program Abstractions and Search Heuristics

no code implementations • 18 Jun 2021 • Catherine Wong, Kevin Ellis, Joshua B. Tenenbaum, Jacob Andreas

Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems.

Program Synthesis
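
The abstract above describes inductive program synthesis in general terms. As a hedged illustration (not the paper's language-guided abstractions or search heuristics), here is a minimal enumerative synthesizer over a toy arithmetic DSL; the primitives and examples are invented for this sketch:

```python
# Minimal sketch of inductive program synthesis: enumerate programs in a
# toy DSL until one reproduces all input-output examples. Illustrative
# only -- not the method from the paper.
from itertools import product

# Toy DSL: each primitive maps an integer to an integer.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
}

def run(program, x):
    """Apply a sequence of primitive names left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_len=4):
    """Return the shortest primitive sequence consistent with all examples."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Recover f(x) = 2x + 1 from two examples:
print(synthesize([(1, 3), (4, 9)]))  # ('double', 'inc')
```

Brute-force enumeration like this scales exponentially with program length, which is exactly why the paper studies learned search heuristics and abstractions.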

Communicating Natural Programs to Humans and Machines

2 code implementations • 15 Jun 2021 • Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum

We present LARC, the Language-complete ARC: a collection of natural language descriptions by a group of human participants who instruct each other on how to solve ARC tasks using language alone; it contains successful instructions for 88% of the ARC tasks.

Program Synthesis

DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning

3 code implementations • 15 Jun 2020 • Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum

DreamCoder builds expertise by creating programming languages for expressing domain concepts, together with neural networks that guide the search for programs within these languages.

Drawing Pictures • Program induction +1
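
A minimal schematic of the wake-sleep loop the abstract describes, with every component stubbed out; the names `guided_search`, `compress`, and `sample_fantasies` are placeholders, not DreamCoder's actual API:

```python
# Schematic wake-sleep program-learning loop in the spirit of DreamCoder.
# All components below are stubs standing in for the real system.

def wake(tasks, library, recognition_model):
    """Search for programs solving each task, guided by the recognition model."""
    solutions = {}
    for task in tasks:
        program = recognition_model.guided_search(task, library)  # stub
        if program is not None:
            solutions[task] = program
    return solutions

def sleep_abstraction(library, solutions):
    """Compress solved programs into new reusable library primitives."""
    return library.compress(solutions.values())  # stub

def sleep_dreaming(recognition_model, library, solutions):
    """Train the recognition network on replayed solutions and sampled fantasies."""
    recognition_model.train(list(solutions.items()) + library.sample_fantasies())  # stub

def dreamcoder_loop(tasks, library, recognition_model, iterations=10):
    for _ in range(iterations):
        solutions = wake(tasks, library, recognition_model)
        library = sleep_abstraction(library, solutions)
        sleep_dreaming(recognition_model, library, solutions)
    return library, recognition_model
```

The key design point is the alternation: the growing library makes programs shorter, and the recognition network amortizes search over that library.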

Amanuensis: The Programmer's Apprentice

no code implementations • 29 Jun 2018 • Thomas Dean, Maurice Chiang, Marcus Gomez, Nate Gruver, Yousef Hindy, Michelle Lam, Peter Lu, Sophia Sanchez, Rohun Saxena, Michael Smith, Lucy Wang, Catherine Wong

This document provides an overview of the material covered in a course taught at Stanford in the spring quarter of 2018.

Transfer Learning with Neural AutoML

no code implementations • NeurIPS 2018 • Catherine Wong, Neil Houlsby, Yifeng Lu, Andrea Gesmundo

We extend RL-based architecture search methods to support parallel training on multiple tasks and then transfer the search strategy to new tasks.

General Classification • Image Classification +2
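
As a rough sketch of the RL-based search the abstract describes (simplified to one source task rather than parallel multitask training, and with a synthetic reward standing in for validation accuracy), a softmax policy trained by REINFORCE can be warm-started on a new task:

```python
# Sketch of RL-based architecture search with transfer: a softmax policy
# over discrete architecture choices is trained with REINFORCE on a source
# task, then its logits seed the search on a related new task.
# The reward functions are synthetic stand-ins for validation accuracy.
import numpy as np

rng = np.random.default_rng(0)
CHOICES = 4  # e.g. four candidate layer configurations

def sample(logits):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(CHOICES, p=probs), probs

def reinforce(logits, reward_fn, steps=500, lr=0.1):
    baseline = 0.0
    for _ in range(steps):
        action, probs = sample(logits)
        reward = reward_fn(action)
        baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
        grad = -probs
        grad[action] += 1.0  # d log pi(a) / d logits for a softmax policy
        logits += lr * (reward - baseline) * grad
    return logits

source_reward = lambda a: 1.0 if a == 2 else 0.1
new_task_reward = lambda a: 1.0 if a == 2 else 0.2  # related optimum

logits = reinforce(np.zeros(CHOICES), source_reward)
logits = reinforce(logits, new_task_reward, steps=100)  # fast adaptation
print("preferred architecture:", int(np.argmax(logits)))
```

Transferring the controller rather than the child models is the point: the search strategy, not any single architecture, is what carries over.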

DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation

1 code implementation • 14 Dec 2017 • Catherine Wong

In this work, we introduce DANCin SEQ2SEQ, a GAN-inspired algorithm for adversarial text example generation targeting largely black-box text classifiers.

Adversarial Text
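
DANCin SEQ2SEQ itself is GAN-inspired; as a far simpler illustration of the black-box threat model it targets, here is a greedy synonym-substitution attack that queries only the classifier's output probability. The toy classifier and synonym table are invented for this sketch:

```python
# Toy black-box adversarial text attack: greedily swap words for synonyms,
# keeping any swap that lowers the target classifier's confidence. This is
# a simple substitution baseline, NOT the GAN-inspired DANCin SEQ2SEQ method.
import math

POSITIVE = {"great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "hate"}

def classify(tokens):
    """Stand-in black-box classifier: P(positive) from keyword counts."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return 1 / (1 + math.exp(-score))

# Hypothetical synonym table the attacker draws substitutions from.
SYNONYMS = {"great": ["fine"], "love": ["like"], "excellent": ["decent"]}

def attack(tokens):
    """Keep any meaning-preserving swap that reduces classifier confidence."""
    tokens = list(tokens)
    for i, tok in enumerate(list(tokens)):
        for candidate in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [candidate] + tokens[i + 1:]
            if classify(trial) < classify(tokens):  # query-only access
                tokens = trial
                break
    return tokens

sentence = "i love this great phone".split()
print(classify(sentence), classify(attack(sentence)))  # confidence drops
```

The attacker here never sees gradients or weights, only output scores, which is what "largely black-box" means in the abstract.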

Transfer Learning to Learn with Multitask Neural Model Search

no code implementations • ICLR 2018 • Catherine Wong, Andrea Gesmundo

We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task.

Hyperparameter Optimization • Neural Architecture Search +1
