1 code implementation • 11 May 2022 • Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan
Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.
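As a rough illustration of that kind of structured representation, here is a minimal sketch of an object parsed into parts, attributes, and relations rather than a single category label. All class and field names are hypothetical, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str                                        # e.g. "seat", "leg"
    attributes: dict = field(default_factory=dict)   # e.g. {"material": "wood"}

@dataclass
class ObjectParse:
    category: str     # overall label, e.g. "chair"
    parts: list       # list[Part]
    relations: list   # e.g. [("supports", "leg", "seat")]

chair = ObjectParse(
    category="chair",
    parts=[Part("seat", {"shape": "square"}), Part("leg", {"count": 4})],
    relations=[("supports", "leg", "seat")],
)
```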
1 code implementation • 11 May 2022 • Katherine M. Collins, Catherine Wong, Jiahai Feng, Megan Wei, Joshua B. Tenenbaum
We first contribute a new challenge benchmark for comparing humans and distributional large language models (LLMs).
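One simple way such a human-vs-LLM comparison can be scored, sketched here with hypothetical data and helper names rather than the paper's actual protocol, is to collect responses to the same benchmark items from both and measure agreement:

```python
from collections import Counter

# Toy benchmark responses (illustrative only)
human_answers = {"item_1": ["red", "red", "blue"], "item_2": ["yes", "yes"]}
model_answers = {"item_1": "red", "item_2": "no"}

def majority(votes):
    """Most frequent human response for an item."""
    return Counter(votes).most_common(1)[0][0]

agreement = [
    model_answers[item] == majority(votes)
    for item, votes in human_answers.items()
]
print(f"human-model agreement: {sum(agreement) / len(agreement):.2f}")
```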
no code implementations • 18 Jun 2021 • Catherine Wong, Kevin Ellis, Joshua B. Tenenbaum, Jacob Andreas
Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems.
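A minimal sketch of the paradigm, using a toy DSL and examples that are purely illustrative (not the paper's): enumerate candidate programs and return the first one consistent with every input-output example.

```python
import itertools

# A tiny hypothetical DSL: each primitive maps an int to an int.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Execute a program (a sequence of primitive names) on input x."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def synthesize(examples, max_depth=3):
    """Enumerate programs by length; keep the first matching all examples."""
    for depth in range(1, max_depth + 1):
        for program in itertools.product(PRIMITIVES, repeat=depth):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Infer a program from examples of desired behavior.
print(synthesize([(1, 4), (2, 6)]))  # -> ('inc', 'double')
```

Running this prints `('inc', 'double')`, a program consistent with both examples; real systems replace the brute-force enumeration with learned search guidance.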
2 code implementations • 15 Jun 2021 • Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum
We present LARC, the Language-complete ARC: a collection of natural language descriptions in which human participants instruct one another on how to solve ARC tasks using language alone. The collection contains successful instructions for 88% of the ARC tasks.
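A hedged sketch of what one such record might look like; the field names and grids below are hypothetical, not the dataset's actual schema:

```python
# One "language-complete" ARC record: input/output grids paired with the
# natural language instruction a participant used to communicate the solution.
larc_task = {
    "task_id": "example_task",
    "train_pairs": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    ],
    "instruction": "Swap the colors: every black cell becomes blue and vice versa.",
    "instruction_successful": True,  # verified by another participant
}
```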
3 code implementations • 15 Jun 2020 • Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum
The system builds expertise by creating programming languages for expressing domain concepts, together with neural networks that guide the search for programs within these languages.
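One ingredient of that loop, sketched in toy form: promote program fragments that recur across solved tasks into named library routines. The real system uses a compression-based abstraction step and a neural recognition model; the fragment mining below is only a stand-in.

```python
from collections import Counter

# Programs already found for earlier tasks (illustrative only).
solved_programs = [
    ["inc", "double", "inc"],
    ["inc", "double", "square"],
    ["square", "inc", "double"],
]

def common_fragments(programs, size=2):
    """Count contiguous fragments of a given size across all programs."""
    counts = Counter()
    for prog in programs:
        for i in range(len(prog) - size + 1):
            counts[tuple(prog[i:i + size])] += 1
    return counts

# Promote the most common fragment to a named library routine,
# growing the language available to future searches.
fragment, _ = common_fragments(solved_programs).most_common(1)[0]
library = {"f0": list(fragment)}
print(library)  # -> {'f0': ['inc', 'double']}
```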
no code implementations • 29 Jun 2018 • Thomas Dean, Maurice Chiang, Marcus Gomez, Nate Gruver, Yousef Hindy, Michelle Lam, Peter Lu, Sophia Sanchez, Rohun Saxena, Michael Smith, Lucy Wang, Catherine Wong
This document provides an overview of the material covered in a course taught at Stanford in the spring quarter of 2018.
no code implementations • NeurIPS 2018 • Catherine Wong, Neil Houlsby, Yifeng Lu, Andrea Gesmundo
We extend RL-based architecture search methods to support parallel training on multiple tasks and then transfer the search strategy to new tasks.
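A hedged sketch of the single-task core of RL-based architecture search, not the paper's implementation (which parallelizes this across tasks and transfers the learned controller): a categorical policy over each design choice, updated with a REINFORCE step on a toy reward.

```python
import math
import random

# Hypothetical search space; the reward below is a stand-in for
# validation accuracy of the sampled model.
SEARCH_SPACE = {"layers": [2, 4, 8], "width": [64, 128]}
logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample():
    """Sample an index for each design choice from the current policy."""
    return {
        k: random.choices(range(len(v)), weights=softmax(logits[k]))[0]
        for k, v in SEARCH_SPACE.items()
    }

def reward(choice):
    # Toy reward: pretend deeper and wider is better.
    return SEARCH_SPACE["layers"][choice["layers"]] * \
           SEARCH_SPACE["width"][choice["width"]] / 1024

for step in range(200):
    choice = sample()
    r = reward(choice)
    for k, idx in choice.items():
        probs = softmax(logits[k])
        for i in range(len(logits[k])):
            grad = (1.0 if i == idx else 0.0) - probs[i]
            logits[k][i] += 0.1 * r * grad  # REINFORCE update

# Report the architecture the controller has converged toward.
print({k: SEARCH_SPACE[k][max(range(len(v)), key=v.__getitem__)]
       for k, v in logits.items()})
```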
1 code implementation • 14 Dec 2017 • Catherine Wong
In this work, we introduce DANCin SEQ2SEQ, a GAN-inspired algorithm for adversarial text example generation targeting largely black-box text classifiers.
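For intuition about the black-box setting, here is a far simpler attack loop than DANCin SEQ2SEQ itself, which trains a GAN-style generator; the toy classifier, synonym table, and all names below are illustrative only:

```python
import random

# Tiny synonym table for proposing perturbations (illustrative).
SYNONYMS = {"good": ["fine", "decent"], "movie": ["film", "picture"]}

def toy_classifier(text):
    """Stand-in black-box target: we only observe its predicted label."""
    return "positive" if "good" in text.split() else "negative"

def perturb(text):
    """Swap one word for a synonym, if one exists."""
    words = text.split()
    i = random.randrange(len(words))
    words[i] = random.choice(SYNONYMS.get(words[i], [words[i]]))
    return " ".join(words)

original = "a good movie"
target_label = toy_classifier(original)
for _ in range(100):
    candidate = perturb(original)
    if toy_classifier(candidate) != target_label:
        print("adversarial example:", candidate)
        break
```

The attacker never inspects the classifier's weights or gradients, only its predictions, which is what makes the setting black-box.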
no code implementations • ICLR 2018 • Catherine Wong, Andrea Gesmundo
We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task.