Search Results for author: Oscar Chang

Found 9 papers, 1 paper with code

Assessing SATNet's Ability to Solve the Symbol Grounding Problem

no code implementations · NeurIPS 2020 · Oscar Chang, Lampros Flokas, Hod Lipson, Michael Spranger

We propose an MNIST based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general.

Principled Weight Initialization for Hypernetworks

no code implementations · ICLR 2020 · Oscar Chang, Lampros Flokas, Hod Lipson

Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner.

Multi-Task Learning
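The snippet above only states what a hypernetwork is, so here is a minimal numpy sketch of the idea: a small "hyper" map turns a layer embedding into the weights of the main network, and both would be trained together by backpropagation. All sizes and the single-linear-map architecture are illustrative assumptions, not the paper's setup, and the initialization scale shown is generic rather than the principled scheme the paper derives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the hypernetwork maps a small layer
# embedding z to the full weight matrix of one main-network layer.
z_dim, in_dim, out_dim = 4, 8, 3

# Hypernetwork parameters: a single linear map (illustrative only).
# The generic 1/sqrt(z_dim) scale stands in for the paper's
# principled initialization, which is not reproduced here.
H = rng.normal(0.0, 1.0 / np.sqrt(z_dim), size=(z_dim, in_dim * out_dim))
z = rng.normal(size=z_dim)              # embedding for this layer

# Generated weights for the main network.
W = (z @ H).reshape(in_dim, out_dim)

# Forward pass of the main network using the generated weights;
# gradients would flow through W back into H and z end to end.
x = rng.normal(size=in_dim)
y = x @ W
print(y.shape)  # (3,)
```

Because W is a differentiable function of H and z, a loss on y trains the hypernetwork rather than the main network's weights directly.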

Ensemble Model Patching: A Parameter-Efficient Variational Bayesian Neural Network

no code implementations · 23 May 2019 · Oscar Chang, Yuling Yao, David Williams-King, Hod Lipson

Two main obstacles preventing the widespread adoption of variational Bayesian neural networks are the high parameter overhead that makes them infeasible on large networks, and the difficulty of implementation, which can be thought of as "programming overhead."

Seven Myths in Machine Learning Research

no code implementations · 18 Feb 2019 · Oscar Chang, Hod Lipson

We present seven myths commonly believed to be true in machine learning research, circa Feb 2019.

Agent Embeddings: A Latent Representation for Pole-Balancing Networks

no code implementations · 12 Nov 2018 · Oscar Chang, Robert Kwiatkowski, Siyuan Chen, Hod Lipson

Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation.
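The interpolation described above is a one-line formula; this sketch shows it concretely. The embedding size and the random vectors standing in for a "good" and a "bad" agent embedding are assumptions for illustration; the decoder that maps an embedding to policy-network weights is only noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent embeddings for a good and a bad agent
# (random placeholders; the real ones come from a trained encoder).
z_good = rng.normal(size=16)
z_bad = rng.normal(size=16)

def interpolate(z_a, z_b, alpha):
    """Linear interpolation: alpha=0 gives z_a, alpha=1 gives z_b."""
    return (1.0 - alpha) * z_a + alpha * z_b

# The interpolation coefficient tunes the generated agent's performance.
z_mid = interpolate(z_good, z_bad, 0.5)

# A decoder (not shown) would map z_mid to the weights of a
# pole-balancing policy network with intermediate performance.
print(np.allclose(z_mid, (z_good + z_bad) / 2))  # True
```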

PepCVAE: Semi-Supervised Targeted Design of Antimicrobial Peptide Sequences

no code implementations · 17 Oct 2018 · Payel Das, Kahini Wadhawan, Oscar Chang, Tom Sercu, Cicero dos Santos, Matthew Riemer, Vijil Chenthamarakshan, Inkit Padhi, Aleksandra Mojsilovic

Our model learns a rich latent space of the biological peptide context by taking advantage of abundant, unlabeled peptide sequences.

Neural Network Quine

1 code implementation · 15 Mar 2018 · Oscar Chang, Hod Lipson

We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters.

General Classification · Image Classification
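The regeneration idea mentioned above can be sketched as a toy fixed-point iteration. This is not the paper's architecture: here the "network" is just a tanh map whose parameters theta are predicted from fixed coordinate embeddings, and regeneration repeatedly overwrites theta with the network's own predictions, with no gradient-based optimization involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy quine (illustrative assumptions throughout): the network's
# parameters form a vector theta; given a fixed coordinate embedding
# C[k] for weight k, the network outputs a prediction of theta[k].
n = 8
C = rng.normal(0.0, 0.1 / np.sqrt(n), size=(n, n))  # fixed coordinate embeddings
theta = rng.normal(size=n)

def predict_own_params(theta):
    """The network's prediction of its own parameter vector."""
    return np.tanh(C @ theta)

def regenerate(theta, steps=50):
    """Regeneration: inject the network's self-predictions as its
    new parameters, repeatedly, instead of running an optimizer."""
    for _ in range(steps):
        theta = predict_own_params(theta)
    return theta

before = np.linalg.norm(predict_own_params(theta) - theta)
theta = regenerate(theta)
after = np.linalg.norm(predict_own_params(theta) - theta)
print(after < before)  # self-replication error shrinks toward a fixed point
```

With the small scale on C the map is a contraction, so regeneration converges to a self-replicating fixed point; in this toy that fixed point is the trivial all-zero quine.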

Balanced and Deterministic Weight-sharing Helps Network Performance

no code implementations · ICLR 2018 · Oscar Chang, Hod Lipson

We also present two novel hash functions, the Dirichlet hash and the Neighborhood hash, and use them to demonstrate experimentally that balanced and deterministic weight-sharing helps with the performance of a neural network.

Neural Network Compression
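Hash-based weight sharing, the setting of the paper above, can be sketched in a few lines: a virtual weight matrix indexes into a small vector of K shared parameters. The Dirichlet and Neighborhood hashes are not reproduced here; this sketch uses a trivially balanced, deterministic modulo assignment just to show what "balanced and deterministic" means, with all sizes chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# K real parameters back a rows x cols virtual weight matrix.
rows, cols, K = 6, 4, 8
shared = rng.normal(size=K)

# Balanced, deterministic assignment: position p maps to bucket p mod K,
# so every shared value backs exactly the same number of positions.
idx = (np.arange(rows * cols) % K).reshape(rows, cols)
W = shared[idx]                     # virtual (rows x cols) weight matrix

# Balance check: each bucket is used rows*cols/K = 3 times.
counts = np.bincount(idx.ravel(), minlength=K)
print(counts)  # [3 3 3 3 3 3 3 3]
```

A random hash would make these counts uneven and run-dependent; the paper's experimental claim is that keeping the assignment balanced and deterministic improves network performance.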

Gradient Normalization & Depth Based Decay For Deep Learning

no code implementations · 10 Dec 2017 · Robert Kwiatkowski, Oscar Chang

In this paper, we introduce a novel method of gradient normalization and decay with respect to network depth.

General Classification · Image Classification
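The one-line abstract above can be made concrete with a sketch: normalize each layer's gradient, then scale it by a factor that decays with depth. The unit-norm normalization and the geometric decay_rate schedule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def normalize_grads(grads, decay_rate=0.9):
    """Per-layer gradient normalization with depth-based decay.
    grads is a list of gradient arrays ordered by layer depth."""
    out = []
    for depth, g in enumerate(grads):
        norm = np.linalg.norm(g)
        scale = decay_rate ** depth      # deeper layers get smaller updates
        out.append(scale * g / norm if norm > 0 else g)
    return out

# Three layers of fake gradients; after normalization their norms
# follow the decay schedule regardless of their original magnitudes.
grads = [rng.normal(size=(4, 4)) for _ in range(3)]
normed = normalize_grads(grads)
print([round(float(np.linalg.norm(g)), 3) for g in normed])  # [1.0, 0.9, 0.81]
```

Decoupling each update's magnitude from the raw gradient norm in this way is one common motivation for per-layer normalization schemes.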
