Search Results for author: David Rolnick

Found 20 papers, 5 papers with code

TIML: Task-Informed Meta-Learning for Agriculture

1 code implementation 4 Feb 2022 Gabriel Tseng, Hannah Kerner, David Rolnick

When developing algorithms for data-sparse regions, a natural approach is to use transfer learning from data-rich regions.

Meta-Learning · Transfer Learning

Techniques for Symbol Grounding with SATNet

1 code implementation NeurIPS 2021 Sever Topan, David Rolnick, Xujie Si

Many experts argue that the future of artificial intelligence is limited by the field's ability to integrate symbolic logical reasoning into deep learning architectures.

Visual Reasoning

DC3: A learning method for optimization with hard constraints

1 code implementation ICLR 2021 Priya L. Donti, David Rolnick, J. Zico Kolter

Large optimization problems with hard constraints arise in many settings, yet classical solvers are often prohibitively slow, motivating the use of deep networks as cheap "approximate solvers."
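The baseline that motivates work like this can be sketched with a toy example (not the DC3 algorithm itself, which uses constraint completion and correction): enforcing a hard equality constraint with a soft quadratic penalty during gradient descent. All problem data and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy problem: minimize ||x - c||^2 subject to sum(x) = 1, handled with a
# naive quadratic penalty -- the approach that hard-constraint methods aim
# to improve on, since the constraint is only approximately satisfied.
c = np.array([0.8, 0.5, 0.2])
x = np.zeros(3)
penalty = 100.0   # assumed penalty weight
lr = 1e-3         # assumed step size (must be small for stability)

for _ in range(5000):
    # gradient of ||x - c||^2 + penalty * (sum(x) - 1)^2
    grad = 2 * (x - c) + 2 * penalty * (x.sum() - 1.0) * np.ones(3)
    x -= lr * grad

print(x, x.sum())  # sum(x) is close to 1 but not exactly 1
```

The residual constraint violation shrinks only as the penalty weight grows, which is exactly the trade-off that motivates treating hard constraints explicitly.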

Deep ReLU Networks Preserve Expected Length

no code implementations ICLR 2022 Boris Hanin, Ryan Jeong, David Rolnick

Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize.

Reverse-Engineering Deep ReLU Networks

no code implementations ICML 2020 David Rolnick, Konrad P. Kording

It has been widely assumed that a neural network cannot be recovered from its outputs, as the network depends on its parameters in a highly nonlinear way.

Identifying Weights and Architectures of Unknown ReLU Networks

no code implementations 25 Sep 2019 David Rolnick, Konrad P. Kording

The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs.
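Why identification is not obviously impossible can be seen in the simplest case. The following toy sketch (my own illustration, not the paper's algorithm) recovers the breakpoint and slope of a single ReLU unit f(x) = max(0, w·x + b), with w > 0 assumed, from black-box queries alone.

```python
# Assumed toy setup: one ReLU unit with w > 0, queried as a black box.
w_true, b_true = 1.7, -0.6
f = lambda x: max(0.0, w_true * x + b_true)

# Bisection for the kink at x = -b/w: f is 0 to its left, positive to its right.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) == 0.0 else (lo, mid)
kink = 0.5 * (lo + hi)

# The slope w from a finite difference on the active side of the kink.
slope = (f(kink + 2.0) - f(kink + 1.0)) / 1.0

print(kink, slope)  # kink ~ -b/w = 0.3529..., slope ~ w = 1.7
```

The network's nonlinearity leaves geometric fingerprints (here, a single kink) in its outputs; the paper's contribution is showing how to exploit such structure at scale.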

Deep ReLU Networks Have Surprisingly Few Activation Patterns

no code implementations NeurIPS 2019 Boris Hanin, David Rolnick

The success of deep networks has been attributed in part to their expressivity: per parameter, deep networks can approximate a richer class of functions than shallow networks.
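The gap between the combinatorial bound and practice is easy to observe empirically. Below is a small illustrative experiment (toy architecture and sampling range are my assumptions): a random two-layer ReLU net on a 1-D input has 16 neurons, hence up to 2^16 possible on/off activation patterns, yet only a handful appear.

```python
import numpy as np

# Assumed toy setup: 1-D input, two hidden layers of 8 ReLUs each,
# random Gaussian weights, inputs sampled densely on [-3, 3].
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 1)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)

x = np.linspace(-3, 3, 100_000).reshape(-1, 1)
h1 = np.maximum(0, x @ W1.T + b1)    # layer-1 activations
h2 = np.maximum(0, h1 @ W2.T + b2)   # layer-2 activations

# Each row of on/off signs is one activation pattern; count distinct ones.
patterns = {tuple(p) for p in np.hstack([h1 > 0, h2 > 0]).astype(int)}
print(len(patterns), "patterns observed out of", 2**16, "possible")
```

Along a 1-D input line the number of distinct patterns is bounded by the number of linear pieces of the network, which is far smaller than 2^16.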

Complexity of Linear Regions in Deep Networks

no code implementations 25 Jan 2019 Boris Hanin, David Rolnick

It is well-known that the expressivity of a neural network depends on its architecture, with deeper networks expressing more complex functions.

Cross-Classification Clustering: An Efficient Multi-Object Tracking Technique for 3-D Instance Segmentation in Connectomics

no code implementations CVPR 2019 Yaron Meirovitch, Lu Mi, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit

Pixel-accurate tracking of objects is a key element in many computer vision applications, often solved by iterated individual object tracking or instance segmentation followed by object matching.

General Classification · Instance Segmentation · +2

Experience Replay for Continual Learning

no code implementations ICLR 2019 David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P. Lillicrap, Greg Wayne

We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.

Continual Learning
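The core mechanism behind replay-based continual learning can be sketched in a few lines. This is a minimal uniform replay buffer only; the paper's method adds further components (e.g. how experience is stored and how replayed data constrains learning) that this toy omits.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal sketch of a replay buffer: store past transitions and mix
    them into training batches so old tasks are rehearsed alongside new ones,
    mitigating catastrophic forgetting."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling over stored experience, old and new alike.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

# Usage: placeholder (step, state, action, reward) transitions.
buf = ReplayBuffer(capacity=1000)
for t in range(50):
    buf.add((t, "state", "action", 0.0))
batch = buf.sample(8)
print(len(batch))  # 8
```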

Measuring and regularizing networks in function space

no code implementations ICLR 2019 Ari S. Benjamin, David Rolnick, Konrad Kording

To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs.

How to Start Training: The Effect of Initialization and Architecture

no code implementations NeurIPS 2018 Boris Hanin, David Rolnick

We identify and study two common failure modes for early training in deep ReLU nets.

Deep Learning is Robust to Massive Label Noise

no code implementations ICLR 2018 David Rolnick, Andreas Veit, Serge Belongie, Nir Shavit

Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks.

Image Classification
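One regime studied in this line of work, labels replaced uniformly at random, is simple to reproduce. The dataset size, number of classes, and noise rate below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Assumed toy setup: 10-class labels, 90% of them replaced with uniformly
# random labels. A random replacement can coincide with the true label
# (probability 1/10), so the observed corruption rate is about 0.81.
rng = np.random.default_rng(42)
labels = rng.integers(0, 10, size=10_000)

noise_rate = 0.9
flip = rng.random(labels.size) < noise_rate
noisy = labels.copy()
noisy[flip] = rng.integers(0, 10, size=flip.sum())

print((noisy != labels).mean())  # roughly 0.9 * 9/10 = 0.81
```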

The power of deeper networks for expressing natural functions

no code implementations ICLR 2018 David Rolnick, Max Tegmark

It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones.

Why does deep and cheap learning work so well?

no code implementations 29 Aug 2016 Henry W. Lin, Max Tegmark, David Rolnick

We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones.
