Search Results for author: David Rolnick

Found 26 papers, 8 papers with code

Maximal Initial Learning Rates in Deep ReLU Networks

no code implementations 14 Dec 2022 Gaurav Iyer, Boris Hanin, David Rolnick

Training a neural network requires choosing a suitable learning rate, involving a trade-off between speed and effectiveness of convergence.
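
The trade-off here is about how large the initial learning rate can be before training fails outright. A minimal sketch of estimating such a threshold empirically by sweeping learning rates on a toy ReLU network (the architecture, data, and divergence test are illustrative assumptions, not the paper's analysis):

```python
# Hypothetical sketch: sweep initial learning rates and record the largest
# one for which a handful of SGD steps does not blow up the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 32)        # toy inputs (assumption)
y = torch.randn(256, 1)         # toy regression targets (assumption)

def survives(lr, steps=50):
    net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 1))
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    init_loss = nn.functional.mse_loss(net(X), y).item()
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()
    final = nn.functional.mse_loss(net(X), y).item()
    return final == final and final < 10 * init_loss   # crude "did not diverge" test

max_stable = None
for exponent in torch.arange(-4.0, 1.5, 0.25):
    lr = float(10 ** exponent)
    if survives(lr):
        max_stable = lr
print("largest learning rate that survived the sweep:", max_stable)
```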

PhAST: Physics-Aware, Scalable, and Task-specific GNNs for Accelerated Catalyst Design

no code implementations 22 Nov 2022 Alexandre Duval, Victor Schmidt, Santiago Miret, Yoshua Bengio, Alex Hernández-García, David Rolnick

Catalyst materials play a crucial role in the electrochemical reactions involved in a great number of industrial processes key to this transition, such as renewable energy storage and electrofuel synthesis.

Understanding the Evolution of Linear Regions in Deep Reinforcement Learning

no code implementations 24 Oct 2022 Setareh Cohan, Nam Hee Kim, David Rolnick, Michiel Van de Panne

Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy.

Continuous Control Reinforcement Learning +1
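
Region density along a trajectory can be illustrated by counting activation-pattern changes at finely spaced points on a path through input space. A minimal sketch, assuming a small randomly initialized ReLU network and a straight-line path rather than the paper's policy rollouts:

```python
# Hypothetical sketch: estimate how many distinct linear regions a ReLU
# network crosses along a fixed path in input space, by counting changes
# in the activation pattern at finely spaced points along the path.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))

def activation_pattern(x):
    """Return the on/off pattern of every ReLU unit for input x."""
    pattern, h = [], x
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            pattern.append((h > 0).flatten())
    return torch.cat(pattern)

start, end = torch.randn(8), torch.randn(8)
ts = torch.linspace(0, 1, 2000)
points = start + ts[:, None] * (end - start)      # straight-line "trajectory"

regions, prev = 1, activation_pattern(points[0])
for p in points[1:]:
    cur = activation_pattern(p)
    if not torch.equal(cur, prev):
        regions += 1                               # pattern changed: entered a new region
        prev = cur
print("linear regions crossed along the path:", regions)
```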

Bugs in the Data: How ImageNet Misrepresents Biodiversity

1 code implementation 24 Aug 2022 Alexandra Sasha Luccioni, David Rolnick

We find that many of the classes are ill-defined or overlapping, and that 12% of the images are incorrectly labeled, with some classes having >90% of images incorrect.

Benchmarking Object Detection

Physics-Constrained Deep Learning for Climate Downscaling

1 code implementation 8 Aug 2022 Paula Harder, Venkatesh Ramesh, Alex Hernandez-Garcia, Qidong Yang, Prasanna Sattigeri, Daniela Szwarcman, Campbell Watson, David Rolnick

In order to conserve physical quantities, we develop methods that guarantee physical constraints are satisfied by a deep learning downscaling model while also improving their performance according to traditional metrics.

Super-Resolution
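
One standard way to guarantee a conservation constraint is a final layer that renormalizes each high-resolution patch so it averages back to the corresponding low-resolution cell. A minimal sketch of that idea (the 4x factor and the multiplicative correction are illustrative assumptions, not necessarily the paper's constraint layers):

```python
# Hypothetical sketch: enforce a conservation constraint on a super-resolved
# field by rescaling each upscaling patch so its mean matches the
# corresponding low-resolution cell.
import torch
import torch.nn.functional as F

def conserve_mean(hr, lr, factor=4, eps=1e-8):
    """Rescale hr (B,1,f*H,f*W) so that block means equal lr (B,1,H,W)."""
    hr_mean = F.avg_pool2d(hr, kernel_size=factor)           # current block means
    correction = lr / (hr_mean + eps)                        # per-block ratio
    correction = F.interpolate(correction, scale_factor=factor, mode="nearest")
    return hr * correction                                   # block means now match lr

lr_field = torch.rand(2, 1, 16, 16) + 0.5                    # coarse field (assumption)
hr_field = torch.rand(2, 1, 64, 64) + 0.5                    # e.g. raw network output
constrained = conserve_mean(hr_field, lr_field)
check = F.avg_pool2d(constrained, 4)
print("max constraint violation:", (check - lr_field).abs().max().item())
```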

On Neural Architecture Inductive Biases for Relational Tasks

1 code implementation 9 Jun 2022 Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, Guillaume Lajoie

Recent work has explored how forcing relational representations to remain distinct from sensory representations, as appears to be the case in the brain, can help artificial systems.

Inductive Bias Out-of-Distribution Generalization

TIML: Task-Informed Meta-Learning for Agriculture

1 code implementation 4 Feb 2022 Gabriel Tseng, Hannah Kerner, David Rolnick

When developing algorithms for data-sparse regions, a natural approach is to use transfer learning from data-rich regions.

Meta-Learning Transfer Learning

Techniques for Symbol Grounding with SATNet

1 code implementation NeurIPS 2021 Sever Topan, David Rolnick, Xujie Si

Many experts argue that the future of artificial intelligence is limited by the field's ability to integrate symbolic logical reasoning into deep learning architectures.

Logical Reasoning Visual Reasoning

DC3: A learning method for optimization with hard constraints

1 code implementation ICLR 2021 Priya L. Donti, David Rolnick, J. Zico Kolter

Large optimization problems with hard constraints arise in many settings, yet classical solvers are often prohibitively slow, motivating the use of deep networks as cheap "approximate solvers."
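
One ingredient in this line of work is pushing a network's approximate solution toward feasibility before it is used. A toy sketch of gradient-based feasibility correction on a small constrained problem (the problem, step size, and iteration count are assumptions; this is not the DC3 procedure itself):

```python
# Hypothetical sketch: take a candidate solution (pretend it came from a
# network) and reduce its constraint violations with a few gradient steps.
import torch

torch.manual_seed(0)
A, b = torch.randn(1, 3), torch.randn(1)        # equality constraint A y = b (toy)
ineq = lambda y: torch.relu(-y)                 # inequality constraint y >= 0 (toy)

y = torch.randn(3, requires_grad=True)          # approximate solution to be corrected
for _ in range(200):
    violation = (A @ y - b).pow(2).sum() + ineq(y).pow(2).sum()
    grad, = torch.autograd.grad(violation, y)
    y = (y - 0.1 * grad).detach().requires_grad_(True)

print("equality residual:  ", (A @ y - b).abs().item())
print("inequality residual:", torch.relu(-y).max().item())
```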

Deep ReLU Networks Preserve Expected Length

no code implementations ICLR 2022 Boris Hanin, Ryan Jeong, David Rolnick

Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize.

Reverse-Engineering Deep ReLU Networks

no code implementations ICML 2020 David Rolnick, Konrad P. Kording

It has been widely assumed that a neural network cannot be recovered from its outputs, as the network depends on its parameters in a highly nonlinear way.

Identifying Weights and Architectures of Unknown ReLU Networks

no code implementations 25 Sep 2019 David Rolnick, Konrad P. Kording

The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs.

Deep ReLU Networks Have Surprisingly Few Activation Patterns

no code implementations NeurIPS 2019 Boris Hanin, David Rolnick

The success of deep networks has been attributed in part to their expressivity: per parameter, deep networks can approximate a richer class of functions than shallow networks.

Memorization

Complexity of Linear Regions in Deep Networks

no code implementations 25 Jan 2019 Boris Hanin, David Rolnick

It is well-known that the expressivity of a neural network depends on its architecture, with deeper networks expressing more complex functions.

Cross-Classification Clustering: An Efficient Multi-Object Tracking Technique for 3-D Instance Segmentation in Connectomics

no code implementations CVPR 2019 Yaron Meirovitch, Lu Mi, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit

Pixel-accurate tracking of objects is a key element in many computer vision applications, often solved by iterated individual object tracking or instance segmentation followed by object matching.

General Classification Instance Segmentation +2

Experience Replay for Continual Learning

no code implementations ICLR 2019 David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P. Lillicrap, Greg Wayne

We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.

Continual Learning
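
A minimal sketch of the basic mechanism, a replay buffer that retains a uniform sample of past experience so it can be mixed into later updates (the reservoir-sampling scheme, buffer size, and mixing usage are illustrative assumptions, not the paper's exact agent):

```python
# Hypothetical sketch: a reservoir-sampled replay buffer; every transition
# seen so far has an equal chance of remaining in the buffer.
import random

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, transition):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = transition    # overwrite a random old entry

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buffer = ReplayBuffer()
for step in range(50_000):                       # stand-in for an agent's experience stream
    buffer.add(("state", "action", "reward", step))
batch = buffer.sample(32)                        # replayed data mixed into the next update
print(len(buffer.buffer), "transitions stored;", len(batch), "sampled")
```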

Measuring and regularizing networks in function space

no code implementations ICLR 2019 Ari S. Benjamin, David Rolnick, Konrad Kording

To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs.
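
A minimal sketch of what regularizing in function space can look like: penalize how far the network's outputs move on a batch of stored inputs relative to a frozen earlier snapshot, rather than penalizing parameter changes (the architecture, stored batch, and penalty weight are illustrative assumptions, not the paper's exact regularizer):

```python
# Hypothetical sketch: anchor training to the function computed by an
# earlier snapshot, measured on stored inputs, instead of to its parameters.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
snapshot = copy.deepcopy(net).eval()             # frozen copy defining the "old" function
for p in snapshot.parameters():
    p.requires_grad_(False)

stored_x = torch.randn(128, 10)                  # inputs on which the function is anchored
x, y = torch.randn(64, 10), torch.randn(64, 1)   # new training data (assumption)
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

for _ in range(100):
    opt.zero_grad()
    task_loss = nn.functional.mse_loss(net(x), y)
    func_drift = nn.functional.mse_loss(net(stored_x), snapshot(stored_x))
    (task_loss + 1.0 * func_drift).backward()    # distance in function space, not parameter space
    opt.step()
```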

How to Start Training: The Effect of Initialization and Architecture

no code implementations NeurIPS 2018 Boris Hanin, David Rolnick

We identify and study two common failure modes for early training in deep ReLU nets.

Deep Learning is Robust to Massive Label Noise

no code implementations ICLR 2018 David Rolnick, Andreas Veit, Serge Belongie, Nir Shavit

Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks.

Image Classification

The power of deeper networks for expressing natural functions

no code implementations ICLR 2018 David Rolnick, Max Tegmark

It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones.

Why does deep and cheap learning work so well?

no code implementations 29 Aug 2016 Henry W. Lin, Max Tegmark, David Rolnick

We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones.
