Search Results for author: David Rolnick

Found 41 papers, 15 papers with code

Predicting species occurrence patterns from partial observations

no code implementations • 26 Mar 2024 • Hager Radi Abdelwahed, Mélisande Teng, David Rolnick

To address this task, we propose R-Tran, a general model for predicting species occurrence patterns that can make use of partial observational data wherever it is available.

Application-Driven Innovation in Machine Learning

no code implementations • 26 Mar 2024 • David Rolnick, Alan Aspuru-Guzik, Sara Beery, Bistra Dilkina, Priya L. Donti, Marzyeh Ghassemi, Hannah Kerner, Claire Monteleoni, Esther Rolf, Milind Tambe, Adam White

As applications of machine learning proliferate, innovative algorithms inspired by specific real-world challenges have become increasingly important.

Dataset Difficulty and the Role of Inductive Bias

no code implementations • 3 Jan 2024 • Devin Kwok, Nikhil Anand, Jonathan Frankle, Gintare Karolina Dziugaite, David Rolnick

Motivated by the goals of dataset pruning and defect identification, a growing body of methods has been developed to score individual examples within a dataset.

Inductive Bias

FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models

no code implementations • 15 Dec 2023 • Nikolaos Ioannis Bountos, Arthur Ouaknine, David Rolnick

Inspired by the rise of foundation models for computer vision and remote sensing, we present the first unified Forest Monitoring Benchmark (FoMo-Bench).

Object Detection

Towards Causal Representations of Climate Model Data

no code implementations • 5 Dec 2023 • Julien Boussard, Chandni Nagda, Julia Kaltenborn, Charlotte Emilie Elektra Lange, Philippe Brouillard, Yaniv Gurwicz, Peer Nowack, David Rolnick

Climate models, such as Earth system models (ESMs), are crucial for simulating future climate change based on projected Shared Socioeconomic Pathways (SSP) greenhouse gas emissions scenarios.

Causal Discovery, Representation Learning

SatBird: Bird Species Distribution Modeling with Remote Sensing and Citizen Science Data

1 code implementation • 2 Nov 2023 • Mélisande Teng, Amna Elmustafa, Benjamin Akera, Yoshua Bengio, Hager Radi Abdelwahed, Hugo Larochelle, David Rolnick

The wide availability of remote sensing data and the growing adoption of citizen science tools for collecting species observation data at low cost offer an opportunity to improve biodiversity monitoring and to enable the modelling of complex ecosystems.

OpenForest: A data catalogue for machine learning in forest monitoring

1 code implementation • 1 Nov 2023 • Arthur Ouaknine, Teja Kattenborn, Etienne Laliberté, David Rolnick

These datasets are grouped in OpenForest, a dynamic catalogue, open to contributions, that strives to reference all available open-access forest datasets.

On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions

no code implementations • 10 Oct 2023 • Alvaro Carbonero, Alexandre Duval, Victor Schmidt, Santiago Miret, Alex Hernandez-Garcia, Yoshua Bengio, David Rolnick

The use of machine learning for material property prediction and discovery has traditionally centered on graph neural networks that incorporate the geometric configuration of all atoms.

Property Prediction

Multi-variable Hard Physical Constraints for Climate Model Downscaling

no code implementations • 2 Aug 2023 • Jose González-Abad, Álex Hernández-García, Paula Harder, David Rolnick, José Manuel Gutiérrez

Global Climate Models (GCMs) are the primary tool to simulate climate evolution and assess the impacts of climate change.

Hidden symmetries of ReLU networks

no code implementations • 9 Jun 2023 • J. Elisenda Grigsby, Kathryn Lindsey, David Rolnick

The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions - but how faithful is this representation?

Normalization Layers Are All That Sharpness-Aware Minimization Needs

1 code implementation • NeurIPS 2023 • Maximilian Mueller, Tiffany Vlaar, David Rolnick, Matthias Hein

Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima and has been shown to enhance generalization performance in various settings.
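
As a sketch of the update the abstract refers to: SAM first perturbs the weights along the normalized gradient by a radius rho, re-evaluates the gradient at that "sharp" point, and then steps from the original weights. The toy model, data, and rho = 0.05 below are illustrative assumptions, not the paper's setup (whose finding is that applying the perturbation to normalization-layer parameters alone already suffices).

```python
# Minimal sketch of one SAM step in PyTorch; model, data, and rho are
# illustrative assumptions, not the paper's experimental configuration.
import torch

model = torch.nn.Linear(10, 1)                 # toy stand-in for a real network
loss_fn = torch.nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
rho = 0.05                                     # perturbation radius

x, y = torch.randn(32, 10), torch.randn(32, 1)

# 1) Gradient at the current weights w.
loss_fn(model(x), y).backward()
grads = [p.grad.clone() for p in model.parameters()]
norm = torch.sqrt(sum((g ** 2).sum() for g in grads))

# 2) Move to the adversarial point w + rho * g / ||g||.
eps = [rho * g / (norm + 1e-12) for g in grads]
with torch.no_grad():
    for p, e in zip(model.parameters(), eps):
        p.add_(e)

# 3) Gradient at the perturbed weights.
opt.zero_grad()
loss_fn(model(x), y).backward()

# 4) Undo the perturbation and step with the perturbed gradient.
with torch.no_grad():
    for p, e in zip(model.parameters(), eps):
        p.sub_(e)
opt.step()
opt.zero_grad()
```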

Bird Distribution Modelling using Remote Sensing and Citizen Science data

no code implementations • 1 May 2023 • Mélisande Teng, Amna Elmustafa, Benjamin Akera, Hugo Larochelle, David Rolnick

Climate change is a major driver of biodiversity loss, changing the geographic range and abundance of many species.

FAENet: Frame Averaging Equivariant GNN for Materials Modeling

1 code implementation • 28 Apr 2023 • Alexandre Duval, Victor Schmidt, Alex Hernandez Garcia, Santiago Miret, Fragkiskos D. Malliaros, Yoshua Bengio, David Rolnick

Applications of machine learning techniques for materials modeling typically involve functions known to be equivariant or invariant to specific symmetries.
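
The frame-averaging idea the paper builds on achieves symmetry by averaging a predictor over a small, input-dependent set of frames, rather than by constraining the architecture itself. Below is a minimal sketch of rotation invariance for a 3D point cloud using PCA-derived frames; the toy predictor f and the data are illustrative assumptions, not FAENet's actual GNN.

```python
# Minimal sketch of frame averaging for rotation invariance. The predictor
# f is an arbitrary (non-invariant) stand-in for a learned model.
import itertools
import numpy as np

def pca_frames(pos):
    """Yield all sign choices of the PCA eigenbasis (2^3 frames in 3D)."""
    cov = np.cov((pos - pos.mean(0)).T)
    _, basis = np.linalg.eigh(cov)              # columns = principal axes
    for signs in itertools.product([1.0, -1.0], repeat=3):
        yield basis * np.array(signs)           # flip each axis independently

def f(pos):
    """Toy non-invariant predictor standing in for a GNN energy head."""
    return float(pos[:, 0].sum())

def frame_averaged(pos):
    """Average f over canonicalized inputs -> rotation-invariant output."""
    preds = [f(pos @ frame) for frame in pca_frames(pos)]
    return sum(preds) / len(preds)

pos = np.random.randn(8, 3)                     # toy 3D point cloud
theta = 0.3                                     # rotate the cloud about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(frame_averaged(pos), frame_averaged(pos @ Rz.T))   # ~equal
```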

Lightweight, Pre-trained Transformers for Remote Sensing Timeseries

1 code implementation • 27 Apr 2023 • Gabriel Tseng, Ruben Cartuyvels, Ivan Zvonkov, Mirali Purohit, David Rolnick, Hannah Kerner

Machine learning methods for satellite data have a range of societally relevant applications, but labels used to train models can be difficult or impossible to acquire.

Crop Classification, Self-Supervised Learning +1

Maximal Initial Learning Rates in Deep ReLU Networks

no code implementations • 14 Dec 2022 • Gaurav Iyer, Boris Hanin, David Rolnick

Training a neural network requires choosing a suitable learning rate, which involves a trade-off between speed and effectiveness of convergence.

PhAST: Physics-Aware, Scalable, and Task-specific GNNs for Accelerated Catalyst Design

2 code implementations • 22 Nov 2022 • Alexandre Duval, Victor Schmidt, Santiago Miret, Yoshua Bengio, Alex Hernández-García, David Rolnick

Catalyst materials play a crucial role in the electrochemical reactions involved in numerous industrial processes key to the transition to low-carbon energy, such as renewable energy storage and electrofuel synthesis.

Computational Efficiency

Understanding the Evolution of Linear Regions in Deep Reinforcement Learning

1 code implementation • 24 Oct 2022 • Setareh Cohan, Nam Hee Kim, David Rolnick, Michiel Van de Panne

Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy.

Continuous Control, Reinforcement Learning +1
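
The region density in question can be estimated directly: sample an input trajectory densely and count changes in the network's ReLU activation pattern, since every change marks a crossing into a new linear region. A minimal sketch, with an illustrative MLP and a straight segment standing in for a state trajectory:

```python
# Count the linear regions a ReLU net crosses along a segment in input
# space; the architecture and endpoints are illustrative assumptions.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)

def activation_pattern(x):
    """On/off state of every ReLU unit for input x."""
    pattern, h = [], x
    for layer in net:
        h = layer(h)
        if isinstance(layer, torch.nn.ReLU):
            pattern.append((h > 0).flatten())
    return torch.cat(pattern)

a, b = torch.randn(4), torch.randn(4)           # segment endpoints
ts = torch.linspace(0, 1, 10_000)
patterns = [activation_pattern(a + t * (b - a)) for t in ts]
regions = 1 + sum(int(not torch.equal(p, q))
                  for p, q in zip(patterns, patterns[1:]))
print(f"linear regions crossed along the segment: {regions}")
```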

Bugs in the Data: How ImageNet Misrepresents Biodiversity

1 code implementation • 24 Aug 2022 • Alexandra Sasha Luccioni, David Rolnick

We find that many of the classes are ill-defined or overlapping, and that 12% of the images are incorrectly labeled, with some classes having >90% of images incorrect.

Benchmarking, Object Detection

Hard-Constrained Deep Learning for Climate Downscaling

1 code implementation • 8 Aug 2022 • Paula Harder, Alex Hernandez-Garcia, Venkatesh Ramesh, Qidong Yang, Prasanna Sattigeri, Daniela Szwarcman, Campbell Watson, David Rolnick

To conserve physical quantities, we introduce methods that guarantee a deep learning downscaling model satisfies statistical constraints, while also improving its performance according to traditional metrics.

Super-Resolution
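
One way to guarantee such a constraint, in the spirit of the constraint layers described here, is multiplicative renormalization: rescale every predicted high-resolution block so that it averages exactly to the corresponding low-resolution cell, which enforces conservation by construction. A minimal sketch; the shapes, upsampling factor, and toy fields are illustrative assumptions.

```python
# Conservation layer for downscaling: each f x f output block is rescaled
# so its mean equals the matching low-res cell. Shapes are illustrative.
import torch

def enforce_conservation(hi_res, lo_res, factor):
    """hi_res: (B, H*f, W*f) raw predictions; lo_res: (B, H, W) inputs."""
    B, H, W = lo_res.shape
    blocks = hi_res.view(B, H, factor, W, factor)
    block_mean = blocks.mean(dim=(2, 4), keepdim=True)      # (B, H, 1, W, 1)
    scaled = blocks * lo_res.view(B, H, 1, W, 1) / (block_mean + 1e-8)
    return scaled.view(B, H * factor, W * factor)

lo = torch.rand(2, 8, 8) + 0.1        # toy low-res field (e.g., precipitation)
raw = torch.rand(2, 32, 32)           # toy network output, factor = 4
out = enforce_conservation(raw, lo, factor=4)
# Every 4x4 block of `out` now averages to the matching low-res cell.
print(torch.allclose(out.view(2, 8, 4, 8, 4).mean(dim=(2, 4)), lo, atol=1e-5))
```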

On Neural Architecture Inductive Biases for Relational Tasks

1 code implementation • 9 Jun 2022 • Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, Guillaume Lajoie

Recent work has explored how forcing relational representations to remain distinct from sensory representations, as appears to be the case in the brain, can help artificial systems.

Inductive Bias, Out-of-Distribution Generalization

TIML: Task-Informed Meta-Learning for Agriculture

1 code implementation • 4 Feb 2022 • Gabriel Tseng, Hannah Kerner, David Rolnick

When developing algorithms for data-sparse regions, a natural approach is to use transfer learning from data-rich regions.

Meta-Learning, Transfer Learning

Techniques for Symbol Grounding with SATNet

1 code implementation • NeurIPS 2021 • Sever Topan, David Rolnick, Xujie Si

Many experts argue that the future of artificial intelligence is limited by the field's ability to integrate symbolic logical reasoning into deep learning architectures.

Logical Reasoning, Visual Reasoning

DC3: A learning method for optimization with hard constraints

1 code implementation • ICLR 2021 • Priya L. Donti, David Rolnick, J. Zico Kolter

Large optimization problems with hard constraints arise in many settings, yet classical solvers are often prohibitively slow, motivating the use of deep networks as cheap "approximate solvers."
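
A central ingredient of the approach is a differentiable correction step that nudges a network's proposed solution toward the feasible set by descending the constraint violation. A minimal sketch of inequality correction on a toy box constraint; the problem, step size, and iteration count are illustrative assumptions (DC3 additionally handles equality constraints via variable completion).

```python
# Gradient-descent correction toward the feasible set {y : g(y) <= 0};
# the constraint g and the infeasible starting point are toy examples.
import torch

def correct(y, g, steps=50, lr=0.1):
    """Reduce the total squared inequality violation by gradient descent."""
    y = y.clone().requires_grad_(True)
    for _ in range(steps):
        violation = torch.relu(g(y)).pow(2).sum()
        grad, = torch.autograd.grad(violation, y)
        y = (y - lr * grad).detach().requires_grad_(True)
    return y.detach()

g = lambda y: torch.cat([y - 1.0, -y])     # feasible set: the unit box [0, 1]^2
y_hat = torch.tensor([1.7, -0.3])          # infeasible "network output"
print(correct(y_hat, g))                    # approaches [1.0, 0.0] on the boundary
```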

Deep ReLU Networks Preserve Expected Length

no code implementations • ICLR 2022 • Boris Hanin, Ryan Jeong, David Rolnick

Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize.
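
The quantity at stake, the length of a network's image of an input curve, can be estimated numerically by pushing a densely sampled curve through the network and summing successive output displacements. A minimal sketch; the architecture and the unit-circle input are illustrative assumptions.

```python
# Estimate the length of a ReLU net's image of the unit circle by summing
# chord lengths over a dense sampling; the architecture is illustrative.
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 2),
)

ts = torch.linspace(0, 2 * math.pi, 5000)
circle = torch.stack([ts.cos(), ts.sin()], dim=1)   # input curve, length 2*pi
with torch.no_grad():
    image = net(circle)

in_len = (circle[1:] - circle[:-1]).norm(dim=1).sum()
out_len = (image[1:] - image[:-1]).norm(dim=1).sum()
print(f"input length: {in_len:.3f}   output length: {out_len:.3f}")
```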

Reverse-Engineering Deep ReLU Networks

no code implementations • ICML 2020 • David Rolnick, Konrad P. Kording

It has been widely assumed that a neural network cannot be recovered from its outputs, as the network depends on its parameters in a highly nonlinear way.

Identifying Weights and Architectures of Unknown ReLU Networks

no code implementations • 25 Sep 2019 • David Rolnick, Konrad P. Kording

The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs.

Deep ReLU Networks Have Surprisingly Few Activation Patterns

no code implementations • NeurIPS 2019 • Boris Hanin, David Rolnick

The success of deep networks has been attributed in part to their expressivity: per parameter, deep networks can approximate a richer class of functions than shallow networks.

Memorization
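
The gap the paper quantifies can be probed empirically: enumerate the activation patterns a network actually realizes on sampled inputs and compare the count against the 2^(#neurons) combinatorial ceiling. A minimal sketch with an illustrative small MLP and random input sampling:

```python
# Count distinct ReLU activation patterns realized on random inputs;
# the architecture and sampling scheme are illustrative assumptions.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)

def patterns(x):
    """Binary activation pattern for each row of x, as an (N, 32) matrix."""
    bits, h = [], x
    for layer in net:
        h = layer(h)
        if isinstance(layer, torch.nn.ReLU):
            bits.append((h > 0).int())
    return torch.cat(bits, dim=1)

xs = torch.randn(100_000, 2)
observed = torch.unique(patterns(xs), dim=0).shape[0]
print(f"observed patterns: {observed}   ceiling: 2^32 = {2 ** 32}")
```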

Complexity of Linear Regions in Deep Networks

no code implementations • 25 Jan 2019 • Boris Hanin, David Rolnick

It is well-known that the expressivity of a neural network depends on its architecture, with deeper networks expressing more complex functions.

Cross-Classification Clustering: An Efficient Multi-Object Tracking Technique for 3-D Instance Segmentation in Connectomics

no code implementations • CVPR 2019 • Yaron Meirovitch, Lu Mi, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit

Pixel-accurate tracking of objects is a key element in many computer vision applications, often solved by iterated individual object tracking or instance segmentation followed by object matching.

Clustering, General Classification +4

Experience Replay for Continual Learning

no code implementations • ICLR 2019 • David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P. Lillicrap, Greg Wayne

We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.

Continual Learning

Measuring and regularizing networks in function space

no code implementations • ICLR 2019 • Ari S. Benjamin, David Rolnick, Konrad Kording

To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs.
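
Measuring networks in function space can be as simple as comparing their outputs on data instead of their weights. The sketch below computes a root-mean-square function-space distance alongside the parameter-space distance; the toy models, the perturbation, and the data are illustrative assumptions rather than the paper's exact regularizer, which penalizes this kind of output distance on stored data to keep the learned function stable.

```python
# Function-space vs. parameter-space distance between two networks;
# models and data are toy stand-ins.
import copy
import torch

net_a = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
net_b = copy.deepcopy(net_a)
with torch.no_grad():                          # small parameter perturbation
    for p in net_b.parameters():
        p.add_(0.01 * torch.randn_like(p))

x = torch.randn(512, 8)
with torch.no_grad():
    fs_dist = (net_a(x) - net_b(x)).pow(2).mean().sqrt()
param_dist = sum((p - q).pow(2).sum()
                 for p, q in zip(net_a.parameters(), net_b.parameters())).sqrt()
print(f"function-space: {fs_dist:.4f}   parameter-space: {param_dist:.4f}")
```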

How to Start Training: The Effect of Initialization and Architecture

no code implementations • NeurIPS 2018 • Boris Hanin, David Rolnick

We identify and study two common failure modes for early training in deep ReLU nets.
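
One of the failure modes concerns the scale of activations at initialization: with a mis-tuned weight variance, the typical activation length shrinks or explodes exponentially in depth, whereas variance 2/fan-in keeps it stable. A minimal sketch; depth, width, and the specific standard deviations are illustrative assumptions.

```python
# Mean squared activation after a deep stack of random ReLU layers,
# comparing 2/fan-in initialization against mis-scaled alternatives.
import torch

def mean_sq_activation(depth=50, width=256, weight_std=None):
    x = torch.randn(1024, width)
    std = weight_std if weight_std is not None else (2.0 / width) ** 0.5
    for _ in range(depth):
        W = torch.randn(width, width) * std
        x = torch.relu(x @ W)
    return x.pow(2).mean().item()

print("2/fan-in init: ", mean_sq_activation())                 # stays O(1)
print("std too small: ", mean_sq_activation(weight_std=0.05))  # vanishes
print("std too large: ", mean_sq_activation(weight_std=0.12))  # explodes
```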

Deep Learning is Robust to Massive Label Noise

no code implementations • ICLR 2018 • David Rolnick, Andreas Veit, Serge Belongie, Nir Shavit

Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks.

Image Classification
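
The setup behind this claim is simple to reproduce in miniature: replace a large fraction of training labels with uniformly random ones and train as usual, then evaluate on a clean test set. A minimal sketch of the corruption step; the dataset size, class count, and noise rate are illustrative assumptions.

```python
# Corrupt a fraction of labels uniformly at random before training;
# sizes and the 80% noise rate are illustrative.
import torch

num_classes, noise_rate = 10, 0.8
labels = torch.randint(0, num_classes, (60_000,))   # stand-in for true labels

mask = torch.rand(len(labels)) < noise_rate
noisy = labels.clone()
noisy[mask] = torch.randint(0, num_classes, (int(mask.sum()),))

# Train on `noisy` as usual; the paper reports that clean test accuracy
# can remain high even at very large noise rates, given enough data.
print(f"fraction of labels changed: {(noisy != labels).float().mean():.2f}")
```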

The power of deeper networks for expressing natural functions

no code implementations • ICLR 2018 • David Rolnick, Max Tegmark

It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones.

Why does deep and cheap learning work so well?

no code implementations • 29 Aug 2016 • Henry W. Lin, Max Tegmark, David Rolnick

We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones.
