1 code implementation • 28 Apr 2023 • Marc Finzi, Andres Potapczynski, Matthew Choptuik, Andrew Gordon Wilson
Unlike conventional grid and mesh based methods for solving partial differential equations (PDEs), neural networks have the potential to break the curse of dimensionality, providing approximate solutions to problems where using classical solvers is difficult or impossible.
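As a generic illustration of the neural-network-as-solution idea (not the specific method proposed in this paper), a network can parameterize u(t, x) and be trained against the PDE residual computed by automatic differentiation rather than on a grid; the toy heat-equation setup below is purely illustrative:

```python
# Generic illustration only (not this paper's method): a neural network
# represents a PDE solution u(t, x), and training is driven by the PDE
# residual instead of a mesh. Shown for the 1-D heat equation u_t = u_xx.
import jax
import jax.numpy as jnp

def u(params, t, x):
    # Tiny MLP ansatz for the solution u(t, x).
    h = jnp.tanh(jnp.array([t, x]) @ params["W1"] + params["b1"])
    return (h @ params["W2"])[0]

def residual(params, t, x):
    # Heat-equation residual u_t - u_xx, computed with autodiff.
    u_t = jax.grad(u, argnums=1)(params, t, x)
    u_xx = jax.grad(jax.grad(u, argnums=2), argnums=2)(params, t, x)
    return u_t - u_xx

key = jax.random.PRNGKey(0)
params = {"W1": jax.random.normal(key, (2, 16)) * 0.5,
          "b1": jnp.zeros(16),
          "W2": jax.random.normal(key, (16, 1)) * 0.5}
# A training loss would average residual(params, t, x)**2 over sampled (t, x),
# plus terms enforcing the initial and boundary conditions.
print(residual(params, 0.3, 0.7))
```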
1 code implementation • 11 Apr 2023 • Micah Goldblum, Marc Finzi, Keefer Rowan, Andrew Gordon Wilson
No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems.
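For concreteness, one standard Wolpert-style formalization of the "same accuracy on average" statement, given as background rather than as the paper's exact theorem: for binary classification with targets drawn uniformly at random, every learner attains the same expected off-training-set accuracy,
\[
\mathbb{E}_{f \sim \mathrm{Unif}\{f : \mathcal{X} \to \{0,1\}\}}
\big[\, \mathrm{Acc}_{\mathrm{off\text{-}train}}(\mathcal{A}, f) \,\big] \;=\; \tfrac{1}{2}
\qquad \text{for every learner } \mathcal{A}.
\]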
1 code implementation • 24 Nov 2022 • Sanae Lotfi, Marc Finzi, Sanyam Kapoor, Andres Potapczynski, Micah Goldblum, Andrew Gordon Wilson
While there has been progress in developing non-vacuous generalization bounds for deep neural networks, these bounds tend to be uninformative about why deep learning works.
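For reference, a standard McAllester-style PAC-Bayes bound of the kind such generalization analyses build on (background only, not the paper's specific compression bound): with a prior P fixed before seeing the n training points, with probability at least 1 - δ, simultaneously for all posteriors Q,
\[
R(Q) \;\le\; \widehat{R}_S(Q)
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}.
\]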
1 code implementation • 6 Oct 2022 • Nate Gruver, Marc Finzi, Micah Goldblum, Andrew Gordon Wilson
In order to better understand the role of equivariance in recent vision models, we introduce the Lie derivative, a method for measuring equivariance with strong mathematical foundations and minimal hyperparameters.
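As a rough illustration of the idea (assumed toy setup: a function on 2-D point clouds and a rotation flow; `toy_model` and `rotate` are placeholders, not code from the paper), the Lie derivative can be computed by differentiating the model along a one-parameter transformation:

```python
# Minimal sketch: measure invariance of a function on 2D point clouds under
# rotation via the Lie derivative d/dt f(g_t x) |_{t=0}.
import jax
import jax.numpy as jnp

def rotate(x, t):
    # Rotate every 2D point in x (shape [n, 2]) by angle t.
    R = jnp.array([[jnp.cos(t), -jnp.sin(t)],
                   [jnp.sin(t),  jnp.cos(t)]])
    return x @ R.T

def toy_model(x):
    # A rotation-invariant statistic (mean pairwise distance) plus a small
    # non-invariant term, so the Lie derivative is visibly nonzero.
    d = jnp.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d.mean() + 0.1 * x[:, 0].mean()

def lie_derivative(f, x):
    # Differentiate t -> f(rotate(x, t)) at t = 0 with forward-mode autodiff.
    g = lambda t: f(rotate(x, t))
    _, deriv = jax.jvp(g, (0.0,), (1.0,))
    return deriv

x = jax.random.normal(jax.random.PRNGKey(0), (16, 2))
print(lie_derivative(toy_model, x))  # ~0 only if the model were exactly invariant
```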
1 code implementation • ICLR 2022 • Nate Gruver, Marc Finzi, Samuel Stanton, Andrew Gordon Wilson
Physics-inspired neural networks (NNs), such as Hamiltonian or Lagrangian NNs, dramatically outperform other learned dynamics models by leveraging strong inductive biases.
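A minimal sketch of the Hamiltonian inductive bias these models share, assuming a toy MLP energy and a single (q, p) state; not the exact architecture from the paper:

```python
# Core idea of Hamiltonian NNs: learn a scalar energy H(q, p) and obtain the
# dynamics from Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq.
import jax
import jax.numpy as jnp

def init_params(key, width=32):
    k1, k2 = jax.random.split(key)
    return {"W1": jax.random.normal(k1, (2, width)) * 0.1,
            "b1": jnp.zeros(width),
            "W2": jax.random.normal(k2, (width, 1)) * 0.1}

def hamiltonian(params, z):
    # Scalar energy H(q, p) parameterized by a tiny MLP.
    h = jnp.tanh(z @ params["W1"] + params["b1"])
    return (h @ params["W2"]).squeeze()

def dynamics(params, z):
    # Time derivative is the symplectic gradient of H.
    dH = jax.grad(hamiltonian, argnums=1)(params, z)
    dq, dp = dH[1], -dH[0]   # z = (q, p)
    return jnp.array([dq, dp])

params = init_params(jax.random.PRNGKey(0))
z0 = jnp.array([1.0, 0.0])
print(dynamics(params, z0))  # train by matching observed trajectory derivatives
```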
1 code implementation • NeurIPS 2021 • Marc Finzi, Gregory Benton, Andrew Gordon Wilson
There is often a trade-off between building deep learning systems that are expressive enough to capture the nuances of reality, and having the right inductive biases for efficient learning.
1 code implementation • 12 Jun 2021 • Sanyam Kapoor, Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson
State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix vector multiplies (MVMs) with the covariance kernel.
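A minimal sketch of the MVM-only primitive behind such iterative methods, using plain conjugate gradients; the dense kernel here stands in for whatever structured or batched MVM a real system would use:

```python
# Solve K x = y using only matrix-vector products with K (conjugate gradients),
# the core primitive of iterative scalable GP inference.
import jax.numpy as jnp

def conjugate_gradients(mvm, y, num_iters=100, tol=1e-6):
    x = jnp.zeros_like(y)
    r = y - mvm(x)          # residual
    p = r                   # search direction
    rs = r @ r
    for _ in range(num_iters):
        Kp = mvm(p)
        alpha = rs / (p @ Kp)
        x = x + alpha * p
        r = r - alpha * Kp
        rs_new = r @ r
        if jnp.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: an RBF kernel matrix with jitter, accessed only through its MVM.
X = jnp.linspace(0, 1, 100)[:, None]
K = jnp.exp(-0.5 * (X - X.T) ** 2 / 0.2 ** 2) + 1e-2 * jnp.eye(100)
y = jnp.sin(6 * X[:, 0])
x = conjugate_gradients(lambda v: K @ v, y)
print(jnp.max(jnp.abs(K @ x - y)))  # residual should be small after convergence
```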
4 code implementations • 19 Apr 2021 • Marc Finzi, Max Welling, Andrew Gordon Wilson
Symmetries and equivariance are fundamental to the generalization of neural networks on domains such as images, graphs, and point clouds.
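As a hedged sketch of the general recipe (impose the equivariance constraint at the Lie-algebra level and read the allowed weight matrices off a null space), assuming the simple case of SO(2) acting on R^2; the solver in the paper is far more general:

```python
# Solve the infinitesimal equivariance constraint d_out W - W d_in = 0 for each
# Lie-algebra generator, then recover a basis of allowed weight matrices from
# the null space of the stacked constraints.
import jax.numpy as jnp

def equivariant_basis(gen_in, gen_out, tol=1e-6):
    # With row-major vec(W), the constraint reads
    # (d_out kron I_in - I_out kron d_in^T) vec(W) = 0.
    n_out, n_in = gen_out[0].shape[0], gen_in[0].shape[0]
    rows = [jnp.kron(d_out, jnp.eye(n_in)) - jnp.kron(jnp.eye(n_out), d_in.T)
            for d_in, d_out in zip(gen_in, gen_out)]
    C = jnp.concatenate(rows, axis=0)
    _, s, Vt = jnp.linalg.svd(C)
    s_padded = jnp.concatenate([s, jnp.zeros(Vt.shape[0] - s.shape[0])])
    null = Vt[s_padded < tol]
    return null.reshape(-1, n_out, n_in)   # basis of equivariant weight matrices

J = jnp.array([[0.0, -1.0], [1.0, 0.0]])   # so(2) generator
basis = equivariant_basis([J], [J])
print(basis.shape)   # (2, 2, 2): multiples of I and of J commute with rotations
```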
no code implementations • NeurIPS 2020 • Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson
Invariances to translations have imbued convolutional neural networks with powerful generalization properties.
1 code implementation • NeurIPS 2020 • Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson
Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics.
1 code implementation • ICLR 2021 • Marc Finzi, Roberto Bondesan, Max Welling
Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods.
2 code implementations • ICML 2020 • Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson
The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems.
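A small numeric check of the translation-equivariance property this sentence refers to, in an illustrative 1-D circular-convolution setting (not code from the paper):

```python
# Verify that circular convolution commutes with translation: translating the
# input and then convolving equals convolving and then translating the output.
import jax
import jax.numpy as jnp

def circular_conv(x, w):
    # Cross-correlation with wrap-around: out[i] = sum_k w[k] * x[(i + k) % n].
    n, k = x.shape[0], w.shape[0]
    idx = (jnp.arange(n)[:, None] + jnp.arange(k)[None, :]) % n
    return (x[idx] * w).sum(axis=1)

x = jax.random.normal(jax.random.PRNGKey(0), (32,))
w = jnp.array([0.25, 0.5, 0.25])
shift = 5

lhs = circular_conv(jnp.roll(x, shift), w)   # translate, then convolve
rhs = jnp.roll(circular_conv(x, w), shift)   # convolve, then translate
print(jnp.allclose(lhs, rhs))                # True: the layer is translation equivariant
```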
2 code implementations • ICML 2020 • Pavel Izmailov, Polina Kirichenko, Marc Finzi, Andrew Gordon Wilson
Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood.
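A minimal sketch of the exact-likelihood computation behind this, assuming a single invertible affine map as the flow; real flows stack many such invertible layers:

```python
# Change of variables for an invertible map z = f(x):
# log p_x(x) = log p_z(f(x)) + log |det df/dx|.
import jax.numpy as jnp

def flow(x, A, b):
    # An invertible linear layer x -> z; its Jacobian is simply A.
    return A @ x + b

def log_prob(x, A, b):
    z = flow(x, A, b)
    log_pz = -0.5 * (z @ z + z.shape[0] * jnp.log(2 * jnp.pi))  # standard normal latent
    _, logdet = jnp.linalg.slogdet(A)                            # change-of-variables term
    return log_pz + logdet

A = jnp.array([[2.0, 0.3], [0.0, 0.5]])
b = jnp.array([0.1, -0.2])
x = jnp.array([0.4, 1.3])
print(log_prob(x, A, b))   # exact log-likelihood of x under the flow
```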
2 code implementations • ICLR 2019 • Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson
Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters.
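A hedged sketch of a consistency-regularization objective of this kind (placeholder linear model and Gaussian input perturbations; not the specific method of the paper):

```python
# Consistency regularization: supervised loss on labeled data plus a penalty
# that keeps predictions stable under small perturbations of unlabeled inputs.
import jax
import jax.numpy as jnp

def model(params, x):
    # Tiny linear "classifier" standing in for a neural network.
    return jax.nn.softmax(x @ params["W"] + params["b"])

def consistency_loss(params, x_lab, y_lab, x_unlab, key, noise=0.1, lam=1.0):
    # Supervised cross-entropy on labeled examples.
    probs = model(params, x_lab)
    ce = -jnp.mean(jnp.sum(y_lab * jnp.log(probs + 1e-8), axis=-1))
    # Consistency term: predictions should match under a small perturbation.
    x_pert = x_unlab + noise * jax.random.normal(key, x_unlab.shape)
    diff = model(params, x_unlab) - model(params, x_pert)
    consistency = jnp.mean(jnp.sum(diff ** 2, axis=-1))
    return ce + lam * consistency

params = {"W": jnp.zeros((5, 3)), "b": jnp.zeros(3)}
x_lab = jax.random.normal(jax.random.PRNGKey(0), (8, 5))
y_lab = jax.nn.one_hot(jnp.arange(8) % 3, 3)
x_unlab = jax.random.normal(jax.random.PRNGKey(1), (32, 5))
print(consistency_loss(params, x_lab, y_lab, x_unlab, jax.random.PRNGKey(2)))
```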