Search Results for author: Hannah Lawrence

Found 12 papers, 6 papers with code

Equivariant Frames and the Impossibility of Continuous Canonicalization

no code implementations • 25 Feb 2024 • Nadav Dym, Hannah Lawrence, Jonathan W. Siegel

Canonicalization provides an architecture-agnostic method for enforcing equivariance, with generalizations such as frame-averaging recently gaining prominence as a lightweight and flexible alternative to equivariant architectures.
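The averaging idea behind this line of work can be illustrated with a minimal sketch (not from the paper): averaging an arbitrary function over a symmetry group, the special case of frame averaging where the frame is the whole group, yields an invariant function by construction. The function `f` and the cyclic-shift group here are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical function with no built-in symmetry.
def f(x):
    return np.tanh(x) @ np.arange(len(x), dtype=float)

def frame_average(f, x):
    # Average f over all cyclic shifts of x; the result is
    # invariant to cyclic shifts of the input by construction.
    return np.mean([f(np.roll(x, s)) for s in range(len(x))])

x = np.array([0.5, -1.0, 2.0, 0.1])
assert np.isclose(frame_average(f, x), frame_average(f, np.roll(x, 1)))
```

Frame averaging replaces the full group with a smaller input-dependent frame; the paper studies when such frames can be chosen continuously.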

On the hardness of learning under symmetries

no code implementations • 3 Jan 2024 • Bobak T. Kiani, Thien Le, Hannah Lawrence, Stefanie Jegelka, Melanie Weber

We study the problem of learning equivariant neural networks via gradient descent.

Inductive Bias

Learning Polynomial Problems with $SL(2,\mathbb{R})$ Equivariance

no code implementations • 4 Dec 2023 • Hannah Lawrence, Mitchell Tong Harris

Moreover, we observe that these polynomial learning problems are equivariant to the non-compact group $SL(2,\mathbb{R})$, which consists of area-preserving linear transformations.

Data Augmentation
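The "area-preserving" characterization of $SL(2,\mathbb{R})$ mentioned in the abstract can be checked numerically; this is a generic sketch, not code from the paper. Determinant 1 means the matrix scales all areas by a factor of 1.

```python
import numpy as np

# An element of SL(2, R): any real 2x2 matrix with determinant 1.
g = np.array([[2.0, 3.0],
              [1.0, 2.0]])
assert np.isclose(np.linalg.det(g), 1.0)

# Area of the parallelogram spanned by u and v, before and after g.
u, v = np.array([1.0, 0.0]), np.array([0.5, 2.0])
area = abs(np.linalg.det(np.column_stack([u, v])))
area_g = abs(np.linalg.det(np.column_stack([g @ u, g @ v])))
assert np.isclose(area, area_g)  # linear transformation preserves area
```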

Self-Supervised Learning with Lie Symmetries for Partial Differential Equations

1 code implementation • NeurIPS 2023 • Grégoire Mialon, Quentin Garrido, Hannah Lawrence, Danyal Rehman, Yann LeCun, Bobak T. Kiani

Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering.

Representation Learning • Self-Supervised Learning

GULP: a prediction-based metric between representations

1 code implementation • 12 Oct 2022 • Enric Boix-Adsera, Hannah Lawrence, George Stepaniants, Philippe Rigollet

Comparing the representations learned by different neural networks has recently emerged as a key tool to understand various architectures and ultimately optimize them.

Distilling Model Failures as Directions in Latent Space

1 code implementation • 29 Jun 2022 • Saachi Jain, Hannah Lawrence, Ankur Moitra, Aleksander Madry

Moreover, by combining our framework with off-the-shelf diffusion models, we can generate images that are especially challenging for the analyzed model, and thus can be used to perform synthetic data augmentation that helps remedy the model's failure modes.

Data Augmentation

Implicit Bias of Linear Equivariant Networks

1 code implementation • 12 Oct 2021 • Hannah Lawrence, Kristian Georgiev, Andrew Dienes, Bobak T. Kiani

Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) which excel in a wide range of technical applications by explicitly encoding symmetries, such as rotations and permutations, in their architectures.

Binary Classification
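The equivariance property that G-CNNs encode can be shown on a toy layer; this DeepSets-style example for the permutation group is an illustrative stand-in, not the paper's architecture. Permuting the input permutes the output the same way.

```python
import numpy as np

# Toy permutation-equivariant linear layer: y = a*x + b*mean(x).
# Mixing entries only through the mean keeps it symmetric under reordering.
a, b = 1.5, -0.7
layer = lambda x: a * x + b * x.mean()

x = np.array([3.0, -1.0, 0.5, 2.0])
perm = np.random.permutation(len(x))
# Equivariance check: apply-then-permute equals permute-then-apply.
assert np.allclose(layer(x)[perm], layer(x[perm]))
```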

Dictionary Learning Under Generative Coefficient Priors with Applications to Compression

no code implementations • 29 Sep 2021 • Hannah Lawrence, Ankur Moitra

There is a rich literature on recovering data from limited measurements under the assumption of sparsity in some basis, whether known (compressed sensing) or unknown (dictionary learning).

Denoising • Dictionary Learning +2
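The "sparsity in a known basis" setting from the abstract has a one-screen illustration (a generic sketch, not the paper's method): a signal built from a few atoms of a known orthonormal dictionary has an exactly sparse analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 16, 3
# Known orthonormal dictionary (random orthogonal basis for illustration).
D, _ = np.linalg.qr(rng.standard_normal((n, n)))
z = np.zeros(n)
z[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse code
x = D @ z  # signal that is sparse in the basis D

# With D known and orthonormal, the sparse code is recovered exactly.
assert np.allclose(D.T @ x, z)
```

Dictionary learning is the harder regime where `D` itself is unknown and must be estimated from many such signals.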

Practical Phase Retrieval: Low-Photon Holography with Untrained Priors

no code implementations • 1 Jan 2021 • Hannah Lawrence, David Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié

To the best of our knowledge, this is the first work to consider a dataset-free machine learning approach for holographic phase retrieval.

Retrieval

Phase Retrieval with Holography and Untrained Priors: Tackling the Challenges of Low-Photon Nanoscale Imaging

1 code implementation • 14 Dec 2020 • Hannah Lawrence, David A. Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié

Phase retrieval is the inverse problem of recovering a signal from magnitude-only Fourier measurements, and underlies numerous imaging modalities, such as Coherent Diffraction Imaging (CDI).

Retrieval
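Why magnitude-only Fourier measurements make this inverse problem hard can be seen in a few lines (a generic sketch, not the paper's method): distinct signals can produce identical magnitude measurements, so the forward map is not invertible without extra structure such as the holographic reference the paper exploits.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])
mags = np.abs(np.fft.fft(x))  # magnitude-only Fourier measurements

# A global phase shift of the signal (here a sign flip, i.e. phase pi)
# leaves the magnitudes unchanged: the measurements cannot distinguish them.
assert np.allclose(np.abs(np.fft.fft(-x)), mags)
```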

Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition

no code implementations • NeurIPS 2020 • Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi

To establish the dimension-independent upper bound, we next show that a mini-batching algorithm provides an $ O(\frac{T}{\sqrt{K}}) $ upper bound, and therefore conclude that the minimax regret of switching-constrained OCO is $ \Theta(\frac{T}{\sqrt{K}}) $ for any $K$.
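The mini-batching scheme behind the upper bound can be sketched with simple arithmetic (an illustrative sketch of the batching structure, not the paper's full algorithm): with at most $K$ action switches over $T$ rounds, the learner plays in $K$ equal blocks, switching only between blocks, which yields the $O(T/\sqrt{K})$ scaling.

```python
import math

# Mini-batching: split T rounds into K blocks; the action is held fixed
# within a block, so at most K switches occur (assumes K divides T).
T, K = 10_000, 25
block = T // K
blocks = [range(i * block, (i + 1) * block) for i in range(K)]
assert sum(len(b) for b in blocks) == T  # every round is covered

# The resulting regret bound scales as T / sqrt(K).
bound = T / math.sqrt(K)
assert math.isclose(bound, 2000.0)
```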

