Search Results for author: Anna Golubeva

Found 7 papers, 4 papers with code

Dynamic Sparse Training with Structured Sparsity

1 code implementation • 3 May 2023 • Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, Yani Ioannou

Dynamic Sparse Training (DST) methods achieve state-of-the-art results in sparse neural network training, matching the generalization of dense models while enabling sparse training and inference.

Bounding generalization error with input compression: An empirical study with infinite-width networks

no code implementations • 19 Jul 2022 • Angus Galloway, Anna Golubeva, Mahmoud Salem, Mihai Nica, Yani Ioannou, Graham W. Taylor

Estimating the Generalization Error (GE) of Deep Neural Networks (DNNs) is an important task that often relies on the availability of held-out data.

Are wider nets better given the same number of parameters?

2 code implementations • ICLR 2021 • Anna Golubeva, Behnam Neyshabur, Guy Gur-Ari

Empirical studies demonstrate that the performance of neural networks improves with an increasing number of parameters.

Batch Normalization is a Cause of Adversarial Vulnerability

no code implementations • 6 May 2019 • Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, Graham W. Taylor

Batch normalization (batch norm) is often used in an attempt to stabilize and accelerate training in deep neural networks.

QuCumber: wavefunction reconstruction with neural networks

1 code implementation • 21 Dec 2018 • Matthew J. S. Beach, Isaac De Vlugt, Anna Golubeva, Patrick Huembeli, Bohdan Kulchytskyy, Xiuzhe Luo, Roger G. Melko, Ejaaz Merali, Giacomo Torlai

As we enter a new era of quantum technology, it is increasingly important to develop methods to aid in the accurate preparation of quantum states for a variety of materials, matter, and devices.

Quantum Physics • Strongly Correlated Electrons

Adversarial Examples as an Input-Fault Tolerance Problem

1 code implementation • 30 Nov 2018 • Angus Galloway, Anna Golubeva, Graham W. Taylor

We analyze the adversarial examples problem in terms of a model's fault tolerance with respect to its input.

A Rate-Distortion Theory of Adversarial Examples

no code implementations • 27 Sep 2018 • Angus Galloway, Anna Golubeva, Graham W. Taylor

The generalization ability of deep neural networks (DNNs) is intertwined with model complexity, robustness, and capacity.
