1 code implementation • 3 May 2023 • Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, Yani Ioannou
Dynamic Sparse Training (DST) methods achieve state-of-the-art results in sparse neural network training, matching the generalization of dense models while enabling sparse training and inference.
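For context, a minimal sketch of the prune-and-regrow step that generic DST methods iterate during training, assuming magnitude-based pruning and random regrowth (in the spirit of methods like SET; this is an illustration, not the specific method of the paper above):

```python
import numpy as np

def dst_update(weights, mask, drop_frac=0.3, rng=None):
    """One generic DST step: drop the smallest-magnitude active weights,
    then regrow the same number at randomly chosen inactive positions."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * active.size)

    # Prune: deactivate the n_drop active weights with smallest magnitude.
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n_drop]]
    mask.flat[drop] = 0
    weights.flat[drop] = 0.0

    # Regrow: activate n_drop inactive positions (initialized at zero).
    inactive = np.flatnonzero(mask == 0)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    mask.flat[grow] = 1
    return weights, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
m = (rng.random((4, 4)) < 0.5).astype(int)
w *= m                              # start from a sparse weight matrix
w, m = dst_update(w, m)             # connectivity changes, sparsity stays fixed
```

Other DST variants differ mainly in the regrowth criterion, e.g. growing connections by gradient magnitude rather than at random.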
no code implementations • 19 Jul 2022 • Angus Galloway, Anna Golubeva, Mahmoud Salem, Mihai Nica, Yani Ioannou, Graham W. Taylor
Estimating the Generalization Error (GE) of Deep Neural Networks (DNNs) is an important task that often relies on the availability of held-out data.
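For contrast, a minimal sketch of the conventional held-out GE estimate that this sentence alludes to (the paper's point is estimating GE without such data):

```python
import numpy as np

def ge_estimate(train_preds, train_labels, val_preds, val_labels):
    """Conventional held-out estimate of the generalization gap:
    error on unseen validation data minus error on training data."""
    train_err = np.mean(train_preds != train_labels)
    val_err = np.mean(val_preds != val_labels)
    return val_err - train_err

# e.g. 2% training error vs 9% validation error -> estimated GE of 0.07
```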
2 code implementations • ICLR 2021 • Anna Golubeva, Behnam Neyshabur, Guy Gur-Ari
Empirical studies demonstrate that the performance of neural networks improves with an increasing number of parameters.
no code implementations • 6 May 2019 • Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, Graham W. Taylor
Batch normalization (batch norm) is often used in an attempt to stabilize and accelerate training in deep neural networks.
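For reference, the standard training-mode batch-norm transform (Ioffe & Szegedy, 2015) that this line refers to, as a minimal NumPy sketch; gamma and beta are the learned per-feature scale and shift:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode batch norm: normalize each feature over the batch
    dimension, then apply a learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 16)                  # batch of 32, 16 features
y = batch_norm(x, np.ones(16), np.zeros(16))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 mean, ~1 std
```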
1 code implementation • 21 Dec 2018 • Matthew J. S. Beach, Isaac De Vlugt, Anna Golubeva, Patrick Huembeli, Bohdan Kulchytskyy, Xiuzhe Luo, Roger G. Melko, Ejaaz Merali, Giacomo Torlai
As we enter a new era of quantum technology, it is increasingly important to develop methods to aid in the accurate preparation of quantum states for a variety of materials, matter, and devices.
Quantum Physics • Strongly Correlated Electrons
1 code implementation • 30 Nov 2018 • Angus Galloway, Anna Golubeva, Graham W. Taylor
We analyze the adversarial examples problem in terms of a model's fault tolerance with respect to its input.
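As background, one standard way to construct adversarial examples is the Fast Gradient Sign Method (Goodfellow et al., 2015); the sketch below uses a toy logistic-regression model with an analytic input gradient, and is an illustration of the general attack, not the fault-tolerance analysis of the paper above:

```python
import numpy as np

def fgsm(x, grad_x, eps=0.1):
    """Fast Gradient Sign Method: perturb the input by eps in the
    direction that increases the loss fastest."""
    return x + eps * np.sign(grad_x)

# Toy logistic regression with an analytic input gradient:
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.5]), 1.0
p = 1 / (1 + np.exp(-(w @ x + b)))   # model confidence for class 1
grad_x = (p - y) * w                 # d(cross-entropy loss)/dx
x_adv = fgsm(x, grad_x)
p_adv = 1 / (1 + np.exp(-(w @ x_adv + b)))
print(p, p_adv)                      # confidence for the true class drops
```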
no code implementations • 27 Sep 2018 • Angus Galloway, Anna Golubeva, Graham W. Taylor
The generalization ability of deep neural networks (DNNs) is intertwined with model complexity, robustness, and capacity.