no code implementations • 11 Mar 2024 • Kedar Karhadkar, Erin George, Michael Murray, Guido Montúfar, Deanna Needell
The problem of benign overfitting asks whether it is possible for a model to perfectly fit noisy training data and still generalize well.
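The phenomenon can be illustrated with a toy overparameterized linear regression (this sketch is ours, not the paper's setup): the minimum-norm interpolating solution fits noisy labels exactly, i.e. achieves zero training error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2000                                   # heavily overparameterized: d >> n
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_true = np.zeros(d); w_true[:10] = 3.0           # a simple planted signal
y = X @ w_true + 0.5 * rng.standard_normal(n)     # noisy labels

# Minimum-norm interpolating solution: perfectly fits the noisy training data.
w_hat = np.linalg.pinv(X) @ y
train_mse = np.mean((X @ w_hat - y) ** 2)         # essentially zero
```

Whether such an interpolating solution also generalizes well is exactly the question of benign overfitting studied in the paper.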
no code implementations • 31 May 2023 • Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar
We study the loss landscape of both shallow and deep, mildly overparameterized ReLU neural networks on a generic finite input dataset for the squared error loss.
1 code implementation • 15 Nov 2022 • Michael Murray, Hui Jin, Benjamin Bowman, Guido Montufar
We provide expressions for the coefficients of this power series, which depend both on the Hermite coefficients of the activation function and on the depth of the network.
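The Hermite coefficients mentioned here can be estimated numerically; the sketch below (our own illustration, with conventions that may differ from the paper's normalization) computes the normalized probabilists' Hermite coefficients a_k = E[phi(Z) He_k(Z)] / sqrt(k!) for ReLU.

```python
import numpy as np

def hermite_coefficients(phi, n_max, n_grid=20001, lim=10.0):
    """Estimate normalized probabilists' Hermite coefficients
    a_k = E[phi(Z) He_k(Z)] / sqrt(k!),  Z ~ N(0, 1), by quadrature.
    (Illustrative numerics; normalization conventions vary across papers.)"""
    z = np.linspace(-lim, lim, n_grid)
    dz = z[1] - z[0]
    pdf = np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
    He_k, He_kp1 = np.ones_like(z), z.copy()        # He_0, He_1
    fact, coeffs = 1.0, []
    for k in range(n_max + 1):
        coeffs.append(np.sum(phi(z) * He_k * pdf) * dz / np.sqrt(fact))
        # Three-term recurrence: He_{k+1}(z) = z * He_k(z) - k * He_{k-1}(z)
        He_k, He_kp1 = He_kp1, z * He_kp1 - (k + 1) * He_k
        fact *= (k + 1)
    return np.array(coeffs)

relu = lambda z: np.maximum(z, 0.0)
a = hermite_coefficients(relu, 4)
# For ReLU: a_0 = 1/sqrt(2*pi), a_1 = 1/2, and the odd coefficients a_3, a_5, ... vanish.
```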
1 code implementation • 17 May 2021 • Michael Murray, Vinayak Abrol, Jared Tanner
The activation function deployed in a deep neural network has great influence on the performance of the network at initialisation, which in turn has implications for training.
no code implementations • 23 Oct 2020 • Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber
In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning LDP mechanisms.
no code implementations • 8 Sep 2020 • Jaroslav Adam, Christine Aidala, Aaron Angerami, Benjamin Audurier, Carlos Bertulani, Christian Bierlich, Boris Blok, James Daniel Brandenburg, Stanley Brodsky, Aleksandr Bylinkin, Veronica Canoa Roman, Francesco Giovanni Celiberto, Jan Cepila, Grigorios Chachamis, Brian Cole, Guillermo Contreras, David d'Enterria, Adrian Dumitru, Arturo Fernández Téllez, Leonid Frankfurt, Maria Beatriz Gay Ducati, Frank Geurts, Gustavo Gil da Silveira, Francesco Giuli, Victor P. Goncalves, Iwona Grabowska-Bold, Vadim Guzey, Lucian Harland-Lang, Martin Hentschinski, Timothy J. Hobbs, Jamal Jalilian-Marian, Valery A. Khoze, Yongsun Kim, Spencer R. Klein, Simon Knapen, Mariola Kłusek-Gawenda, Michal Krelina, Evgeny Kryshen, Tuomas Lappi, Constantin Loizides, Agnieszka Luszczak, Magno Machado, Heikki Mäntysaari, Daniel Martins, Ronan McNulty, Michael Murray, Jan Nemchik, Jacquelyn Noronha-Hostler, Joakim Nystrand, Alessandro Papa, Bernard Pire, Mateusz Ploskon, Marius Przybycien, John P. Ralston, Patricia Rebello Teles, Christophe Royon, Björn Schenke, William Schmidke, Janet Seger, Anna Stasto, Peter Steinberg, Mark Strikman, Antoni Szczurek, Lech Szymanowski, Daniel Tapia Takaki, Ralf Ulrich, Orlando Villalobos Baillie, Ramona Vogt, Samuel Wallon, Michael Winn, Keping Xie, Zhangbu Xu, Shuai Yang, Mikhail Zhalov, Jian Zhou
Ultra-peripheral collisions (UPCs) involving heavy ions and protons are the energy frontier for photon-mediated interactions.
High Energy Physics - Phenomenology • High Energy Physics - Experiment • Nuclear Experiment
no code implementations • 10 Apr 2020 • Michael Murray, Jared Tanner
In this paper we consider the problem of designing a decoder to recover a set of sparse codes from their linear measurements alone, that is, without access to the encoder matrix.
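For contrast with the decoder-only setting studied here, the classical sparse recovery problem, where the encoder matrix A *is* known, can be sketched with iterative soft-thresholding (ISTA). This baseline is our own illustration, not the paper's method.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """ISTA for  min_x  0.5 * ||A x - y||^2 + lam * ||x||_1.
    Note: this classical baseline requires access to the encoder matrix A,
    unlike the decoder-only setting considered in the paper."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)  # 50 measurements, 100-dim code
x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]  # 3-sparse code
y = A @ x_true
x_hat = ista(A, y)                                # recovers x_true approximately
```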
2 code implementations • 10 Jul 2019 • Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer
To train agents that search an environment for a goal location, we define the Navigation from Dialog History task.
no code implementations • 26 Jun 2018 • Michael Murray, Jared Tanner
Deep Convolutional Sparse Coding (D-CSC) is a framework reminiscent of deep convolutional neural networks (DCNNs); by omitting the learning of the dictionaries, one can more transparently analyse the role of the activation function and its ability to recover activation paths through the layers.
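The layered-thresholding view behind this analysis can be sketched as follows: at each layer, correlate with a fixed dictionary and apply ReLU thresholding. This is a minimal illustration under simplifying assumptions (dense orthonormal dictionary, zero bias); the function name is ours and the paper's exact convolutional model and guarantees differ in detail.

```python
import numpy as np

def dcsc_forward(y, dictionaries, biases):
    """One-shot layered decoding sketch: at each layer, correlate the signal
    with that layer's dictionary and apply a ReLU threshold to estimate the
    sparse code (illustrative only)."""
    a = y
    for D, b in zip(dictionaries, biases):
        a = np.maximum(D.T @ a - b, 0.0)          # thresholded code estimate
    return a

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # orthonormal dictionary
code = np.zeros(64); code[[5, 20]] = [1.0, 2.0]     # sparse nonnegative code
y = D @ code                                        # observed signal
a_hat = dcsc_forward(y, [D], [0.0])                 # exact recovery here, since D.T @ D = I
```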