Search Results for author: Michael Murray

Found 9 papers, 3 papers with code

Benign overfitting in leaky ReLU networks with moderate input dimension

no code implementations · 11 Mar 2024 · Kedar Karhadkar, Erin George, Michael Murray, Guido Montúfar, Deanna Needell

The problem of benign overfitting asks whether it is possible for a model to perfectly fit noisy training data and still generalize well.

Attribute · Binary Classification
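For reference, a minimal NumPy sketch of the leaky ReLU activation studied in this paper (the slope `alpha=0.1` is an illustrative choice here, not the paper's setting):

```python
import numpy as np

def leaky_relu(z, alpha=0.1):
    # leaky ReLU: identity on positive inputs, small slope alpha on negatives
    return np.where(z >= 0, z, alpha * z)

out = leaky_relu(np.array([2.0, -2.0]))
```

Unlike plain ReLU, the nonzero negative slope keeps every unit active, which is what makes networks of this kind tractable for the benign-overfitting analysis described above.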

Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape

no code implementations · 31 May 2023 · Kedar Karhadkar, Michael Murray, Hanna Tseran, Guido Montúfar

We study the loss landscape of both shallow and deep, mildly overparameterized ReLU neural networks on a generic finite input dataset for the squared error loss.
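The object under study can be made concrete with a short sketch: the squared-error loss of a one-hidden-layer ReLU network on a finite dataset (the function name, network width, and data here are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def shallow_relu_loss(W, v, X, y):
    # one-hidden-layer ReLU network f(x) = v^T relu(W x),
    # scored with the squared-error loss over a finite dataset (X, y)
    preds = np.maximum(X @ W.T, 0.0) @ v
    return 0.5 * np.mean((preds - y) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))   # 20 generic inputs in R^3
y = rng.standard_normal(20)
W = rng.standard_normal((8, 3))    # 8 hidden ReLU units (mild overparameterization)
v = rng.standard_normal(8)
loss = shallow_relu_loss(W, v, X, y)
```

The "loss landscape" is this scalar viewed as a function of the parameters `(W, v)`; the paper's question is how favorable that surface is when the width is only mildly larger than needed.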

Characterizing the Spectrum of the NTK via a Power Series Expansion

1 code implementation · 15 Nov 2022 · Michael Murray, Hui Jin, Benjamin Bowman, Guido Montufar

We provide expressions for the coefficients of this power series which depend on both the Hermite coefficients of the activation function as well as the depth of the network.
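The Hermite coefficients referred to here can be computed numerically. A sketch using Gauss-Hermite quadrature (the function name, normalization, and quadrature degree are our own choices, not taken from the paper):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_coefficients(sigma, n_coeffs=4, quad_deg=200):
    # normalized Hermite coefficients mu_k = E[sigma(Z) He_k(Z)] / sqrt(k!)
    # for Z ~ N(0, 1), via Gauss-Hermite quadrature (weight exp(-x^2/2))
    x, w = hermegauss(quad_deg)
    w = w / np.sqrt(2.0 * np.pi)  # renormalize to the standard Gaussian measure
    mus = []
    for k in range(n_coeffs):
        basis = np.zeros(k + 1)
        basis[k] = 1.0
        He_k = hermeval(x, basis)  # probabilists' Hermite polynomial He_k
        mus.append(np.sum(w * sigma(x) * He_k) / math.sqrt(math.factorial(k)))
    return np.array(mus)

relu = lambda z: np.maximum(z, 0.0)
mu = hermite_coefficients(relu)
```

For ReLU the first two coefficients have known closed forms, mu_0 = 1/sqrt(2*pi) and mu_1 = 1/2, which makes a handy sanity check for the quadrature.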

Activation function design for deep networks: linearity and effective initialisation

1 code implementation · 17 May 2021 · Michael Murray, Vinayak Abrol, Jared Tanner

The activation function deployed in a deep neural network has great influence on the performance of the network at initialisation, which in turn has implications for training.

Representation Learning for High-Dimensional Data Collection under Local Differential Privacy

no code implementations · 23 Oct 2020 · Alex Mansbridge, Gregory Barbour, Davide Piras, Michael Murray, Christopher Frye, Ilya Feige, David Barber

In this work, our contributions are two-fold: first, by adapting state-of-the-art techniques from representation learning, we introduce a novel approach to learning LDP mechanisms.

Denoising · Representation Learning · +1
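For context on what an LDP mechanism is: the classic (non-learned) baseline is the Laplace mechanism applied locally, sketched below. The paper's contribution is learning such mechanisms with representation learning; this fixed-noise version is only an illustrative reference point, and the function name is our own:

```python
import numpy as np

def local_laplace_mechanism(x, eps, sensitivity=1.0, rng=None):
    # textbook Laplace mechanism applied locally: each user perturbs their own
    # value before it leaves the device, achieving eps-local differential privacy
    if rng is None:
        rng = np.random.default_rng()
    return x + rng.laplace(loc=0.0, scale=sensitivity / eps, size=np.shape(x))

rng = np.random.default_rng(0)
true_values = np.ones(100_000)          # every user holds the value 1.0
reports = local_laplace_mechanism(true_values, eps=1.0, rng=rng)
estimate = reports.mean()               # aggregator's estimate of the population mean
```

Each individual report is heavily noised, but the aggregate estimate concentrates around the true mean; learned mechanisms aim to get a better utility-privacy trade-off than this fixed noise for high-dimensional data.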

New opportunities at the photon energy frontier

no code implementations · 8 Sep 2020 · Jaroslav Adam, Christine Aidala, Aaron Angerami, Benjamin Audurier, Carlos Bertulani, Christian Bierlich, Boris Blok, James Daniel Brandenburg, Stanley Brodsky, Aleksandr Bylinkin, Veronica Canoa Roman, Francesco Giovanni Celiberto, Jan Cepila, Grigorios Chachamis, Brian Cole, Guillermo Contreras, David d'Enterria, Adrian Dumitru, Arturo Fernández Téllez, Leonid Frankfurt, Maria Beatriz Gay Ducati, Frank Geurts, Gustavo Gil da Silveira, Francesco Giuli, Victor P. Goncalves, Iwona Grabowska-Bold, Vadim Guzey, Lucian Harland-Lang, Martin Hentschinski, Timothy J. Hobbs, Jamal Jalilian-Marian, Valery A. Khoze, Yongsun Kim, Spencer R. Klein, Simon Knapen, Mariola Kłusek-Gawenda, Michal Krelina, Evgeny Kryshen, Tuomas Lappi, Constantin Loizides, Agnieszka Luszczak, Magno Machado, Heikki Mäntysaari, Daniel Martins, Ronan McNulty, Michael Murray, Jan Nemchik, Jacquelyn Noronha-Hostler, Joakim Nystrand, Alessandro Papa, Bernard Pire, Mateusz Ploskon, Marius Przybycien, John P. Ralston, Patricia Rebello Teles, Christophe Royon, Björn Schenke, William Schmidke, Janet Seger, Anna Stasto, Peter Steinberg, Mark Strikman, Antoni Szczurek, Lech Szymanowski, Daniel Tapia Takaki, Ralf Ulrich, Orlando Villalobos Baillie, Ramona Vogt, Samuel Wallon, Michael Winn, Keping Xie, Zhangbu Xu, Shuai Yang, Mikhail Zhalov, Jian Zhou

Ultra-peripheral collisions (UPCs) involving heavy ions and protons are the energy frontier for photon-mediated interactions.

High Energy Physics - Phenomenology · High Energy Physics - Experiment · Nuclear Experiment

Encoder blind combinatorial compressed sensing

no code implementations · 10 Apr 2020 · Michael Murray, Jared Tanner

In this paper we consider the problem of designing a decoder to recover a set of sparse codes from their linear measurements alone, that is, without access to the encoder matrix.

Community Detection · Computational Efficiency · +1

Vision-and-Dialog Navigation

2 code implementations · 10 Jul 2019 · Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer

To train agents that search an environment for a goal location, we define the Navigation from Dialog History task.

Visual Navigation

Towards an understanding of CNNs: analysing the recovery of activation pathways via Deep Convolutional Sparse Coding

no code implementations · 26 Jun 2018 · Michael Murray, Jared Tanner

Deep Convolutional Sparse Coding (D-CSC) is a framework reminiscent of deep convolutional neural networks (DCNNs), but by omitting the learning of the dictionaries one can more transparently analyse the role of the activation function and its ability to recover activation paths through the layers.
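The sparse-coding view of a layer can be sketched in a few lines. Below is a deliberately simplified (non-convolutional, single-layer) version with a fixed, unlearned dictionary, where soft thresholding plays the role of the activation function; the names and dimensions are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def soft_threshold(x, lam):
    # proximal operator of the l1 norm; acts as the layer's "activation",
    # zeroing weak correlations and shrinking the rest toward zero
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_coding_layer(signal, dictionary, lam):
    # one simplified layer: correlate the signal with a fixed dictionary,
    # then threshold to keep only the active coefficients ("activation paths")
    return soft_threshold(dictionary.T @ signal, lam)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128)) / np.sqrt(64)   # fixed random dictionary
code = np.zeros(128)
code[[3, 40, 99]] = [1.0, -2.0, 1.5]               # sparse ground-truth code
estimate = sparse_coding_layer(D @ code, D, lam=0.5)
```

Stacking such layers gives the multi-layer picture: the analysis question is when the thresholding reliably recovers the sparse supports, i.e. the activation paths, through the depth of the network.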
