Search Results for author: Daniel Beaglehole

Found 6 papers, 2 papers with code

Average gradient outer product as a mechanism for deep neural collapse

no code implementations • 21 Feb 2024 • Daniel Beaglehole, Peter Súkeník, Marco Mondelli, Mikhail Belkin

In this work, we provide substantial evidence that DNC formation occurs primarily through deep feature learning with the average gradient outer product (AGOP).

Gradient descent induces alignment between weights and the empirical NTK for deep non-linear networks

no code implementations • 7 Feb 2024 • Daniel Beaglehole, Ioannis Mitliagkas, Atish Agarwala

Prior works have identified that the gram matrices of the weights in trained neural networks of general architectures are proportional to the average gradient outer product of the model, in a statement known as the Neural Feature Ansatz (NFA).
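The Neural Feature Ansatz referenced above can be made concrete with a small numeric sketch. This is not the paper's code; it only illustrates what an average gradient outer product (AGOP) is: the average, over inputs, of the outer product of the model's input gradient with itself. The toy network, its weights, and the finite-difference gradient are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # first-layer weights of a toy network (hypothetical)
v = rng.standard_normal(4)        # output weights (hypothetical)

def f(x):
    # Toy scalar-output network: f(x) = v . relu(W x)
    return v @ np.maximum(W @ x, 0.0)

def grad_f(x, eps=1e-6):
    # Input gradient via central finite differences (illustrative, not efficient)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# AGOP = (1/n) * sum_i grad f(x_i) grad f(x_i)^T over a sample of inputs
X = rng.standard_normal((200, 3))
agop = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / len(X)
# agop is a symmetric PSD matrix summarizing which input directions the model uses;
# the NFA claims that, for trained networks, W^T W is (approximately) proportional
# to the AGOP of the corresponding sub-network.
```

The AGOP is always symmetric positive semidefinite by construction, which is why it can play the role of a learned "feature covariance" in kernel methods built on it.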

Mechanism of feature learning in convolutional neural networks

1 code implementation • 1 Sep 2023 • Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin

We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines.

On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions

no code implementations • 26 May 2022 • Daniel Beaglehole, Mikhail Belkin, Parthe Pandit

"Benign overfitting", the ability of certain algorithms to interpolate noisy training data and yet perform well out-of-sample, has been a topic of considerable recent interest.


Learning to Hash Robustly, Guaranteed

no code implementations • 11 Aug 2021 • Alexandr Andoni, Daniel Beaglehole

In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms, while optimizing the hashing to the structure of the dataset (think instance-optimal algorithms) for performance on the minimum-performing query.
