Search Results for author: Daniil Dmitriev

Found 4 papers, 1 paper with code

On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective

no code implementations · 26 Feb 2024 · Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal

In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms.

Asymptotics of Learning with Deep Structured (Random) Features

no code implementations · 21 Feb 2024 · Dominik Schröder, Daniil Dmitriev, Hugo Cui, Bruno Loureiro

For a large class of feature maps we provide a tight asymptotic characterisation of the test error associated with learning the readout layer, in the high-dimensional limit where the input dimension, hidden layer widths, and number of training samples are proportionally large.
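A minimal sketch of the setting this abstract describes, under illustrative assumptions (dimensions, tanh activation, and ridge readout are not taken from the paper): inputs pass through fixed random layers and only the final linear readout is learned, with all sizes kept proportionally large.

```python
# Sketch of deep random features with a learned readout (assumed setup, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
d, widths, n_train, n_test = 200, [300, 300], 400, 2000  # proportional sizes (assumed)

def deep_random_features(X, weights):
    """Apply fixed random layers with a tanh nonlinearity (assumed activation)."""
    H = X
    for W in weights:
        H = np.tanh(H @ W / np.sqrt(W.shape[0]))
    return H

# Fixed random weights: these layers are never trained.
dims = [d] + widths
weights = [rng.standard_normal((dims[i], dims[i + 1])) for i in range(len(dims) - 1)]

# Illustrative linear teacher generating the labels.
theta = rng.standard_normal(d) / np.sqrt(d)
X_train, X_test = rng.standard_normal((n_train, d)), rng.standard_normal((n_test, d))
y_train, y_test = X_train @ theta, X_test @ theta

# Learn only the readout layer, here by ridge regression on the last-layer features.
lam = 1e-2
F_train = deep_random_features(X_train, weights)
F_test = deep_random_features(X_test, weights)
a = np.linalg.solve(F_train.T @ F_train + lam * np.eye(F_train.shape[1]), F_train.T @ y_train)

test_error = np.mean((F_test @ a - y_test) ** 2)
print(f"test error of the learned readout: {test_error:.4f}")
```

The paper characterises this test error analytically in the limit where d, the hidden widths, and n_train all grow proportionally; the snippet above only sets up the finite-size experiment one would compare against.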

Deterministic equivalent and error universality of deep random features learning

1 code implementation · 1 Feb 2023 · Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro

Establishing this result requires proving a deterministic equivalent for traces of the deep random features sample covariance matrices which can be of independent interest.
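For concreteness, a small sketch of the empirical quantity the abstract refers to: the normalized trace of the resolvent of the sample covariance of deep random features. The code below (illustrative assumptions throughout; the paper's deterministic equivalent itself is not reproduced) shows that this trace barely changes across independent random draws, which is what a deterministic equivalent captures.

```python
# Sketch: normalized resolvent trace of the deep random features sample covariance.
import numpy as np

def resolvent_trace(n=400, d=200, widths=(300, 300), z=-1.0, seed=0):
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((n, d))
    for w_out in widths:                      # fixed random layers (assumed tanh activation)
        W = rng.standard_normal((H.shape[1], w_out))
        H = np.tanh(H @ W / np.sqrt(H.shape[1]))
    cov = H.T @ H / n                          # sample covariance of the deep features
    p = cov.shape[0]
    return np.trace(np.linalg.inv(cov - z * np.eye(p))) / p

# Nearly identical values across independent draws suggest a deterministic limit.
print([round(resolvent_trace(seed=s), 4) for s in range(3)])
```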
