no code implementations • 28 Mar 2023 • Stefanos Eleftheriadis, Dominic Richards, James Hensman
Further, we introduce sparseness in the eigenbasis by variational learning of the spherical harmonic phases.
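As a rough illustration of sparseness in a spherical-harmonic eigenbasis (a generic sketch, not the paper's phase-based construction; the truncation degree, the standard-normal prior, and all variable names below are assumptions), one can place a mean-field variational posterior over the basis coefficients and let the KL term shrink harmonics the data do not support:

```python
# Hypothetical sketch: sparsifying a truncated spherical-harmonic expansion
# with a mean-field variational posterior; coefficients whose learned scale
# collapses are effectively pruned from the eigenbasis.
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(0)
L = 6                                    # truncation degree (assumed)
theta = rng.uniform(0, np.pi, 200)       # polar angles of the inputs
phi = rng.uniform(0, 2 * np.pi, 200)     # azimuthal angles of the inputs

# Design matrix of (real parts of) spherical harmonics up to degree L.
Phi = np.column_stack([
    sph_harm(m, l, phi, theta).real      # scipy argument order: (m, l, azimuth, polar)
    for l in range(L + 1) for m in range(-l, l + 1)
])

# Mean-field posterior q(w) = N(mu, diag(exp(2 * log_s))); a standard-normal
# prior makes the KL term penalise coefficients the data do not need.
mu = np.zeros(Phi.shape[1])
log_s = np.full(Phi.shape[1], -2.0)

def neg_elbo(y, noise_var=0.1):
    """Negative ELBO (up to an additive constant) for Gaussian-noise regression."""
    s2 = np.exp(2.0 * log_s)
    exp_sq_err = np.sum((y - Phi @ mu) ** 2 + (Phi ** 2) @ s2)
    exp_loglik = -0.5 * exp_sq_err / noise_var
    kl = 0.5 * np.sum(mu ** 2 + s2 - 1.0 - np.log(s2))
    return -(exp_loglik - kl)
```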
no code implementations • 27 May 2022 • Benedict Oakes, Dominic Richards, Jordi Barr, Jason F. Ralph
In this paper, we demonstrate the use of reinforcement learning to develop a sensor management policy for space situational awareness (SSA).
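As a toy illustration of the general recipe (not the paper's environment, sensors, or algorithm; the dynamics, reward, and hyperparameters below are invented), even tabular Q-learning can learn which object to observe so as to keep uncertainty down:

```python
# Toy sensor-tasking problem: the agent picks one of n_objects to observe each
# step and is rewarded for the uncertainty it clears; Q-learning finds a policy.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_steps, episodes = 3, 20, 500
alpha, gamma, eps = 0.1, 0.95, 0.1
growth = np.array([0.05, 0.20, 0.50])      # per-step uncertainty growth (invented)

# State: index of the currently most-uncertain object; action: object to observe.
Q = np.zeros((n_objects, n_objects))

for _ in range(episodes):
    sigma = np.ones(n_objects)             # per-object uncertainty
    for _ in range(n_steps):
        s = int(np.argmax(sigma))
        a = rng.integers(n_objects) if rng.random() < eps else int(np.argmax(Q[s]))
        reward = sigma[a]                  # reward: uncertainty cleared by the observation
        sigma[a] = 0.0                     # observing an object resets its uncertainty
        sigma += growth                    # uncertainty grows between looks
        s_next = int(np.argmax(sigma))
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print("Greedy action per state:", Q.argmax(axis=1))
```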
1 code implementation • 26 Aug 2021 • Dominic Richards, Edgar Dobriban, Patrick Rebeschini
Methods for learning from data depend on various types of tuning parameters, such as penalization strength or step size.
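A minimal, generic example of tuning such a parameter — here a ridge penalization strength chosen by K-fold cross-validation; the data, grid, and fold count are placeholders unrelated to the paper's analysis:

```python
# Generic sketch: select a penalization strength for ridge regression by
# K-fold cross-validation over a logarithmic grid.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

def ridge(X_tr, y_tr, lam):
    """Closed-form ridge solution (X^T X + lam I)^{-1} X^T y."""
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)

def cv_error(lam, k=5):
    folds = np.array_split(rng.permutation(n), k)
    errs = []
    for idx in folds:
        mask = np.ones(n, dtype=bool)
        mask[idx] = False
        beta = ridge(X[mask], y[mask], lam)
        errs.append(np.mean((y[idx] - X[idx] @ beta) ** 2))
    return np.mean(errs)

lams = np.logspace(-3, 3, 13)
best = lams[np.argmin([cv_error(l) for l in lams])]
print("selected penalization strength:", best)
```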
no code implementations • NeurIPS 2021 • Dominic Richards, Ilja Kuzborskij
We revisit the on-average algorithmic stability of gradient descent (GD) for training overparameterised shallow neural networks and prove new generalisation and excess risk bounds without the Neural Tangent Kernel (NTK) or Polyak-Łojasiewicz (PL) assumptions.
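For context, one standard way to formalise on-average (replace-one) stability is sketched below; the notation is assumed here rather than taken from the paper:

```latex
% S = (z_1, ..., z_n) is the training set, A(S) the algorithm's output (here,
% GD on a shallow network), and S^{(i)} the set with z_i replaced by a fresh z'.
% A is on-average epsilon-stable if, in expectation over S, z', and a uniform i,
\[
  \mathbb{E}_{S,\,z',\,i}\!\left[
    \ell\big(A(S^{(i)}),\, z_i\big) \;-\; \ell\big(A(S),\, z_i\big)
  \right] \;\le\; \varepsilon .
\]
```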
no code implementations • 13 Jan 2021 • Dominic Richards, Mike Rabbat
Out-of-sample guarantees are then achieved by decomposing the test error into generalisation, optimisation, and approximation errors, each of which can be bounded and traded off with respect to the algorithmic parameters, the sample size, and the magnitude of this eigenvalue.
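Schematically, the decomposition referred to here is the standard telescoping of the excess test error; the exact terms and their bounds are in the paper, and the reference point below is an assumption of this sketch:

```latex
% L: population risk, \widehat{L}: empirical risk, \hat{w}_T: iterate after T
% gradient-descent steps, \tilde{w}: a reference point (e.g. a near-minimiser).
% The identity holds for any choice of \tilde{w}.
\[
  L(\hat w_T) - \inf_w L(w)
  \;=\;
  \underbrace{L(\hat w_T) - \widehat L(\hat w_T)}_{\text{generalisation}}
  \;+\;
  \underbrace{\widehat L(\hat w_T) - \widehat L(\tilde w)}_{\text{optimisation}}
  \;+\;
  \underbrace{\widehat L(\tilde w) - \inf_w L(w)}_{\text{approximation}} .
\]
```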
1 code implementation • ICML 2020 • Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco
Under standard source and capacity assumptions, we establish high-probability bounds on the predictive performance of each agent as a function of the step size, the number of iterations, the inverse spectral gap of the communication matrix, and the number of random features.
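A hedged sketch of the general recipe (shared random Fourier features plus distributed gradient descent with gossip averaging); the toy ring topology, data, and hyperparameters are assumptions, not the paper's experiment:

```python
# Each agent holds a local data set, maps it through shared random Fourier
# features, and alternates gossip averaging with a local gradient step.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_local, d, n_feat = 4, 50, 5, 100
eta, n_iters = 0.1, 200

# Local data sets (X_i, y_i) drawn from a common linear model.
X = [rng.standard_normal((n_local, d)) for _ in range(n_agents)]
w_true = rng.standard_normal(d)
y = [x @ w_true + 0.1 * rng.standard_normal(n_local) for x in X]

# Shared random Fourier features approximating a Gaussian kernel.
W = rng.standard_normal((d, n_feat))
b = rng.uniform(0, 2 * np.pi, n_feat)
feat = lambda x: np.sqrt(2.0 / n_feat) * np.cos(x @ W + b)
Phi = [feat(x) for x in X]

# Doubly stochastic gossip matrix for a ring of agents.
P = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    P[i, i] = 0.5
    P[i, (i - 1) % n_agents] = P[i, (i + 1) % n_agents] = 0.25

theta = np.zeros((n_agents, n_feat))
for _ in range(n_iters):
    grads = np.stack([
        Phi[i].T @ (Phi[i] @ theta[i] - y[i]) / n_local for i in range(n_agents)
    ])
    theta = P @ theta - eta * grads        # gossip average, then local gradient step
```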
no code implementations • 11 Jun 2020 • Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco
We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.
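In the notation assumed here (the paper's normalisations and assumptions may differ), the object of study is the ridge estimator and its prediction risk in the proportional regime:

```latex
% X in R^{n x d}: design matrix, y: responses, w^*: true parameter, x: a fresh
% test point; gamma is the limiting aspect ratio of dimension to sample size.
\[
  \hat w_\lambda = \big(X^\top X + n\lambda I\big)^{-1} X^\top y,
  \qquad
  R(\lambda) = \mathbb{E}\big[(x^\top \hat w_\lambda - x^\top w^\ast)^2\big],
  \qquad
  n, d \to \infty,\;\; d/n \to \gamma \in (0, \infty).
\]
```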
no code implementations • NeurIPS 2021 • Dominic Richards, Sahand N. Negahban, Patrick Rebeschini
Motivated by distributed machine learning settings such as Federated Learning, we consider the problem of fitting a statistical model across a distributed collection of heterogeneous data sets whose similarity structure is encoded by a graph topology.
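One common way to encode such a similarity graph, given here only as a hedged illustration (the paper's exact formulation may differ), is to fit one parameter vector per data set and couple neighbouring models through a penalty on the edges of the graph:

```latex
% m data sets, local empirical risks \widehat{L}_i, edge set E of the similarity
% graph; lambda controls how strongly neighbouring models are pulled together.
\[
  \min_{w_1, \dots, w_m} \;\; \sum_{i=1}^{m} \widehat L_i(w_i)
  \;+\; \lambda \sum_{(i,j) \in E} \big\| w_i - w_j \big\|_2^2 .
\]
```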
no code implementations • NeurIPS 2019 • Dominic Richards, Patrick Rebeschini
We show that if agents hold sufficiently many samples with respect to the network size, then Distributed Gradient Descent achieves optimal statistical rates with a number of iterations that scales, up to a threshold, with the inverse of the spectral gap of the gossip matrix divided by the number of samples owned by each agent raised to a problem-dependent power.
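For reference, the standard Distributed Gradient Descent update over a gossip matrix P (notation assumed) combines a gossip averaging step with a local gradient step; the spectral gap of P is what governs how quickly information mixes across the network:

```latex
% w_i^{(t)}: parameters held by agent i at iteration t; \widehat{L}_i: its local
% empirical risk; eta: step size; 1 - lambda_2(P): spectral gap of the gossip matrix.
\[
  w_i^{(t+1)} \;=\; \sum_{j=1}^{m} P_{ij}\, w_j^{(t)} \;-\; \eta\, \nabla \widehat L_i\big(w_i^{(t)}\big).
\]
```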
no code implementations • 18 Sep 2018 • Dominic Richards, Patrick Rebeschini
We propose graph-dependent implicit regularisation strategies for distributed stochastic subgradient descent (Distributed SGD) for convex problems in multi-agent learning.
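The update is the stochastic-subgradient analogue of the one above; on one natural reading, the "implicit regularisation strategies" are graph-dependent choices of the step size and stopping time rather than an explicit penalty (the precise choices are given in the paper):

```latex
% g_i^{(t)}: a stochastic subgradient of agent i's local objective at w_i^{(t)};
% the step sizes eta_t and the stopping time T act as the regularisation knobs.
\[
  w_i^{(t+1)} \;=\; \sum_{j=1}^{m} P_{ij}\, w_j^{(t)} \;-\; \eta_t\, g_i^{(t)},
  \qquad
  g_i^{(t)} \in \partial \widehat L_i\big(w_i^{(t)}\big).
\]
```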