Search Results for author: Dominic Richards

Found 10 papers, 2 papers with code

Sparse Gaussian Processes with Spherical Harmonic Features Revisited

no code implementations • 28 Mar 2023 • Stefanos Eleftheriadis, Dominic Richards, James Hensman

Further, we introduce sparseness in the eigenbasis by variational learning of the spherical harmonic phases.

Gaussian Processes
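To make the feature-space view concrete, here is a minimal sketch, assuming scipy's sph_harm and a hand-picked decaying prior spectrum, of GP regression with a truncated spherical-harmonic feature expansion. It illustrates the general degenerate-kernel idea only; it is not the paper's construction and does no variational learning of the phases.

```python
# Minimal sketch (not the paper's code): GP regression on the sphere with a
# truncated spherical-harmonic feature expansion. The prior variances play the
# role of the kernel's eigenvalues; a sparser model keeps fewer harmonics.
import numpy as np
from scipy.special import sph_harm

def harmonic_features(theta, phi, max_degree):
    """Stack (real parts of) spherical harmonics Y_l^m up to degree max_degree."""
    feats = []
    for l in range(max_degree + 1):
        for m in range(-l, l + 1):
            feats.append(np.real(sph_harm(m, l, theta, phi)))
    return np.stack(feats, axis=1)                       # shape (n, num_features)

rng = np.random.default_rng(0)
n, max_degree, noise = 200, 6, 0.1
theta = rng.uniform(0, 2 * np.pi, n)                     # azimuth
phi = np.arccos(rng.uniform(-1, 1, n))                   # polar angle
y = np.sin(3 * theta) * np.sin(phi) + noise * rng.standard_normal(n)

Phi = harmonic_features(theta, phi, max_degree)
prior_var = np.concatenate([np.full(2 * l + 1, 1.0 / (1 + l) ** 2)
                            for l in range(max_degree + 1)])   # decaying spectrum (arbitrary choice)

# Bayesian linear regression in feature space == GP with a degenerate kernel.
A = Phi.T @ Phi / noise**2 + np.diag(1.0 / prior_var)
mean_weights = np.linalg.solve(A, Phi.T @ y / noise**2)
print("train RMSE:", np.sqrt(np.mean((Phi @ mean_weights - y) ** 2)))
```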

Comparing Classes of Estimators: When does Gradient Descent Beat Ridge Regression in Linear Models?

1 code implementation • 26 Aug 2021 • Dominic Richards, Edgar Dobriban, Patrick Rebeschini

Methods for learning from data depend on various types of tuning parameters, such as penalization strength or step size.

regression • Unity
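As a rough illustration of how the two tuning parameters line up, the sketch below (not the paper's released implementation) compares gradient descent stopped after t steps with ridge regression at a heuristically matched penalty on synthetic linear-model data; the matching rule lambda ≈ 1/(eta·t) is an assumption for illustration.

```python
# Illustrative sketch: gradient descent on least squares, stopped after t steps,
# versus explicit ridge regression with a penalty heuristically matched to t.
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 300, 50, 0.5
X = rng.standard_normal((n, d))
beta_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta_star + sigma * rng.standard_normal(n)

eta = 1.0 / np.linalg.norm(X, 2) ** 2                # step size 1/L for the summed squared loss
beta_gd = np.zeros(d)
for t in range(1, 201):
    beta_gd -= eta * X.T @ (X @ beta_gd - y)         # GD on 0.5 * ||y - X b||^2
    if t in (10, 50, 200):
        lam = 1.0 / (eta * t)                        # heuristic ridge penalty matched to t
        beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
        gd_risk = np.sum((beta_gd - beta_star) ** 2)
        ridge_risk = np.sum((beta_ridge - beta_star) ** 2)
        print(f"t={t:4d}  GD risk={gd_risk:.4f}  matched-ridge risk={ridge_risk:.4f}")
```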

Stability & Generalisation of Gradient Descent for Shallow Neural Networks without the Neural Tangent Kernel

no code implementations • NeurIPS 2021 • Dominic Richards, Ilja Kuzborskij

We revisit on-average algorithmic stability of GD for training overparameterised shallow neural networks and prove new generalisation and excess risk bounds without the Neural Tangent Kernel (NTK) or Polyak-Łojasiewicz (PL) assumptions.
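One way to see on-average stability empirically: train the same shallow network with GD on two training sets that differ in a single example and measure how far the two predictors drift apart. The sketch below does that on toy data; the architecture, constants and ReLU activation are arbitrary illustrative choices, not the paper's setting.

```python
# Rough empirical stability proxy (not from the paper): full-batch GD on a
# one-hidden-layer ReLU network, trained on two datasets differing in one point.
import numpy as np

rng = np.random.default_rng(2)
n, d, width, steps, lr = 100, 5, 64, 300, 0.05

def train(X, y, seed=0):
    r = np.random.default_rng(seed)                  # same init for both runs
    W = r.standard_normal((d, width)) / np.sqrt(d)   # hidden-layer weights
    a = r.standard_normal(width) / np.sqrt(width)    # output weights
    for _ in range(steps):
        H = np.maximum(X @ W, 0.0)                   # ReLU features
        pred = H @ a
        grad_a = H.T @ (pred - y) / len(y)
        grad_W = X.T @ (((pred - y)[:, None] * a) * (H > 0)) / len(y)
        a -= lr * grad_a
        W -= lr * grad_W
    return W, a

X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

X2, y2 = X.copy(), y.copy()
X2[0], y2[0] = rng.standard_normal(d), 0.0           # replace a single training example

(W1, a1), (W2, a2) = train(X, y), train(X2, y2)
X_test = rng.standard_normal((500, d))
gap = np.mean(np.abs(np.maximum(X_test @ W1, 0) @ a1 -
                     np.maximum(X_test @ W2, 0) @ a2))
print("average prediction gap (stability proxy):", gap)
```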

Learning with Gradient Descent and Weakly Convex Losses

no code implementations • 13 Jan 2021 • Dominic Richards, Mike Rabbat

Out-of-sample guarantees are then achieved by decomposing the test error into generalisation, optimisation and approximation errors, each of which can be bounded and traded off with respect to algorithmic parameters, sample size and magnitude of this eigenvalue.
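The decomposition itself can be checked numerically. The sketch below splits the test error of a GD-trained linear predictor into a generalisation gap, an optimisation error and a residual term for the best in-sample fit; squared loss is used purely for simplicity (the paper works with weakly convex losses), so this is a schematic illustration rather than the paper's analysis.

```python
# Schematic sketch: telescoping the test risk of a GD-trained linear model into
# generalisation, optimisation and residual (approximation + noise) terms.
import numpy as np

rng = np.random.default_rng(3)
n, d, steps, lr = 200, 20, 100, 0.1
X, X_test = rng.standard_normal((n, d)), rng.standard_normal((5000, d))
target = lambda Z: np.tanh(2 * Z[:, 0])                  # target outside the linear class
y, y_test = target(X) + 0.1 * rng.standard_normal(n), target(X_test)

mse = lambda Z, t, b: np.mean((Z @ b - t) ** 2)

beta = np.zeros(d)
for _ in range(steps):
    beta -= lr * X.T @ (X @ beta - y) / n                # gradient descent on the training risk

beta_erm = np.linalg.lstsq(X, y, rcond=None)[0]          # exact empirical risk minimiser

generalisation = mse(X_test, y_test, beta) - mse(X, y, beta)      # test risk minus train risk
optimisation   = mse(X, y, beta) - mse(X, y, beta_erm)            # gap to the best in-sample fit
residual       = mse(X, y, beta_erm)                              # approximation error plus noise
print("test risk:", mse(X_test, y_test, beta))
print("sum of terms:", generalisation + optimisation + residual)  # telescoping identity
```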

Decentralised Learning with Random Features and Distributed Gradient Descent

1 code implementation • ICML 2020 • Dominic Richards, Patrick Rebeschini, Lorenzo Rosasco

Under standard source and capacity assumptions, we establish high probability bounds on the predictive performance for each agent as a function of the step size, number of iterations, inverse spectral gap of the communication matrix and number of Random Features.
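A simplified version of the setting, assuming a cycle network, random Fourier features and a hand-built doubly stochastic gossip matrix (none of which are taken from the authors' released code), looks like this:

```python
# Simplified sketch: agents on a cycle run distributed gradient descent on a
# shared random-feature regression model, gossip-averaging parameters each step.
import numpy as np

rng = np.random.default_rng(4)
agents, n_local, d, n_features, steps, lr = 10, 50, 5, 100, 200, 0.1

# Random Fourier features approximating a Gaussian kernel.
Omega = rng.standard_normal((d, n_features))
b = rng.uniform(0, 2 * np.pi, n_features)
phi = lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

# Local datasets drawn from a common regression model.
w_true = rng.standard_normal(d)
data = []
for _ in range(agents):
    X = rng.standard_normal((n_local, d))
    y = np.sin(X @ w_true) + 0.1 * rng.standard_normal(n_local)
    data.append((phi(X), y))

# Doubly stochastic gossip matrix for a cycle: average with both neighbours.
W = np.zeros((agents, agents))
for i in range(agents):
    W[i, i] = 0.5
    W[i, (i - 1) % agents] = W[i, (i + 1) % agents] = 0.25

Theta = np.zeros((agents, n_features))               # one parameter vector per agent
for _ in range(steps):
    grads = np.stack([P.T @ (P @ th - y) / n_local
                      for (P, y), th in zip(data, Theta)])
    Theta = W @ Theta - lr * grads                   # gossip averaging + local gradient step

spectral_gap = 1 - np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
print("gossip spectral gap:", spectral_gap)
print("disagreement across agents:", np.linalg.norm(Theta - Theta.mean(0)))
```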

Asymptotics of Ridge(less) Regression under General Source Condition

no code implementations • 11 Jun 2020 • Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco

We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.

regression
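A quick simulation, not taken from the paper, shows the ridge risk stabilising in this proportional regime when the aspect ratio d/n is held fixed while n grows; the isotropic design and source here are simplifying assumptions.

```python
# Illustrative simulation: ridge regression risk when n and d grow together
# with a fixed aspect ratio gamma = d/n.
import numpy as np

rng = np.random.default_rng(5)
gamma, sigma, lam = 0.5, 1.0, 0.1

def ridge_risk(n, reps=20):
    d = int(gamma * n)
    risks = []
    for _ in range(reps):
        beta = rng.standard_normal(d) / np.sqrt(d)   # isotropic "source" (simplification)
        X = rng.standard_normal((n, d))
        y = X @ beta + sigma * rng.standard_normal(n)
        beta_hat = np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)
        risks.append(np.sum((beta_hat - beta) ** 2))  # equals excess risk for isotropic test points
    return np.mean(risks)

for n in (100, 200, 400, 800):
    print(f"n={n:4d}  d={int(gamma * n):3d}  estimated risk={ridge_risk(n):.4f}")
```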

Distributed Machine Learning with Sparse Heterogeneous Data

no code implementations • NeurIPS 2021 • Dominic Richards, Sahand N. Negahban, Patrick Rebeschini

Motivated by distributed machine learning settings such as Federated Learning, we consider the problem of fitting a statistical model across a distributed collection of heterogeneous data sets whose similarity structure is encoded by a graph topology.

BIG-bench Machine Learning • Denoising • +2
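A generic way to encode such a similarity graph, shown below as a hedged sketch rather than the paper's estimator, is to fit one model per node while coupling neighbouring models through a graph-Laplacian penalty.

```python
# Hedged sketch: per-node least squares with a graph-Laplacian coupling term
# that pulls together models of nodes joined by an edge of the similarity graph.
import numpy as np

rng = np.random.default_rng(6)
nodes, n_local, d, mu, lr, steps = 6, 30, 10, 5.0, 0.01, 500

edges = [(i, i + 1) for i in range(nodes - 1)]       # chain graph as an example topology
L = np.zeros((nodes, nodes))                         # graph Laplacian
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1; L[i, j] -= 1; L[j, i] -= 1

# Heterogeneous data: each node's true parameter is a noisy copy of a shared one.
beta_shared = rng.standard_normal(d)
data = []
for _ in range(nodes):
    beta_i = beta_shared + 0.1 * rng.standard_normal(d)
    X = rng.standard_normal((n_local, d))
    data.append((X, X @ beta_i + 0.5 * rng.standard_normal(n_local), beta_i))

B = np.zeros((nodes, d))                             # stacked per-node estimates
for _ in range(steps):
    grad_fit = np.stack([X.T @ (X @ b - y) / n_local
                         for (X, y, _), b in zip(data, B)])
    B -= lr * (grad_fit + mu * L @ B)                # least-squares part + Laplacian coupling

err = np.mean([np.sum((b - beta_i) ** 2) for (_, _, beta_i), b in zip(data, B)])
print("average estimation error across nodes:", err)
```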

Optimal Statistical Rates for Decentralised Non-Parametric Regression with Linear Speed-Up

no code implementations • NeurIPS 2019 • Dominic Richards, Patrick Rebeschini

We show that if agents hold sufficiently many samples with respect to the network size, then Distributed Gradient Descent achieves optimal statistical rates with a number of iterations that scales, up to a threshold, with the inverse of the spectral gap of the gossip matrix divided by the number of samples owned by each agent raised to a problem-dependent power.

regression
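The network quantity this bound scales with, the inverse spectral gap of the gossip matrix, is easy to compute. The helper below is an illustration rather than code from the paper, evaluating it for cycle graphs of increasing size.

```python
# Illustrative helper: inverse spectral gap of a doubly stochastic gossip
# matrix, computed for cycle graphs of increasing size.
import numpy as np

def cycle_gossip(n_agents):
    """Symmetric gossip matrix: each agent averages with its two neighbours."""
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 0.25
    return W

def inverse_spectral_gap(W):
    eigs = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return 1.0 / (1.0 - eigs[1])      # 1 / (1 - second largest eigenvalue magnitude)

for n_agents in (4, 8, 16, 32, 64):
    gap_inv = inverse_spectral_gap(cycle_gossip(n_agents))
    print(f"{n_agents:3d} agents  inverse spectral gap = {gap_inv:8.1f}")
```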

Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent

no code implementations • 18 Sep 2018 • Dominic Richards, Patrick Rebeschini

We propose graph-dependent implicit regularisation strategies for distributed stochastic subgradient descent (Distributed SGD) for convex problems in multi-agent learning.
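A toy version of the idea, with the cycle topology, absolute loss and constants chosen arbitrarily rather than taken from the paper, runs distributed stochastic subgradient descent and treats the number of iterations as the regularisation parameter:

```python
# Toy sketch: distributed stochastic subgradient descent on the absolute loss
# over a cycle of agents, with early stopping acting as implicit regularisation.
import numpy as np

rng = np.random.default_rng(7)
agents, n_local, d, lr = 8, 40, 5, 0.05

W = np.zeros((agents, agents))                       # doubly stochastic gossip matrix (cycle)
for i in range(agents):
    W[i, i] = 0.5
    W[i, (i - 1) % agents] = W[i, (i + 1) % agents] = 0.25

w_true = rng.standard_normal(d)
data = []
for _ in range(agents):
    X = rng.standard_normal((n_local, d))
    data.append((X, X @ w_true + rng.standard_normal(n_local)))

Theta = np.zeros((agents, d))
for t in range(1, 1001):
    subgrads = []
    for (X, y), th in zip(data, Theta):
        k = rng.integers(n_local)                    # one local sample per agent per step
        subgrads.append(np.sign(X[k] @ th - y[k]) * X[k])
    Theta = W @ Theta - lr * np.stack(subgrads)      # gossip averaging + local subgradient step
    if t in (10, 100, 1000):
        avg = Theta.mean(axis=0)
        print(f"t={t:5d}  ||averaged iterate|| = {np.linalg.norm(avg):.3f}  "
              f"distance to w_true = {np.linalg.norm(avg - w_true):.3f}")
```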
