no code implementations • 9 Nov 2023 • Kevin Miller, Ryan Murray
This work introduces Dirichlet Active Learning (DiAL), a Bayesian-inspired approach to the design of active learning algorithms.
no code implementations • 31 Oct 2023 • Lorenzo Luzi, Helen Jenne, Ryan Murray, Carlos Ortiz Marrero
The rapid advancement of Generative Adversarial Networks (GANs) necessitates robust evaluation of these models.
no code implementations • 6 Sep 2022 • Nicolás García Trillos, Ryan Murray, Matthew Thorpe
In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularisation.
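A discrete analogue of this variational problem makes the structure concrete: minimising a quadratic data fidelity term plus a graph-Laplacian regulariser reduces to a linear solve. The path graph, noise model, and parameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Discrete smoothing-spline analogue: minimise |u - f|^2 + lam * u^T L u,
# where L is the graph Laplacian of a path graph. The minimiser satisfies
# the linear system (I + lam * L) u = f.
n, lam = 50, 5.0
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1          # Neumann-style boundary for the path graph

rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, np.pi, n)) + 0.3 * rng.normal(size=n)  # noisy data
u = np.linalg.solve(np.eye(n) + lam * L, f)  # regularised reconstruction

print(np.std(np.diff(u)) < np.std(np.diff(f)))  # smoother than the raw data
```

Larger `lam` weights the Laplacian term more heavily and produces a smoother (but more biased) reconstruction, which is the trade-off the variational formulation encodes.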
no code implementations • 14 Jan 2022 • Martin Molina-Fructuoso, Ryan Murray
Statistical depths provide a fundamental generalization of quantiles and medians to data in higher dimensions.
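The best-known example of such a depth is Tukey's halfspace depth: the depth of a point is the smallest fraction of the data contained in any closed halfspace through that point. A minimal Monte Carlo sketch (the direction-sampling scheme and parameter names here are illustrative, not the paper's method):

```python
import numpy as np

def tukey_depth(x, data, n_dirs=500, seed=0):
    """Approximate the Tukey (halfspace) depth of x relative to data.

    Depth = min over directions u of the fraction of data points in the
    closed halfspace {y : <u, y - x> >= 0}; the minimum over all directions
    is approximated here by sampling random directions.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # fraction of points in each sampled halfspace through x
    fractions = ((data - x) @ dirs.T >= 0).mean(axis=0)
    return fractions.min()

rng = np.random.default_rng(1)
pts = rng.normal(size=(1000, 2))
print(tukey_depth(np.zeros(2), pts))           # near 1/2: a "median" point
print(tukey_depth(np.array([4.0, 0.0]), pts))  # near 0: an extreme point
```

Just as the univariate median has depth 1/2 and extreme order statistics have depth near 0, deep points sit centrally in the cloud, which is the sense in which depths generalise quantiles.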
no code implementations • 26 Nov 2021 • Leon Bungert, Nicolás García Trillos, Ryan Murray
We establish an equivalence between a family of adversarial training problems for non-parametric binary classification and a family of regularized risk minimization problems where the regularizer is a nonlocal perimeter functional.
no code implementations • 4 Apr 2021 • Martin Molina-Fructuoso, Ryan Murray
We prove that this equation possesses a unique viscosity solution and that this solution always bounds the Tukey depth from below.
no code implementations • 21 Nov 2020 • Nicolas Garcia Trillos, Ryan Murray
Using the necessary conditions, we derive a geometric evolution equation which can be used to track the change in classification boundaries as $\varepsilon$ varies.
no code implementations • 20 Apr 2020 • Nicolas Garcia Trillos, Ryan Murray, Matthew Thorpe
In this work we study statistical properties of graph-based clustering algorithms that rely on the optimization of balanced graph cuts, the main example being the optimization of Cheeger cuts.
no code implementations • 5 Mar 2020 • Brian Swenson, Ryan Murray, Soummya Kar, H. Vincent Poor
In centralized settings, it is well known that stochastic gradient descent (SGD) avoids saddle points and converges to local minima in nonconvex problems.
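The saddle-avoidance phenomenon is easy to see on a toy nonconvex objective: plain gradient descent initialized exactly at a saddle stays there, while gradient noise pushes the iterate off the unstable manifold toward a minimum. The objective and step sizes below are illustrative stand-ins, not the paper's decentralized setting.

```python
import numpy as np

def grad(w):
    # gradient of f(x, y) = x**2 - y**2 + y**4, which has a saddle at the
    # origin and minima at (0, +-1/sqrt(2))
    x, y = w
    return np.array([2 * x, -2 * y + 4 * y**3])

rng = np.random.default_rng(0)
w = np.zeros(2)                # initialize exactly at the saddle point
for _ in range(2000):
    noise = rng.normal(scale=0.01, size=2)   # stochastic gradient noise
    w -= 0.05 * (grad(w) + noise)

# the noise breaks the symmetry and SGD settles near one of the minima
print(abs(w[0]), abs(abs(w[1]) - 1 / np.sqrt(2)))  # both near 0
```

Deterministic gradient descent from the same initialization would remain at the origin forever, since the gradient there is exactly zero.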
Optimization and Control
no code implementations • 29 Jan 2019 • Nicolas Garcia Trillos, Ryan Murray
This paper investigates the use of methods from partial differential equations and the calculus of variations to study learning problems that are regularized using graph Laplacians.
no code implementations • 1 Jul 2016 • Nicolas Garcia Trillos, Ryan Murray
This work considers the problem of binary classification: given training data $x_1, \dots, x_n$ from a certain population, together with associated labels $y_1,\dots, y_n \in \left\{0, 1 \right\}$, determine the best label for an element $x$ not among the training data.
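One elementary instance of this classification problem is the k-nearest-neighbor rule: label a new point by majority vote among its nearest training points. This is only a baseline illustration of the problem setup; the paper itself studies variational and graph-based formulations.

```python
import numpy as np

def knn_label(x, X_train, y_train, k=3):
    """Assign x the majority label among its k nearest training points.

    Using an odd k avoids ties in the binary vote. The function name and
    toy data below are illustrative, not from the paper.
    """
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(round(y_train[nearest].mean()))

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 1, 1, 1])
print(knn_label(np.array([0.05, 0.1]), X, y))  # → 0
print(knn_label(np.array([1.0, 0.95]), X, y))  # → 1
```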