Search Results for author: Matthew Thorpe

Found 19 papers, 2 papers with code

Manifold learning in Wasserstein space

no code implementations • 14 Nov 2023 • Keaton Hamm, Caroline Moosmüller, Bernhard Schmitzer, Matthew Thorpe

This paper aims at building the theoretical foundations for manifold learning algorithms in the space of absolutely continuous probability measures on a compact and convex subset of $\mathbb{R}^d$, metrized with the Wasserstein-2 distance $W$.
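
The objects in question can be made concrete with discrete approximations. Below is a minimal sketch, assuming the POT library (not code from this paper), of the Wasserstein-2 distance between two empirical measures via the exact linear-programming solver:

    import numpy as np
    import ot  # Python Optimal Transport (POT)

    rng = np.random.default_rng(0)

    # Two empirical measures on [0, 1]^2 with uniform weights.
    X = rng.random((50, 2))
    Y = rng.random((60, 2))
    a = np.full(50, 1 / 50)
    b = np.full(60, 1 / 60)

    # ot.emd2 returns the optimal cost for the squared-Euclidean ground
    # cost, so W_2 is its square root.
    M = ot.dist(X, Y, metric="sqeuclidean")
    W2 = np.sqrt(ot.emd2(a, b, M))
    print(f"W2 distance: {W2:.4f}")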

PT$\mathrm{L}^{p}$: Partial Transport $\mathrm{L}^{p}$ Distances

no code implementations • 25 Jul 2023 • Xinran Liu, Yikun Bai, Huy Tran, Zhanqi Zhu, Matthew Thorpe, Soheil Kolouri

In this paper, we introduce partial transport $\mathrm{L}^{p}$ distances as a new family of metrics for comparing generic signals, benefiting from the robustness of partial transport distances.
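
The robustness mechanism is that only a fraction m of the total mass has to be transported, so outlier mass can simply be dropped. A hedged illustration of that idea, using POT's partial optimal transport solver rather than the paper's PTLp construction:

    import numpy as np
    from ot import dist, emd2
    from ot.partial import partial_wasserstein

    rng = np.random.default_rng(1)
    X = rng.normal(0, 1, (40, 1))
    Y = np.vstack([rng.normal(0, 1, (35, 1)),
                   rng.normal(8, 0.1, (5, 1))])   # 5 outlier points in Y
    a = np.full(40, 1 / 40)
    b = np.full(40, 1 / 40)
    M = dist(X, Y, metric="sqeuclidean")

    # Full transport must pay for the outliers; partial transport of
    # m = 0.85 of the mass is free to ignore them.
    plan = partial_wasserstein(a, b, M, m=0.85)
    print("full cost:", emd2(a, b, M), " partial cost:", np.sum(plan * M))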

Rates of Convergence for Regression with the Graph Poly-Laplacian

no code implementations • 6 Sep 2022 • Nicolás García Trillos, Ryan Murray, Matthew Thorpe

In the (special) smoothing spline problem, one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularisation.

Regression
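
In the graph setting the smoothing spline has a closed-form solution: minimising $\|u-y\|^2 + \tau\langle u, L^m u\rangle$ over functions $u$ on the nodes gives the linear system $(I + \tau L^m)u = y$. A minimal numpy sketch (my notation, following the abstract):

    import numpy as np

    def graph_poly_laplacian_spline(L, y, tau=1.0, m=2):
        """Solve min_u ||u - y||^2 + tau * u^T L^m u,
        whose optimality condition is (I + tau * L^m) u = y."""
        n = L.shape[0]
        return np.linalg.solve(np.eye(n) + tau * np.linalg.matrix_power(L, m), y)

    # Tiny example: path graph on 5 nodes, noisy linear signal.
    A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
    L = np.diag(A.sum(axis=1)) - A          # unnormalised graph Laplacian
    y = np.arange(5.0) + np.random.default_rng(2).normal(0, 0.3, 5)
    print(graph_poly_laplacian_spline(L, y, tau=0.5, m=2))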

GRAND++: Graph Neural Diffusion with A Source Term

no code implementations • ICLR 2022 • Matthew Thorpe, Tan Minh Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher, Bao Wang

We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate.

Graph Learning
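
Schematically, the dynamics are graph diffusion plus a source supported on the labeled nodes: $\mathrm{d}u/\mathrm{d}t = -Lu + s$. Pure diffusion washes label information out at low labeling rates, while the source keeps injecting it. A toy forward-Euler sketch (my notation; GRAND++ itself learns an attention-based diffusivity rather than using a fixed Laplacian):

    import numpy as np

    def diffuse_with_source(L, u0, source, dt=0.1, steps=500):
        """Forward-Euler integration of du/dt = -L u + source."""
        u = u0.copy()
        for _ in range(steps):
            u = u + dt * (-L @ u + source)
        return u

    # Path graph; a +1 label at one end and a -1 label at the other
    # act as sources, and the nodes split between the two classes.
    n = 10
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    L = np.diag(A.sum(axis=1)) - A
    source = np.zeros(n)
    source[0], source[-1] = 1.0, -1.0
    print(np.sign(diffuse_with_source(L, np.zeros(n), source)))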

Robust Certification for Laplace Learning on Geometric Graphs

no code implementations • 22 Apr 2021 • Matthew Thorpe, Bao Wang

Graph Laplacian (GL)-based semi-supervised learning is one of the most widely used approaches for classifying nodes in a graph.

Adversarial Attack, Adversarial Robustness
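
Laplace learning itself is a single linear solve: fix the given labels and extend them harmonically to the unlabeled nodes, i.e. solve $L_{uu} u_u = -L_{ul} y$ over the unlabeled block. A compact sketch of that standard formulation (not the paper's certification machinery):

    import numpy as np

    def laplace_learning(W, labeled_idx, y_labeled):
        """Harmonic extension: L u = 0 on unlabeled nodes,
        u = y on labeled nodes."""
        n = W.shape[0]
        L = np.diag(W.sum(axis=1)) - W
        unl = np.setdiff1d(np.arange(n), labeled_idx)
        u = np.zeros(n)
        u[labeled_idx] = y_labeled
        rhs = -L[np.ix_(unl, labeled_idx)] @ y_labeled
        u[unl] = np.linalg.solve(L[np.ix_(unl, unl)], rhs)
        return u

    # Path graph with labeled endpoints: the interior interpolates linearly.
    n = 6
    W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    print(laplace_learning(W, np.array([0, n - 1]), np.array([0.0, 1.0])))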

The Linearized Hellinger–Kantorovich Distance

1 code implementation • 17 Feb 2021 • Tianji Cai, Junyi Cheng, Bernhard Schmitzer, Matthew Thorpe

Working with the local linearization and the corresponding embeddings allows for the advantages of the Euclidean setting, such as faster computations and a plethora of data analysis tools, whilst still approximately retaining the descriptive power of the Hellinger–Kantorovich metric.

Optimization and Control
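
The linearization idea is easiest to see in the plain Wasserstein-2 setting: fix a reference measure, represent each sample measure by the optimal transport map from that reference, and compute Euclidean distances between those maps. A hedged sketch using POT's barycentric projection (the Hellinger–Kantorovich logarithmic map additionally tracks mass creation and destruction, so this is only the balanced analogue):

    import numpy as np
    import ot

    def linear_ot_embedding(ref, ref_w, samples):
        """Embed each measure as the barycentric projection of the
        optimal plan from a fixed reference measure."""
        out = []
        for X, w in samples:
            plan = ot.emd(ref_w, w, ot.dist(ref, X, metric="sqeuclidean"))
            T = (plan @ X) / ref_w[:, None]   # approximate Monge map
            out.append(T)
        return out

    rng = np.random.default_rng(3)
    ref = rng.random((30, 2)); ref_w = np.full(30, 1 / 30)
    samples = [(rng.random((30, 2)) + s, np.full(30, 1 / 30)) for s in (0.0, 0.5)]
    T0, T1 = linear_ot_embedding(ref, ref_w, samples)
    # Weighted L2 distance between the maps approximates W2(mu_0, mu_1).
    print(np.sqrt(np.sum(ref_w * ((T0 - T1) ** 2).sum(axis=1))))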

Certifying Robustness of Graph Laplacian Based Semi-Supervised Learning

no code implementations • 1 Jan 2021 • Matthew Thorpe, Bao Wang

Within a certain adversarial perturbation regime, we prove that GL with a $k$-nearest neighbor graph is intrinsically more robust than the $k$-nearest neighbor classifier.

Adversarial Robustness

A Linear Transportation $\mathrm{L}^p$ Distance for Pattern Recognition

no code implementations • 23 Sep 2020 • Oliver M. Crook, Mihai Cucuringu, Tim Hurst, Carola-Bibiane Schönlieb, Matthew Thorpe, Konstantinos C. Zygalakis

The transportation $\mathrm{L}^p$ distance, denoted $\mathrm{TL}^p$, has been proposed as a generalisation of Wasserstein $\mathrm{W}^p$ distances, motivated by the property that it can be applied directly to colour or multi-channelled images, as well as to multivariate time series, without normalisation or mass constraints.

Time Series, Time Series Analysis
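
Concretely, $\mathrm{TL}^p$ lifts a signal $f$ to the measure-function pair $(\mu, f)$ and transports in the product of domain and range, with ground cost $|x-y|^p + |f(x)-g(y)|^p$ (possibly with a scale parameter weighting the two terms, depending on the exact convention). A hedged sketch for two sampled 1D signals, using POT and uniform base measures:

    import numpy as np
    import ot

    def tlp_distance(x, f, y, g, p=2, lam=1.0):
        """TL^p between signals (x, f) and (y, g): optimal transport
        with ground cost |x - y|^p + lam * |f(x) - g(y)|^p."""
        a = np.full(len(x), 1 / len(x))
        b = np.full(len(y), 1 / len(y))
        C = (np.abs(x[:, None] - y[None, :]) ** p
             + lam * np.abs(f[:, None] - g[None, :]) ** p)
        return ot.emd2(a, b, C) ** (1 / p)

    t = np.linspace(0, 1, 100)
    d = tlp_distance(t, np.sin(2 * np.pi * t), t, np.sin(2 * np.pi * (t - 0.1)))
    print(f"TL2 distance: {d:.4f}")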

Poisson Learning: Graph Based Semi-Supervised Learning At Very Low Label Rates

1 code implementation • ICML 2020 • Jeff Calder, Brendan Cook, Matthew Thorpe, Dejan Slepčev

We propose a new framework, called Poisson learning, for graph based semi-supervised learning at very low label rates.
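
The mechanism: rather than pinning $u$ to the labels (which degenerates at very low label rates), Poisson learning places point sources at the labeled nodes and solves the graph Poisson equation $Lu = \sum_j (y_j - \bar{y})\,\delta_{x_j}$. A minimal single-class sketch of that linear algebra (the paper's repository contains the full multi-class method):

    import numpy as np

    def poisson_learning(W, labeled_idx, y_labeled):
        """Solve L u = sum_j (y_j - mean(y)) delta_{x_j}; L is singular
        on constants, so use least squares and centre the output."""
        n = W.shape[0]
        L = np.diag(W.sum(axis=1)) - W
        rhs = np.zeros(n)
        rhs[labeled_idx] = y_labeled - y_labeled.mean()
        u, *_ = np.linalg.lstsq(L, rhs, rcond=None)
        return u - u.mean()

    # Path graph with opposite labels at the two endpoints.
    n = 8
    W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    print(np.sign(poisson_learning(W, np.array([0, n - 1]), np.array([1.0, -1.0]))))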

Rates of Convergence for Laplacian Semi-Supervised Learning with Low Labeling Rates

no code implementations • 4 Jun 2020 • Jeff Calder, Dejan Slepčev, Matthew Thorpe

The proofs of our well-posedness results use the random walk interpretation of Laplacian learning and PDE arguments, while the proofs of the ill-posedness results use $\Gamma$-convergence tools from the calculus of variations.

From graph cuts to isoperimetric inequalities: Convergence rates of Cheeger cuts on data clouds

no code implementations • 20 Apr 2020 • Nicolás García Trillos, Ryan Murray, Matthew Thorpe

In this work we study statistical properties of graph-based clustering algorithms that rely on the optimization of balanced graph cuts, the main example being the optimization of Cheeger cuts.

Clustering
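
The Cheeger cut of a partition $(A, A^c)$ is the weight of the edges crossing the cut divided by the size of the smaller side, so minimising it favours balanced clusters separated by few edges. A small sketch evaluating the objective (an illustration only, not part of the paper's analysis):

    import numpy as np

    def cheeger_cut_value(W, A):
        """cut(A, A^c) / min(|A|, |A^c|) for a weight matrix W."""
        mask = np.zeros(W.shape[0], dtype=bool)
        mask[A] = True
        cut = W[np.ix_(mask, ~mask)].sum()
        return cut / min(mask.sum(), (~mask).sum())

    # Two 4-cliques joined by a single edge: the natural split scores 1/4.
    W = np.zeros((8, 8))
    W[:4, :4] = 1.0
    W[4:, 4:] = 1.0
    np.fill_diagonal(W, 0.0)
    W[3, 4] = W[4, 3] = 1.0
    print(cheeger_cut_value(W, np.arange(4)))   # 0.25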

Representing and Learning High Dimensional Data With the Optimal Transport Map From a Probabilistic Viewpoint

no code implementations • CVPR 2018 • Serim Park, Matthew Thorpe

In experiments on four different datasets, we show that the generative tangent plane model in the optimal transport (OT) manifold can be learned with small numbers of images and can be used to create infinitely many 'unseen' images.

Large Data and Zero Noise Limits of Graph-Based Semi-Supervised Learning Algorithms

no code implementations • 23 May 2018 • Matthew M. Dunlop, Dejan Slepčev, Andrew M. Stuart, Matthew Thorpe

Scalings in which the graph Laplacian approaches a differential operator in the large-graph limit are used to develop an understanding of a number of algorithms for semi-supervised learning; in particular, the extensions of the probit algorithm, level set methods, and kriging methods to this graph setting are studied.

Analysis of $p$-Laplacian Regularization in Semi-Supervised Learning

no code implementations • 19 Jul 2017 • Dejan Slepčev, Matthew Thorpe

The task is to assign real-valued labels to a set of $n$ sample points, provided a small training subset of $N$ labeled points.
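
The regulariser here is $J_p(u) = \sum_{i,j} w_{ij} |u_i - u_j|^p$, minimised subject to $u$ matching the $N$ training labels; roughly, taking $p$ large enough (above the data dimension) is what keeps the problem well-posed when $N$ is small relative to $n$. A hedged sketch via projected gradient descent (for $p = 2$ one would solve the harmonic system directly):

    import numpy as np

    def p_laplacian_learning(W, labeled_idx, y_labeled, p=4.0,
                             lr=1e-3, steps=5000):
        """Minimise sum_ij w_ij |u_i - u_j|^p subject to u = y on the
        labeled nodes, re-imposing the constraint after each step."""
        u = np.zeros(W.shape[0])
        u[labeled_idx] = y_labeled
        for _ in range(steps):
            D = u[:, None] - u[None, :]
            grad = 2 * p * (W * np.abs(D) ** (p - 2) * D).sum(axis=1)
            u -= lr * grad
            u[labeled_idx] = y_labeled
        return u

    # Path graph with labeled endpoints; the p-harmonic interpolant
    # in one dimension is again (approximately) linear.
    n = 6
    W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    print(p_laplacian_learning(W, np.array([0, n - 1]), np.array([0.0, 1.0])))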

A Transportation $L^p$ Distance for Signal Analysis

no code implementations • 27 Sep 2016 • Matthew Thorpe, Serim Park, Soheil Kolouri, Gustavo K. Rohde, Dejan Slepčev

Transport based distances, such as the Wasserstein distance and earth mover's distance, have been shown to be an effective tool in signal and image analysis.
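
In one dimension, transport distances are especially cheap: the Wasserstein distance between two normalised signals reduces to a comparison of quantile functions. A short illustration with scipy (the paper's $\mathrm{TL}^p$ construction then removes the normalisation and positivity requirements that this reduction needs):

    import numpy as np
    from scipy.stats import wasserstein_distance

    t = np.linspace(0, 1, 200)
    f = np.exp(-((t - 0.3) ** 2) / 0.01)    # bump centred at 0.3
    g = np.exp(-((t - 0.5) ** 2) / 0.01)    # same bump shifted to 0.5

    # Interpreting the normalised signals as densities on t, W1 reports
    # the shift (about 0.2), where L^p norms saturate once the bumps
    # stop overlapping.
    d = wasserstein_distance(t, t, u_weights=f / f.sum(), v_weights=g / g.sum())
    print(f"W1: {d:.3f}")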
