Search Results for author: Matti Lassas

Found 11 papers, 2 papers with code

TILT: topological interface recovery in limited-angle tomography

no code implementations · 25 Oct 2023 · Elli Karvonen, Matti Lassas, Pekka Pankka, Samuli Siltanen

A novel reconstruction method is introduced for the severely ill-posed inverse problem of limited-angle tomography.

An Approximation Theory for Metric Space-Valued Functions With A View Towards Deep Learning

no code implementations · 24 Apr 2023 · Anastasis Kratsios, Chong Liu, Matti Lassas, Maarten V. de Hoop, Ivan Dokmanić

Motivated by the developing mathematics of deep learning, we build universal functions approximators of continuous maps between arbitrary Polish metric spaces $\mathcal{X}$ and $\mathcal{Y}$ using elementary functions between Euclidean spaces as building blocks.

Deep Invertible Approximation of Topologically Rich Maps between Manifolds

no code implementations · 2 Oct 2022 · Michael Puthawala, Matti Lassas, Ivan Dokmanić, Pekka Pankka, Maarten de Hoop

By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we find that a novel network of the form $\mathcal{T} \circ p \circ \mathcal{E}$, where $\mathcal{E}$ is an injective network, $p$ a fixed coordinate projection, and $\mathcal{T}$ a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in $\mathbb{R}^n$.

Topological Data Analysis
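The composition $\mathcal{T} \circ p \circ \mathcal{E}$ described above can be illustrated with toy stand-ins for each factor. The sketch below is not the authors' construction; the linear lift, projection dimensions, and affine map are placeholder assumptions chosen only to show the shape of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# E: placeholder for an injective network -- an injective linear lift
# R^2 -> R^6 (a matrix with full column rank is injective).
A = rng.standard_normal((6, 2))
E = lambda x: A @ x

# p: fixed coordinate projection R^6 -> R^3.
p = lambda z: z[:3]

# T: placeholder for a bijective network -- an invertible affine map on R^3.
M = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # invertible almost surely
c = rng.standard_normal(3)
T = lambda u: M @ u + c

net = lambda x: T(p(E(x)))            # the composition T ∘ p ∘ E
print(net(np.array([1.0, -0.5])))     # a point in R^3
```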

Universal Joint Approximation of Manifolds and Densities by Simple Injective Flows

no code implementations · 8 Oct 2021 · Michael Puthawala, Matti Lassas, Ivan Dokmanić, Maarten de Hoop

We show that in general, injective flows between $\mathbb{R}^n$ and $\mathbb{R}^m$ universally approximate measures supported on images of extendable embeddings, which are a subset of standard embeddings: when the embedding dimension $m$ is small, topological obstructions may preclude certain manifolds as admissible targets.
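A common template for building such injective flows is to zero-pad the input to the higher dimension and then apply a bijective coupling layer; the sketch below illustrates only this template and its injectivity argument, not the paper's analysis of extendable embeddings, and the scale/shift networks s, t are placeholder assumptions.

```python
import numpy as np

def injective_flow(x, m, s, t):
    """Pad-then-couple template for an injective flow R^n -> R^m (n < m):
    zero-padding is injective and the coupling layer is bijective, so the
    composition is injective."""
    n = x.size
    z = np.concatenate([x, np.zeros(m - n)])
    z1, z2 = z[: m // 2], z[m // 2 :]
    return np.concatenate([z1, z2 * np.exp(s(z1)) + t(z1)])

# Hypothetical scale/shift networks and dimensions, for illustration only.
s = lambda u: 0.1 * np.tanh(u)
t = lambda u: 0.1 * u
print(injective_flow(np.array([1.0, -2.0]), 4, s, t))   # a point on a 2-D image in R^4
```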

Learning the optimal Tikhonov regularizer for inverse problems

1 code implementation · NeurIPS 2021 · Giovanni S. Alberti, Ernesto de Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria

Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$.

Deblurring · Denoising · +1
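For a regularizer of generalized Tikhonov form $\|B(x-h)\|^2$, the reconstruction is a quadratic problem with a closed-form minimizer. The sketch below shows that standard closed form with placeholder choices of $A$, $B$, $h$; it does not implement the supervised or unsupervised learning of the regularizer studied in the paper.

```python
import numpy as np

def tikhonov_solve(A, y, B, h):
    """Minimizer of ||A x - y||^2 + ||B (x - h)||^2 (generalized Tikhonov)."""
    return np.linalg.solve(A.T @ A + B.T @ B, A.T @ y + B.T @ B @ h)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(20)
B, h = np.eye(5), np.zeros(5)        # classical Tikhonov as a special case
print(np.linalg.norm(tikhonov_solve(A, y, B, h) - x_true))
```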

Stable reconstruction of simple Riemannian manifolds from unknown interior sources

no code implementations · 23 Feb 2021 · Maarten V. de Hoop, Joonas Ilmavirta, Matti Lassas, Teemu Saksala

If we know all the arrival times at the boundary cylinder of the spacetime, can we reconstruct the space, a Riemannian manifold with boundary?

Differential Geometry · Analysis of PDEs · Metric Geometry

Gel'fand's inverse problem for the graph Laplacian

no code implementations · 25 Jan 2021 · Emilia Blåsten, Hiroshi Isozaki, Matti Lassas, Jinpeng Lu

Suppose that the set of vertices of the graph is a union of two disjoint sets: $X=B\cup G$, where $B$ is the set of boundary vertices and $G$ is the set of interior vertices.

Spectral Theory · 05C50, 05C22, 52C25
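The setting can be illustrated with a toy weighted graph: build the Laplacian and split its rows and columns along the partition $X = B \cup G$. This is only a hypothetical example of the setup, not the paper's reconstruction from boundary spectral data; the choice of boundary and interior vertices below is an assumption for illustration.

```python
import numpy as np

def graph_laplacian(W):
    """Weighted graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

# Toy path graph on X = {0, 1, 2, 3}; assume the endpoints are the boundary
# vertices B and the middle vertices the interior G (illustrative choice only).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 2., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 0.]])
L = graph_laplacian(W)
B, G = [0, 3], [1, 2]
L_GG = L[np.ix_(G, G)]     # interior block
L_GB = L[np.ix_(G, B)]     # interior-boundary coupling
print(L_GG, L_GB, sep="\n")
```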

Globally Injective ReLU Networks

no code implementations · 15 Jun 2020 · Michael Puthawala, Konik Kothari, Matti Lassas, Ivan Dokmanić, Maarten de Hoop

Injectivity plays an important role in generative models, where it enables inference; in inverse problems and compressed sensing with generative priors, it is a precursor to well-posedness.
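One elementary way to make a ReLU layer globally injective is to apply both $W$ and $-W$ before the nonlinearity, so the pre-activation can be recovered exactly. The sketch below shows this simple sufficient construction for illustration only; it is not the paper's sharper characterization of which weight matrices yield injectivity.

```python
import numpy as np

def injective_relu_layer(x, W, b):
    """x -> ReLU([W; -W] x + [b; -b]).  Since ReLU(z) - ReLU(-z) = z, the
    pre-activation W x + b is recoverable, so the layer is injective when W
    has full column rank."""
    z = W @ x + b
    return np.concatenate([np.maximum(z, 0.0), np.maximum(-z, 0.0)])

rng = np.random.default_rng(0)
W, b, x = rng.standard_normal((6, 3)), rng.standard_normal(6), rng.standard_normal(3)
h = injective_relu_layer(x, W, b)
z = h[:6] - h[6:]                                   # equals W x + b exactly
x_rec = np.linalg.lstsq(W, z - b, rcond=None)[0]    # invert the linear part
print(np.allclose(x, x_rec))                        # True
```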

Deep neural networks for inverse problems with pseudodifferential operators: an application to limited-angle tomography

1 code implementation · 2 Jun 2020 · Tatiana A. Bubba, Mathilde Galinier, Matti Lassas, Marco Prato, Luca Ratti, Samuli Siltanen

We propose a novel convolutional neural network (CNN), called $\Psi$DONet, designed for learning pseudodifferential operators ($\Psi$DOs) in the context of linear inverse problems.
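The sketch below is not the $\Psi$DONet architecture itself; under assumed toy kernels and weights, it only illustrates the generic idea of applying learned convolutional (hence translation-invariant) filters as a crude stand-in for a learned correction of a model-based reconstruction.

```python
import numpy as np
from scipy.signal import convolve2d

def learned_correction(recon, kernels, weights):
    """Correct a model-based reconstruction with learned convolutional filters:
    out = recon + sum_k w_k * (h_k * recon)."""
    out = recon.copy()
    for h, w in zip(kernels, weights):
        out += w * convolve2d(recon, h, mode="same", boundary="symm")
    return out

# Hypothetical tiny example: 'recon' stands in for e.g. a filtered
# back-projection image; kernels and weights would be learned from data.
rng = np.random.default_rng(0)
recon = rng.standard_normal((64, 64))
kernels = [0.01 * rng.standard_normal((5, 5)) for _ in range(3)]
weights = [1.0, 0.5, 0.25]
print(learned_correction(recon, kernels, weights).shape)   # (64, 64)
```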

Deep learning architectures for nonlinear operator functions and nonlinear inverse problems

no code implementations · 23 Dec 2019 · Maarten V. de Hoop, Matti Lassas, Christopher A. Wong

Lastly, we discuss how operator recurrent networks can be viewed as a deep learning analogue to deterministic algorithms such as boundary control for reconstructing the unknown wavespeed in the acoustic wave equation from boundary measurements.
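A rough sketch of the general idea follows, assuming a hypothetical layer of the form h ← ReLU(W h + V (Λ h) + b), where the measurement operator Λ enters as data and multiplies the hidden state at every layer; the exact layer form used in the paper may differ.

```python
import numpy as np

def operator_recurrent_net(Lam, h0, params):
    """Sketch of an operator recurrent stack: the data operator Lam multiplies
    the hidden state at every layer, interleaved with learned weights."""
    h = h0
    for W, V, b in params:
        h = np.maximum(W @ h + V @ (Lam @ h) + b, 0.0)
    return h

rng = np.random.default_rng(0)
n = 8
Lam = rng.standard_normal((n, n))               # stand-in for the data operator
params = [(0.1 * rng.standard_normal((n, n)),
           0.1 * rng.standard_normal((n, n)),
           np.zeros(n)) for _ in range(3)]
print(operator_recurrent_net(Lam, np.ones(n), params))
```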
