no code implementations • 25 Oct 2023 • Elli Karvonen, Matti Lassas, Pekka Pankka, Samuli Siltanen
A novel reconstruction method is introduced for the severely ill-posed inverse problem of limited-angle tomography.
no code implementations • 24 Apr 2023 • Anastasis Kratsios, Chong Liu, Matti Lassas, Maarten V. de Hoop, Ivan Dokmanić
Motivated by the developing mathematics of deep learning, we build universal function approximators of continuous maps between arbitrary Polish metric spaces $\mathcal{X}$ and $\mathcal{Y}$ using elementary functions between Euclidean spaces as building blocks.
no code implementations • 2 Oct 2022 • Michael Puthawala, Matti Lassas, Ivan Dokmanić, Pekka Pankka, Maarten de Hoop
By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we find that a novel network of the form $\mathcal{T} \circ p \circ \mathcal{E}$, where $\mathcal{E}$ is an injective network, $p$ a fixed coordinate projection, and $\mathcal{T}$ a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in $\mathbb{R}^n$.
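The factorization $\mathcal{T} \circ p \circ \mathcal{E}$ can be illustrated with toy stand-ins for each factor. A minimal sketch, assuming simple linear maps in place of trained networks (all dimensions and matrices here are illustrative, not from the paper):

```python
import numpy as np

def E(x):
    """Injective map E: R^2 -> R^4 (a full-rank linear embedding is injective)."""
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [1.0, -1.0]])
    return A @ x

def p(z):
    """Fixed coordinate projection p: R^4 -> R^3 (drop the last coordinate)."""
    return z[:3]

def T(w):
    """Bijective map T: R^3 -> R^3 (invertible affine map: triangular
    matrix with nonzero diagonal, so it has an inverse)."""
    M = np.array([[2.0, 0.0, 0.0],
                  [0.0, 1.0, 0.5],
                  [0.0, 0.0, 1.0]])
    return M @ w + 1.0

def model(x):
    # The composed architecture T ∘ p ∘ E from the abstract.
    return T(p(E(x)))

y = model(np.array([1.0, 2.0]))
print(y)  # [3.  4.5 4. ]
```

The projection $p$ is what allows the composite to be non-injective (a local rather than global diffeomorphism) even though $\mathcal{E}$ is injective and $\mathcal{T}$ is bijective.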
no code implementations • 31 Dec 2021 • Siiri Rautio, Rashmi Murthy, Tatiana A. Bubba, Matti Lassas, Samuli Siltanen
Limited-angle tomography is a highly ill-posed linear inverse problem.
no code implementations • 8 Oct 2021 • Michael Puthawala, Matti Lassas, Ivan Dokmanić, Maarten de Hoop
We show that in general, injective flows between $\mathbb{R}^n$ and $\mathbb{R}^m$ universally approximate measures supported on images of extendable embeddings, which are a subset of standard embeddings: when the embedding dimension $m$ is small, topological obstructions may preclude certain manifolds as admissible targets.
1 code implementation • NeurIPS 2021 • Giovanni S. Alberti, Ernesto de Vito, Matti Lassas, Luca Ratti, Matteo Santacesaria
Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$.
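The unsupervised framework can be sketched with a generalized Tikhonov penalty whose weight is estimated from samples of $x$ alone. This is a minimal sketch, assuming toy dimensions and using the inverse sample covariance as the learned regularizer (an illustrative assumption, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: known forward operator A, Gaussian prior on x.
n, m, N = 5, 3, 200
A = rng.standard_normal((m, n))
C_true = np.diag([5.0, 2.0, 1.0, 0.5, 0.1])
# "Unsupervised" training data: samples of x only, no measurements y.
X = rng.multivariate_normal(np.zeros(n), C_true, size=N)

# Learn the regularizer from the samples: estimate the covariance of x
# and penalize x^T C_hat^{-1} x (a generalized Tikhonov term).
C_hat = np.cov(X, rowvar=False)

def reconstruct(y, lam=1.0):
    # argmin_x ||A x - y||^2 + lam * x^T C_hat^{-1} x,
    # solved via the normal equations (A^T A + lam C_hat^{-1}) x = A^T y.
    return np.linalg.solve(A.T @ A + lam * np.linalg.inv(C_hat), A.T @ y)

x_true = rng.multivariate_normal(np.zeros(n), C_true)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_rec = reconstruct(y)
print(x_rec.shape)  # (5,)
```

In the supervised framework one would instead fit the regularizer to paired samples $(x, y)$; the sketch above uses only the marginal distribution of $x$.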
no code implementations • 23 Feb 2021 • Maarten V. de Hoop, Joonas Ilmavirta, Matti Lassas, Teemu Saksala
If we know all the arrival times at the boundary cylinder of the spacetime, can we reconstruct the space, a Riemannian manifold with boundary?
Differential Geometry • Analysis of PDEs • Metric Geometry
no code implementations • 25 Jan 2021 • Emilia Blåsten, Hiroshi Isozaki, Matti Lassas, Jinpeng Lu
Suppose that the set of vertices of the graph is a union of two disjoint sets, $X = B \cup G$, where $B$ is the set of boundary vertices and $G$ is the set of interior vertices.
Spectral Theory • MSC: 05C50, 05C22, 52C25
no code implementations • 15 Jun 2020 • Michael Puthawala, Konik Kothari, Matti Lassas, Ivan Dokmanić, Maarten de Hoop
Injectivity plays an important role in generative models, where it enables inference; in inverse problems and compressed sensing with generative priors, it is a precursor to well-posedness.
1 code implementation • 2 Jun 2020 • Tatiana A. Bubba, Mathilde Galinier, Matti Lassas, Marco Prato, Luca Ratti, Samuli Siltanen
We propose a novel convolutional neural network (CNN), called $\Psi$DONet, designed for learning pseudodifferential operators ($\Psi$DOs) in the context of linear inverse problems.
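The structural link between CNN filters and $\Psi$DOs can be seen in the simplest case: a circular convolution is exactly a Fourier multiplier, i.e. a constant-coefficient pseudodifferential operator. This is my own simplification for illustration, not the $\Psi$DONet architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
signal = rng.standard_normal(n)

# A small hand-picked smoothing filter (learned filters play this role in a CNN).
kernel = np.zeros(n)
kernel[:3] = [0.25, 0.5, 0.25]

# Direct circular convolution in space...
direct = np.array([sum(kernel[j] * signal[(i - j) % n] for j in range(n))
                   for i in range(n)])

# ...equals multiplication by the filter's DFT (its "symbol") in frequency,
# by the circular convolution theorem.
via_fft = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(signal)))

print(np.allclose(direct, via_fft))  # True
```

Variable-coefficient $\Psi$DOs go beyond a single multiplier, which is where the learned, spatially structured filters of the paper come in.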
no code implementations • 23 Dec 2019 • Maarten V. de Hoop, Matti Lassas, Christopher A. Wong
Lastly, we discuss how operator recurrent networks can be viewed as a deep learning analogue to deterministic algorithms such as boundary control for reconstructing the unknown wavespeed in the acoustic wave equation from boundary measurements.