no code implementations • 26 Jan 2023 • Christian Koke, Gitta Kutyniok
This work develops a flexible and mathematically sound framework for the design and analysis of graph scattering networks with variable branching ratios and generic functional calculus filters.
no code implementations • 15 Jan 2023 • Yunseok Lee, Holger Boche, Gitta Kutyniok
Optimization problems are a staple of today's scientific and technical landscape.
no code implementations • 22 Nov 2022 • Stefan Kolek, Robert Windesheim, Hector Andrade Loarca, Gitta Kutyniok, Ron Levie
However, the smoothness of a mask limits its ability to separate fine-detail patterns that are relevant for the classifier from nearby nuisance patterns that do not affect it.
no code implementations • 18 Nov 2022 • Çağkan Yapar, Ron Levie, Gitta Kutyniok, Giuseppe Caire
In this article, we present a collection of radio map datasets in a dense urban setting, which we generated and made publicly available.
1 code implementation • 15 Oct 2022 • Philipp Scholl, Aras Bacho, Holger Boche, Gitta Kutyniok
Finally, we provide extensive numerical experiments showing that our algorithms, combined with common approaches for learning physical laws, indeed make it possible to guarantee that a unique governing differential equation is learned, without assuming any knowledge about the function, thereby ensuring reliability.
no code implementations • 15 Oct 2022 • Raffaele Paolino, Aleksandar Bojchevski, Stephan Günnemann, Gitta Kutyniok, Ron Levie
A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius.
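The geometric-graph construction described above can be sketched in a few lines; this is a generic illustration of the sampling model (uniform points in the unit square standing in for the underlying metric space), not the paper's specific setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_graph(n, radius, dim=2):
    """Sample n nodes uniformly from the unit square (a stand-in for a
    generic metric space) and connect every pair closer than `radius`."""
    points = rng.random((n, dim))
    # Pairwise Euclidean distances via broadcasting.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Connect pairs within the neighborhood radius; no self-loops.
    adj = (dist < radius) & ~np.eye(n, dtype=bool)
    return points, adj

points, adj = geometric_graph(100, 0.2)
```

The adjacency matrix is symmetric by construction, since the distance function is.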
1 code implementation • 11 Jun 2022 • Duc Anh Nguyen, Ron Levie, Julian Lienen, Gitta Kutyniok, Eyke Hüllermeier
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems.
no code implementations • 30 May 2022 • Yangze Zhou, Gitta Kutyniok, Bruno Ribeiro
This work provides the first theoretical study on the ability of graph Message Passing Neural Networks (gMPNNs) -- such as Graph Neural Networks (GNNs) -- to perform inductive out-of-distribution (OOD) link prediction tasks, where deployment (test) graph sizes are larger than training graphs.
no code implementations • 5 Apr 2022 • Holger Boche, Adalbert Fono, Gitta Kutyniok
In this work, we show that real number computing in the framework of BSS machines does enable the algorithmic solvability of finite dimensional inverse problems.
no code implementations • 16 Mar 2022 • Gitta Kutyniok
We currently witness the spectacular success of artificial intelligence in both science and public life.
no code implementations • 28 Feb 2022 • Holger Boche, Adalbert Fono, Gitta Kutyniok
In fact, our result even holds for Borel-Turing computability, i.e., there does not exist an algorithm that performs the training of a neural network on digital hardware for any given accuracy.
1 code implementation • 1 Feb 2022 • Çağkan Yapar, Ron Levie, Gitta Kutyniok, Giuseppe Caire
We present LocUNet: a deep learning method for localization based merely on Received Signal Strength (RSS) from Base Stations (BSs). Unlike methods that rely on time-of-arrival or angle-of-arrival information, it requires no increase in computational complexity at the user devices relative to their standard operations.
1 code implementation • 1 Feb 2022 • Mariia Seleznova, Gitta Kutyniok
We derive exact expressions for the NTK dispersion in the infinite-depth-and-width limit in all three phases and conclude that the NTK variability grows exponentially with depth at the EOC and in the chaotic phase but not in the ordered phase.
no code implementations • 1 Feb 2022 • Sohir Maskey, Ron Levie, Yunseok Lee, Gitta Kutyniok
Message passing neural networks (MPNNs) have seen a steep rise in popularity since their introduction as generalizations of convolutional neural networks to graph-structured data, and are now considered state-of-the-art tools for solving a large variety of graph-focused problems.
no code implementations • 12 Oct 2021 • Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
We present the Rate-Distortion Explanation (RDE) framework, a mathematically well-founded method for explaining black-box model decisions.
1 code implementation • 7 Oct 2021 • Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
We present CartoonX (Cartoon Explanation), a novel model-agnostic explanation method tailored towards image classifiers and based on the rate-distortion explanation (RDE) framework.
no code implementations • 21 Sep 2021 • Sohir Maskey, Ron Levie, Gitta Kutyniok
Our main contributions can be summarized as follows: 1) we prove that any fixed GCNN with continuous filters is transferable under graphs that approximate the same graphon, 2) we prove transferability for graphs that approximate unbounded graphon shift operators, which are defined in this paper, and 3) we obtain non-asymptotic approximation results, proving linear stability of GCNNs.
no code implementations • 12 Aug 2021 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen
We present a deep learning-based algorithm to jointly solve a reconstruction problem and a wavefront set extraction problem in tomographic imaging.
1 code implementation • 23 Jun 2021 • Çağkan Yapar, Ron Levie, Gitta Kutyniok, Giuseppe Caire
Global Navigation Satellite Systems typically perform poorly in urban environments, where the likelihood of line-of-sight conditions between the devices and the satellites is low, and thus alternative localization methods are required for good accuracy.
no code implementations • 9 May 2021 • Julius Berner, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
We describe the new field of mathematical analysis of deep learning.
no code implementations • 8 Dec 2020 • Mariia Seleznova, Gitta Kutyniok
We find that whether a network is in the NTK regime depends on the hyperparameters of random initialization and on the network's depth.
no code implementations • 9 Jul 2020 • Ingo Gühring, Mones Raslan, Gitta Kutyniok
In this review paper, we give a comprehensive overview of the large variety of approximation results for neural networks.
no code implementations • 1 Jul 2020 • Alex Goeßmann, Gitta Kutyniok
In the case of the NeuRIP event, we then provide bounds on the expected risk, which hold for networks in any sublevel set of the empirical risk.
no code implementations • 1 Jul 2020 • Cosmas Heiß, Ron Levie, Cinjon Resnick, Gitta Kutyniok, Joan Bruna
It is widely recognized that the predictions of deep neural networks are difficult to parse relative to simpler approaches.
no code implementations • 9 Jun 2020 • Çağkan Yapar, Ron Levie, Gitta Kutyniok, Giuseppe Caire
Using the approximations of the pathloss functions of all base stations and the reported signal strengths, we are able to extract a very accurate approximation of the location of the user.
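The localization step described here can be illustrated with a toy sketch: given pathloss functions for each base station and the reported signal strengths, pick the point on a grid whose predicted pathloss best matches the report. The log-distance pathloss model and the base-station layout below are hypothetical stand-ins for the learned approximations in the paper:

```python
import numpy as np

# Hypothetical base stations at the corners of the unit square.
bs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def pathloss(p, q):
    """Toy free-space-like pathloss between point sets p and q (in dB)."""
    d = np.linalg.norm(p - q, axis=-1)
    return -20.0 * np.log10(np.maximum(d, 1e-3))

true_loc = np.array([0.3, 0.7])
rss = pathloss(bs, true_loc)          # "reported signal strengths"

# Grid search: the estimate is the grid point whose predicted pathloss
# vector best matches the reported one in least squares.
xs = np.linspace(0.0, 1.0, 201)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
errs = ((pathloss(bs[None, :, :], grid[:, None, :]) - rss) ** 2).sum(1)
est = grid[errs.argmin()]
```

With noiseless measurements and an exact model, the grid point at the true location attains zero mismatch; in practice the learned pathloss maps replace the closed-form model.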
1 code implementation • 25 Apr 2020 • Moritz Geist, Philipp Petersen, Mones Raslan, Reinhold Schneider, Gitta Kutyniok
Here, approximation theory predicts that the performance of the model should depend only very mildly on the dimension of the parameter space and is determined by the intrinsic dimension of the solution manifold of the parametric partial differential equation.
1 code implementation • 25 Mar 2020 • Luis Oala, Cosmas Heiß, Jan Macdonald, Maximilian März, Wojciech Samek, Gitta Kutyniok
We propose a fast, non-Bayesian method for producing uncertainty scores in the output of pre-trained deep neural networks (DNNs) using a data-driven interval propagating network.
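Interval propagation in general can be sketched as follows; this is standard interval arithmetic through a small random ReLU network, not the authors' trained data-driven interval network:

```python
import numpy as np

rng = np.random.default_rng(4)

def interval_linear(lo, hi, W, b):
    """Propagate an interval box through x -> W x + b.
    Positive weights pick up the lower bound for the minimum,
    negative weights the upper bound, and vice versa."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# A small random two-layer ReLU network (illustrative only).
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

x = rng.normal(size=4)
lo, hi = x - 0.1, x + 0.1                        # input uncertainty box
lo, hi = interval_linear(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)    # ReLU is monotone
lo, hi = interval_linear(lo, hi, W2, b2)
# [lo, hi] now provably bounds the network output over the input box.
```

By soundness of interval arithmetic, the network's output at any point of the input box, including x itself, lies inside the propagated bounds.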
1 code implementation • 27 Nov 2019 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem
This is based on the fact that edges in images contain most of the semantic information.
1 code implementation • 17 Nov 2019 • Ron Levie, Çağkan Yapar, Gitta Kutyniok, Giuseppe Caire
In this paper we propose a highly efficient and very accurate deep learning method for estimating the propagation pathloss from a point $x$ (transmitter location) to any point $y$ on a planar domain.
no code implementations • 30 Jul 2019 • Ron Levie, Wei Huang, Lorenzo Bucci, Michael M. Bronstein, Gitta Kutyniok
Transferability, which is a certain type of generalization capability, can be loosely defined as follows: if two graphs describe the same phenomenon, then a single filter or ConvNet should have similar repercussions on both graphs.
2 code implementations • 27 May 2019 • Jan Macdonald, Stephan Wäldchen, Sascha Hauch, Gitta Kutyniok
We formalise the widespread idea of interpreting neural network decisions as an explicit optimisation problem in a rate-distortion framework.
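The rate-distortion idea can be illustrated on a toy problem: find a sparse mask that keeps the model's output close to the original (low distortion) while switching off as many features as possible (low rate, via an l1 penalty). The linear "black box", baseline, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy linear "black box": only the first two features matter.
w = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
f = lambda z: w @ z

x = np.array([1.0, -1.0, 0.5, -0.5, 2.0])   # input to explain
b = np.zeros(5)                              # baseline (obfuscated signal)
s = np.full(5, 0.5)                          # relaxed mask in [0, 1]
lam, lr = 0.05, 0.1                          # rate penalty and step size

for _ in range(500):
    masked = s * x + (1 - s) * b
    # Gradient of (f(masked) - f(x))^2 for the linear model,
    # plus the subgradient of the l1 rate penalty lam * ||s||_1.
    grad = 2 * (f(masked) - f(x)) * w * (x - b) + lam
    s = np.clip(s - lr * grad, 0.0, 1.0)
```

The mask concentrates on the two features the model actually uses and drives the irrelevant ones to zero; RDE applies the same principle to deep networks with suitable obfuscation distributions.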
no code implementations • 3 May 2019 • Rémi Gribonval, Gitta Kutyniok, Morten Nielsen, Felix Voigtlaender
We study the expressivity of deep neural networks.
no code implementations • 31 Mar 2019 • Gitta Kutyniok, Philipp Petersen, Mones Raslan, Reinhold Schneider
We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations.
no code implementations • 21 Feb 2019 • Ingo Gühring, Gitta Kutyniok, Philipp Petersen
We analyze approximation rates of deep ReLU neural networks for Sobolev-regular functions with respect to weaker Sobolev norms.
no code implementations • 29 Jan 2019 • Ron Levie, Elvin Isufi, Gitta Kutyniok
For filters in this space, the perturbation in the filter is bounded by a constant times the perturbation in the graph, and filters in the Cayley smoothness space are thus termed linearly stable.
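The kind of stability bound stated above can be checked numerically on a small example: apply a smooth spectral (functional-calculus) filter to a graph Laplacian, perturb one edge, and compare the filter perturbation to the graph perturbation. The random graph and the filter g(x) = exp(-x) are illustrative choices, not the Cayley filters of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def laplacian(adj):
    return np.diag(adj.sum(1)) - adj

def apply_filter(L, g):
    """Functional-calculus filter: apply the scalar function g
    to the spectrum of the symmetric matrix L."""
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(g(lam)) @ U.T

# Random symmetric 0/1 graph, no self-loops.
n = 12
A = rng.random((n, n))
A = ((A + A.T) / 2 > 0.5).astype(float)
np.fill_diagonal(A, 0)

# Perturb a single edge weight by 0.05.
E = np.zeros((n, n))
E[0, 1] = E[1, 0] = 0.05

g = lambda x: np.exp(-x)   # smooth, 1-Lipschitz on the spectrum [0, inf)

d_filter = np.linalg.norm(apply_filter(laplacian(A), g)
                          - apply_filter(laplacian(A + E), g))
d_graph = np.linalg.norm(laplacian(A) - laplacian(A + E))
```

For a function whose divided differences are bounded by 1 on the (nonnegative) Laplacian spectrum, the Frobenius-norm filter perturbation is bounded by the graph perturbation, which is the linear-stability behavior described above.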
no code implementations • 17 Jan 2019 • Dominik Alfke, Weston Baines, Jan Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, Dennis Elbrächter, Nando Farchmin, Matteo Gambara, Silke Glas, Philipp Grohs, Peter Hinz, Danijel Kivaranovic, Christian Kümmerle, Gitta Kutyniok, Sebastian Lunz, Jan Macdonald, Ryan Malthaner, Gregory Naisat, Ariel Neufeld, Philipp Christian Petersen, Rafael Reisenhofer, Jun-Da Sheng, Laura Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber
We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results.
1 code implementation • 5 Jan 2019 • Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen
Microlocal analysis provides deep insight into singularity structures and is often crucial for solving inverse problems, predominantly in imaging sciences.
no code implementations • 20 Aug 2018 • Martin Genzel, Gitta Kutyniok
We study the estimation capacity of the generalized Lasso, i.e., least squares minimization combined with a (convex) structural constraint.
no code implementations • 4 May 2017 • Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements.
no code implementations • 1 May 2017 • Jackie Ma, Maximilian März, Stephanie Funk, Jeanette Schulz-Menger, Gitta Kutyniok, Tobias Schaeffter, Christoph Kolbitsch
High-resolution three-dimensional (3D) cardiovascular magnetic resonance (CMR) is a valuable medical imaging technique, but its widespread application in clinical practice is hampered by long acquisition times.
no code implementations • 31 Aug 2016 • Martin Genzel, Gitta Kutyniok
In this paper, we study the challenge of feature selection based on a relatively small collection of sample pairs $\{(x_i, y_i)\}_{1 \leq i \leq m}$.
1 code implementation • 20 Jul 2016 • Rafael Reisenhofer, Sebastian Bosse, Gitta Kutyniok, Thomas Wiegand
In most practical situations, the compression or transmission of images and videos creates distortions that will eventually be perceived by a human observer.
Ranked #12 on Video Quality Assessment on the MSU FR VQA Database
no code implementations • 11 Jun 2015 • Tim Conrad, Martin Genzel, Nada Cvetkovic, Niklas Wulkow, Alexander Leichtle, Jan Vybiral, Gitta Kutyniok, Christof Schütte
Results: We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that allows us to identify a minimal discriminating set of features from mass spectrometry datasets.