no code implementations • 17 Jan 2024 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Nadia Figueroa, Gerhard Neumann, Leonel Rozo
Stability guarantees are crucial for ensuring that a fully autonomous robot does not take undesirable or potentially harmful actions.
no code implementations • 10 Jul 2023 • Alison Pouplin, Hrittik Roy, Sidak Pal Singh, Georgios Arvanitidis
In this work, we consider the loss landscape as an embedded Riemannian manifold and show that the differential geometric properties of the manifold can be used to analyze the generalization abilities of a deep network.
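One standard way to make this concrete (a sketch of the general construction, not necessarily the paper's exact setup): view the loss surface as the graph of the loss $\mathcal{L}$ over parameter space, embedded in Euclidean space, so that the embedding and its induced (pullback) metric are

$$\phi(\theta) = \big(\theta,\, \mathcal{L}(\theta)\big), \qquad g(\theta) = I + \nabla\mathcal{L}(\theta)\,\nabla\mathcal{L}(\theta)^\top,$$

from which curvature quantities of the landscape can be computed.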
no code implementations • 15 Mar 2022 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo
We argue that Riemannian manifolds may be learned via human demonstrations, in which geodesics correspond to natural motion skills.
no code implementations • 5 Oct 2021 • Bogdan Georgiev, Lukas Franken, Mayukh Mukherjee, Georgios Arvanitidis
A recent line of work has established intriguing connections between the generalization/compression properties of a deep neural network (DNN) and the so-called stable ranks of its layer weights.
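For reference, the stable rank of a weight matrix $W$ is the standard quantity

$$\operatorname{srank}(W) = \frac{\lVert W\rVert_F^2}{\lVert W\rVert_2^2} = \frac{\sum_i \sigma_i^2(W)}{\sigma_{\max}^2(W)} \;\le\; \operatorname{rank}(W),$$

a smooth proxy for the rank that is insensitive to small singular values.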
1 code implementation • 9 Jun 2021 • Georgios Arvanitidis, Miguel González-Duque, Alison Pouplin, Dimitris Kalatzis, Søren Hauberg
Latent space geometry has proven to be a rich and rigorous framework for interacting with the latent variables of deep generative models.
no code implementations • 8 Jun 2021 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo
For robots to work alongside humans and perform in unstructured environments, they must learn new motion skills and adapt them to unseen situations on the fly.
no code implementations • 9 Mar 2021 • Georgios Arvanitidis, Bogdan Georgiev, Bernhard Schölkopf
In this work we propose a surrogate conformal Riemannian metric in the latent space of a generative model that is simple, efficient and robust.
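A conformal Riemannian metric rescales the Euclidean inner product pointwise; schematically (the paper's specific choice of conformal factor is not reproduced here),

$$g_z = \lambda(z)\,\mathbb{I}_d, \qquad \lambda(z) > 0,$$

so that curve length becomes $\int_0^1 \sqrt{\lambda(c(t))}\,\lVert \dot c(t)\rVert\,dt$ and shortest paths prefer regions where $\lambda$ is small, while the metric stays cheap to evaluate and invert.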
1 code implementation • 12 Feb 2021 • Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis
Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.
no code implementations • 2 Aug 2020 • Georgios Arvanitidis, Søren Hauberg, Bernhard Schölkopf
A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space.
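Under this immersion assumption, the generator $f$ pulls the Euclidean metric of the ambient space back to the latent space; writing $J_f$ for its Jacobian,

$$G(z) = J_f(z)^\top J_f(z), \qquad J_f(z) = \frac{\partial f}{\partial z}(z),$$

which is the Riemannian metric underlying latent-space distances and geodesics in this line of work.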
no code implementations • ICML 2020 • Dimitris Kalatzis, David Eklund, Georgios Arvanitidis, Søren Hauberg
Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean.
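For context (the general VAE objective, not this paper's specific construction): a VAE with encoder $q_\phi(z\mid x)$ and decoder $p_\theta(x\mid z)$ is trained by maximizing the evidence lower bound

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] - \mathrm{KL}\big(q_\phi(z\mid x)\,\Vert\, p(z)\big),$$

where the prior $p(z)$ is typically a standard Gaussian on a Euclidean latent space; this is the Euclidean assumption referred to above.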
no code implementations • 22 Jan 2019 • Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, Michael Schober
We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data.
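For reference, on a manifold with metric $g$ a shortest path between $x$ and $y$ is a geodesic, i.e., a solution of the boundary value problem

$$\ddot c^k(t) + \Gamma^k_{ij}\big(c(t)\big)\,\dot c^i(t)\,\dot c^j(t) = 0, \qquad c(0) = x,\; c(1) = y,$$

with Christoffel symbols $\Gamma^k_{ij} = \tfrac12 g^{kl}\big(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\big)$ (summation over repeated indices); the algorithmic challenge addressed here is solving this BVP quickly and robustly on metrics learned from data.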
no code implementations • 13 Sep 2018 • Tao Yang, Georgios Arvanitidis, Dongmei Fu, Xiaogang Li, Søren Hauberg
Deep generative models are tremendously successful in learning low-dimensional latent representations that describe the data well.
1 code implementation • ICLR 2018 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg
Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space.
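As a minimal numerical sketch of how such a generator induces latent-space geometry (a toy stand-in, not the paper's model): approximate the generator's Jacobian by finite differences and form the pullback metric $J^\top J$.

```python
import numpy as np

def generator(z):
    # Toy nonlinear generator f: R^2 -> R^3, a hypothetical stand-in
    # for a trained decoder network.
    return np.array([z[0], z[1], np.sin(z[0]) * np.cos(z[1])])

def pullback_metric(f, z, eps=1e-6):
    # Finite-difference Jacobian J of f at z, then G(z) = J^T J.
    x0 = f(z)
    J = np.zeros((x0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - x0) / eps
    return J.T @ J

G = pullback_metric(generator, np.array([0.3, -0.7]))
print(G)  # 2x2 symmetric positive (semi-)definite matrix
```

In practice the Jacobian of a trained network would be obtained by automatic differentiation rather than finite differences; the construction of $G$ is the same.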
no code implementations • NeurIPS 2016 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg
The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric.
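Concretely, writing $d_\Sigma$ for the Mahalanobis distance induced by the covariance $\Sigma$,

$$p(x) = \frac{1}{\sqrt{(2\pi)^D\,\lvert\Sigma\rvert}} \exp\!\Big(-\tfrac12\, d_\Sigma(x,\mu)^2\Big), \qquad d_\Sigma(x,\mu) = \sqrt{(x-\mu)^\top \Sigma^{-1} (x-\mu)},$$

so the density depends on $x$ only through this metric-induced distance; replacing it with a distance derived from a data-driven Riemannian metric is the generalization this observation suggests.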