Search Results for author: Georgios Arvanitidis

Found 17 papers, 3 papers with code

Monge SAM: Robust Reparameterization-Invariant Sharpness-Aware Minimization Based on Loss Geometry

no code implementations · 12 Feb 2025 · Albert Kjøller Jacobsen, Georgios Arvanitidis

Recent studies on deep neural networks show that flat minima of the loss landscape correlate with improved generalization.
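For context, the sharpness-aware minimization (SAM) baseline that this paper builds on can be sketched in a few lines: perturb the weights toward the locally sharpest nearby point, then descend using the gradient evaluated there. The toy quadratic loss and hyperparameters below are illustrative, not taken from the paper, and this is plain SAM rather than the reparameterization-invariant Monge SAM variant:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step: ascend to the
    locally sharpest nearby point, then descend using the gradient
    evaluated at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at the perturbed point
    return w - lr * g_sharp

# toy quadratic loss L(w) = 0.5 * w^T A w, with gradient A w
A = np.diag([1.0, 10.0])
grad_fn = lambda w: A @ w

w = np.array([1.0, 1.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
```

With a fixed perturbation radius `rho`, the iterates settle in a small neighbourhood of the minimum rather than converging exactly, which is the expected behaviour of this update.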

Extended Neural Contractive Dynamical Systems: On Multiple Tasks and Riemannian Safety Regions

no code implementations · 18 Nov 2024 · Hadi Beik Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo

Stability guarantees are crucial when ensuring that a fully autonomous robot does not take undesirable or potentially harmful actions.

Counterfactual Explanations via Riemannian Latent Space Traversal

no code implementations · 4 Nov 2024 · Paraskevas Pegios, Aasa Feragen, Andreas Abildtrup Hansen, Georgios Arvanitidis

The adoption of increasingly complex deep models has fueled an urgent need for insight into how these models make predictions.

Counterfactual Explanation

Neural Contractive Dynamical Systems

no code implementations · 17 Jan 2024 · Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Nadia Figueroa, Gerhard Neumann, Leonel Rozo

Stability guarantees are crucial when ensuring a fully autonomous robot does not take undesirable or potentially harmful actions.

On the curvature of the loss landscape

no code implementations · 10 Jul 2023 · Alison Pouplin, Hrittik Roy, Sidak Pal Singh, Georgios Arvanitidis

In this work, we consider the loss landscape as an embedded Riemannian manifold and show that the differential geometric properties of the manifold can be used when analyzing the generalization abilities of a deep net.
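A common, purely extrinsic way to probe loss-landscape curvature is through the eigenvalues of the loss Hessian. The finite-difference sketch below on a toy quadratic loss is a generic illustration of that idea, not the differential-geometric analysis carried out in the paper; all names are hypothetical:

```python
import numpy as np

def hessian(f, w, h=1e-4):
    """Finite-difference Hessian of a scalar loss f at point w.
    Its eigenvalues are a standard extrinsic curvature proxy
    for the loss surface around w."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * h * h)
    return H

# toy loss with principal curvatures 1 and 10 at the minimum
loss = lambda w: 0.5 * (w[0] ** 2 + 10.0 * w[1] ** 2)
H = hessian(loss, np.zeros(2))
eigs = np.sort(np.linalg.eigvalsh(H))
```

For this quadratic the estimate recovers the exact eigenvalues; for a real network one would use Hessian-vector products instead of dense finite differences.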

Reactive Motion Generation on Learned Riemannian Manifolds

no code implementations · 15 Mar 2022 · Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo

We argue that Riemannian manifolds may be learned via human demonstrations in which geodesics are natural motion skills.

Motion Generation

On the Impact of Stable Ranks in Deep Nets

no code implementations · 5 Oct 2021 · Bogdan Georgiev, Lukas Franken, Mayukh Mukherjee, Georgios Arvanitidis

A recent line of work has established intriguing connections between the generalization/compression properties of a deep neural network (DNN) model and the stable ranks of its layer weights.

Natural Questions

Pulling back information geometry

1 code implementation · 9 Jun 2021 · Georgios Arvanitidis, Miguel González-Duque, Alison Pouplin, Dimitris Kalatzis, Søren Hauberg

Latent space geometry has shown itself to provide a rich and rigorous framework for interacting with the latent variables of deep generative models.

Decoder

Learning Riemannian Manifolds for Geodesic Motion Skills

no code implementations · 8 Jun 2021 · Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo

For robots to work alongside humans and perform in unstructured environments, they must learn new motion skills and adapt them to unseen situations on the fly.

A prior-based approximate latent Riemannian metric

no code implementations · 9 Mar 2021 · Georgios Arvanitidis, Bogdan Georgiev, Bernhard Schölkopf

In this work we propose a surrogate conformal Riemannian metric in the latent space of a generative model that is simple, efficient and robust.

Bayesian Quadrature on Riemannian Data Manifolds

1 code implementation · 12 Feb 2021 · Christian Fröhlich, Alexandra Gessner, Philipp Hennig, Bernhard Schölkopf, Georgios Arvanitidis

Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.

Geometrically Enriched Latent Spaces

no code implementations · 2 Aug 2020 · Georgios Arvanitidis, Søren Hauberg, Bernhard Schölkopf

A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space.

Variational Autoencoders with Riemannian Brownian Motion Priors

no code implementations · ICML 2020 · Dimitris Kalatzis, David Eklund, Georgios Arvanitidis, Søren Hauberg

Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean.

Fast and Robust Shortest Paths on Manifolds Learned from Data

no code implementations · 22 Jan 2019 · Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, Michael Schober

We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data.

Metric Learning
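The paper's own solver is not reproduced here, but a standard baseline for shortest paths on a manifold learned from data is Dijkstra's algorithm on a k-nearest-neighbour graph with Euclidean edge weights. The sketch below, with illustrative function names and parameters, recovers the arc length of a half-circle rather than the straight-line (chord) distance:

```python
import heapq
import numpy as np

def knn_graph_distances(X, k=5, source=0):
    """Approximate manifold shortest-path distances from `source`
    by running Dijkstra on a k-nearest-neighbour graph whose edge
    weights are Euclidean distances between neighbouring samples."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # each point keeps its k nearest neighbours (column 0 is the point itself)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]

    dist = np.full(n, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > dist[u]:
            continue
        for v in nbrs[u]:
            alt = du + d[u, v]
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(heap, (alt, v))
    return dist

# points on a half-circle: the graph distance between the endpoints
# approximates the arc length (pi), not the chord length (2)
theta = np.linspace(0, np.pi, 50)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
dist = knn_graph_distances(X, k=2, source=0)
```

Such graph-based estimates are simple but scale poorly and are noisy, which is the kind of shortcoming fast dedicated solvers address.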

Geodesic Clustering in Deep Generative Models

no code implementations · 13 Sep 2018 · Tao Yang, Georgios Arvanitidis, Dongmei Fu, Xiaogang Li, Søren Hauberg

Deep generative models are tremendously successful at learning low-dimensional latent representations that describe the data well.

Clustering

Latent Space Oddity: on the Curvature of Deep Generative Models

1 code implementation · ICLR 2018 · Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg

Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space.

Clustering
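The latent-space curvature studied in this line of work comes from the pull-back Riemannian metric M(z) = J(z)^T J(z), where J is the Jacobian of the generator. A minimal finite-difference sketch, using a toy one-dimensional "generator" and hypothetical names:

```python
import numpy as np

def pullback_metric(g, z, h=1e-5):
    """Riemannian metric that a generator g induces in latent space:
    M(z) = J(z)^T J(z), with J the Jacobian of g estimated by
    central finite differences."""
    z = np.asarray(z, dtype=float)
    J = np.stack([(g(z + h * e) - g(z - h * e)) / (2 * h)
                  for e in np.eye(len(z))], axis=1)
    return J.T @ J

# toy generator mapping a 1D latent coordinate onto the unit circle;
# its Jacobian (-sin z, cos z) has unit norm, so M(z) should be [[1]]
g = lambda z: np.array([np.cos(z[0]), np.sin(z[0])])
M = pullback_metric(g, [0.3])
```

Curve lengths measured under M(z) then match distances in the input space, which is what makes latent geodesics meaningful.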

A Locally Adaptive Normal Distribution

no code implementations · NeurIPS 2016 · Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg

The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric.

EEG
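The snippet's observation, that the normal density is a monotone function of the distance to the mean, can be made concrete by writing the density explicitly in terms of the Mahalanobis distance. This sketch is illustrative only and is not the locally adaptive construction proposed in the paper:

```python
import numpy as np

def mvn_density(x, mu, Sigma):
    """Multivariate normal density written as a monotonically
    decreasing function of the Mahalanobis distance to the mean."""
    d = len(mu)
    diff = x - mu
    maha2 = diff @ np.linalg.solve(Sigma, diff)   # squared Mahalanobis distance
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return np.exp(-0.5 * maha2) / norm

mu = np.zeros(2)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])

# two points at equal Mahalanobis distance from the mean lie on the
# same ellipsoidal level set, so they get the same density
L = np.linalg.cholesky(Sigma)
p1 = mvn_density(L @ np.array([1.0, 0.0]), mu, Sigma)
p2 = mvn_density(L @ np.array([0.0, 1.0]), mu, Sigma)
```

Replacing the Euclidean metric behind that distance with a locally adaptive one is what changes the ellipsoidal level sets into data-adapted ones.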
