Search Results for author: Marina Munkhoeva

Found 8 papers, 3 papers with code

Spectral Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning

no code implementations31 May 2023 Marina Munkhoeva, Ivan Oseledets

Self-supervised methods received tremendous attention thanks to their seemingly heuristic approach to learning representations that respect the semantics of the data without any apparent supervision in the form of labels.

Inductive Bias, Low-Rank Matrix Completion, +2

Unsupervised Embedding Quality Evaluation

no code implementations26 May 2023 Anton Tsitsulin, Marina Munkhoeva, Bryan Perozzi

Unsupervised learning has recently gained significantly in popularity, especially with deep learning-based approaches.

Self-Supervised Learning

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks

1 code implementation22 Sep 2021 Mikhail Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan Oseledets

In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks -- small modifications of the input that change the predictions.
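To make the notion of "small modifications of the input that change the predictions" concrete, here is a minimal illustrative sketch of a standard gradient-sign (FGSM-style) attack against a hand-rolled logistic model. This is not the CC-Cert certification method; the weights, inputs, and epsilon below are made-up values for demonstration only.

```python
# Illustrative sketch (not the CC-Cert method): an FGSM-style adversarial
# perturbation against a hand-rolled logistic-regression model in NumPy.
# All weights and inputs are hypothetical, chosen for demonstration.
import numpy as np

def predict(w, b, x):
    """Probability of class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(w, b, x, y, eps):
    """One FGSM step: move x by eps along the sign of the loss gradient.

    For logistic loss, d(loss)/dx = (p - y) * w, so the attack direction
    is sign((p - y) * w).
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -3.0, 1.0])
b = 0.1
x = np.array([0.5, -0.2, 0.3])   # confidently classified as class 1
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.5)
print(predict(w, b, x), predict(w, b, x_adv))  # ~0.88 drops to ~0.27
```

The small per-coordinate shift of 0.5 flips the prediction from class 1 to class 0, which is exactly the failure mode that robustness certification aims to rule out within a given perturbation budget.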

Adversarial Robustness

GRASP: Graph Alignment through Spectral Signatures

no code implementations10 Jun 2021 Judith Hermanns, Anton Tsitsulin, Marina Munkhoeva, Alex Bronstein, Davide Mottin, Panagiotis Karras

In this paper, we transfer the shape-analysis concept of functional maps from the continuous to the discrete case, and treat the graph alignment problem as a special case of the problem of finding a mapping between functions on graphs.
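A minimal sketch of the basic property spectral-signature methods build on (this is not the GRASP algorithm itself): the Laplacian spectrum of a graph is invariant under node relabeling, so isomorphic graphs share identical eigenvalues. The toy 4-cycle and permutation below are made up for illustration.

```python
# Illustrative sketch (not the GRASP algorithm): Laplacian spectra as
# permutation-invariant graph signatures. Two isomorphic graphs (the same
# 4-cycle under a node relabeling) have identical eigenvalues.
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the combinatorial Laplacian L = D - A, sorted ascending."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return np.sort(np.linalg.eigvalsh(lap))

# A 4-cycle: 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# The same graph with its nodes permuted
perm = [2, 0, 3, 1]
P = np.eye(4)[perm]
B = P @ A @ P.T

print(laplacian_spectrum(A))                                        # [0. 2. 2. 4.]
print(np.allclose(laplacian_spectrum(A), laplacian_spectrum(B)))    # True
```

Because the spectra (and, with care, the eigenvectors) match up to permutation, comparing spectral quantities across two graphs gives a handle on finding a node-to-node mapping between them, analogous to functional maps in shape analysis.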

FREDE: Anytime Graph Embeddings

no code implementations8 Jun 2020 Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Ivan Oseledets, Emmanuel Müller

Low-dimensional representations, or embeddings, of a graph's nodes facilitate several practical data science and data engineering tasks.

Graph Embedding

The Shape of Data: Intrinsic Distance for Data Distributions

2 code implementations ICLR 2020 Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Alex Bronstein, Ivan Oseledets, Emmanuel Müller

The ability to represent and compare machine learning models is crucial in order to quantify subtle model changes, evaluate generative models, and gather insights on neural network architectures.
