Search Results for author: Andreas Loukas

Found 37 papers, 12 papers with code

Batched Predictors Generalize within Distribution

no code implementations 18 Jul 2023 Andreas Loukas, Pan Kessel

We study the generalization properties of batched predictors, i.e., models tasked with predicting the mean label of a small set (or batch) of examples.
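
A minimal sketch of this setup, with an arbitrary stand-in network and squared loss rather than the paper's models: the predictor is trained against the mean of a batch's labels instead of per-example labels.

```python
import torch

# Hypothetical stand-in predictor; the paper studies the general setup,
# not this specific architecture.
torch.manual_seed(0)
f = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.ReLU(),
                        torch.nn.Linear(16, 1))

def batched_prediction(xs):         # xs: (k, 5), one batch of k examples
    return f(xs).mean()             # a single prediction for the whole batch

xs, ys = torch.randn(8, 5), torch.randn(8)
loss = (batched_prediction(xs) - ys.mean()) ** 2   # regress the mean label
loss.backward()
```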

Protein Discovery with Discrete Walk-Jump Sampling

1 code implementation 8 Jun 2023 Nathan C. Frey, Daniel Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, Saeed Saremi

We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising.

Denoising
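
A toy sketch of the walk-jump idea under strong assumptions: the data manifold is just the two points {-1, +1}, so the optimal denoiser of the smoothed density has a closed form; in the paper, a learned smoothed energy/denoising model plays this role.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, step, n_steps = 0.5, 0.05, 200

def denoise(y):
    # Posterior mean E[x | y] for x uniform on {-1, +1}, y = x + N(0, sigma^2).
    return np.tanh(y / sigma**2)

def score(y):
    # Tweedie's formula: score of the smoothed density from the denoiser.
    return (denoise(y) - y) / sigma**2

# "Walk": Langevin MCMC on the smoothed data manifold.
y = 2.0 * rng.normal(size=1000)
for _ in range(n_steps):
    y += step * score(y) + np.sqrt(2 * step) * rng.normal(size=y.shape)

# "Jump": one-step denoising projects back to the true data manifold.
x = denoise(y)
print(np.mean(np.abs(x) > 0.9))   # most samples land near -1 or +1
```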

Infusing Lattice Symmetry Priors in Attention Mechanisms for Sample-Efficient Abstract Geometric Reasoning

no code implementations 5 Jun 2023 Mattia Atzeni, Mrinmaya Sachan, Andreas Loukas

As a step towards this goal, we focus on geometry priors and introduce LatFormer, a model that incorporates lattice symmetry priors in attention masks.

Towards Understanding and Improving GFlowNet Training

1 code implementation 11 May 2023 Max W. Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, Tommaso Biancalani

We investigate how to learn better flows, and propose (i) prioritized replay training of high-reward $x$, (ii) relative edge flow policy parametrization, and (iii) a novel guided trajectory balance objective, and show how it can solve a substructure credit assignment problem.
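
For context, a minimal sketch of the standard trajectory balance objective that the guided variant builds on; the policies and reward below are random stand-ins for a real GFlowNet.

```python
import torch

log_Z = torch.nn.Parameter(torch.zeros(()))   # learnable log partition function

def trajectory_balance_loss(log_pf, log_pb, log_reward):
    # log_pf, log_pb: (T,) per-step forward/backward log-probs along one
    # sampled trajectory; log_reward: log R(x) of its terminal state x.
    return (log_Z + log_pf.sum() - log_reward - log_pb.sum()) ** 2

log_pf = torch.log_softmax(torch.randn(5, 3), dim=-1)[:, 0]
log_pb = torch.log_softmax(torch.randn(5, 3), dim=-1)[:, 0]
loss = trajectory_balance_loss(log_pf, log_pb, torch.tensor(1.0))
loss.backward()                               # gradient flows into log_Z
```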

On the generalization of learning algorithms that do not converge

no code implementations 16 Aug 2022 Nisha Chandramoorthy, Andreas Loukas, Khashayar Gatmiry, Stefanie Jegelka

To reduce this discrepancy between theory and practice, this paper focuses on the generalization of neural networks whose training dynamics do not necessarily converge to fixed points.

Learning Theory

SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators

1 code implementation 4 Apr 2022 Karolis Martinkus, Andreas Loukas, Nathanaël Perraudin, Roger Wattenhofer

We approach the graph generation problem from a spectral perspective by first generating the dominant parts of the graph Laplacian spectrum and then building a graph matching these eigenvalues and eigenvectors.

Graph Generation · Graph Matching
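
As a loose illustration of this spectral view (not the paper's generative pipeline), one can truncate a Laplacian to its dominant eigenpairs and read candidate edges back off the low-rank matrix; the graph and threshold below are arbitrary choices.

```python
import numpy as np
import networkx as nx

G = nx.barbell_graph(6, 2)
L = nx.laplacian_matrix(G).toarray().astype(float)
lam, U = np.linalg.eigh(L)                     # eigenvalues in ascending order

k = 4                                          # dominant (smallest) eigenpairs
L_k = U[:, :k] @ np.diag(lam[:k]) @ U[:, :k].T

# Strong negative off-diagonal entries of the low-rank Laplacian mark
# likely edges.
A_hat = (-L_k > 0.2).astype(int)
np.fill_diagonal(A_hat, 0)
print("true edges:", G.number_of_edges(), "recovered:", int(A_hat.sum()) // 2)
```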

SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning

no code implementations NeurIPS 2021 Mattia Atzeni, Jasmina Bogojeska, Andreas Loukas

State-of-the-art approaches to reasoning and question answering over knowledge graphs (KGs) usually scale with the number of edges and can only be applied effectively on small instance-dependent subgraphs.

Knowledge Graphs · Logical Reasoning +1

Neural Extensions: Training Neural Networks with Set Functions

no code implementations 29 Sep 2021 Nikolaos Karalias, Joshua David Robinson, Andreas Loukas, Stefanie Jegelka

Our framework includes well-known extensions such as the Lovasz extension of submodular set functions and facilitates the design of novel continuous extensions based on problem-specific considerations, including constraints.

Combinatorial Optimization · Image Classification
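
The Lovász extension named above admits a compact implementation; a sketch, with an edge-cut set function as a worked example (our choice, not the paper's).

```python
import numpy as np

def lovasz_extension(F, x):
    # f(x) = sum_i x_{sigma(i)} * (F(S_i) - F(S_{i-1})), with coordinates
    # sorted in descending order and S_i the first i sorted indices.
    order = np.argsort(-x)
    value, prev, selected = 0.0, F(frozenset()), set()
    for i in order:
        selected.add(i)
        cur = F(frozenset(selected))
        value += x[i] * (cur - prev)
        prev = cur
    return value

# Cut value of the single edge (0, 1): a submodular set function whose
# Lovász extension is |x0 - x1|.
cut = lambda S: float((0 in S) != (1 in S))
print(lovasz_extension(cut, np.array([1.0, 0.0])))   # 1.0 (agrees with F({0}))
print(lovasz_extension(cut, np.array([0.7, 0.3])))   # 0.4 = |0.7 - 0.3|
```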

What training reveals about neural network complexity

1 code implementation NeurIPS 2021 Andreas Loukas, Marinos Poiitis, Stefanie Jegelka

This work explores the Benevolent Training Hypothesis (BTH), which argues that the complexity of the function a deep neural network (NN) is learning can be deduced from its training dynamics.

Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth

1 code implementation 5 Mar 2021 Yihe Dong, Jean-Baptiste Cordonnier, Andreas Loukas

Attention-based architectures have become ubiquitous in machine learning, yet our understanding of the reasons for their effectiveness remains limited.

Inductive Bias
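
A quick empirical probe of the titular claim, assuming pure softmax self-attention with random weights and no skip connections, MLPs, or layer norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_layer(X, d):
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    S = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    P = np.exp(S - S.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)         # row-stochastic attention
    return P @ X @ Wv

def rank1_residual(X):
    # Relative distance to the nearest rank-1 matrix.
    s = np.linalg.svd(X, compute_uv=False)
    return np.sqrt((s[1:] ** 2).sum() / (s ** 2).sum())

X, d = rng.normal(size=(16, 32)), 32
for layer in range(8):
    X = attention_layer(X, d)
    print(layer, rank1_residual(X))           # typically shrinks fast with depth
```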

Multi-Head Attention: Collaborate Instead of Concatenate

2 code implementations 29 Jun 2020 Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi

We also show that it is possible to re-parametrize a pre-trained multi-head attention layer into our collaborative attention layer.

Machine Translation · Translation
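
A sketch of the collaborative scoring idea as we read it: all heads share the key and query projections and differ only by a per-head mixing vector, instead of owning separate concatenated slices.

```python
import torch

d_model, d_shared, n_heads, T = 64, 32, 4, 10
Wq = torch.randn(d_model, d_shared) / d_shared ** 0.5   # shared query projection
Wk = torch.randn(d_model, d_shared) / d_shared ** 0.5   # shared key projection
mix = torch.randn(n_heads, d_shared)                    # per-head mixing vectors

X = torch.randn(T, d_model)
Q, K = X @ Wq, X @ Wk

# Head h scores: Q diag(mix[h]) K^T / sqrt(d_shared).
scores = torch.einsum('ti,hi,si->hts', Q, mix, K) / d_shared ** 0.5
attn = scores.softmax(dim=-1)
print(attn.shape)   # (n_heads, T, T)
```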

Building powerful and equivariant graph neural networks with structural message-passing

1 code implementation NeurIPS 2020 Clement Vignac, Andreas Loukas, Pascal Frossard

We address this problem and propose a powerful and equivariant message-passing framework based on two ideas: first, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.

Graph Regression · Inductive Bias
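
A toy sketch of the first ingredient only, appending a one-hot node encoding to the features before message passing; the full model propagates local context matrices rather than flat vectors.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # path graph on 4 nodes
X = np.ones((4, 2))                            # identical node features
n = len(A)

H = np.concatenate([X, np.eye(n)], axis=1)     # features + one-hot node IDs
for _ in range(2):                             # two rounds of mean aggregation
    H = np.concatenate([H, A @ H / A.sum(1, keepdims=True)], axis=1)

print(H.shape)   # the one-hot channels now trace each node's 2-hop context
```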

How hard is to distinguish graphs with graph neural networks?

no code implementations NeurIPS 2020 Andreas Loukas

A hallmark of graph neural networks is their ability to distinguish the isomorphism class of their inputs.

Graph Classification

On the Relationship between Self-Attention and Convolutional Layers

1 code implementation ICLR 2020 Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi

This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice.

Image Classification
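
A toy numerical check in that direction: attention heads that attend deterministically to fixed relative offsets reproduce a 1-D convolution, one head per filter tap (hard one-hot attention is an idealization of what trained layers approximate).

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 12, 3                        # sequence length, kernel size
x = rng.normal(size=N)
w = rng.normal(size=K)              # convolution kernel

out = np.zeros(N)
for h, delta in enumerate([-1, 0, 1]):    # one head per relative offset
    P = np.zeros((N, N))                  # hard (one-hot) attention matrix
    for i in range(N):
        P[i, min(max(i + delta, 0), N - 1)] = 1.0   # clamp at the borders
    out += w[h] * (P @ x)                 # head h's value path scales by w[h]

conv = np.convolve(np.pad(x, 1, mode='edge'), w[::-1], mode='valid')
print(np.abs(out - conv).max())           # ~0 up to rounding error
```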

What graph neural networks cannot learn: depth vs width

no code implementations ICLR 2020 Andreas Loukas

This paper studies the expressive power of graph neural networks falling within the message-passing framework (GNNmp).

Distributed Computing

Discriminative structural graph classification

no code implementations 31 May 2019 Younjoo Seo, Andreas Loukas, Nathanaël Perraudin

This paper focuses on the discrimination capacity of aggregation functions: these are the permutation invariant functions used by graph neural networks to combine the features of nodes.

General Classification · Graph Classification
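
A standard concrete instance of the discrimination gap in question (our example, not the paper's): the sum aggregator separates these two feature multisets, while mean and max map both to the same value.

```python
import numpy as np

bag1 = np.array([1.0, 1.0, 2.0, 2.0])    # multiset {1, 1, 2, 2}
bag2 = np.array([1.0, 2.0])              # multiset {1, 2}

for name, agg in [("sum", np.sum), ("mean", np.mean), ("max", np.max)]:
    print(f"{name}: {agg(bag1)} vs {agg(bag2)}")
# sum:  6.0 vs 3.0  -> distinguishes the bags
# mean: 1.5 vs 1.5  -> collision
# max:  2.0 vs 2.0  -> collision
```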

The role of invariance in spectral complexity-based generalization bounds

no code implementations 23 May 2019 Konstantinos Pitas, Andreas Loukas, Mike Davies, Pierre Vandergheynst

Deep convolutional neural networks (CNNs) have been shown to be able to fit a random labeling over data while still being able to generalize well for normal labels.

Generalization Bounds

Extrapolating paths with graph neural networks

1 code implementation 18 Mar 2019 Jean-Baptiste Cordonnier, Andreas Loukas

We consider the problem of path inference: given a path prefix, i.e., a partially observed sequence of nodes in a graph, we want to predict which nodes are in the missing suffix.

Approximating Spectral Clustering via Sampling: a Review

no code implementations 29 Jan 2019 Nicolas Tremblay, Andreas Loukas

Spectral clustering refers to a family of unsupervised learning algorithms that compute a spectral embedding of the original data based on the eigenvectors of a similarity graph.

Clustering
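
For reference, vanilla (unsampled) spectral clustering looks as follows; the review surveys how each of the three steps can be accelerated by sampling.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(3, .3, (30, 2))])

# 1. Similarity graph (Gaussian kernel) and its normalized Laplacian.
W = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
d = W.sum(1)
L = np.eye(len(X)) - W / np.sqrt(d[:, None] * d[None])

# 2. Spectral embedding: eigenvectors of the 2 smallest eigenvalues.
_, U = eigh(L, subset_by_index=[0, 1])

# 3. k-means on the embedding.
_, labels = kmeans2(U, 2, seed=0)
print(labels)   # the two blobs receive two distinct labels
```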

Graph reduction with spectral and cut guarantees

no code implementations 31 Aug 2018 Andreas Loukas

Can one reduce the size of a graph without significantly altering its basic properties?

How Close Are the Eigenvectors of the Sample and Actual Covariance Matrices?

no code implementations ICML 2017 Andreas Loukas

How many samples are sufficient to guarantee that the eigenvectors of the sample covariance matrix are close to those of the actual covariance matrix?

Dimensionality Reduction
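
A quick numerical probe of this question (dimensions and spectrum below are arbitrary): the alignment between the top sample and actual eigenvectors improves as the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
lam = np.linspace(1.0, 5.0, d)                 # actual eigenvalues
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # actual eigenvectors
C = Q @ np.diag(lam) @ Q.T

for n in [50, 500, 5000]:
    Xs = rng.multivariate_normal(np.zeros(d), C, size=n)
    _, V = np.linalg.eigh(np.cov(Xs.T))        # sample eigenvectors, ascending
    print(n, abs(Q[:, -1] @ V[:, -1]))         # |<u_top, v_top>| -> 1 with n
```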

Fast Approximate Spectral Clustering for Dynamic Networks

no code implementations ICML 2018 Lionel Martin, Andreas Loukas, Pierre Vandergheynst

Spectral clustering is a widely studied problem, yet its complexity is prohibitive for dynamic graphs of even modest size.

Clustering

A Time-Vertex Signal Processing Framework

no code implementations 5 May 2017 Francesco Grassi, Andreas Loukas, Nathanaël Perraudin, Benjamin Ricaud

An emerging way to deal with high-dimensional non-Euclidean data is to assume that the underlying structure can be captured by a graph.

Denoising · Video Inpainting

How close are the eigenvectors and eigenvalues of the sample and actual covariance matrices?

no code implementations 17 Feb 2017 Andreas Loukas

How many samples are sufficient to guarantee that the eigenvectors and eigenvalues of the sample covariance matrix are close to those of the actual covariance matrix?

Stationary time-vertex signal processing

no code implementations 1 Nov 2016 Andreas Loukas, Nathanaël Perraudin

This paper considers regression tasks involving high-dimensional multivariate processes whose structure is dependent on some {known} graph topology.

Denoising

Predicting the evolution of stationary graph signals

no code implementations 12 Jul 2016 Andreas Loukas, Nathanael Perraudin

An emerging way of tackling the dimensionality issues arising in the modeling of a multivariate process is to assume that the inherent data structure can be captured by a graph.

Towards stationary time-vertex signal processing

no code implementations 22 Jun 2016 Nathanael Perraudin, Andreas Loukas, Francesco Grassi, Pierre Vandergheynst

Graph-based methods for signal processing have shown promise for the analysis of data exhibiting irregular structure, such as those found in social, transportation, and sensor networks.

Denoising

Autoregressive Moving Average Graph Filtering

no code implementations 14 Feb 2016 Elvin Isufi, Andreas Loukas, Andrea Simonetto, Geert Leus

We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation.

Denoising · Philosophy
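
A sketch of the first-order case: the recursion y_{t+1} = psi * M y_t + phi * x converges, whenever |psi| * ||M|| < 1, to the rational frequency response phi / (1 - psi * mu) at each graph frequency mu of the shift operator M.

```python
import numpy as np
import networkx as nx

G = nx.cycle_graph(20)
L = nx.normalized_laplacian_matrix(G).toarray()
M = np.eye(len(L)) - L              # shifted operator with spectrum in [-1, 1]

psi, phi = 0.5, 1.0
rng = np.random.default_rng(0)
x = rng.normal(size=len(L))         # input graph signal

y = np.zeros_like(x)
for _ in range(50):                 # the distributed ARMA_1 recursion
    y = psi * (M @ y) + phi * x

# Closed-form steady state: (I - psi M)^{-1} phi x.
y_star = np.linalg.solve(np.eye(len(L)) - psi * M, phi * x)
print(np.abs(y - y_star).max())     # ~1e-16: the recursion has converged
```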

Frequency Analysis of Temporal Graph Signals

no code implementations 14 Feb 2016 Andreas Loukas, Damien Foucard

This letter extends the concept of graph-frequency to graph signals that evolve with time.
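
A minimal sketch of the joint time-vertex spectrum this line of work builds on: a graph Fourier transform along the vertex dimension combined with a DFT along time (graph and sizes below are arbitrary).

```python
import numpy as np
import networkx as nx

G = nx.path_graph(8)
L = nx.laplacian_matrix(G).toarray().astype(float)
lam, U = np.linalg.eigh(L)               # graph frequencies and GFT basis

T = 16
rng = np.random.default_rng(0)
X = rng.normal(size=(len(L), T))         # a time-varying graph signal

X_joint = np.fft.fft(U.T @ X, axis=1)    # GFT over vertices, DFT over time
print(X_joint.shape)                     # (n_vertices, T) joint spectrum
```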
