no code implementations • 28 Jan 2025 • Fei Cao, Kimball Johnston, Thomas Laurent, Justin Le, Sébastien Motsch
Moreover, we derive an explicit solution to the reverse process's SDE under the assumption that the starting point of the forward process is fixed.
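The closed-form reverse dynamics can be illustrated in a standard setting (the Ornstein-Uhlenbeck normalization below is an assumption for this sketch, not necessarily the paper's): with forward process $dX_t = -X_t\,dt + \sqrt{2}\,dW_t$ started at a fixed point $X_0 = x_0$, the marginal is Gaussian, $X_t \sim \mathcal{N}\big(e^{-t}x_0,\,(1-e^{-2t})\,\mathrm{Id}\big)$, so the score is explicit, $\nabla_x \log p_t(x) = -\dfrac{x - e^{-t}x_0}{1-e^{-2t}}$, and the reverse-time SDE $dY_s = \big[\,Y_s + 2\,\nabla \log p_{T-s}(Y_s)\,\big]\,ds + \sqrt{2}\,dB_s$ becomes a linear SDE that can be integrated in closed form.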
1 code implementation • 22 May 2024 • Nian Liu, Xiaoxin He, Thomas Laurent, Francesco Di Giovanni, Michael M. Bronstein, Xavier Bresson
Spectral graph convolution, an important tool of data filtering on graphs, relies on two essential decisions: selecting spectral bases for signal transformation and parameterizing the kernel for frequency analysis.
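A minimal numpy sketch of plain spectral filtering, showing where those two decisions enter (the low-pass kernel and the normalized-Laplacian basis are illustrative choices, not the paper's particular parameterization):

```python
import numpy as np

def spectral_graph_conv(adj, x, kernel):
    """Filter a node signal x on a graph with adjacency matrix `adj`.

    `kernel` maps Laplacian eigenvalues (graph frequencies) to filter gains;
    choosing the eigenbasis and parameterizing `kernel` are the two design
    decisions mentioned in the abstract.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Spectral basis: Laplacian eigenvectors act as graph Fourier modes
    eigvals, eigvecs = np.linalg.eigh(lap)
    # Transform to the spectral domain, scale each frequency, transform back
    x_hat = eigvecs.T @ x
    return eigvecs @ (kernel(eigvals)[:, None] * x_hat)

# Example: a low-pass kernel that damps high graph frequencies
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 3)                      # 4 nodes, 3 features
y = spectral_graph_conv(adj, x, lambda lam: np.exp(-2.0 * lam))
```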
1 code implementation • 19 Mar 2024 • Thomas Laurent
This paper introduces the task of "train ego-path detection", a refined approach to railway track detection designed for intelligent onboard vision systems.
1 code implementation • 12 Feb 2024 • Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V. Chawla, Thomas Laurent, Yann Lecun, Xavier Bresson, Bryan Hooi
Given a graph with textual attributes, we enable users to "chat with their graph": that is, to ask questions about the graph using a conversational interface.
3 code implementations • 31 May 2023 • Xiaoxin He, Xavier Bresson, Thomas Laurent, Adam Perold, Yann Lecun, Bryan Hooi
With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs.
Ranked #2 on Node Property Prediction on ogbn-arxiv (using extra training data)
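The entry above combines the textual modelling abilities of LLMs with the structural learning of GNNs. As a rough sketch of that general pattern only (not the paper's pipeline: the `embed_text` stub and the single mean-aggregation layer are placeholders), LLM-derived node-text embeddings can simply serve as the input features of a GNN:

```python
import numpy as np

def embed_text(texts):
    """Placeholder for an LLM / text-encoder call returning one vector per node.
    In practice this would query a language model; here it is random."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(texts), 16))

def gnn_layer(adj, h, weight):
    """One mean-aggregation message-passing layer over the graph structure."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    agg = (adj @ h) / deg               # average neighbor features
    return np.tanh((h + agg) @ weight)  # combine self and neighborhood, then transform

# Toy text-attributed graph: LLM features enter as node inputs to the GNN
node_texts = ["paper about GNNs", "paper about LLMs", "survey on graphs"]
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
h = embed_text(node_texts)                          # textual modelling (LLM side)
w = np.random.default_rng(1).standard_normal((16, 8))
node_repr = gnn_layer(adj, h, w)                    # structural learning (GNN side)
```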
1 code implementation • 25 May 2023 • Thomas Laurent, James H. von Brecht, Xavier Bresson
We formalize and study a phenomenon called feature collapse that makes precise the intuitive idea that entities playing a similar role in a learning task receive similar representations.
3 code implementations • 27 Dec 2022 • Xiaoxin He, Bryan Hooi, Thomas Laurent, Adam Perold, Yann Lecun, Xavier Bresson
First, they capture long-range dependencies and mitigate the issue of over-squashing, as demonstrated on the Long Range Graph Benchmark and TreeNeighbourMatch datasets.
Ranked #5 on Graph Regression on Peptides-struct
1 code implementation • 1 Jun 2022 • Lovro Vrček, Xavier Bresson, Thomas Laurent, Martin Schmitz, Mile Šikić
In this work, we explore a different approach to the central part of the genome assembly task that consists of untangling a large assembly graph from which a genomic sequence needs to be reconstructed.
no code implementations • 29 May 2022 • Thomas Laurent, James H. von Brecht, Xavier Bresson
Our data model follows a long-tailed distribution in the sense that some rare subcategories have few representatives in the training set.
1 code implementation • 2 Mar 2022 • Yong Liang Goh, Wee Sun Lee, Xavier Bresson, Thomas Laurent, Nicholas Lim
This paper exemplifies the integration of entropic regularized optimal transport techniques as a layer in a deep reinforcement learning network.
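The core of such an entropic regularized OT layer is the Sinkhorn iteration, which is differentiable and can be unrolled inside a network. A minimal numpy sketch (dimensions, regularization, and iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iters=100):
    """Entropic regularized optimal transport: returns a soft transport plan.

    cost : (n, m) cost matrix; a : (n,) and b : (m,) marginals summing to 1.
    Every step is differentiable, so the loop can be unrolled as a layer
    and trained end to end inside a deep (reinforcement learning) network.
    """
    K = np.exp(-cost / eps)             # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                 # match row marginals
        v = b / (K.T @ u)               # match column marginals
    return u[:, None] * K * v[None, :]  # transport plan diag(u) K diag(v)

# Example: softly assign 3 items to 4 slots
cost = np.random.rand(3, 4)
plan = sinkhorn(cost, np.full(3, 1 / 3), np.full(4, 1 / 4))
```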
1 code implementation • ICLR 2022 • Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
An approach to tackle this issue is to introduce Positional Encoding (PE) of nodes, and inject it into the input layer, like in Transformers.
Ranked #14 on Graph Regression on ZINC-500k
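A common instantiation of node positional encoding in this line of work takes the lowest non-trivial eigenvectors of the graph Laplacian as node coordinates and concatenates them with the input features. A small numpy sketch (the number of eigenvectors and the random sign flip are illustrative choices):

```python
import numpy as np

def laplacian_pe(adj, k=4):
    """Positional encoding from the k lowest non-trivial Laplacian eigenvectors."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)          # ascending eigenvalues
    pe = eigvecs[:, 1:k + 1]                        # skip the trivial constant eigenvector
    # Eigenvectors are defined only up to sign; flip signs randomly during training
    signs = np.random.choice([-1.0, 1.0], size=pe.shape[1])
    return pe * signs

adj = np.array([[0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]], dtype=float)      # 5-cycle
x = np.random.randn(5, 8)                           # raw node features
x_with_pe = np.concatenate([x, laplacian_pe(adj, k=2)], axis=1)  # injected at the input layer
```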
no code implementations • 29 Sep 2021 • Lovro Vrček, Robert Vaser, Thomas Laurent, Mile Sikic, Xavier Bresson
A quest to determine the human DNA sequence from telomere to telomere started three decades ago and was finally finished in 2021.
1 code implementation • 4 Mar 2021 • Xavier Bresson, Thomas Laurent
The Traveling Salesman Problem (TSP) is one of the most popular and most studied combinatorial optimization problems, with work dating back to von Neumann in 1951.
4 code implementations • 12 Jun 2020 • Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent
End-to-end training of neural network solvers for graph combinatorial optimization problems such as the Travelling Salesperson Problem (TSP) has seen a surge of interest recently, but such solvers remain intractable and inefficient beyond graphs with a few hundred nodes.
15 code implementations • 2 Mar 2020 • Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
Ranked #1 on Link Prediction on COLLAB
2 code implementations • 16 Oct 2019 • Chaitanya K. Joshi, Thomas Laurent, Xavier Bresson
We explore the impact of learning paradigms on training deep neural networks for the Travelling Salesman Problem.
no code implementations • 8 Jun 2019 • Xavier Bresson, Thomas Laurent
In this work, we introduce a simple two-step decoding process.
4 code implementations • 4 Jun 2019 • Chaitanya K. Joshi, Thomas Laurent, Xavier Bresson
This paper introduces a new learning-based approach for approximately solving the Travelling Salesman Problem on 2D Euclidean graphs.
1 code implementation • 15 Apr 2019 • Yao Yang Leow, Thomas Laurent, Xavier Bresson
Our proposed method GraphTSNE produces visualizations which account for both graph structure and node features.
no code implementations • ICML 2018 • Thomas Laurent, James Brecht
We consider deep linear networks with arbitrary convex differentiable loss.
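For concreteness, the standard formulation of this setting (stated here as background; notation may differ from the paper's) is the objective $\mathcal{L}(W_1,\dots,W_L) = \sum_{i=1}^{N} \ell\big(W_L W_{L-1} \cdots W_1 x_i,\; y_i\big)$ with $\ell$ convex and differentiable in its first argument; the landscape is non-convex only through the product parameterization $W_L \cdots W_1$, which is what makes the analysis of local minima non-trivial.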
no code implementations • ICML 2018 • Thomas Laurent, James Von Brecht
By appealing to harmonic analysis we show that all local minima of such networks are non-differentiable, except for those minima that occur in a region of parameter space where the loss surface is perfectly flat.
no code implementations • 5 Dec 2017 • Thomas Laurent, James Von Brecht
We consider deep linear networks with arbitrary convex differentiable loss.
1 code implementation • ICLR 2018 • Xavier Bresson, Thomas Laurent
In this paper, we are interested in designing neural networks for graphs of variable size in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks.
Ranked #7 on Node Classification on PATTERN 100k
no code implementations • 19 Dec 2016 • Thomas Laurent, James Von Brecht
We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task.
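To illustrate how light-weight such a gated update can be, here is a generic single-layer cell with two gates and no separate cell state, written as a sketch for this entry (not necessarily the paper's exact equations):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simple_gated_step(x_t, h_prev, params):
    """One step of a very simple gated RNN cell.

    Two gates modulate how much of the previous state is kept and how much
    of the current input is written in; there is no separate cell state.
    """
    W, U_f, V_f, U_i, V_i = params
    forget = sigmoid(h_prev @ U_f + x_t @ V_f)      # how much past to keep
    write = sigmoid(h_prev @ U_i + x_t @ V_i)       # how much input to admit
    return forget * np.tanh(h_prev) + write * np.tanh(x_t @ W)

# Toy run over a short sequence
d_in, d_h = 6, 4
rng = np.random.default_rng(0)
params = [rng.standard_normal(s) * 0.1
          for s in [(d_in, d_h), (d_h, d_h), (d_in, d_h), (d_h, d_h), (d_in, d_h)]]
h = np.zeros(d_h)
for x_t in rng.standard_normal((5, d_in)):          # sequence of 5 inputs
    h = simple_gated_step(x_t, h, params)
```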
1 code implementation • NeurIPS 2016 • Thomas Laurent, James Von Brecht, Xavier Bresson, Arthur Szlam
We introduce a theoretical and algorithmic framework for multi-way graph partitioning that relies on a multiplicative cut-based objective.
1 code implementation • 11 Jan 2016 • Thomas Laurent, Anthony Ventresque, Mike Papadakis, Christopher Henard, Yves Le Traon
We therefore examine how effective the mutants of a popular mutation testing tool, PIT, are compared to comprehensive mutants drawn from the literature and personal experience.
Software Engineering
no code implementations • 19 Jun 2015 • Xavier Bresson, Thomas Laurent, James Von Brecht
This work aims at recovering signals that are sparse on graphs.
no code implementations • 24 Nov 2014 • Nicolas Garcia Trillos, Dejan Slepcev, James Von Brecht, Thomas Laurent, Xavier Bresson
We consider point clouds obtained as samples of a ground-truth measure.
no code implementations • 15 Jun 2014 • Xavier Bresson, Huiyi Hu, Thomas Laurent, Arthur Szlam, James Von Brecht
In this work we propose a simple and easily parallelizable algorithm for multiway graph partitioning.
no code implementations • NeurIPS 2013 • Xavier Bresson, Thomas Laurent, David Uminsky, James H. von Brecht
Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation.
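Background (standard definition, not a formula specific to this paper): for a weighted graph with edge weights $w_{ij}$, the total variation of a node function $f$ is $\mathrm{TV}(f) = \tfrac{1}{2}\sum_{i,j} w_{ij}\,\lvert f_i - f_j \rvert$, and for an indicator function $f = \mathbf{1}_A$ this equals the weight of the cut between $A$ and its complement, which is why total-variation relaxations serve as tight surrogates for cut-based clustering objectives.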
no code implementations • NeurIPS 2012 • Xavier Bresson, Thomas Laurent, David Uminsky, James V. Brecht
Unsupervised clustering of scattered, noisy and high-dimensional data points is an important and difficult problem.