Search Results for author: Thomas Markovich

Found 6 papers, 2 papers with code

QDC: Quantum Diffusion Convolution Kernels on Graphs

no code implementations20 Jul 2023 Thomas Markovich

In this work, we propose a new convolution kernel that effectively rewires the graph according to the occupation correlations of the vertices, by trading the generalized diffusion paradigm for the propagation of a quantum particle over the graph.
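As a rough illustration of the idea described above only: the sketch below builds a dense propagation matrix from a spectral function of the graph Laplacian, thresholds it to "rewire" the graph, and uses it in a linear graph convolution. The classical heat kernel exp(-t*L) is used as a stand-in, since the exact quantum-diffusion kernel is not given in this snippet.

```python
# Minimal sketch: replace the sparse adjacency used by a graph convolution with
# a dense kernel obtained from a spectral function of the graph Laplacian.
# The heat kernel below is a classical-diffusion stand-in for the quantum
# diffusion kernel proposed in the paper (whose exact form is not given here).
import numpy as np

def diffusion_kernel(adj: np.ndarray, t: float = 1.0, threshold: float = 1e-3) -> np.ndarray:
    """Dense propagation matrix K = exp(-t * L) built from the graph Laplacian."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                  # combinatorial Laplacian L = D - A
    evals, evecs = np.linalg.eigh(lap)        # L is symmetric, so eigh is safe
    kernel = evecs @ np.diag(np.exp(-t * evals)) @ evecs.T
    kernel[np.abs(kernel) < threshold] = 0.0  # sparsify: this is the "rewiring" step
    return kernel

def kernel_convolution(kernel: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One linear graph-convolution layer, X' = K X W, using the rewired kernel."""
    return kernel @ features @ weights

# Toy usage: a 4-node path graph with random features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
out = kernel_convolution(diffusion_kernel(A, t=0.5), X, W)
print(out.shape)  # (4, 2)
```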

TwERC: High Performance Ensembled Candidate Generation for Ads Recommendation at Twitter

no code implementations27 Feb 2023 Vanessa Cai, Pradeep Prabakar, Manuel Serrano Rebuelta, Lucas Rosen, Federico Monti, Katarzyna Janocha, Tomo Lazovich, Jeetu Raj, Yedendra Shrinivasan, Hao Li, Thomas Markovich

In this paper, we focus on the candidate generation phase of a large-scale ads recommendation problem and present a machine-learning-first, heterogeneous re-architecture of this stage, which we term TwERC.

Recommendation Systems · Vocal Bursts Intensity Prediction
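Since the abstract mentions an ensembled candidate-generation stage, here is a minimal, hypothetical sketch of what blending candidates from heterogeneous retrieval sources can look like. The source names, scores, and weighted-sum blending below are illustrative assumptions, not the components described in the paper.

```python
# Hypothetical sketch: union candidates from several heterogeneous sources,
# blend their per-source scores, and keep the top-k ad ids.
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

def ensemble_candidates(
    user_id: str,
    sources: Dict[str, Callable[[str], List[Tuple[str, float]]]],  # name -> fn(user) -> [(ad_id, score), ...]
    source_weights: Dict[str, float],
    k: int = 100,
) -> List[str]:
    """Combine per-source scores with a weighted sum and return the top-k ad ids."""
    combined = defaultdict(float)
    for name, retrieve in sources.items():
        weight = source_weights.get(name, 1.0)
        for ad_id, score in retrieve(user_id):
            combined[ad_id] += weight * score
    ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
    return [ad_id for ad_id, _ in ranked[:k]]

# Toy usage with two stand-in sources.
sources = {
    "similarity_retrieval": lambda u: [("ad1", 0.9), ("ad2", 0.4)],
    "graph_retrieval": lambda u: [("ad2", 0.8), ("ad3", 0.7)],
}
print(ensemble_candidates("user42", sources, {"similarity_retrieval": 1.0, "graph_retrieval": 0.5}, k=2))
```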

Causally-guided Regularization of Graph Attention Improves Generalizability

no code implementations20 Oct 2022 Alexander P. Wu, Thomas Markovich, Bonnie Berger, Nils Hammerla, Rohit Singh

Graph attention networks estimate the relational importance of node neighbors to aggregate relevant information over local neighborhoods for a prediction task.

Causal Inference · Graph Attention · +1
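The abstract sentence above summarises the graph-attention mechanism itself. For concreteness, the sketch below implements a generic single-head GAT-style layer in NumPy, computing per-edge importance weights and aggregating neighbour features with them; it is not the causally-regularized variant proposed in the paper.

```python
# Generic single-head graph attention layer (GAT-style), for illustration only.
import numpy as np

def softmax_rows_masked(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Row-wise softmax restricted to entries where mask == 1."""
    scores = np.where(mask > 0, scores, -1e9)
    scores = scores - scores.max(axis=1, keepdims=True)
    exp = np.exp(scores) * (mask > 0)
    return exp / exp.sum(axis=1, keepdims=True)

def graph_attention_layer(adj, X, W, a_src, a_dst):
    """X' = softmax_neighbours(e) @ (X W), with e_ij = LeakyReLU(a_src.h_i + a_dst.h_j)."""
    H = X @ W                                          # projected features, shape (n, d')
    e = H @ a_src[:, None] + (H @ a_dst[:, None]).T    # pairwise attention logits via broadcasting
    e = np.where(e > 0, e, 0.2 * e)                    # LeakyReLU
    mask = adj + np.eye(adj.shape[0])                  # include self-loops
    alpha = softmax_rows_masked(e, mask)               # relational importance of each neighbour
    return alpha @ H

# Toy usage on a 3-node star graph.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.random.randn(3, 4)
W = np.random.randn(4, 8)
out = graph_attention_layer(A, X, W, np.random.randn(8), np.random.randn(8))
print(out.shape)  # (3, 8)
```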

Graph Neural Networks for Link Prediction with Subgraph Sketching

1 code implementation30 Sep 2022 Benjamin Paul Chamberlain, Sergey Shirobokov, Emanuele Rossi, Fabrizio Frasca, Thomas Markovich, Nils Hammerla, Michael M. Bronstein, Max Hansmire

Our experiments show that BUDDY also outperforms subgraph GNNs (SGNNs) on standard link prediction (LP) benchmarks while being highly scalable and faster than ELPH.

Link Prediction
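As a rough illustration of the subgraph-sketching idea in the title, the sketch below estimates neighbourhood-overlap features for a candidate edge from small per-node MinHash sketches instead of materialising the surrounding subgraph. This is a simplified stand-in: the sketch type and estimator here are assumptions for illustration, not the exact sketches used by ELPH or BUDDY.

```python
# Illustrative MinHash-based estimate of common-neighbour counts for a candidate edge.
import random

NUM_HASHES = 64
random.seed(0)
HASH_SEEDS = [random.getrandbits(32) for _ in range(NUM_HASHES)]

def minhash_sketch(neighbours: set) -> list:
    """One minimum hash value per seed over the node's neighbour set."""
    return [min(hash((seed, v)) for v in neighbours) for seed in HASH_SEEDS]

def estimated_jaccard(sk_u: list, sk_v: list) -> float:
    """Fraction of matching minima estimates |N(u) ∩ N(v)| / |N(u) ∪ N(v)|."""
    return sum(a == b for a, b in zip(sk_u, sk_v)) / NUM_HASHES

def common_neighbour_estimate(sk_u, sk_v, deg_u, deg_v) -> float:
    """Turn the Jaccard estimate into an approximate common-neighbour count."""
    j = estimated_jaccard(sk_u, sk_v)
    return j * (deg_u + deg_v) / (1.0 + j)  # since |A ∪ B| = (|A| + |B|) / (1 + J)

# Toy usage: two nodes sharing 2 of their neighbours.
Nu, Nv = {1, 2, 3, 4}, {3, 4, 5, 6}
print(common_neighbour_estimate(minhash_sketch(Nu), minhash_sketch(Nv), len(Nu), len(Nv)))
```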

Understanding convolution on graphs via energies

2 code implementations22 Jun 2022 Francesco Di Giovanni, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, Michael M. Bronstein

We do so by showing that linear graph convolutions with symmetric weights minimize a multi-particle energy that generalizes the Dirichlet energy; in this setting, the weight matrices induce edge-wise attraction (repulsion) through their positive (negative) eigenvalues, thereby controlling whether the features are being smoothed or sharpened.

Inductive Bias · Node Classification
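A small numerical illustration of the stated claim: the sketch below assumes, for illustration only, that the generalized energy takes the form E_W(F) = 1/2 * trace(F^T L F W) with L the normalized graph Laplacian and W a symmetric channel-mixing matrix. A gradient step on this energy is then a linear graph convolution, and flipping the sign of W's eigenvalues flips the flow from smoothing to sharpening, as measured by the Dirichlet energy.

```python
# Gradient descent on the assumed energy E_W(F) = 1/2 * trace(F^T L F W) is the
# linear graph convolution F <- F - tau * L F W. Positive eigenvalues of W drive
# the Dirichlet energy down (smoothing); negative eigenvalues drive it up (sharpening).
import numpy as np

def normalized_laplacian(adj):
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.eye(adj.shape[0]) - a_hat

def dirichlet_energy(lap, F):
    """Classical Dirichlet energy 1/2 * trace(F^T L F): large = rough, small = smooth."""
    return 0.5 * np.trace(F.T @ lap @ F)

def convolution_step(lap, F, W, tau=0.1):
    """One gradient step on E_W(F) = 1/2 * trace(F^T L F W)."""
    return F - tau * lap @ F @ W

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
F = np.random.randn(4, 3)

W_attract = np.eye(3)   # positive eigenvalues -> edge-wise attraction
W_repel = -np.eye(3)    # negative eigenvalues -> edge-wise repulsion

print(dirichlet_energy(L, F))
print(dirichlet_energy(L, convolution_step(L, F, W_attract)))  # smaller: features smoothed
print(dirichlet_energy(L, convolution_step(L, F, W_repel)))    # larger: features sharpened
```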
