Search Results for author: Diego Mesquita

Found 12 papers, 6 papers with code

Embarrassingly Parallel GFlowNets

no code implementations • 5 Jun 2024 Tiago da Silva, Luiz Max Carvalho, Amauri Souza, Samuel Kaski, Diego Mesquita

First, in parallel, we train a local GFlowNet targeting each $R_n$ and send the resulting models to the server.
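The snippet describes a two-stage scheme: clients train local GFlowNets on their own rewards $R_n$, and a server aggregates the resulting models. A minimal sketch of the underlying factorization, assuming the global log-reward decomposes additively over clients (`local_log_reward` is a toy stand-in, not the paper's API):

```python
# Illustrative sketch, not the paper's implementation: each client n trains
# against its own reward R_n; the server only needs the local models because
# the global log-reward decomposes as a sum over clients.

def local_log_reward(n, x):
    # toy per-client log-reward over a discrete object x (here an int)
    return -abs(x - n)

def global_log_reward(x, clients):
    # server-side view: log R(x) = sum_n log R_n(x)
    return sum(local_log_reward(n, x) for n in clients)
```

Because each client only sees its own $R_n$, the local training runs are fully independent, which is what makes the scheme embarrassingly parallel.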

In-n-Out: Calibrating Graph Neural Networks for Link Prediction

no code implementations • 7 Mar 2024 Erik Nascimento, Diego Mesquita, Samuel Kaski, Amauri H Souza

While networks for tabular or image data are usually overconfident, recent works have shown that graph neural networks (GNNs) show the opposite behavior for node-level classification.

Link Prediction

Human-in-the-Loop Causal Discovery under Latent Confounding using Ancestral GFlowNets

no code implementations • 21 Sep 2023 Tiago da Silva, Eliezer Silva, Adèle Ribeiro, António Góis, Dominik Heider, Samuel Kaski, Diego Mesquita

Surprisingly, while CD is a human-centered affair, no works have focused on building methods that both 1) output uncertainty estimates that can be verified by experts and 2) interact with those experts to iteratively refine CD.

Causal Discovery • Causal Inference +1

Locking and Quacking: Stacking Bayesian model predictions by log-pooling and superposition

no code implementations • 12 May 2023 Yuling Yao, Luiz Max Carvalho, Diego Mesquita, Yann McLatchie

Currently, these predictive distributions are almost exclusively combined using linear mixtures such as Bayesian model averaging, Bayesian stacking, and mixture of experts.
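The snippet contrasts linear mixtures with the paper's log-pooling ("locking") alternative. A small sketch of both combination rules on a density grid; the two Gaussian components and equal weights are toy assumptions for illustration:

```python
import numpy as np

# Toy comparison of a linear mixture vs. log-pooling of two predictive
# densities evaluated on a grid.

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

p1 = np.exp(-0.5 * (x + 1.0) ** 2)
p2 = np.exp(-0.5 * (x - 1.0) ** 2)
p1 /= p1.sum() * dx  # normalize on the grid
p2 /= p2.sum() * dx
w = [0.5, 0.5]

# linear mixture (e.g. Bayesian model averaging / stacking): sum_k w_k p_k
mixture = w[0] * p1 + w[1] * p2

# log-pool ("locking"): prod_k p_k ** w_k, renormalized
logpool = p1 ** w[0] * p2 ** w[1]
logpool /= logpool.sum() * dx
```

For these two components the log-pool is again a Gaussian centered between them (so it is more concentrated), while the linear mixture spreads mass over both components — one way to see why the two pooling rules behave differently.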

Bayesian Inference

Distill n' Explain: explaining graph neural networks using simple surrogates

1 code implementation • 17 Mar 2023 Tamara Pereira, Erik Nascimento, Lucas E. Resck, Diego Mesquita, Amauri Souza

We also propose FastDnX, a faster version of DnX that leverages the linear decomposition of our surrogate model.
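The snippet mentions exploiting the linear decomposition of a surrogate model. A hedged sketch of the general idea, assuming a linear surrogate of the form $f(X) = (SX)w$ for some fixed propagation matrix $S$ (the matrix shapes and numbers below are illustrative, not the paper's construction):

```python
import numpy as np

# If the surrogate prediction for a target node is linear in propagated
# features, f = (S X) w, the logit decomposes exactly into one additive
# contribution per node, which can be ranked to form an explanation.

rng = np.random.default_rng(0)
S = np.array([[0.5, 0.25, 0.25]])  # propagation weights for the target node
X = rng.normal(size=(3, 4))        # node feature matrix (3 nodes, 4 features)
w = rng.normal(size=4)             # surrogate weight vector

contributions = S[0] * (X @ w)     # exact per-node contribution scores
logit = contributions.sum()        # recovers the surrogate's prediction
```

Since the decomposition is exact, no iterative optimization is needed to score nodes, which is consistent with the "faster version" framing in the abstract.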

Knowledge Distillation

Provably expressive temporal graph networks

1 code implementation • 29 Sep 2022 Amauri H. Souza, Diego Mesquita, Samuel Kaski, Vikas Garg

Specifically, novel constructions reveal the inadequacy of MP-TGNs and WA-TGNs, proving that neither category subsumes the other.

Parallel MCMC Without Embarrassing Failures

1 code implementation • 22 Feb 2022 Daniel Augusto de Souza, Diego Mesquita, Samuel Kaski, Luigi Acerbi

While efficient, this framework is very sensitive to the quality of subposterior sampling.

Active Learning • Bayesian Inference

Online graph nets

no code implementations • 29 Sep 2021 Hojin Kang, Jou-Hui Ho, Diego Mesquita, Jorge Pérez, Amauri H Souza

To avoid temporal message passing, OGN maintains a summary of the temporal neighbors of each node in a latent variable and updates it as events unroll, in an online fashion.
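The snippet describes keeping a per-node latent summary that is updated once per event. A minimal sketch of that online pattern; the exponential-moving-average update below is an illustrative placeholder, not the paper's actual update rule:

```python
# Each node keeps one latent summary, updated in O(1) per event as events
# arrive, so no temporal message passing needs to be replayed. The EMA
# update is a stand-in for the paper's learned update.

summaries = {}

def process_event(src, dst, feat, alpha=0.9):
    # refresh both endpoints' summaries with the event feature
    for node in (src, dst):
        prev = summaries.get(node, 0.0)
        summaries[node] = alpha * prev + (1 - alpha) * feat

process_event(0, 1, 1.0)
process_event(0, 2, 2.0)
```

The key property is that each event touches only its two endpoints, so the cost of a stream of events is linear in the number of events regardless of history length.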

Link Prediction

Rethinking pooling in graph neural networks

1 code implementation • NeurIPS 2020 Diego Mesquita, Amauri H. Souza, Samuel Kaski

In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph.

Clustering • Graph Neural Network

Federated Stochastic Gradient Langevin Dynamics

1 code implementation • 23 Apr 2020 Khaoula El Mekkaoui, Diego Mesquita, Paul Blomstedt, Samuel Kaski

We apply conducive gradients to distributed stochastic gradient Langevin dynamics (DSGLD) and call the resulting method federated stochastic gradient Langevin dynamics (FSGLD).
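The snippet builds on stochastic gradient Langevin dynamics. A hedged sketch of a single SGLD-style update; `conducive_grad` here is only a stand-in for the paper's conducive-gradient term (a surrogate for the other clients' contributions), whose exact construction is not reproduced:

```python
import math
import random

# One SGLD-style step: Langevin drift from the (corrected) gradient of the
# log-posterior, plus injected Gaussian noise of matching scale.

def sgld_step(theta, grad_log_post, conducive_grad, step=1e-2, rng=random):
    drift = 0.5 * step * (grad_log_post(theta) + conducive_grad(theta))
    noise = math.sqrt(step) * rng.gauss(0.0, 1.0)
    return theta + drift + noise
```

Iterating this step with `grad_log_post` computed on a client's local data and a shared correction term is the general shape of the distributed samplers the abstract names (DSGLD and FSGLD).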

Federated Learning • Metric Learning

Learning GPLVM with arbitrary kernels using the unscented transformation

2 code implementations • 3 Jul 2019 Daniel Augusto R. M. A. de Souza, Diego Mesquita, César Lincoln C. Mattos, João Paulo P. Gomes

Gaussian Process Latent Variable Model (GPLVM) is a flexible framework to handle uncertain inputs in Gaussian Processes (GPs) and incorporate GPs as components of larger graphical models.
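The title's unscented transformation approximates expectations of a nonlinear function of a Gaussian variable using deterministic sigma points. A one-dimensional sketch with standard UT weights (the `kappa` choice is an assumption; the paper applies the idea to kernel expectations, not shown here):

```python
import math

# Approximate E[f(x)] for x ~ N(mu, var) with three sigma points.
# For polynomials up to degree 2 the 1-D UT is exact.

def unscented_mean(f, mu, var, kappa=1.0):
    n = 1                     # dimension
    lam = kappa               # common 1-D simplification
    spread = math.sqrt((n + lam) * var)
    pts = [mu, mu + spread, mu - spread]
    w0 = lam / (n + lam)
    wi = 1.0 / (2.0 * (n + lam))
    return w0 * f(pts[0]) + wi * (f(pts[1]) + f(pts[2]))
```

Because only function evaluations at sigma points are needed, the same machinery works for arbitrary kernels, which is the flexibility the title advertises.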

Dimensionality Reduction Gaussian Processes +1

Embarrassingly parallel MCMC using deep invertible transformations

no code implementations • 11 Mar 2019 Diego Mesquita, Paul Blomstedt, Samuel Kaski

While MCMC methods have become a main work-horse for Bayesian inference, scaling them to large distributed datasets is still a challenge.
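Embarrassingly parallel MCMC rests on the factorization $p(\theta \mid D) \propto \prod_k p_k(\theta)$, where each subposterior $p_k$ is sampled on one shard of the data. A sanity-check sketch using Gaussian subposteriors, for which the product is available in closed form (the Gaussian choice is an assumption; the paper's contribution is aggregating general subposteriors via invertible transformations):

```python
# Product of Gaussian densities: precisions add and the combined mean is
# precision-weighted. This mirrors the subposterior-product step that
# aggregation methods must approximate for non-Gaussian subposteriors.

def combine_gaussian_subposteriors(means, variances):
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, means)) / total_precision
    return mean, 1.0 / total_precision

m, v = combine_gaussian_subposteriors([0.0, 2.0], [1.0, 1.0])
# combined posterior is narrower than either subposterior
```

Each shard is sampled with no communication until the final aggregation step, which is what "embarrassingly parallel" refers to.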

Bayesian Inference
