no code implementations • 7 Mar 2024 • Erik Nascimento, Diego Mesquita, Samuel Kaski, Amauri H Souza
While networks for tabular or image data are usually overconfident, recent works have shown that graph neural networks (GNNs) show the opposite behavior for node-level classification.
no code implementations • 21 Sep 2023 • Tiago da Silva, Eliezer Silva, Adèle Ribeiro, António Góis, Dominik Heider, Samuel Kaski, Diego Mesquita
Surprisingly, although causal discovery (CD) is a human-centered affair, no prior work has focused on building methods that both 1) output uncertainty estimates that experts can verify and 2) interact with those experts to iteratively refine CD.
no code implementations • 12 May 2023 • Yuling Yao, Luiz Max Carvalho, Diego Mesquita, Yann McLatchie
Currently, these predictive distributions are almost exclusively combined using linear mixtures such as Bayesian model averaging, Bayesian stacking, and mixture of experts.
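The linear mixtures mentioned above share one basic form: a convex combination of per-model predictive densities with weights on the simplex. A minimal sketch (function names and the toy Gaussian components are illustrative, not from the paper):

```python
from math import exp, pi, sqrt

def linear_pool(densities, weights):
    """Combine per-model predictive densities p_k(y) with simplex weights w_k."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return lambda y: sum(w * p(y) for w, p in zip(weights, densities))

def gaussian(mu, sigma):
    # Toy univariate Gaussian predictive density.
    return lambda y: exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Two model predictives combined 70/30, as in stacking with fixed weights.
pooled = linear_pool([gaussian(0.0, 1.0), gaussian(2.0, 1.0)], [0.7, 0.3])
```

Bayesian model averaging, stacking, and mixtures of experts differ only in how the weights are chosen, not in this pooling form.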
1 code implementation • 17 Mar 2023 • Tamara Pereira, Erik Nascimento, Lucas E. Resck, Diego Mesquita, Amauri Souza
We also propose FastDnX, a faster version of DnX that leverages the linear decomposition of our surrogate model.
1 code implementation • 29 Sep 2022 • Amauri H. Souza, Diego Mesquita, Samuel Kaski, Vikas Garg
Specifically, novel constructions reveal the inadequacy of MP-TGNs and WA-TGNs, proving that neither category subsumes the other.
1 code implementation • 22 Feb 2022 • Daniel Augusto de Souza, Diego Mesquita, Samuel Kaski, Luigi Acerbi
While efficient, this framework is very sensitive to the quality of subposterior sampling.
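For context, a standard parametric way to combine subposteriors in this embarrassingly parallel setting is to fit a Gaussian to each shard's samples and multiply the fits analytically; precisions add and means are precision-weighted. This sketch is the generic recipe, not the paper's method:

```python
def combine_gaussian_subposteriors(means, variances):
    """Product of 1-D Gaussian subposterior fits:
    precisions add; the combined mean is precision-weighted."""
    precisions = [1.0 / v for v in variances]
    total_prec = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, means)) / total_prec
    return mean, 1.0 / total_prec

# Two shards with means 1.0 and 3.0 and equal variance 2.0
# combine to mean 2.0 with variance 1.0.
mu, var = combine_gaussian_subposteriors([1.0, 3.0], [2.0, 2.0])
```

The sensitivity noted above arises because these fits inherit whatever error the subposterior samples carry.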
no code implementations • 29 Sep 2021 • Hojin Kang, Jou-Hui Ho, Diego Mesquita, Jorge Pérez, Amauri H Souza
To avoid temporal message passing, OGN maintains a summary of the temporal neighbors of each node in a latent variable and updates it as events unroll, in an online fashion.
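The idea of keeping a per-node summary that is updated online as events arrive can be illustrated with an exponential-moving-average update. This is purely illustrative; the update rule and names here are assumptions, not OGN's actual equations:

```python
def make_online_summaries(dim, decay=0.9):
    """Maintain one latent summary vector per node, updated as events unroll.
    (Illustrative EMA update; not the model's actual recurrence.)"""
    summaries = {}

    def update(node, event_features):
        prev = summaries.get(node, [0.0] * dim)
        summaries[node] = [decay * p + (1 - decay) * x
                           for p, x in zip(prev, event_features)]
        return summaries[node]

    return update, summaries

update, summaries = make_online_summaries(dim=2)
update("u", [1.0, 0.0])  # first event touching node "u"
update("u", [0.0, 1.0])  # later event; summary refreshed with no message passing
```

The key property is that each event triggers a constant-time update to the affected node's summary, rather than a pass over its temporal neighborhood.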
1 code implementation • NeurIPS 2020 • Diego Mesquita, Amauri H. Souza, Samuel Kaski
In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph.
1 code implementation • 23 Apr 2020 • Khaoula El Mekkaoui, Diego Mesquita, Paul Blomstedt, Samuel Kaski
We apply conducive gradients to distributed stochastic gradient Langevin dynamics (DSGLD) and call the resulting method federated stochastic gradient Langevin dynamics (FSGLD).
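As background, a single stochastic gradient Langevin dynamics step, the base sampler that DSGLD and FSGLD build on, looks as follows. This is a minimal sketch of vanilla SGLD; the conducive-gradient correction itself is not reproduced here:

```python
import random

def sgld_step(theta, grad_log_prior, grad_log_lik_minibatch,
              n_total, n_batch, step):
    """One SGLD update:
    theta <- theta + (step/2) * (grad log p(theta)
             + (N/n) * minibatch log-likelihood gradient) + Gaussian noise."""
    drift = grad_log_prior(theta) + (n_total / n_batch) * grad_log_lik_minibatch(theta)
    noise = random.gauss(0.0, step ** 0.5)
    return theta + 0.5 * step * drift + noise

# Example: sampling a standard normal (grad log density = -theta,
# no data term), purely to exercise the update.
theta = 0.0
for _ in range(100):
    theta = sgld_step(theta, lambda t: -t, lambda t: 0.0, 1, 1, 0.1)
```

In the distributed setting, each worker runs such steps on its own data shard, which is where the scaling factor N/n and the need for gradient corrections come in.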
2 code implementations • 3 Jul 2019 • Daniel Augusto R. M. A. de Souza, Diego Mesquita, César Lincoln C. Mattos, João Paulo P. Gomes
The Gaussian Process Latent Variable Model (GPLVM) is a flexible framework for handling uncertain inputs in Gaussian Processes (GPs) and for incorporating GPs as components of larger graphical models.
no code implementations • 11 Mar 2019 • Diego Mesquita, Paul Blomstedt, Samuel Kaski
While MCMC methods have become a main workhorse for Bayesian inference, scaling them to large distributed datasets is still a challenge.