no code implementations • ICML 2020 • Futoshi Futami, Issei Sato, Masashi Sugiyama
Compared with naive parallel-chain SGLD, which updates multiple particles independently, ensemble methods update the particles through interactions among them.
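For context, here is a minimal sketch of the naive parallel-chain SGLD update mentioned above, in which each particle moves independently; the `grad_log_post` function, step size, and target are illustrative assumptions, and the ensemble interaction term proposed in the paper is not reproduced here.

```python
import numpy as np

def parallel_sgld_step(particles, grad_log_post, step_size, rng):
    """One naive parallel-chain SGLD step: each particle follows its own
    (stochastic) log-posterior gradient plus Gaussian noise; there is no
    interaction between particles."""
    noise = rng.normal(size=particles.shape)
    return (particles
            + 0.5 * step_size * grad_log_post(particles)
            + np.sqrt(step_size) * noise)

# Hypothetical usage: 10 particles in 2 dimensions, standard-normal target.
rng = np.random.default_rng(0)
particles = rng.normal(size=(10, 2))
grad_log_post = lambda x: -x  # gradient of log N(0, I)
for _ in range(1000):
    particles = parallel_sgld_step(particles, grad_log_post, 1e-2, rng)
```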
no code implementations • 10 Jun 2024 • Masahiro Fujisawa, Futoshi Futami
Nonparametric estimation with binning is widely employed in calibration error evaluation and in the recalibration of machine learning models.
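For orientation, a minimal sketch of the binned (nonparametric) calibration error estimator commonly used in this setting; the equal-width bins and the confidence/correctness inputs are illustrative assumptions, not necessarily the estimator analyzed in the paper.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Binned estimate of the expected calibration error: the average
    |accuracy - mean confidence| over equal-width bins, weighted by the
    fraction of samples falling in each bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```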
no code implementations • 24 May 2024 • Futoshi Futami, Masahiro Fujisawa
While the expected calibration error (ECE), which employs binning, is widely adopted to evaluate the calibration performance of machine learning models, theoretical understanding of its estimation bias is limited.
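For reference, the population quantity that the binned ECE approximates can be written, in the binary/top-label setting with confidence f(X) and correctness label Y, as the standard definition

\mathrm{ECE}(f) = \mathbb{E}\bigl[\,\bigl|\,\mathbb{E}[Y \mid f(X)] - f(X)\,\bigr|\,\bigr],

stated here as general background; the binned estimator replaces the inner conditional expectation with per-bin averages, which is where the estimation bias enters.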
no code implementations • 23 Jul 2023 • Futoshi Futami, Tomoharu Iwata
Furthermore, we extend the existing analysis of Bayesian meta-learning and, for the first time, show the sensitivities among tasks.
no code implementations • 2 Jun 2022 • Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama
Bayesian deep learning plays an important role especially because of its ability to evaluate epistemic uncertainty (EU).
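As background on what EU refers to, a standard decomposition in Bayesian deep learning splits the entropy of the posterior-predictive distribution into aleatoric and epistemic parts (the usual mutual-information decomposition, given here as general context rather than as the paper's own measure):

\underbrace{H\!\left[\mathbb{E}_{\theta\sim p(\theta\mid D)}\, p(y\mid x,\theta)\right]}_{\text{total}}
= \underbrace{\mathbb{E}_{\theta\sim p(\theta\mid D)}\, H\!\left[p(y\mid x,\theta)\right]}_{\text{aleatoric}}
+ \underbrace{I(y;\theta \mid x, D)}_{\text{epistemic}}.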
no code implementations • NeurIPS 2021 • Futoshi Futami, Tomoharu Iwata, Naonori Ueda, Issei Sato, Masashi Sugiyama
First, we provide a new second-order Jensen inequality, which contains a repulsion term based on the loss function.
no code implementations • 10 Mar 2020 • Hideaki Imamura, Nontawat Charoenphakdee, Futoshi Futami, Issei Sato, Junya Honda, Masashi Sugiyama
If the black-box function varies with time, then time-varying Bayesian optimization is a promising framework.
no code implementations • 21 May 2018 • Futoshi Futami, Zhenghang Cui, Issei Sato, Masashi Sugiyama
Another example is the Stein points (SP) method, which minimizes the kernelized Stein discrepancy directly.
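For context, the kernelized Stein discrepancy that Stein points minimizes has the standard closed form below, where k is a positive-definite kernel and s_p(x) = \nabla_x \log p(x) is the score of the target; this is the usual textbook definition, included as background rather than taken from the paper:

\mathrm{KSD}^2(q \,\|\, p) = \mathbb{E}_{x, x' \sim q}\bigl[u_p(x, x')\bigr], \qquad
u_p(x, x') = s_p(x)^{\top} s_p(x')\, k(x, x') + s_p(x)^{\top} \nabla_{x'} k(x, x') + s_p(x')^{\top} \nabla_{x} k(x, x') + \operatorname{tr}\bigl(\nabla_{x}\nabla_{x'} k(x, x')\bigr).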
1 code implementation • 18 Oct 2017 • Futoshi Futami, Issei Sato, Masashi Sugiyama
In this paper, based on Zellner's optimization and variational formulation of Bayesian inference, we propose an outlier-robust pseudo-Bayesian variational method by replacing the Kullback-Leibler divergence used for data fitting with a robust divergence such as the beta- and gamma-divergences.
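As a reference point, one standard form of the beta-divergence (the density power divergence) between the data distribution g and the model f is, for \beta > 0,

D_\beta(g \,\|\, f) = \int \Bigl\{ f(x)^{1+\beta} - \tfrac{1+\beta}{\beta}\, g(x)\, f(x)^{\beta} + \tfrac{1}{\beta}\, g(x)^{1+\beta} \Bigr\}\, dx,

which recovers the KL divergence as \beta \to 0; this is the standard definition and may differ in constants from the paper's convention.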
no code implementations • NeurIPS 2017 • Futoshi Futami, Issei Sato, Masashi Sugiyama
Exponential family distributions are highly useful in machine learning since their calculation can be performed efficiently through natural parameters.
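For context, the canonical exponential-family form referenced here, with natural parameter \eta, sufficient statistic T(x), and log-partition function A(\eta), is the standard definition

p(x \mid \eta) = h(x)\, \exp\bigl(\eta^{\top} T(x) - A(\eta)\bigr).

With conjugate priors, posterior updates reduce to adding sufficient statistics to the natural parameters, which is the computational convenience alluded to above.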