Search Results for author: Nikita Kotelevskii

Found 9 papers, 2 papers with code

Predictive Uncertainty Quantification via Risk Decompositions for Strictly Proper Scoring Rules

no code implementations • 16 Feb 2024 • Nikita Kotelevskii, Maxim Panov

Distinguishing sources of predictive uncertainty is of crucial importance in the application of forecasting models across various domains.

Uncertainty Quantification
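For context, the decomposition mentioned above can be illustrated with the standard entropy-based (log-score) split of total predictive uncertainty into aleatoric and epistemic parts, computed from an ensemble of predictive distributions. The sketch below shows only that classical special case, not the paper's general construction for arbitrary strictly proper scoring rules; the ensemble, array shapes, and function names are illustrative assumptions.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along the last axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """Split total predictive uncertainty into aleatoric and epistemic parts.

    member_probs: array of shape (n_members, n_classes) with predictive
    distributions from an ensemble (or posterior samples) for one input.
    Classical log-score decomposition:
        total     = H(mean prediction)
        aleatoric = mean of member entropies
        epistemic = total - aleatoric  (mutual information)
    """
    mean_pred = member_probs.mean(axis=0)
    total = entropy(mean_pred)
    aleatoric = entropy(member_probs).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Example: three ensemble members disagreeing on a 3-class problem.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
print(decompose_uncertainty(probs))
```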

Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks

no code implementations • 18 Dec 2023 • Nikita Kotelevskii, Samuel Horváth, Karthik Nandakumar, Martin Takáč, Maxim Panov

This paper presents a new approach to federated learning that, for each particular input point, selects whichever of the global and personalized models is expected to perform better.

Personalized Federated Learning • Uncertainty Quantification
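As a rough illustration of per-input selection between a global and a personalized model, the sketch below picks whichever prediction has lower predictive entropy. This is only a hypothetical selection criterion for intuition; the paper itself works with Dirichlet-based (posterior-network-style) uncertainty estimates, and all names here are assumptions.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of a categorical predictive distribution."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def select_prediction(global_probs, personal_probs):
    """Per-input choice between global and personalized predictions.

    Hypothetical criterion: keep whichever model is more confident
    (lower predictive entropy) on this particular input.
    """
    use_personal = predictive_entropy(personal_probs) < predictive_entropy(global_probs)
    return personal_probs if use_personal else global_probs

# Example: the personalized model is more confident here, so it is chosen.
global_p = np.array([0.4, 0.35, 0.25])
personal_p = np.array([0.85, 0.1, 0.05])
print(select_prediction(global_p, personal_p))
```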

Learning Confident Classifiers in the Presence of Label Noise

no code implementations • 2 Jan 2023 • Asma Ahmed Hashmi, Aigerim Zhumabayeva, Nikita Kotelevskii, Artem Agafonov, Mohammad Yaqub, Maxim Panov, Martin Takáč

We evaluate the proposed method on a series of classification tasks such as noisy versions of the MNIST, CIFAR-10, and Fashion-MNIST datasets, as well as CIFAR-10N, a real-world dataset with noisy human annotations.

Image Segmentation • Medical Image Segmentation • +2

FedPop: A Bayesian Approach for Personalised Federated Learning

no code implementations • 7 Jun 2022 • Nikita Kotelevskii, Maxime Vono, Eric Moulines, Alain Durmus

We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performance on various personalised federated learning tasks.

Federated Learning • Uncertainty Quantification

Monte Carlo Variational Auto-Encoders

2 code implementations • 30 Jun 2021 • Achille Thin, Nikita Kotelevskii, Arnaud Doucet, Alain Durmus, Eric Moulines, Maxim Panov

Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO).
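As a reminder of the baseline this work builds on, the sketch below implements plain ELBO maximization for a small Gaussian-latent VAE in PyTorch. It is a minimal illustration of standard VAE training, not the Monte Carlo / MCMC-refined estimators proposed in the paper; the architecture sizes and the dummy batch are assumptions.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal Gaussian-latent VAE with a Bernoulli decoder."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        logits = self.dec(z)
        # Reconstruction term: Bernoulli log-likelihood of the input.
        rec = -nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        # Analytic KL(q(z|x) || N(0, I)).
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)
        return (rec - kl).mean()

model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)        # dummy batch standing in for binarized images
loss = -model.elbo(x)          # maximize ELBO = minimize negative ELBO
opt.zero_grad(); loss.backward(); opt.step()
```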

Nonreversible MCMC from conditional invertible transforms: a complete recipe with convergence guarantees

no code implementations • 31 Dec 2020 • Achille Thin, Nikita Kotelevskii, Christophe Andrieu, Alain Durmus, Eric Moulines, Maxim Panov

This paper fills the gap by developing general tools to ensure that a class of nonreversible Markov kernels, possibly relying on complex transforms, has the desired invariance property and leads to convergent algorithms.
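As a generic example of the kind of nonreversible kernel such tools cover, the sketch below implements a classical lifted (guided-walk) Metropolis move: the state is augmented with a direction variable, steps are always proposed in the current direction, and the direction is flipped only on rejection. This is a textbook construction shown for intuition, not the paper's recipe; the toy Gaussian target and step scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-density of the target (here a 1D standard normal)."""
    return -0.5 * x ** 2

def guided_walk_step(x, direction, step_scale=0.5):
    """One lifted Metropolis step with a direction variable in {-1, +1}.

    A move of random magnitude is proposed in the current direction; on
    rejection the direction is flipped. The kernel leaves
    pi(x) x Uniform{-1, +1} invariant but is nonreversible, which reduces
    random-walk backtracking.
    """
    proposal = x + direction * abs(step_scale * rng.standard_normal())
    log_alpha = log_target(proposal) - log_target(x)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, direction   # accept: keep moving the same way
    return x, -direction             # reject: stay, reverse direction

x, direction = 3.0, 1
samples = []
for _ in range(5000):
    x, direction = guided_walk_step(x, direction)
    samples.append(x)
print(np.mean(samples), np.var(samples))   # should approach 0 and 1
```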
