Search Results for author: Anna Korba

Found 16 papers, 7 papers with code

Bayesian Off-Policy Evaluation and Learning for Large Action Spaces

no code implementations22 Feb 2024 Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba

In this framework, we propose sDM, a generic Bayesian approach designed for OPE and OPL, grounded in both algorithmic and theoretical foundations.

Computational Efficiency, Off-policy evaluation

A connection between Tempering and Entropic Mirror Descent

no code implementations18 Oct 2023 Nicolas Chopin, Francesca R. Crucinio, Anna Korba

This paper explores the connections between tempering (for Sequential Monte Carlo; SMC) and entropic mirror descent to sample from a target probability distribution whose unnormalized density is known.
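For concreteness, the tempering path in question can be sketched numerically. The snippet below is a toy illustration (both densities are invented for the example, not taken from the paper): a geometric path mu^(1-beta) * pi^beta interpolating between a tractable base distribution mu and a target pi known up to a constant, as used in SMC samplers.

```python
import numpy as np

# Toy sketch of a geometric tempering path: interpolate between a
# tractable base density mu and the target pi. The densities below are
# illustrative assumptions, not from the paper.

def log_mu(x):
    return -0.5 * x**2  # unnormalized standard normal base

def log_pi(x):
    # unnormalized bimodal target, known only up to a constant
    return np.logaddexp(-0.5 * (x - 3)**2, -0.5 * (x + 3)**2)

def tempered_density(x, beta):
    """Unnormalized density on the path: mu^(1-beta) * pi^beta."""
    return np.exp((1 - beta) * log_mu(x) + beta * log_pi(x))

grid = np.linspace(-8.0, 8.0, 4001)
dx = grid[1] - grid[0]
for beta in (0.0, 0.5, 1.0):
    z = tempered_density(grid, beta).sum() * dx  # normalizing constant
    print(f"beta={beta}: Z ~ {z:.3f}")
```

At beta=0 the path sits at the base distribution and at beta=1 at the target; an SMC sampler moves particles along a discretization of this path.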

Exponential Smoothing for Off-Policy Learning

no code implementations25 May 2023 Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba

In particular, it is also valid for standard IPS without making the assumption that the importance weights are bounded.
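A minimal sketch of IPS with exponentially smoothed importance weights may help: the weights (pi/mu)^alpha with 0 <= alpha <= 1 trade bias for variance, recovering standard IPS at alpha = 1. The bandit setup below (policies, reward probabilities) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 10_000, 5
mu = np.full(K, 1.0 / K)                  # logging policy (uniform over K actions)
pi = np.array([0.6, 0.1, 0.1, 0.1, 0.1])  # target policy to evaluate
actions = rng.choice(K, size=n, p=mu)
rewards = rng.binomial(1, np.where(actions == 0, 0.5, 0.2))

def smoothed_ips(alpha):
    """IPS value estimate with smoothed weights (pi/mu)^alpha."""
    w = (pi[actions] / mu[actions]) ** alpha
    return float(np.mean(w * rewards))

print(smoothed_ips(1.0))  # standard IPS (unbiased, higher variance)
print(smoothed_ips(0.7))  # smoothed (some bias, lower variance)
```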


Sampling with Mollified Interaction Energy Descent

2 code implementations24 Oct 2022 Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon

These energies rely on mollifier functions -- smooth approximations of the Dirac delta originating from PDE theory.
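A Gaussian mollifier is one standard example of such a smooth approximation of the Dirac delta (illustrative only; not necessarily the family used in the paper): it integrates to 1 for every width eps and concentrates at the origin as eps -> 0.

```python
import numpy as np

def gaussian_mollifier(x, eps):
    """Gaussian mollifier: unit mass for every eps; -> Dirac delta as eps -> 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

grid = np.linspace(-5.0, 5.0, 100_001)
dx = grid[1] - grid[0]
for eps in (1.0, 0.1, 0.01):
    mass = gaussian_mollifier(grid, eps).sum() * dx
    print(f"eps={eps}: total mass ~ {mass:.4f}")  # ~ 1 for every eps
```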

Variational Inference of overparameterized Bayesian Neural Networks: a theoretical and empirical study

1 code implementation8 Jul 2022 Tom Huix, Szymon Majewski, Alain Durmus, Eric Moulines, Anna Korba

This paper studies Variational Inference (VI) for training Bayesian Neural Networks (BNNs) in the overparameterized regime, i.e., when the number of neurons tends to infinity.

Variational Inference

Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and EM

no code implementations17 Jun 2022 Pierre-Cyril Aubin-Frankowski, Anna Korba, Flavien Léger

We also show that Expectation Maximization (EM) can always formally be written as a mirror descent.

Adaptive Importance Sampling meets Mirror Descent: a Bias-variance tradeoff

no code implementations29 Oct 2021 Anna Korba, François Portier

Adaptive importance sampling is a widespread Monte Carlo technique that uses a re-weighting strategy to iteratively estimate the so-called target distribution.
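A hedged sketch of a basic adaptive importance sampling loop (Gaussian proposal with a moment-matching update; the target and the update rule are illustrative assumptions, not the scheme analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * (x - 4.0)**2  # unnormalized target: N(4, 1)

mean, std = 0.0, 3.0            # initial proposal N(0, 9)
for _ in range(20):
    x = rng.normal(mean, std, size=5_000)
    log_q = -0.5 * ((x - mean) / std)**2 - np.log(std)
    w = np.exp(log_target(x) - log_q)
    w /= w.sum()                # self-normalized importance weights
    mean = float(np.sum(w * x))                      # re-weighted mean
    std = float(np.sqrt(np.sum(w * (x - mean)**2)))  # re-weighted std
print(mean, std)                # proposal has adapted toward N(4, 1)
```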

Kernel Stein Discrepancy Descent

2 code implementations20 May 2021 Anna Korba, Pierre-Cyril Aubin-Frankowski, Szymon Majewski, Pierre Ablin

We investigate the properties of its Wasserstein gradient flow to approximate a target probability distribution $\pi$ on $\mathbb{R}^d$, known up to a normalization constant.
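The discrepancy being descended can be estimated from samples. Below is a minimal one-dimensional sketch of a V-statistic estimate of KSD^2, using the standard Langevin-Stein kernel built from an RBF base kernel (a textbook construction; not the authors' implementation, and the target is a toy assumption):

```python
import numpy as np

def ksd_squared(x, score, h=1.0):
    """V-statistic estimate of KSD^2 for 1-D samples x; score = grad log pi."""
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))        # RBF base kernel
    dkx = -d / h**2 * k                   # d/dx k(x, y)
    dky = d / h**2 * k                    # d/dy k(x, y)
    dkxy = (1 / h**2 - d**2 / h**4) * k   # d^2/dxdy k(x, y)
    s = score(x)
    # Langevin-Stein kernel k_pi(x, y)
    kp = s[:, None] * s[None, :] * k + s[:, None] * dky + s[None, :] * dkx + dkxy
    return float(kp.mean())

score = lambda x: -x                      # score of the N(0, 1) target
rng = np.random.default_rng(2)
good = rng.normal(0, 1, 300)              # samples close to the target
bad = rng.normal(3, 1, 300)               # samples far from the target
print(ksd_squared(good, score), ksd_squared(bad, score))
```

Samples far from the target yield a much larger value, which is what the descent exploits.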

Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction

2 code implementations10 May 2021 Afsaneh Mastouri, Yuchen Zhu, Limor Gultchin, Anna Korba, Ricardo Silva, Matt J. Kusner, Arthur Gretton, Krikamol Muandet

In particular, we provide a unifying view of two-stage and moment restriction approaches for solving this problem in a nonlinear setting.


A Non-Asymptotic Analysis for Stein Variational Gradient Descent

no code implementations NeurIPS 2020 Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, Arthur Gretton

We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\mathbb{R}^d$.
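The SVGD update studied here can be sketched for 1-D particles with an RBF kernel (step size, bandwidth, and the Gaussian target below are illustrative choices, not the paper's experimental setup):

```python
import numpy as np

def svgd_step(x, score, h=1.0, eps=0.1):
    """One SVGD update for 1-D particles x; score = grad log pi."""
    d = x[:, None] - x[None, :]            # d[j, i] = x_j - x_i
    k = np.exp(-d**2 / (2 * h**2))         # RBF kernel
    grad_k = -d / h**2 * k                 # gradient of k w.r.t. x_j
    # driving term (attraction to high density) + repulsion between particles
    phi = (k * score(x)[:, None] + grad_k).mean(axis=0)
    return x + eps * phi

score = lambda x: -(x - 2.0)               # grad log pi for target N(2, 1)
rng = np.random.default_rng(3)
x = rng.normal(-3, 0.5, 200)               # particles start far from the target
for _ in range(500):
    x = svgd_step(x, score)
print(x.mean(), x.std())                   # particles approach N(2, 1)
```

The kernel term pulls particles toward high-probability regions while the repulsive gradient term keeps them spread out, so the set of particles approximates pi rather than collapsing to its mode.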


The Wasserstein Proximal Gradient Algorithm

no code implementations NeurIPS 2020 Adil Salim, Anna Korba, Giulia Luise

Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space.

Maximum Mean Discrepancy Gradient Flow

1 code implementation NeurIPS 2019 Michael Arbel, Anna Korba, Adil Salim, Arthur Gretton

We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties.
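The MMD itself is straightforward to estimate from samples; here is a minimal sketch of the biased (V-statistic) MMD^2 with an RBF kernel, the quantity whose gradient flow the paper constructs (the data below are invented for illustration):

```python
import numpy as np

def mmd_squared(x, y, h=1.0):
    """Biased (V-statistic) MMD^2 between 1-D samples x and y, RBF kernel."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * h**2))
    return float(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 500)
near = rng.normal(0.1, 1, 500)   # close to the distribution of x
far = rng.normal(3, 1, 500)      # far from the distribution of x
print(mmd_squared(x, near), mmd_squared(x, far))
```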

Dimensionality Reduction and (Bucket) Ranking: a Mass Transportation Approach

1 code implementation15 Oct 2018 Mastane Achab, Anna Korba, Stephan Clémençon

Whereas most dimensionality reduction techniques (e.g. PCA, ICA, NMF) for multivariate data essentially rely on linear algebra to a certain extent, summarizing ranking data, viewed as realizations of a random permutation $\Sigma$ on a set of items indexed by $i\in \{1,\ldots,n\}$, is a great statistical challenge, due to the absence of vector space structure for the set of permutations $\mathfrak{S}_n$.

Dimensionality Reduction

Ranking Median Regression: Learning to Order through Local Consensus

no code implementations31 Oct 2017 Stephan Clémençon, Anna Korba, Eric Sibony

In the probabilistic formulation of the 'Learning to Order' problem we propose, which extends the framework for statistical Kemeny ranking aggregation developed in \citet{CKS17}, this boils down to recovering conditional Kemeny medians of $\Sigma$ given $X$ from i.i.d.

