no code implementations • 6 Feb 2024 • Olivier Jeunen, Aleksei Ustimenko
Online controlled experiments are a crucial tool to allow for confident decision-making in technology companies.
no code implementations • 8 Jan 2024 • Hitesh Sagtani, Olivier Jeunen, Aleksei Ustimenko
Many platforms on the web present ranked lists of content to users, typically optimized for engagement-, satisfaction-, or retention-driven metrics.
no code implementations • 8 Jan 2024 • Shubham Baweja, Neeti Pokharna, Aleksei Ustimenko, Olivier Jeunen
The main culprit for this inefficiency is the variance of the online metrics.
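A common remedy for high metric variance in online experiments is a control-variate adjustment using pre-experiment data (the CUPED idea). The snippet below is a minimal sketch of that general technique, not the specific method of this paper; the function name and the synthetic data are illustrative assumptions.

```python
import numpy as np

def cuped_adjust(metric, covariate):
    """Variance-reduce an online metric with a pre-experiment covariate.

    theta minimises the variance of
    adjusted = metric - theta * (covariate - mean(covariate)),
    leaving the mean of the metric unchanged.
    """
    theta = np.cov(metric, covariate)[0, 1] / np.var(covariate)
    return metric - theta * (covariate - covariate.mean())

rng = np.random.default_rng(0)
pre = rng.normal(size=10_000)                     # pre-experiment engagement
post = pre + rng.normal(scale=0.5, size=10_000)   # correlated online metric
adjusted = cuped_adjust(post, pre)
```

Because the adjustment subtracts a zero-mean term, the treatment-effect estimate is unbiased while its variance shrinks in proportion to the squared correlation between metric and covariate.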
no code implementations • 4 Dec 2023 • Olivier Jeunen, Hitesh Sagtani, Himanshu Doi, Rasul Karimov, Neeti Pokharna, Danish Kalim, Aleksei Ustimenko, Christopher Green, Wenzhe Shi, Rishabh Mehrotra
We highlight that (1) neural networks' ability to handle large training data sizes and user- and item-embeddings allows for more accurate models than GBDTs in this setting, and (2) because GBDTs are less reliant on specialised hardware, they can provide an equally accurate model at a lower cost.
no code implementations • 9 Oct 2023 • Aleksei Ustimenko, Aleksandr Beznosikov
In this work, we consider a rather general and broad class of Markov chains, Ito chains, that look like the Euler-Maruyama discretization of some Stochastic Differential Equation.
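The Euler-Maruyama scheme the abstract refers to discretizes an SDE dX = b(X) dt + sigma(X) dW into a Markov chain. A minimal sketch, with an Ornstein-Uhlenbeck process as an assumed concrete example:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
    """Simulate X_{k+1} = X_k + drift(X_k)*dt + diffusion(X_k)*sqrt(dt)*xi_k,
    where xi_k are i.i.d. standard normal increments."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        xi = rng.standard_normal()
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * np.sqrt(dt) * xi
    return x

rng = np.random.default_rng(1)
# Ornstein-Uhlenbeck process dX = -X dt + dW, started away from equilibrium
path = euler_maruyama(lambda x: -x, lambda x: 1.0,
                      x0=2.0, dt=0.01, n_steps=1000, rng=rng)
```

An "Ito chain" in the paper's sense generalizes this template, allowing the drift and diffusion (and the noise) to be less regular than in the classical scheme.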
1 code implementation • 27 Jul 2023 • Olivier Jeunen, Ivan Potapov, Aleksei Ustimenko
Through a correlation analysis between off- and on-line experiments conducted on a large-scale recommendation platform, we show that our unbiased DCG estimates strongly correlate with online reward, even when some of the metric's inherent assumptions are violated.
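Unbiased offline ranking metrics are typically obtained by inverse-propensity weighting of logged feedback. The sketch below shows the standard IPS-weighted DCG estimator as one plausible instance; the exact estimator and its assumptions in the paper may differ, and the function signature here is illustrative.

```python
import numpy as np

def ips_dcg(clicks, ranks, propensities):
    """Inverse-propensity-weighted DCG from logged clicks.

    clicks[i]       - 1 if the item shown at ranks[i] was clicked, else 0
    ranks[i]        - 1-based position at logging time
    propensities[i] - probability the item was examined at that rank
    """
    gains = clicks / propensities            # reweight to debias exposure
    discounts = 1.0 / np.log2(ranks + 1)     # standard DCG position discount
    return float(np.sum(gains * discounts))

est = ips_dcg(np.array([1, 0, 1]),
              np.array([1, 2, 3]),
              np.array([0.9, 0.5, 0.2]))
```

Dividing each click by its examination propensity corrects for position bias in expectation, which is what makes the resulting DCG estimate unbiased under the assumed click model.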
2 code implementations • 31 May 2023 • Elena Orlova, Aleksei Ustimenko, Ruoxi Jiang, Peter Y. Lu, Rebecca Willett
This paper introduces a novel deep-learning-based approach for numerical simulation of a time-evolving Schrödinger equation inspired by stochastic mechanics and generative diffusion models.
2 code implementations • 11 Jun 2022 • Aleksei Ustimenko, Artem Beliakov, Liudmila Prokhorenkova
Thus, we obtain convergence to a Gaussian process posterior mean, which, in turn, allows us to easily transform gradient boosting into a sampler from the posterior, providing better knowledge-uncertainty estimates through Monte-Carlo estimation of the posterior variance.
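Once a boosting procedure can draw (approximate) samples from a posterior over functions, the Monte-Carlo step the abstract mentions is generic: average the sampled predictions for the posterior mean and take their spread as knowledge uncertainty. A minimal sketch, with synthetic stand-ins for the sampled models:

```python
import numpy as np

def posterior_moments(sampled_predictions):
    """Monte-Carlo posterior mean and variance from model samples.

    sampled_predictions: shape (n_samples, n_points), each row the
    predictions of one model drawn (approximately) from the posterior.
    """
    mean = sampled_predictions.mean(axis=0)
    var = sampled_predictions.var(axis=0)    # knowledge (epistemic) uncertainty
    return mean, var

# stand-in for predictions of 5 sampled boosting models on 2 test points
samples = np.stack([np.array([1.0, 2.0]) + 0.1 * k for k in range(5)])
mean, var = posterior_moments(samples)
```

High per-point variance across sampled models flags inputs where the ensemble disagrees, i.e. where knowledge uncertainty is large.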
no code implementations • 4 Apr 2022 • Ivan Lyzhin, Aleksei Ustimenko, Andrey Gulin, Liudmila Prokhorenkova
To address these questions, we compare LambdaMART with YetiRank and StochasticRank methods and their modifications.
no code implementations • ICLR 2021 • Andrey Malinin, Liudmila Prokhorenkova, Aleksei Ustimenko
For many practical, high-risk applications, it is essential to quantify uncertainty in a model's predictions to avoid costly mistakes.
no code implementations • ICML 2020 • Aleksei Ustimenko, Liudmila Prokhorenkova
The problem is ill-posed due to the discrete structure of the loss, and to deal with that, we introduce two important techniques: stochastic smoothing and a novel gradient estimate based on partial integration.
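Stochastic smoothing replaces a discrete loss L(s) with its Gaussian convolution E[L(s + eps)], whose gradient exists and can be estimated by sampling alone. The sketch below uses the score-function identity grad E[L(s + eps)] = E[L(s + eps) * eps] / sigma^2 for eps ~ N(0, sigma^2 I); this is a generic illustration of smoothing, not the paper's partial-integration estimator.

```python
import numpy as np

def smoothed_grad(loss, scores, sigma=0.1, n_samples=1000, rng=None):
    """Monte-Carlo gradient of the Gaussian-smoothed loss E[loss(scores + eps)].

    Needs no gradient of the (possibly discrete) loss itself."""
    rng = rng or np.random.default_rng()
    eps = rng.normal(scale=sigma, size=(n_samples, len(scores)))
    losses = np.array([loss(scores + e) for e in eps])
    return (losses[:, None] * eps).mean(axis=0) / sigma**2

# discrete, non-differentiable loss: pairs violating descending score order
def inversions(s):
    n = len(s)
    return float(sum(s[i] < s[j] for i in range(n) for j in range(i + 1, n)))

g = smoothed_grad(inversions, np.zeros(3), rng=np.random.default_rng(2))
```

With tied scores, the estimated gradient pushes the first item's score up and the last item's down, exactly the direction that reduces expected inversions under the smoothed objective.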
no code implementations • 20 Jan 2020 • Aleksei Ustimenko, Liudmila Prokhorenkova
This paper introduces Stochastic Gradient Langevin Boosting (SGLB): a powerful and efficient machine learning framework that can handle a wide range of loss functions and has provable generalization guarantees.
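The Langevin ingredient behind SGLB is the classic stochastic gradient Langevin update, which adds temperature-scaled Gaussian noise to each gradient step so the iterates sample from a Gibbs posterior rather than converge to a single point. A minimal parameter-space sketch (SGLB itself applies the idea to boosting ensembles, which this snippet does not reproduce):

```python
import numpy as np

def sgld_step(theta, grad, lr, beta, rng):
    """One stochastic gradient Langevin step:
    theta <- theta - lr * grad + sqrt(2 * lr / beta) * N(0, I).
    As the inverse temperature beta grows, this approaches plain gradient
    descent; the injected noise is what yields the sampling interpretation."""
    noise = rng.standard_normal(theta.shape)
    return theta - lr * grad + np.sqrt(2 * lr / beta) * noise

rng = np.random.default_rng(3)
theta = np.array([1.0, -2.0])
# quadratic loss 0.5 * ||theta||^2, whose gradient is theta itself
for _ in range(500):
    theta = sgld_step(theta, theta, lr=0.05, beta=100.0, rng=rng)
```

For this quadratic loss the chain's stationary distribution is Gaussian with standard deviation 1/sqrt(beta), so the iterates hover near the optimum instead of collapsing onto it.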