Search Results for author: Aleksei Ustimenko

Found 12 papers, 3 papers with code

Learning Metrics that Maximise Power for Accelerated A/B-Tests

no code implementations6 Feb 2024 Olivier Jeunen, Aleksei Ustimenko

Online controlled experiments are a crucial tool to allow for confident decision-making in technology companies.

Decision Making

Learning-to-Rank with Nested Feedback

no code implementations8 Jan 2024 Hitesh Sagtani, Olivier Jeunen, Aleksei Ustimenko

Many platforms on the web present ranked lists of content to users, typically optimized for engagement-, satisfaction-, or retention-driven metrics.

Learning-To-Rank

On Gradient Boosted Decision Trees and Neural Rankers: A Case-Study on Short-Video Recommendations at ShareChat

no code implementations4 Dec 2023 Olivier Jeunen, Hitesh Sagtani, Himanshu Doi, Rasul Karimov, Neeti Pokharna, Danish Kalim, Aleksei Ustimenko, Christopher Green, Wenzhe Shi, Rishabh Mehrotra

We highlight that (1) neural networks' ability to handle large training data sizes and to leverage user- and item-embeddings allows for more accurate models than GBDTs in this setting, and (2) because GBDTs are less reliant on specialised hardware, they can provide an equally accurate model at a lower cost.

On (Normalised) Discounted Cumulative Gain as an Off-Policy Evaluation Metric for Top-$n$ Recommendation

1 code implementation27 Jul 2023 Olivier Jeunen, Ivan Potapov, Aleksei Ustimenko

Through a correlation analysis between off- and on-line experiments conducted on a large-scale recommendation platform, we show that our unbiased DCG estimates strongly correlate with online reward, even when some of the metric's inherent assumptions are violated.

Information Retrieval, Off-Policy Evaluation
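The unbiased DCG estimate described above can be sketched with inverse-propensity scoring. The function below is a hypothetical illustration of the general idea: function name, arguments, and the averaging scheme are assumptions, not the paper's exact estimator.

```python
import numpy as np

def ips_dcg(rewards, ranks, propensities):
    """Inverse-propensity-scored DCG estimate (illustrative sketch).

    rewards:      rewards observed under the logging policy (e.g. clicks),
    ranks:        1-based positions the target policy assigns to those items,
    propensities: logging-policy probabilities of having shown each item.
    """
    rewards = np.asarray(rewards, dtype=float)
    ranks = np.asarray(ranks, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    # Re-weight each observed reward by 1/propensity to correct for the
    # logging policy's exposure bias, then apply the DCG position discount.
    gains = (rewards / propensities) / np.log2(ranks + 1.0)
    return gains.mean()

# Example: two clicked items shown with propensities 0.5 and 0.2.
estimate = ips_dcg([1, 0, 1], [1, 2, 3], [0.5, 0.25, 0.2])  # → 1.5
```

The 1/propensity weights make the estimate unbiased for the target policy's expected DCG, at the usual cost of higher variance when propensities are small.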

Deep Stochastic Mechanics

2 code implementations31 May 2023 Elena Orlova, Aleksei Ustimenko, Ruoxi Jiang, Peter Y. Lu, Rebecca Willett

This paper introduces a novel deep-learning-based approach for numerical simulation of a time-evolving Schrödinger equation inspired by stochastic mechanics and generative diffusion models.

Gradient Boosting Performs Gaussian Process Inference

2 code implementations11 Jun 2022 Aleksei Ustimenko, Artem Beliakov, Liudmila Prokhorenkova

Thus, we obtain convergence to a Gaussian process posterior mean, which in turn allows us to easily transform gradient boosting into a sampler from the posterior, yielding better knowledge-uncertainty estimates through Monte-Carlo estimation of the posterior variance.

Regression
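The sampling idea above can be illustrated generically: train several independently randomized boosted models and read the spread of their predictions as a Monte-Carlo proxy for posterior variance. The toy stump-based learner below is an assumption for the sketch, not the paper's SGLB-based construction.

```python
import numpy as np

def boost(x, y, n_iter=100, lr=0.1, seed=0):
    """Toy 1-D gradient boosting with median-split stumps and row subsampling."""
    rng = np.random.default_rng(seed)
    pred = np.zeros_like(y)
    model = []
    for _ in range(n_iter):
        idx = rng.choice(len(y), size=len(y) // 2, replace=False)
        resid = (y - pred)[idx]       # negative gradient of squared loss
        t = np.median(x[idx])         # stump threshold fit on the subsample
        left = x[idx] <= t
        lv, rv = lr * resid[left].mean(), lr * resid[~left].mean()
        model.append((t, lv, rv))
        pred = pred + np.where(x <= t, lv, rv)
    return model

def predict(model, x):
    p = np.zeros_like(x, dtype=float)
    for t, lv, rv in model:
        p += np.where(x <= t, lv, rv)
    return p

# Independently randomized boosted models play the role of posterior samples;
# the across-model variance serves as a Monte-Carlo posterior-variance proxy.
x = np.linspace(-1.0, 1.0, 64)
y = np.sin(3.0 * x)
models = [boost(x, y, seed=s) for s in range(10)]
preds = np.stack([predict(m, x) for m in models])
posterior_mean, posterior_var = preds.mean(axis=0), preds.var(axis=0)
```

Here the randomness comes only from row subsampling; the paper's result is stronger, characterising the exact Gaussian process whose posterior gradient boosting converges to.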

Which Tricks Are Important for Learning to Rank?

no code implementations4 Apr 2022 Ivan Lyzhin, Aleksei Ustimenko, Andrey Gulin, Liudmila Prokhorenkova

To address these questions, we compare LambdaMART with YetiRank and StochasticRank methods and their modifications.

Learning-To-Rank

Uncertainty in Gradient Boosting via Ensembles

no code implementations ICLR 2021 Andrey Malinin, Liudmila Prokhorenkova, Aleksei Ustimenko

For many practical, high-risk applications, it is essential to quantify uncertainty in a model's predictions to avoid costly mistakes.

General Classification

StochasticRank: Global Optimization of Scale-Free Discrete Functions

no code implementations ICML 2020 Aleksei Ustimenko, Liudmila Prokhorenkova

The problem is ill-posed due to the discrete structure of the loss; to deal with this, we introduce two important techniques: stochastic smoothing and a novel gradient estimate based on partial integration.

Learning-To-Rank
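The stochastic-smoothing technique mentioned above can be sketched generically: convolve the discrete loss with Gaussian noise and estimate the gradient of the smoothed loss by Monte Carlo. The score-function estimator below illustrates the general idea only, not the paper's partial-integration estimator, and all names are assumptions.

```python
import numpy as np

def smoothed_grad(loss, scores, sigma=0.3, n_samples=4000, seed=0):
    """Monte-Carlo gradient of the Gaussian-smoothed loss
    E_z[loss(scores + sigma * z)], z ~ N(0, I), via the score-function
    estimator: grad ≈ E_z[(loss(scores + sigma*z) - baseline) * z] / sigma."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, len(scores)))
    vals = np.array([loss(scores + sigma * zi) for zi in z])
    centered = vals - vals.mean()   # constant baseline reduces variance
    return (centered[:, None] * z).mean(axis=0) / sigma

# A discrete, gradient-free ranking loss: 1 unless item 0 is ranked first.
top1_loss = lambda s: float(np.argmax(s) != 0)

g = smoothed_grad(top1_loss, np.array([0.0, 0.1, -0.2]))
# Raising item 0's score lowers the smoothed loss (g[0] < 0), while
# raising item 1's score increases it (g[1] > 0).
```

Smoothing turns the piecewise-constant ranking loss into a differentiable surrogate whose gradient can drive ordinary stochastic optimization.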

SGLB: Stochastic Gradient Langevin Boosting

no code implementations20 Jan 2020 Aleksei Ustimenko, Liudmila Prokhorenkova

This paper introduces Stochastic Gradient Langevin Boosting (SGLB), a powerful and efficient machine learning framework that can handle a wide range of loss functions and has provable generalization guarantees.
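SGLB builds on Langevin dynamics. As a generic illustration (not the paper's boosting-specific algorithm), the sketch below shows the stochastic gradient Langevin update, which adds Gaussian noise scaled by an inverse temperature to each gradient step so the iterates explore a posterior-like distribution instead of collapsing to a single optimum; all names are assumptions.

```python
import numpy as np

def langevin_descent(grad, theta0, eps=1e-2, beta=1e4, steps=2000, seed=0):
    """Langevin update: theta <- theta - eps*grad(theta) + sqrt(2*eps/beta)*xi,
    with xi ~ N(0, I) and inverse temperature beta. As beta -> infinity this
    reduces to plain gradient descent."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    noise_scale = np.sqrt(2.0 * eps / beta)
    for _ in range(steps):
        theta = theta - eps * grad(theta) \
                + noise_scale * rng.standard_normal(theta.shape)
    return theta

# Minimise f(theta) = (theta - 2)^2; the iterates hover near the optimum,
# with fluctuations controlled by beta.
theta = langevin_descent(lambda t: 2.0 * (t - 2.0), np.zeros(1))
```

The injected noise is what gives Langevin-type methods their global-optimization and generalization properties; SGLB carries this scheme over to the boosting setting.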
