no code implementations • 12 Dec 2024 • Mikhail Mironov, Liudmila Prokhorenkova
Previous works on graph homophily have suggested several properties desirable for a good homophily measure, while also noting that no existing homophily measure satisfies all of them.
no code implementations • 18 Oct 2024 • Mikhail Mironov, Liudmila Prokhorenkova
We show that none of the existing measures satisfies all three properties, and thus none is suitable for quantifying diversity.
1 code implementation • 27 Sep 2024 • Fedor Velikonivtsev, Mikhail Mironov, Liudmila Prokhorenkova
First, we discuss how to define diversity for a set of graphs, why this task is non-trivial, and how one can choose a proper diversity measure.
1 code implementation • 22 Sep 2024 • Gleb Bazhenov, Oleg Platonov, Liudmila Prokhorenkova
Thus, there is a critical difference between the data used in tabular and graph machine learning studies, which makes it hard to judge how successfully graph models can be transferred to tabular data.
1 code implementation • 18 Feb 2024 • Gleb Rodionov, Liudmila Prokhorenkova
Neural algorithmic reasoning aims to capture computations with neural networks by training models to imitate the execution of classic algorithms.
3 code implementations • 22 Feb 2023 • Oleg Platonov, Denis Kuznedelev, Michael Diskin, Artem Babenko, Liudmila Prokhorenkova
Graphs without this property are called heterophilous, and it is typically assumed that specialized methods are required to achieve strong performance on such graphs.
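For intuition, the simplest way to quantify the property in question is edge homophily, the fraction of edges whose endpoints share a class; a minimal sketch on a hypothetical toy graph:

```python
def edge_homophily(edges, labels):
    """Fraction of edges that connect two nodes of the same class."""
    same = sum(labels[u] == labels[v] for u, v in edges)
    return same / len(edges)

# Toy 4-node cycle with two classes: half of the edges are intra-class.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = [0, 0, 1, 1]
print(edge_homophily(edges, labels))  # 0.5
```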
no code implementations • NeurIPS 2023 • Oleg Platonov, Denis Kuznedelev, Artem Babenko, Liudmila Prokhorenkova
For this, we formalize desirable properties for a proper homophily measure and verify which measures satisfy which properties.
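One measure studied in this line of work is adjusted homophily, which corrects edge homophily for class imbalance; a sketch under the standard degree-weighted definition (the toy graph is an assumption for illustration):

```python
from collections import Counter

def adjusted_homophily(edges, labels):
    """Edge homophily, chance-corrected by degree-weighted class proportions."""
    m = len(edges)
    h_edge = sum(labels[u] == labels[v] for u, v in edges) / m
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # p_k = (sum of degrees over class-k nodes) / 2|E|
    classes = set(labels)
    p = {k: sum(d for v, d in deg.items() if labels[v] == k) / (2 * m)
         for k in classes}
    chance = sum(pk ** 2 for pk in p.values())
    return (h_edge - chance) / (1 - chance)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = [0, 0, 1, 1]
print(adjusted_homophily(edges, labels))  # 0.0: no homophily beyond chance
```

On this balanced toy cycle, plain edge homophily is 0.5 while the adjusted value is 0, illustrating the kind of chance correction such desirable properties demand.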
2 code implementations • 11 Jun 2022 • Aleksei Ustimenko, Artem Beliakov, Liudmila Prokhorenkova
Thus, we obtain convergence to the posterior mean of a Gaussian process, which, in turn, allows us to easily transform gradient boosting into a sampler from the posterior and obtain better knowledge-uncertainty estimates through Monte-Carlo estimation of the posterior variance.
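A minimal sketch of the Monte-Carlo idea, assuming that independently seeded SGLB runs stand in for posterior samples; the langevin flag is present in recent CatBoost releases, and the data below is synthetic:

```python
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

# Each independently seeded SGLB run is treated as one posterior sample.
preds = []
for seed in range(10):
    model = CatBoostRegressor(iterations=200, learning_rate=0.1,
                              langevin=True, random_seed=seed, verbose=0)
    model.fit(X, y)
    preds.append(model.predict(X))

preds = np.stack(preds)
posterior_mean = preds.mean(axis=0)
knowledge_uncertainty = preds.var(axis=0)  # Monte-Carlo posterior variance
```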
no code implementations • 4 Apr 2022 • Ivan Lyzhin, Aleksei Ustimenko, Andrey Gulin, Liudmila Prokhorenkova
To address these questions, we compare LambdaMART with YetiRank and StochasticRank methods and their modifications.
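For context, LambdaMART builds on LambdaRank-style pairwise gradients, which weight each misordered pair by the metric change a swap would cause; a minimal sketch for a single query (the scores and relevances are toy values, and the compared methods refine this scheme in different ways):

```python
import numpy as np

def lambda_gradients(scores, relevance, sigma=1.0):
    """Pairwise lambda gradients for one query, weighted by |delta-NDCG| of a swap."""
    n = len(scores)
    order = np.argsort(-scores)
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)               # current rank of each item
    gains = 2.0 ** relevance - 1
    discounts = 1.0 / np.log2(rank + 2)
    idcg = (np.sort(gains)[::-1] / np.log2(np.arange(n) + 2)).sum()
    lambdas = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if relevance[i] <= relevance[j]:
                continue                      # only pairs where i should rank above j
            delta = abs((gains[i] - gains[j]) * (discounts[i] - discounts[j])) / idcg
            lam = -sigma * delta / (1 + np.exp(sigma * (scores[i] - scores[j])))
            lambdas[i] += lam
            lambdas[j] -= lam
    return lambdas

print(lambda_gradients(np.array([0.2, 1.5, 0.3]), np.array([2, 0, 1])))
```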
1 code implementation • NeurIPS 2021 • Martijn Gösgens, Anton Zhiyanov, Alexey Tikhonov, Liudmila Prokhorenkova
Several performance measures can be used for evaluating classification results: accuracy, F-measure, and many others.
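A toy illustration of why the choice matters: different measures can disagree about which of two classifiers is better (the labels below are hypothetical):

```python
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
pred_a = [0] * 10                         # always predicts the majority class
pred_b = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # finds both positives, three false alarms

for name, pred in [("A", pred_a), ("B", pred_b)]:
    print(name,
          accuracy_score(y_true, pred),
          f1_score(y_true, pred, zero_division=0),
          matthews_corrcoef(y_true, pred))
# Accuracy prefers A (0.8 vs 0.7), while F-measure and MCC prefer B.
```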
no code implementations • ICLR 2022 • Liudmila Prokhorenkova, Dmitry Baranchuk, Nikolay Bogachev, Yury Demidovich, Alexander Kolpakov
From a theoretical perspective, we rigorously analyze the time and space complexity of graph-based NNS, assuming that an n-element dataset is uniformly distributed within a d-dimensional ball of radius R in the hyperbolic space of curvature -1.
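For reference, a minimal sketch of the curvature -1 distance in the Poincare-ball model, one standard coordinatization of the hyperbolic space considered here (the points are toy values):

```python
import numpy as np

def poincare_distance(x, y):
    """Geodesic distance in the curvature -1 Poincare ball (requires ||x||, ||y|| < 1)."""
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + 2 * sq / denom)

x = np.array([0.1, 0.2])
y = np.array([0.7, -0.5])
print(poincare_distance(x, y))  # distances grow rapidly near the unit boundary
```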
3 code implementations • 15 Jul 2021 • Andrey Malinin, Neil Band, Alexander Ganshin, German Chesnokov, Yarin Gal, Mark J. F. Gales, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Denis Roginskiy, Mariya Shmatova, Panos Tigas, Boris Yangel
However, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction.
Ranked #2 on Weather Forecasting on Shifts
1 code implementation • ICLR 2021 • Sergei Ivanov, Liudmila Prokhorenkova
Previous GNN models have mostly focused on networks with homogeneous sparse features and, as we show, are suboptimal in the heterogeneous setting.
1 code implementation • EMNLP 2020 • Max Ryabinin, Sergei Popov, Liudmila Prokhorenkova, Elena Voita
We adopt a recent method that learns a representation of data in the form of a differentiable weighted graph and use it to modify the GloVe training algorithm.
2 code implementations • NeurIPS 2021 • Kirill Shevkunov, Liudmila Prokhorenkova
We generalize the concept of product space and introduce an overlapping space that does not have the configuration search problem.
no code implementations • ICLR 2021 • Andrey Malinin, Liudmila Prokhorenkova, Aleksei Ustimenko
For many practical, high-risk applications, it is essential to quantify uncertainty in a model's predictions to avoid costly mistakes.
no code implementations • ICML 2020 • Aleksei Ustimenko, Liudmila Prokhorenkova
The problem is ill-posed due to the discrete structure of the loss, and to deal with that, we introduce two important techniques: stochastic smoothing and a novel gradient estimate based on partial integration.
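A minimal sketch of the smoothing idea with a generic discrete loss: after Gaussian smoothing, the gradient of E[L(s + eps)] can be estimated without differentiating L, using the score-function identity that partial integration yields (the 0/1 loss below is a toy stand-in, not the paper's ranking metric):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(scores, relevance):
    """Toy discrete loss: 1 if the top-scored item is not the most relevant one."""
    return float(np.argmax(scores) != np.argmax(relevance))

def smoothed_grad(scores, relevance, sigma=0.3, n_samples=2000):
    """Monte-Carlo estimate of grad E[loss(s + eps)], eps ~ N(0, sigma^2 I)."""
    eps = rng.normal(scale=sigma, size=(n_samples, len(scores)))
    vals = np.array([loss(scores + e, relevance) for e in eps])
    centered = vals - vals.mean()             # baseline for variance reduction
    return (centered[:, None] * eps).mean(axis=0) / sigma ** 2

scores = np.array([0.1, 0.0])
relevance = np.array([0.0, 1.0])
print(smoothed_grad(scores, relevance))  # positive for item 0, negative for item 1
```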
no code implementations • 20 Jan 2020 • Aleksei Ustimenko, Liudmila Prokhorenkova
This paper introduces Stochastic Gradient Langevin Boosting (SGLB), a powerful and efficient machine learning framework that can handle a wide range of loss functions and has provable generalization guarantees.
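A minimal sketch of the Langevin-boosting recipe as it is commonly presented: fit each weak learner to noise-perturbed gradients and multiplicatively shrink the ensemble, so the iterates behave like a Langevin-type sampler (the learner, noise scale, and shrinkage below are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=300)

lr, beta, temperature = 0.1, 1e-3, 1.0
F = np.zeros(300)                             # current ensemble prediction
for t in range(200):
    residual = y - F                          # negative gradient of the squared loss
    noisy = residual + rng.normal(scale=np.sqrt(2 * lr / temperature), size=300)
    tree = DecisionTreeRegressor(max_depth=3).fit(X, noisy)
    F = (1 - beta * lr) * F + lr * tree.predict(X)  # shrinkage acts as Langevin drift
```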
no code implementations • 25 Sep 2019 • Liudmila Prokhorenkova, Egor Samosvat, Pim van der Hoorn
We show that the optimal curvature essentially depends on the dimensionality of the embedding space and the loss function one aims to minimize via the embedding.
1 code implementation • ICML 2020 • Liudmila Prokhorenkova, Aleksandr Shekhovtsov
Graph-based approaches have empirically been shown to be very successful for nearest neighbor search (NNS).
Data Structures and Algorithms • Probability
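A minimal sketch of the greedy routine these approaches share: walk a proximity graph, always moving to the neighbor closest to the query, and stop at a local minimum (the brute-force kNN graph below is a toy construction, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))

# Toy proximity graph: connect every point to its k nearest neighbors (brute force).
k = 8
d2 = ((data[:, None] - data[None, :]) ** 2).sum(-1)
neighbors = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip column 0 (the point itself)

def greedy_search(query, start=0):
    cur = start
    while True:
        cand = neighbors[cur]
        best = cand[np.argmin(((data[cand] - query) ** 2).sum(-1))]
        if ((data[best] - query) ** 2).sum() >= ((data[cur] - query) ** 2).sum():
            return cur                        # local minimum: no neighbor is closer
        cur = best

query = rng.normal(size=8)
print(greedy_search(query), np.argmin(((data - query) ** 2).sum(-1)))  # found vs exact NN
```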
10 code implementations • NeurIPS 2018 • Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, Andrey Gulin
This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit.
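A minimal usage sketch on synthetic data, assuming the catboost package is installed; the categorical column is handled automatically by the ordered target statistics the paper describes:

```python
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "city": rng.choice(["msk", "spb", "nsk"], size=1000),  # categorical feature
    "age": rng.integers(18, 70, size=1000),
})
y = (df["city"].eq("msk") & (df["age"] > 40)).astype(int)

model = CatBoostClassifier(iterations=300, learning_rate=0.1, verbose=0)
model.fit(df, y, cat_features=["city"])  # no manual encoding of "city" needed
print(model.predict_proba(df[:3]))
```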