no code implementations • 8 Apr 2024 • Sergey Kastryulin, Artem Konev, Alexander Shishenya, Eugene Lyapustin, Artem Khurshudov, Alexander Tselousov, Nikita Vinokurov, Denis Kuznedelev, Alexander Markovich, Grigoriy Livshits, Alexey Kirillov, Anastasiia Tabisheva, Liubov Chubarova, Marina Kaminskaia, Alexander Ustyuzhanin, Artemii Shvetsov, Daniil Shlenskii, Valerii Startsev, Dmitrii Kornilov, Mikhail Romanov, Artem Babenko, Sergei Ovcharenko, Valentin Khrulkov
In the rapidly progressing field of generative models, the development of efficient and high-fidelity text-to-image diffusion systems represents a significant frontier.
no code implementations • 11 Mar 2024 • Sergey Kastryulin, Denis Prokopenko, Artem Babenko, Dmitry V. Dylov
This paper introduces a new data-driven, non-parametric method for image quality and aesthetics assessment, surpassing existing approaches and requiring no prompt engineering or fine-tuning.
1 code implementation • 11 Jan 2024 • Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh
The emergence of accurate open large language models (LLMs) has led to a race to develop quantization techniques that allow such models to run on end-user devices.
1 code implementation • 17 Dec 2023 • Nikita Starodubcev, Artem Fedorov, Artem Babenko, Dmitry Baranchuk
While several powerful distillation methods have been proposed recently, the overall quality of student samples is typically lower than that of the teacher, which hinders their practical use.
1 code implementation • 26 Jul 2023 • Yury Gorishniy, Ivan Rubachev, Nikolay Kartashev, Daniil Shlenskii, Akim Kotelnikov, Artem Babenko
Deep learning (DL) models for tabular data problems (e.g., classification, regression) are currently receiving increasing attention from researchers.
1 code implementation • 10 Apr 2023 • Nikita Starodubcev, Dmitry Baranchuk, Valentin Khrulkov, Artem Babenko
Finally, we show that our approach can adapt the pretrained model to a user-specified image and text description on the fly in just 4 seconds.
2 code implementations • 22 Feb 2023 • Oleg Platonov, Denis Kuznedelev, Michael Diskin, Artem Babenko, Liudmila Prokhorenkova
Graphs without this property are called heterophilous, and it is typically assumed that specialized methods are required to achieve strong performance on such graphs.
1 code implementation • NeurIPS 2023 • Anton Voronov, Mikhail Khoroshikh, Artem Babenko, Max Ryabinin
Text-to-image generation models represent the next step of evolution in image synthesis, offering a natural way to achieve flexible yet fine-grained control over the result.
3 code implementations • 30 Sep 2022 • Akim Kotelnikov, Dmitry Baranchuk, Ivan Rubachev, Artem Babenko
Denoising diffusion probabilistic models are becoming the leading paradigm of generative modeling for many important data modalities.
no code implementations • NeurIPS 2023 • Oleg Platonov, Denis Kuznedelev, Artem Babenko, Liudmila Prokhorenkova
For this, we formalize desirable properties for a proper homophily measure and verify which measures satisfy which properties.
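For context, one widely used measure is edge homophily, the fraction of edges whose endpoints share a class label. The sketch below computes it on a toy graph; the function name and graph representation are illustrative assumptions, not taken from the paper, which studies a broader family of measures and their properties.

```python
import numpy as np

def edge_homophily(edges, labels):
    """Edge homophily: fraction of edges connecting same-class nodes.
    edges  -- array of shape (num_edges, 2) with node index pairs
    labels -- array of shape (num_nodes,) with integer class labels
    """
    edges = np.asarray(edges)
    labels = np.asarray(labels)
    same_class = labels[edges[:, 0]] == labels[edges[:, 1]]
    return float(same_class.mean())

# Toy usage: a 4-node cycle with two classes.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = [0, 0, 1, 1]
print(edge_homophily(edges, labels))  # 0.5 -- half the edges connect same-class nodes
```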
2 code implementations • 7 Jul 2022 • Ivan Rubachev, Artem Alekberov, Yury Gorishniy, Artem Babenko
Recent deep learning models for tabular data compete with traditional ML models based on gradient-boosted decision trees (GBDT).
no code implementations • 8 May 2022 • Harsha Vardhan Simhadri, George Williams, Martin Aumüller, Matthijs Douze, Artem Babenko, Dmitry Baranchuk, Qi Chen, Lucas Hosseini, Ravishankar Krishnaswamy, Gopal Srinivasa, Suhas Jayaram Subramanya, Jingdong Wang
The outcome of the competition was ranked leaderboards of algorithms in each track based on recall at a query throughput threshold.
4 code implementations • 10 Mar 2022 • Yury Gorishniy, Ivan Rubachev, Artem Babenko
We start by describing two conceptually different approaches to building embedding modules: the first one is based on a piecewise linear encoding of scalar values, and the second one utilizes periodic activations.
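As a rough illustration of the second approach, the sketch below embeds a scalar feature through learnable sinusoidal frequencies. The embedding size, initialization scale, and module interface are assumptions for the sketch, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn

class PeriodicEmbedding(nn.Module):
    """Minimal sketch of a periodic embedding for a single scalar feature:
    x -> [sin(2*pi*c_1*x), ..., sin(2*pi*c_k*x), cos(2*pi*c_1*x), ...],
    with learnable frequencies c_i."""
    def __init__(self, k: int = 8, sigma: float = 1.0):
        super().__init__()
        self.freqs = nn.Parameter(torch.randn(k) * sigma)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch,) scalar feature -> (batch, 2k) embedding
        angles = 2 * math.pi * self.freqs * x.unsqueeze(-1)
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

emb = PeriodicEmbedding(k=8)
print(emb(torch.randn(4)).shape)  # torch.Size([4, 16])
```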
1 code implementation • ICLR 2022 • Timofey Grigoryev, Andrey Voynov, Artem Babenko
The literature has proposed several methods to finetune pretrained GANs on new datasets, which typically results in higher performance compared to training from scratch, especially in the limited-data regime.
1 code implementation • ICLR 2022 • Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, Artem Babenko
Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance.
1 code implementation • ICCV 2021 • Valentin Khrulkov, Leyla Mirvakhabova, Ivan Oseledets, Artem Babenko
Recent advances in high-fidelity semantic image editing heavily rely on the presumably disentangled latent spaces of the state-of-the-art generative models, such as StyleGAN.
1 code implementation • ICML Workshop INNF 2021 • Dmitry Baranchuk, Vladimir Aliev, Artem Babenko
Normalizing flows are a powerful class of generative models demonstrating strong performance in several speech and vision problems.
11 code implementations • NeurIPS 2021 • Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko
The existing literature on deep learning for tabular data proposes a wide range of novel architectures and reports competitive results on various datasets.
1 code implementation • CVPR 2021 • Valentin Khrulkov, Artem Babenko
Given the dataset and the labels, we trained a CNN model that takes a pair of images and, for each image, predicts the probability that it is preferable to its counterpart.
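A minimal sketch of such a pairwise predictor is given below: a shared scoring network produces one score per image, and a softmax over the two scores yields preference probabilities. The backbone and head here are placeholders for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class PairwisePreference(nn.Module):
    """Score each image with a shared CNN; softmax over the two scores
    gives the probability that each image is preferred over the other."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, img_a, img_b):
        scores = torch.cat([self.backbone(img_a), self.backbone(img_b)], dim=1)
        return scores.softmax(dim=1)  # (batch, 2): P(a preferred), P(b preferred)

model = PairwisePreference()
a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(model(a, b).shape)  # torch.Size([2, 2])
```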
1 code implementation • ICML Workshop AutoML 2021 • Dmitry Baranchuk, Artem Babenko
In this study, we propose a task-agnostic approach that discovers initializers for specific network architectures and optimizers by learning the initial weight distributions directly via meta-learning.
no code implementations • 11 Feb 2021 • Valentin Khrulkov, Leyla Mirvakhabova, Ivan Oseledets, Artem Babenko
Constructing disentangled representations is known to be a difficult task, especially in the unsupervised scenario.
no code implementations • 8 Feb 2021 • Valentin Khrulkov, Artem Babenko, Ivan Oseledets
Recent work demonstrated the benefits of studying the continuous-time dynamics governing GAN training.
no code implementations • ICLR 2021 • Stanislav Morozov, Andrey Voynov, Artem Babenko
The embeddings from CNNs pretrained on ImageNet classification are the de facto standard image representations for assessing GANs via the FID, Precision, and Recall measures.
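Since FID is the recurring assessment measure here, below is a minimal sketch of its standard computation from pre-extracted embeddings: it fits a Gaussian to each set of features and compares them with the Frechet distance. It assumes the features have already been produced by the chosen representation network.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Frechet Inception Distance between two sets of embeddings
    (rows are per-image feature vectors, e.g. from a pretrained CNN):
    FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2})."""
    mu_r, mu_g = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(fake_feats, rowvar=False)
    cov_mean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(cov_mean):  # numerical noise can introduce tiny imaginary parts
        cov_mean = cov_mean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2 * cov_mean))
```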
no code implementations • 1 Jan 2021 • Max Ryabinin, Artem Babenko, Elena Voita
In this work, we make the first step towards unsupervised discovery of interpretable directions in language latent spaces.
2 code implementations • CVPR 2021 • Anton Cherepkov, Andrey Voynov, Artem Babenko
In contrast to existing works, which mostly operate by latent codes, we discover interpretable directions in the space of the generator parameters.
1 code implementation • 8 Jun 2020 • Andrey Voynov, Stanislav Morozov, Artem Babenko
The recent rise of unsupervised and self-supervised learning has dramatically reduced the dependency on labeled data, providing effective image representations for transfer to downstream vision tasks.
5 code implementations • ICLR 2020 • Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitriy Pyrkin, Sergei Popov, Artem Babenko
We empirically demonstrate the effectiveness of this method on large-scale image classification and machine translation tasks.
2 code implementations • ICML 2020 • Andrey Voynov, Artem Babenko
The latent spaces of GAN models often have semantically meaningful directions.
1 code implementation • 23 Dec 2019 • Andrey Voynov, Artem Babenko
In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) -- an alternative design of GANs that can serve as a tool for generative model analysis.
1 code implementation • 27 Nov 2019 • Dmitry Baranchuk, Artem Babenko
New algorithms for similarity graph construction are continuously being proposed and analyzed by both theoreticians and practitioners.
1 code implementation • NeurIPS 2019 • Denis Mazur, Vage Egiazarian, Stanislav Morozov, Artem Babenko
Our main contribution is PRODIGE: a method that learns a weighted graph representation of data end-to-end by gradient descent.
1 code implementation • 25 Sep 2019 • Andrey Voynov, Artem Babenko
In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) -- an alternative scheme of GANs that can serve as a tool for generative model analysis.
5 code implementations • ICLR 2020 • Sergei Popov, Stanislav Morozov, Artem Babenko
In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data.
1 code implementation • 19 Aug 2019 • Stanislav Morozov, Artem Babenko
In many machine learning applications, the most relevant items for a particular query must be extracted efficiently, while the relevance function is given by a highly non-linear model, e.g., a DNN or GBDT.
1 code implementation • ICCV 2019 • Stanislav Morozov, Artem Babenko
We tackle the problem of unsupervised visual descriptors compression, which is a key ingredient of large-scale image retrieval systems.
1 code implementation • 27 May 2019 • Dmitry Baranchuk, Dmitry Persiyanov, Anton Sinitsin, Artem Babenko
Recently, similarity graphs have become the leading paradigm for efficient nearest neighbor search, outperforming traditional tree-based and LSH-based methods.
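The routine shared by graph-based methods is a greedy walk toward the query over the graph's edges. The sketch below shows a bare-bones version under simple assumptions (adjacency-list dict, Euclidean distance); practical systems such as HNSW add beam search, hierarchies, and pruning heuristics on top of it.

```python
import numpy as np

def greedy_search(graph, vectors, query, entry_point=0, max_steps=100):
    """Greedy walk over a similarity graph: repeatedly move to the neighbor
    closest to the query until no neighbor improves the distance.
    graph   -- dict: node id -> list of neighbor ids
    vectors -- array (num_points, dim) of database vectors
    """
    current = entry_point
    best_dist = np.linalg.norm(vectors[current] - query)
    for _ in range(max_steps):
        improved = False
        for nb in graph[current]:
            d = np.linalg.norm(vectors[nb] - query)
            if d < best_dist:
                best_dist, current, improved = d, nb, True
        if not improved:  # local minimum reached
            break
    return current, best_dist
```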
1 code implementation • NeurIPS 2018 • Stanislav Morozov, Artem Babenko
In this paper we address the problem of Maximum Inner Product Search (MIPS) that is currently the computational bottleneck in a large number of machine learning applications.
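For background, a classic baseline reduces MIPS to Euclidean nearest-neighbor search by augmenting every vector with one extra coordinate, after which standard NNS tooling applies. The sketch below shows that reduction only as context; it is not necessarily the approach taken in this paper.

```python
import numpy as np

def augment_for_mips(data: np.ndarray, queries: np.ndarray):
    """Baseline MIPS-to-NNS reduction: append sqrt(M^2 - ||x||^2) to each data
    vector x (with M = max ||x||) and 0 to each query, so that the Euclidean
    nearest neighbor of a query is its maximum-inner-product item."""
    norms = np.linalg.norm(data, axis=1)
    m = norms.max()
    extra = np.sqrt(np.maximum(m ** 2 - norms ** 2, 0.0))  # clamp tiny negatives
    data_aug = np.hstack([data, extra[:, None]])
    query_aug = np.hstack([queries, np.zeros((len(queries), 1))])
    return data_aug, query_aug
```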
no code implementations • 13 Jun 2018 • Vadim Lebedev, Artem Babenko, Victor Lempitsky
In this work we introduce impostor networks, an architecture that performs fine-grained recognition with high accuracy using a lightweight convolutional network, making it particularly suitable for low-power and non-GPU-enabled platforms.
12 code implementations • ECCV 2018 • Dmitry Baranchuk, Artem Babenko, Yury Malkov
This work addresses the problem of billion-scale nearest neighbor search.
no code implementations • ICCV 2017 • Artem Babenko, Victor Lempitsky
To compress large datasets of high-dimensional descriptors, modern quantization schemes learn multiple codebooks and then represent individual descriptors as combinations of codewords.
no code implementations • CVPR 2017 • Artem Babenko, Victor Lempitsky
In this work, we introduce a new kind of spatial partition trees for efficient nearest-neighbor search.
no code implementations • 5 Jun 2016 • Artem Babenko, Relja Arandjelović, Victor Lempitsky
The proposed approach proceeds by finding a linear transformation of the data that effectively reduces the minimization of the pairwise distortions to the minimization of individual reconstruction errors.
no code implementations • CVPR 2016 • Artem Babenko, Victor Lempitsky
In this paper, we introduce a new dataset of one billion descriptors based on DNNs and reveal the relative inefficiency of IMI-based indexing for such descriptors compared to SIFT data.
no code implementations • ICCV 2015 • Artem Babenko, Victor Lempitsky
Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems.
Ranked #15 on Image Retrieval on RParis (Hard)
2 code implementations • 26 Oct 2015 • Artem Babenko, Victor Lempitsky
In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval.
Ranked #15 on Image Retrieval on RParis (Medium)
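One simple aggregation of local deep features along these lines is sum pooling over spatial positions followed by normalization and optional PCA compression. The sketch below illustrates that pipeline; the exact choices (centering priors, output dimensionality) are assumptions and vary in practice.

```python
import numpy as np

def sum_pooled_descriptor(feature_map, pca_mean=None, pca_components=None):
    """Aggregate a convolutional feature map into a compact global descriptor.
    feature_map         -- array (H, W, C) of local deep features
    pca_mean/components -- optional precomputed PCA parameters for compression
    """
    desc = feature_map.sum(axis=(0, 1))              # (C,) sum pooling
    desc = desc / (np.linalg.norm(desc) + 1e-12)     # L2 normalization
    if pca_components is not None:
        desc = (desc - pca_mean) @ pca_components.T  # PCA compression/whitening
        desc = desc / (np.linalg.norm(desc) + 1e-12)
    return desc
```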
no code implementations • CVPR 2015 • Artem Babenko, Victor Lempitsky
We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming.
no code implementations • CVPR 2014 • Artem Babenko, Victor Lempitsky
We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks.
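Under this scheme, reconstruction is simply the sum of the chosen codewords; the hard part is choosing them. The sketch below shows the reconstruction and a naive greedy encoder for intuition only; the paper's encoder performs a more elaborate joint search over codewords, so the greedy variant is an illustrative simplification.

```python
import numpy as np

def reconstruct(codebooks, codes):
    """Reconstruct a vector as the sum of M codewords, one per codebook.
    codebooks -- array (M, K, dim): M codebooks with K codewords each
    codes     -- array (M,) of chosen codeword indices
    """
    return sum(codebooks[m, codes[m]] for m in range(len(codebooks)))

def greedy_encode(codebooks, x):
    """Naive greedy encoder: pick codewords one codebook at a time to shrink
    the residual (shown for intuition, not the paper's actual encoding)."""
    residual = x.copy()
    codes = []
    for cb in codebooks:                                   # cb: (K, dim)
        idx = int(np.argmin(((residual - cb) ** 2).sum(axis=1)))
        codes.append(idx)
        residual = residual - cb[idx]
    return np.array(codes)
```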
1 code implementation • 7 Apr 2014 • Artem Babenko, Anton Slesarev, Alexandr Chigorin, Victor Lempitsky
In experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g., ImageNet).
no code implementations • 7 Apr 2014 • Artem Babenko, Victor Lempitsky
Here we introduce and evaluate two approximate nearest neighbor search systems that both exploit the synergy of product quantization processes in a more efficient way.