Search Results for author: Artem Babenko

Found 49 papers, 32 papers with code

QUASAR: QUality and Aesthetics Scoring with Advanced Representations

no code implementations 11 Mar 2024 Sergey Kastryulin, Denis Prokopenko, Artem Babenko, Dmitry V. Dylov

This paper introduces a new data-driven, non-parametric method for image quality and aesthetics assessment, surpassing existing approaches and requiring no prompt engineering or fine-tuning.

Prompt Engineering

Extreme Compression of Large Language Models via Additive Quantization

1 code implementation 11 Jan 2024 Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh

The emergence of accurate open large language models (LLMs) has led to a race towards quantization techniques for such models enabling execution on end-user devices.

Quantization

Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models

1 code implementation 17 Dec 2023 Nikita Starodubcev, Artem Fedorov, Artem Babenko, Dmitry Baranchuk

While several powerful distillation methods have recently been proposed, the overall quality of student samples is typically lower than that of the teacher, which hinders their practical usage.

Image Generation · Knowledge Distillation +1

TabR: Tabular Deep Learning Meets Nearest Neighbors in 2023

1 code implementation 26 Jul 2023 Yury Gorishniy, Ivan Rubachev, Nikolay Kartashev, Daniil Shlenskii, Akim Kotelnikov, Artem Babenko

Deep learning (DL) models for tabular data problems (e.g., classification, regression) are receiving increasingly more attention from researchers.

Retrieval

Towards Real-time Text-driven Image Manipulation with Unconditional Diffusion Models

1 code implementation 10 Apr 2023 Nikita Starodubcev, Dmitry Baranchuk, Valentin Khrulkov, Artem Babenko

Finally, we show that our approach can adapt the pretrained model to a user-specified image and text description on the fly, in just 4 seconds.

Image Manipulation

A critical look at the evaluation of GNNs under heterophily: Are we really making progress?

2 code implementations 22 Feb 2023 Oleg Platonov, Denis Kuznedelev, Michael Diskin, Artem Babenko, Liudmila Prokhorenkova

Graphs without this property are called heterophilous, and it is typically assumed that specialized methods are required to achieve strong performance on such graphs.

Graph Representation Learning · Node Classification

Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics

1 code implementation NeurIPS 2023 Anton Voronov, Mikhail Khoroshikh, Artem Babenko, Max Ryabinin

Text-to-image generation models represent the next step of evolution in image synthesis, offering a natural way to achieve flexible yet fine-grained control over the result.

Text-to-Image Generation

TabDDPM: Modelling Tabular Data with Diffusion Models

3 code implementations 30 Sep 2022 Akim Kotelnikov, Dmitry Baranchuk, Ivan Rubachev, Artem Babenko

Denoising diffusion probabilistic models are currently becoming the leading paradigm of generative modeling for many important data modalities.

Denoising

Revisiting Pretraining Objectives for Tabular Deep Learning

2 code implementations 7 Jul 2022 Ivan Rubachev, Artem Alekberov, Yury Gorishniy, Artem Babenko

Recent deep learning models for tabular data compete with traditional ML models based on gradient-boosted decision trees (GBDT).

On Embeddings for Numerical Features in Tabular Deep Learning

4 code implementations 10 Mar 2022 Yury Gorishniy, Ivan Rubachev, Artem Babenko

We start by describing two conceptually different approaches to building embedding modules: the first one is based on a piecewise linear encoding of scalar values, and the second one utilizes periodic activations.
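
To make the two embedding schemes above concrete, here is a minimal NumPy sketch of a periodic embedding and a piecewise linear encoding for scalar features. It is an illustration under simplifying assumptions (fixed random frequencies, uniform bins), not the paper's implementation, where the frequencies are trainable and the bins typically come from quantiles or target-aware splits.

```python
import numpy as np

def periodic_embedding(x, frequencies):
    """Map each scalar feature to [sin(2*pi*c*x), cos(2*pi*c*x)] per frequency c.

    x: (n_objects, n_features) numerical features.
    frequencies: (n_features, k) array; fixed here, trainable in the paper.
    Returns: (n_objects, n_features, 2 * k) embeddings.
    """
    angles = 2 * np.pi * frequencies[None, :, :] * x[:, :, None]   # (n, f, k)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def piecewise_linear_encoding(x, bin_edges):
    """Encode one scalar column against precomputed bin edges.

    Each output component is 0 before its bin, 1 after it, and linear inside it.
    x: (n_objects,) values; bin_edges: sorted (n_bins + 1,) array.
    """
    left, right = bin_edges[:-1], bin_edges[1:]
    t = (x[:, None] - left[None, :]) / (right - left)[None, :]
    return np.clip(t, 0.0, 1.0)                                    # (n, n_bins)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
emb = periodic_embedding(x, rng.normal(size=(3, 8)))
ple = piecewise_linear_encoding(x[:, 0], np.linspace(-3, 3, 9))
print(emb.shape, ple.shape)   # (4, 3, 16) (4, 8)
```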

When, Why, and Which Pretrained GANs Are Useful?

1 code implementation ICLR 2022 Timofey Grigoryev, Andrey Voynov, Artem Babenko

The literature has proposed several methods to finetune pretrained GANs on new datasets, which typically results in higher performance compared to training from scratch, especially in the limited-data regime.

Label-Efficient Semantic Segmentation with Diffusion Models

1 code implementation ICLR 2022 Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, Artem Babenko

Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance.

Denoising · Segmentation +2

Latent Transformations via NeuralODEs for GAN-based Image Editing

1 code implementation ICCV 2021 Valentin Khrulkov, Leyla Mirvakhabova, Ivan Oseledets, Artem Babenko

Recent advances in high-fidelity semantic image editing heavily rely on the presumably disentangled latent spaces of the state-of-the-art generative models, such as StyleGAN.

Attribute

Distilling the Knowledge from Conditional Normalizing Flows

1 code implementation ICML Workshop INNF 2021 Dmitry Baranchuk, Vladimir Aliev, Artem Babenko

Normalizing flows are a powerful class of generative models demonstrating strong performance in several speech and vision problems.

Image Super-Resolution · Speech Synthesis

Revisiting Deep Learning Models for Tabular Data

11 code implementations NeurIPS 2021 Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko

The existing literature on deep learning for tabular data proposes a wide range of novel architectures and reports competitive results on various datasets.

Neural Side-by-Side: Predicting Human Preferences for No-Reference Super-Resolution Evaluation

1 code implementation CVPR 2021 Valentin Khrulkov, Artem Babenko

Given the dataset and the labels, we trained a CNN model that takes a pair of images and, for each image, predicts the probability of it being preferred over its counterpart.

SSIM · Super-Resolution
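
A minimal PyTorch sketch of the pairwise formulation described above: a shared encoder scores each image in a pair and a softmax over the two scores yields the preference probability. The tiny backbone, shapes, and random inputs are illustrative assumptions; the paper's actual architecture and training data differ.

```python
import torch
import torch.nn as nn

class PairwisePreferenceModel(nn.Module):
    """Toy two-branch model: one shared encoder scores both images."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # stand-in for a real CNN backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, img_a, img_b):
        scores = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        return torch.softmax(scores, dim=1)     # (batch, 2) preference probabilities

model = PairwisePreferenceModel()
a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(model(a, b).shape)  # torch.Size([2, 2])
```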

Discovering Weight Initializers with Meta Learning

1 code implementation ICML Workshop AutoML 2021 Dmitry Baranchuk, Artem Babenko

In this study, we propose a task-agnostic approach that discovers initializers for specific network architectures and optimizers by learning the initial weight distributions directly through the use of Meta-Learning.

Meta-Learning

Disentangled Representations from Non-Disentangled Models

no code implementations 11 Feb 2021 Valentin Khrulkov, Leyla Mirvakhabova, Ivan Oseledets, Artem Babenko

Constructing disentangled representations is known to be a difficult task, especially in the unsupervised scenario.

Disentanglement · Fairness

Functional Space Analysis of Local GAN Convergence

no code implementations 8 Feb 2021 Valentin Khrulkov, Artem Babenko, Ivan Oseledets

Recent work demonstrated the benefits of studying continuous-time dynamics governing the GAN training.

Data Augmentation

On Self-Supervised Image Representations for GAN Evaluation

no code implementations ICLR 2021 Stanislav Morozov, Andrey Voynov, Artem Babenko

The embeddings from CNNs pretrained on ImageNet classification are the de facto standard image representations for assessing GANs via FID, Precision and Recall measures.

Contrastive Learning · General Classification
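
For reference, a small NumPy/SciPy sketch of the FID computation that the paper revisits on top of different embeddings. The formula follows Heusel et al.; the random arrays below merely stand in for features from whichever extractor is chosen.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two sets of image embeddings
    (rows = images, columns = embedding dimensions)."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(512, 64))         # placeholder embeddings of real images
fake = rng.normal(loc=0.1, size=(512, 64))
print(round(fid(real, fake), 3))
```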

Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs

no code implementations 1 Jan 2021 Max Ryabinin, Artem Babenko, Elena Voita

In this work, we make the first step towards unsupervised discovery of interpretable directions in language latent spaces.

Sentence · Text Generation

Navigating the GAN Parameter Space for Semantic Image Editing

2 code implementations CVPR 2021 Anton Cherepkov, Andrey Voynov, Artem Babenko

In contrast to existing works, which mostly operate by latent codes, we discover interpretable directions in the space of the generator parameters.

Image Restoration · Image-to-Image Translation +1

Object Segmentation Without Labels with Large-Scale Generative Models

1 code implementation 8 Jun 2020 Andrey Voynov, Stanislav Morozov, Artem Babenko

The recent rise of unsupervised and self-supervised learning has dramatically reduced the dependency on labeled data, providing effective image representations for transfer to downstream vision tasks.

Image Classification · Object +5

Editable Neural Networks

5 code implementations ICLR 2020 Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitriy Pyrkin, Sergei Popov, Artem Babenko

We empirically demonstrate the effectiveness of this method on large-scale image classification and machine translation tasks.

Face Identification · General Classification +4

RPGAN: GANs Interpretability via Random Routing

1 code implementation 23 Dec 2019 Andrey Voynov, Artem Babenko

In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) -- an alternative design of GANs that can serve as a tool for generative model analysis.

Generative Adversarial Network · Image Generation +1

Towards Similarity Graphs Constructed by Deep Reinforcement Learning

1 code implementation 27 Nov 2019 Dmitry Baranchuk, Artem Babenko

New algorithms for similarity graph construction are continuously being proposed and analyzed by both theoreticians and practitioners.

graph construction · reinforcement-learning +1

RPGAN: random paths as a latent space for GAN interpretability

1 code implementation 25 Sep 2019 Andrey Voynov, Artem Babenko

In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) --- an alternative scheme of GANs that can serve as a tool for generative model analysis.

Generative Adversarial Network · Image Generation +1

Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data

5 code implementations ICLR 2020 Sergei Popov, Stanislav Morozov, Artem Babenko

In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture, designed to work with any tabular data.

BIG-bench Machine Learning · Representation Learning
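
As a rough sketch of the oblivious-tree building block behind NODE, the toy module below routes every object through the same soft feature choice and threshold at each depth level and mixes the 2^depth leaf responses by the product of the routing probabilities. It uses softmax/sigmoid for brevity, whereas NODE relies on entmax for sparse, differentiable choices; all shapes here are illustrative.

```python
import torch
import torch.nn as nn

class SoftObliviousTree(nn.Module):
    """Simplified differentiable oblivious tree in the spirit of NODE."""

    def __init__(self, in_features, depth=3, out_features=1):
        super().__init__()
        self.depth = depth
        self.feature_logits = nn.Parameter(torch.randn(depth, in_features))
        self.thresholds = nn.Parameter(torch.zeros(depth))
        self.leaf_values = nn.Parameter(torch.randn(2 ** depth, out_features))

    def forward(self, x):                                  # x: (batch, in_features)
        choice = torch.softmax(self.feature_logits, dim=-1)
        selected = x @ choice.t()                          # soft feature per level
        go_right = torch.sigmoid(selected - self.thresholds)
        routing = torch.ones(x.shape[0], 1, device=x.device)
        for level in range(self.depth):                    # build 2**depth leaf probs
            p = go_right[:, level : level + 1]
            routing = torch.cat([routing * (1 - p), routing * p], dim=1)
        return routing @ self.leaf_values                  # (batch, out_features)

tree = SoftObliviousTree(in_features=10)
print(tree(torch.randn(5, 10)).shape)   # torch.Size([5, 1])
```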

Relevance Proximity Graphs for Fast Relevance Retrieval

1 code implementation 19 Aug 2019 Stanislav Morozov, Artem Babenko

In many machine learning applications, the most relevant items for a particular query should be efficiently extracted, while the relevance function is based on a highly non-linear model, e.g., DNNs or GBDTs.

Retrieval

Unsupervised Neural Quantization for Compressed-Domain Similarity Search

1 code implementation ICCV 2019 Stanislav Morozov, Artem Babenko

We tackle the problem of unsupervised visual descriptors compression, which is a key ingredient of large-scale image retrieval systems.

Image Retrieval · Quantization +1

Learning to Route in Similarity Graphs

1 code implementation 27 May 2019 Dmitry Baranchuk, Dmitry Persiyanov, Anton Sinitsin, Artem Babenko

Recently, similarity graphs have become the leading paradigm for efficient nearest neighbor search, outperforming traditional tree-based and LSH-based methods.
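
For context, the sketch below implements the generic best-first routing procedure that similarity-graph methods such as NSW/HNSW use at search time; the paper proposes to learn this routing rather than rely on the hand-crafted rule. The graph, vectors, and the `ef` parameter are illustrative stand-ins.

```python
import heapq
import numpy as np

def greedy_graph_search(graph, vectors, query, entry_point, ef=10):
    """Best-first search over a similarity graph given as adjacency lists.

    Keeps a min-heap of candidates and a bounded pool of the `ef` closest
    visited vertices; stops when the nearest candidate cannot improve the pool.
    """
    dist = lambda v: float(np.linalg.norm(vectors[v] - query))
    visited = {entry_point}
    candidates = [(dist(entry_point), entry_point)]            # min-heap by distance
    best = [(-dist(entry_point), entry_point)]                 # max-heap, size <= ef
    while candidates:
        d, v = heapq.heappop(candidates)
        if len(best) >= ef and d > -best[0][0]:
            break                                              # cannot improve the pool
        for u in graph[v]:
            if u in visited:
                continue
            visited.add(u)
            du = dist(u)
            if len(best) < ef or du < -best[0][0]:
                heapq.heappush(candidates, (du, u))
                heapq.heappush(best, (-du, u))
                if len(best) > ef:
                    heapq.heappop(best)
    return sorted((-d, u) for d, u in best)                    # (distance, vertex) pairs

rng = np.random.default_rng(0)
vectors = rng.normal(size=(100, 16))
graph = {i: [int(j) for j in rng.choice(100, size=8, replace=False)] for i in range(100)}
print(greedy_graph_search(graph, vectors, rng.normal(size=16), entry_point=0)[:3])
```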

Non-metric Similarity Graphs for Maximum Inner Product Search

1 code implementation NeurIPS 2018 Stanislav Morozov, Artem Babenko

In this paper we address the problem of Maximum Inner Product Search (MIPS) that is currently the computational bottleneck in a large number of machine learning applications.

Impostor Networks for Fast Fine-Grained Recognition

no code implementations 13 Jun 2018 Vadim Lebedev, Artem Babenko, Victor Lempitsky

In this work we introduce impostor networks, an architecture that allows performing fine-grained recognition with high accuracy using a lightweight convolutional network, making it particularly suitable for fine-grained applications on low-power and non-GPU-enabled platforms.

AnnArbor: Approximate Nearest Neighbors Using Arborescence Coding

no code implementations ICCV 2017 Artem Babenko, Victor Lempitsky

To compress large datasets of high-dimensional descriptors, modern quantization schemes learn multiple codebooks and then represent individual descriptors as combinations of codewords.

Quantization

Product Split Trees

no code implementations CVPR 2017 Artem Babenko, Victor Lempitsky

In this work, we introduce a new kind of spatial partition trees for efficient nearest-neighbor search.

Clustering · Quantization

Pairwise Quantization

no code implementations 5 Jun 2016 Artem Babenko, Relja Arandjelović, Victor Lempitsky

The proposed approach proceeds by finding a linear transformation of the data that effectively reduces the minimization of the pairwise distortions to the minimization of individual reconstruction errors.

Quantization

Efficient Indexing of Billion-Scale Datasets of Deep Descriptors

no code implementations CVPR 2016 Artem Babenko, Victor Lempitsky

In this paper, we introduce a new dataset of one billion descriptors based on DNNs and reveal the relative inefficiency of IMI-based indexing for such descriptors compared to SIFT data.

Retrieval

Aggregating Local Deep Features for Image Retrieval

no code implementations ICCV 2015 Artem Babenko, Victor Lempitsky

Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems.

Image Classification · Image Retrieval +1

Aggregating Deep Convolutional Features for Image Retrieval

2 code implementations 26 Oct 2015 Artem Babenko, Victor Lempitsky

In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval.

Image Classification · Image Retrieval +1
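
A minimal sketch of the simplest aggregation considered in the paper: sum-pooling the last convolutional feature map into a single L2-normalized global descriptor. The full SPoC descriptor additionally uses a centering prior and PCA whitening, and the array below is a random stand-in for real activations.

```python
import numpy as np

def sum_pooled_descriptor(conv_maps, eps=1e-8):
    """Sum-pool a (channels, height, width) conv feature map over spatial
    positions and L2-normalize the result into a compact global descriptor."""
    descriptor = conv_maps.reshape(conv_maps.shape[0], -1).sum(axis=1)
    return descriptor / (np.linalg.norm(descriptor) + eps)

rng = np.random.default_rng(0)
maps = rng.random((256, 14, 14))        # placeholder for deep conv activations
d = sum_pooled_descriptor(maps)
print(d.shape, round(float(np.linalg.norm(d)), 3))   # (256,) 1.0
```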

Tree Quantization for Large-Scale Similarity Search and Classification

no code implementations CVPR 2015 Artem Babenko, Victor Lempitsky

We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming.

Classification · General Classification +3

Additive Quantization for Extreme Vector Compression

no code implementations CVPR 2014 Artem Babenko, Victor Lempitsky

We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks.

General Classification · Image Classification +1
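
To illustrate the additive structure described above (each vector approximated by a sum of M codewords, one from each of M codebooks), here is a small NumPy sketch with a greedy encoder. The paper itself optimizes the codes with beam search and learns the codebooks; the sizes and random data below are arbitrary.

```python
import numpy as np

def aq_decode(codes, codebooks):
    """Reconstruct vectors as sums of one codeword per codebook.

    codes:     (n_vectors, M) integer codeword indices.
    codebooks: (M, K, dim) array of M codebooks with K codewords each.
    """
    M = codebooks.shape[0]
    return sum(codebooks[m, codes[:, m]] for m in range(M))

def aq_encode_greedy(x, codebooks):
    """Greedy encoding: codebook by codebook, pick the codeword that best
    reduces the residual (a simplification of the paper's beam search)."""
    residual = x.copy()
    codes = np.zeros((x.shape[0], codebooks.shape[0]), dtype=np.int64)
    for m, book in enumerate(codebooks):                       # book: (K, dim)
        dists = ((residual[:, None, :] - book[None, :, :]) ** 2).sum(-1)
        codes[:, m] = dists.argmin(axis=1)
        residual = residual - book[codes[:, m]]
    return codes

rng = np.random.default_rng(0)
codebooks = rng.normal(size=(4, 16, 32))        # M=4 codebooks, K=16 codewords, dim=32
x = rng.normal(size=(10, 32))
codes = aq_encode_greedy(x, codebooks)
err = np.linalg.norm(x - aq_decode(codes, codebooks)) / np.linalg.norm(x)
print(codes.shape, round(float(err), 3))        # (10, 4) and relative reconstruction error
```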

Neural Codes for Image Retrieval

1 code implementation 7 Apr 2014 Artem Babenko, Anton Slesarev, Alexandr Chigorin, Victor Lempitsky

In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g., ImageNet).

Dimensionality Reduction · Image Retrieval +1

Improving Bilayer Product Quantization for Billion-Scale Approximate Nearest Neighbors in High Dimensions

no code implementations 7 Apr 2014 Artem Babenko, Victor Lempitsky

Here we introduce and evaluate two approximate nearest neighbor search systems that both exploit the synergy of product quantization processes in a more efficient way.

Data Compression · Quantization
