Search Results for author: Denis Dimitrov

Found 20 papers, 15 papers with code

Pixel-Level BPE for Auto-Regressive Image Generation

no code implementations MMMPIE (COLING) 2022 Anton Razzhigaev, Anton Voronov, Andrey Kaznacheev, Andrey Kuznetsov, Denis Dimitrov, Alexander Panchenko

Pixel-level autoregression with Transformer models (Image GPT, or iGPT) is a recent approach to image generation that has not received much attention or elaboration: the quadratic complexity of attention imposes huge memory requirements and thus restricts the resolution of the generated images.

Image Generation
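The quadratic cost mentioned in the snippet is easy to make concrete. A back-of-the-envelope sketch (the function name and the fp16-score, single-head assumptions are mine, not from the paper) shows how quickly a single attention map outgrows memory when every pixel is a token:

```python
def attention_map_bytes(height, width, bytes_per_score=2):
    """Memory for one full self-attention score matrix when each pixel
    is a token (fp16 scores, one head -- illustrative assumptions)."""
    seq_len = height * width              # one token per pixel
    return seq_len ** 2 * bytes_per_score

for side in (32, 64, 128):
    mib = attention_map_bytes(side, side) / 2**20
    print(f"{side}x{side} image -> {mib:.0f} MiB per attention map")
```

Doubling the image side length multiplies this by 16 (quadratic in sequence length, quartic in resolution), which is why pixel-level autoregressive models stay at low resolutions.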

Your Transformer is Secretly Linear

1 code implementation19 May 2024 Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Nikolai Gerasimenko, Ivan Oseledets, Denis Dimitrov, Andrey Kuznetsov

This regularization improves performance on benchmarks such as Tiny Stories and SuperGLUE, while also successfully decreasing the linearity of the models.
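As a rough illustration of what "linearity" between layers can mean, the sketch below scores how much of one embedding set a linear map from another can explain. This is a simplified least-squares proxy of my own; the paper itself uses a normalized Procrustes-style similarity:

```python
import numpy as np

def linearity_score(X, Y):
    """Share of variance in Y explained by the best linear map from X --
    a simplified proxy for linearity between two layers' embeddings."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)        # center both (n, d) sets
    W, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)  # best linear map X -> Y
    residual = np.linalg.norm(Yc - Xc @ W) ** 2
    return 1.0 - residual / np.linalg.norm(Yc) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))
print(linearity_score(X, X @ rng.normal(size=(16, 16))))        # exactly linear -> ~1
print(linearity_score(X, np.tanh(X @ rng.normal(size=(16, 16)))))  # nonlinear -> lower
```

A score near 1 for real transformer layer pairs is the phenomenon the title refers to.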

MERA: A Comprehensive LLM Evaluation in Russian

1 code implementation9 Jan 2024 Alena Fenogenova, Artem Chervyakov, Nikita Martynov, Anastasia Kozlova, Maria Tikhonova, Albina Akhmetgareeva, Anton Emelyanov, Denis Shevelev, Pavel Lebedev, Leonid Sinev, Ulyana Isaeva, Katerina Kolomeytseva, Daniil Moskovskiy, Elizaveta Goncharova, Nikita Savushkin, Polina Mikhailova, Denis Dimitrov, Alexander Panchenko, Sergei Markov

To address these issues, we introduce an open Multimodal Evaluation of Russian-language Architectures (MERA), a new instruction benchmark for evaluating foundation models oriented towards the Russian language.

Kandinsky 3.0 Technical Report

1 code implementation6 Dec 2023 Vladimir Arkhipkin, Andrei Filatov, Viacheslav Vasilev, Anastasia Maltseva, Said Azizov, Igor Pavlov, Julia Agafonova, Andrey Kuznetsov, Denis Dimitrov

We focus on the key components that, as identified through a large number of experiments, had the most significant impact on improving the quality of our model compared to the others.

Text-to-Image Generation

FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline

1 code implementation22 Nov 2023 Vladimir Arkhipkin, Zein Shaheen, Viacheslav Vasilev, Elizaveta Dakhova, Andrey Kuznetsov, Denis Dimitrov

The first stage concerns keyframe synthesis to lay out the storyline of a video, while the second is devoted to interpolation-frame generation, which makes the movements of the scene and objects smooth.

SSIM Text-to-Video Generation +1
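The two-stage split described above can be mimicked in toy form: the second stage densifies a sparse keyframe sequence. In the sketch below, plain linear blending stands in for the paper's learned interpolation network, and all names are mine:

```python
import numpy as np

def interpolate_frames(keyframes, factor=3):
    """Insert `factor - 1` frames between consecutive keyframes
    (linear blending as a toy stand-in for a learned interpolator)."""
    out = []
    for a, b in zip(keyframes, keyframes[1:]):
        for t in range(factor):
            out.append((1 - t / factor) * a + (t / factor) * b)
    out.append(keyframes[-1])
    return np.stack(out)

keys = np.stack([np.full((4, 4), v, dtype=float) for v in (0.0, 1.0)])
video = interpolate_frames(keys, factor=4)
print(video.shape)   # 2 keyframes -> (5, 4, 4): 4x temporal density
```

Generating only keyframes with the expensive text-conditioned model and filling the rest with a cheaper interpolator is what makes the pipeline efficient.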

The Shape of Learning: Anisotropy and Intrinsic Dimensions in Transformer-Based Models

no code implementations10 Nov 2023 Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Ivan Oseledets, Denis Dimitrov, Andrey Kuznetsov

In this study, we present an investigation into the anisotropy dynamics and intrinsic dimension of embeddings in transformer architectures, focusing on the dichotomy between encoders and decoders.
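A common way to operationalize anisotropy is the mean pairwise cosine similarity of a set of embeddings. The sketch below (synthetic data; intrinsic-dimension estimation, the paper's other axis, is omitted) shows how a shared direction in the embeddings inflates the score:

```python
import numpy as np

def anisotropy(emb):
    """Mean pairwise cosine similarity -- a standard anisotropy proxy."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit.T
    n = len(emb)
    return (sims.sum() - n) / (n * (n - 1))   # average off-diagonal pairs

rng = np.random.default_rng(0)
isotropic = rng.normal(size=(500, 64))
shifted = isotropic + 5.0   # a common-direction offset raises anisotropy
print(f"{anisotropy(isotropic):.3f} vs {anisotropy(shifted):.3f}")
```

An isotropic cloud scores near 0; embeddings dominated by one direction score near 1, which is the kind of dynamic the paper tracks across layers.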

RusTitW: Russian Language Text Dataset for Visual Text in-the-Wild Recognition

1 code implementation29 Mar 2023 Igor Markov, Sergey Nesteruk, Andrey Kuznetsov, Denis Dimitrov

In this paper, we present a large-scale human-labeled dataset for Russian text recognition in-the-wild.

Eco2AI: carbon emissions tracking of machine learning models as the first step towards sustainable AI

1 code implementation31 Jul 2022 Semen Budennyy, Vladimir Lazarev, Nikita Zakharenko, Alexey Korovin, Olga Plosskaya, Denis Dimitrov, Vladimir Arkhipkin, Ivan Oseledets, Ivan Barsola, Ilya Egorov, Aleksandra Kosterina, Leonid Zhukov

The size and complexity of deep neural networks continue to grow exponentially, significantly increasing the energy consumed in training and inference by these models.
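The accounting behind such tracking reduces to energy times grid carbon intensity. The sketch below is a back-of-the-envelope version only: the PUE and carbon-intensity values are illustrative placeholders of mine, whereas eco2AI measures actual CPU/GPU/RAM draw and applies regional coefficients:

```python
def co2_kg(gpu_power_w, hours, pue=1.59, intensity_kg_per_kwh=0.436):
    """Rough training-emission estimate: device energy, scaled by data-center
    overhead (PUE), times grid carbon intensity (placeholder values)."""
    energy_kwh = gpu_power_w / 1000 * hours * pue
    return energy_kwh * intensity_kg_per_kwh

print(f"{co2_kg(300, 24 * 7):.1f} kg CO2 for a week on one 300 W GPU")
```

Even this crude estimate makes the point of the paper: emissions scale linearly with training time and hardware count, so they are worth measuring.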

Survey on Large Scale Neural Network Training

no code implementations21 Feb 2022 Julia Gusak, Daria Cherniuk, Alena Shilova, Alexander Katrutsa, Daniel Bershatsky, Xunyi Zhao, Lionel Eyraud-Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan Oseledets, Olivier Beaumont

Modern Deep Neural Networks (DNNs) require significant memory to store weights, activations, and other intermediate tensors during training.

Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction

2 code implementations1 Feb 2022 Georgii Novikov, Daniel Bershatsky, Julia Gusak, Alex Shonenkov, Denis Dimitrov, Ivan Oseledets

Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and each such operation induces additional memory costs which -- as we show -- can be significantly reduced by quantization of the gradients.

Neural Network Compression Quantization
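The idea can be sketched for a sigmoid nonlinearity: at forward time, store only a few-bit code of the pointwise derivative instead of the full-precision input, then dequantize it in the backward pass. Uniform quantization and the 3-bit default below are my simplifications; the paper fits optimal quantization levels per activation function:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backward_exact(x, grad_out):
    s = sigmoid(x)
    return grad_out * s * (1 - s)

def backward_fewbit(x, grad_out, bits=3):
    """Store `bits` per element (integer codes) instead of a 32-bit input,
    trading a small gradient error for a large memory saving."""
    s = sigmoid(x)
    deriv = s * (1 - s)                              # lies in (0, 0.25]
    scale = 0.25 / (2 ** bits - 1)
    codes = np.rint(deriv / scale).astype(np.uint8)  # what gets saved
    return grad_out * (codes * scale)                # dequantize on backward

rng = np.random.default_rng(0)
x, g = rng.normal(size=1000), rng.normal(size=1000)
err = np.abs(backward_fewbit(x, g) - backward_exact(x, g)).max()
print(f"max abs gradient error with 3-bit codes: {err:.4f}")
```

Because the quantization error is bounded by half a level, the backward pass stays close to exact while the stored tensor shrinks by roughly 10x.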

Handwritten text generation and strikethrough characters augmentation

no code implementations14 Dec 2021 Alex Shonenkov, Denis Karachev, Max Novopoltsev, Mark Potanin, Denis Dimitrov, Andrey Chertok

We introduce two data augmentation techniques which, used with a ResNet-BiLSTM-CTC network, significantly reduce Word Error Rate (WER) and Character Error Rate (CER) beyond the best reported results on handwritten text recognition (HTR) tasks.

Data Augmentation HTR +1
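A minimal version of the strikethrough augmentation is simply drawing a bar across a text-line image. The sketch below uses a fixed horizontal midline bar and names of my choosing; the authors' variant randomizes position, slant, and stroke style:

```python
import numpy as np

def strikethrough(img, thickness=2, value=0):
    """Draw a horizontal strike bar across the middle of an (H, W)
    grayscale text-line image (toy version of the augmentation)."""
    out = img.copy()
    y = img.shape[0] // 2
    out[y - thickness // 2 : y - thickness // 2 + thickness, :] = value
    return out

img = np.full((32, 128), 255, dtype=np.uint8)   # blank white line image
aug = strikethrough(img)
print((aug == 0).sum())   # 256 struck-out pixels (2 rows x 128 columns)
```

Training on such corrupted lines teaches the recognizer to read through strike-out marks common in real manuscripts.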

Emojich -- zero-shot emoji generation using Russian language: a technical report

no code implementations4 Dec 2021 Alex Shonenkov, Daria Bakshandaeva, Denis Dimitrov, Aleksandr Nikolich

This technical report presents a text-to-image neural network "Emojich" that generates emojis conditioned on captions in the Russian language.

Many Heads but One Brain: Fusion Brain -- a Competition and a Single Multimodal Multitask Architecture

1 code implementation22 Nov 2021 Daria Bakshandaeva, Denis Dimitrov, Vladimir Arkhipkin, Alex Shonenkov, Mark Potanin, Denis Karachev, Andrey Kuznetsov, Anton Voronov, Vera Davydova, Elena Tutubalina, Aleksandr Petiushko

Supporting the current trend in the AI community, we present the AI Journey 2021 Challenge called Fusion Brain, the first competition targeted at building a universal architecture that can process different modalities (in this case, images, text, and code) and solve multiple tasks for vision and language.

Handwritten Text Recognition Object Detection +4

Digital Peter: Dataset, Competition and Handwriting Recognition Methods

2 code implementations16 Mar 2021 Mark Potanin, Denis Dimitrov, Alex Shonenkov, Vladimir Bataev, Denis Karachev, Maxim Novopoltsev

This paper presents a new dataset of Peter the Great's manuscripts and describes a segmentation procedure that converts initial images of documents into lines.

BIG-bench Machine Learning Handwriting Recognition +1
