Search Results for author: Ekaterina Lobacheva

Found 12 papers, 5 papers with code

Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes

no code implementations 8 Sep 2022 Maxim Kodryan, Ekaterina Lobacheva, Maksim Nakhodnov, Dmitry Vetrov

A fundamental property of deep learning normalization techniques, such as batch normalization, is that they make the pre-normalization parameters scale-invariant.
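The scale-invariance property mentioned in the abstract can be illustrated with a minimal NumPy sketch (the toy batch-norm function, dimensions, and scaling factor below are illustrative, not from the paper): rescaling the weights that feed into a normalization layer leaves the normalized output unchanged.

```python
import numpy as np

def batch_norm(z, eps=1e-12):
    # normalize each feature across the batch dimension
    return (z - z.mean(axis=0)) / (z.std(axis=0) + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))   # a batch of 32 inputs with 8 features
W = rng.normal(size=(8, 4))    # pre-normalization weight matrix

out = batch_norm(X @ W)
out_scaled = batch_norm(X @ (5.0 * W))  # rescale the weights by 5x

print(np.allclose(out, out_scaled))  # True: the output is unchanged
```

Because the network's function does not depend on the scale of such parameters, their dynamics can be analyzed on the unit sphere, which is the setting the paper studies.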

Machine Learning Methods for Spectral Efficiency Prediction in Massive MIMO Systems

no code implementations 29 Dec 2021 Evgeny Bobrov, Sergey Troshin, Nadezhda Chirkova, Ekaterina Lobacheva, Sviatoslav Panchenko, Dmitry Vetrov, Dmitry Kropotov

Channel decoding, channel detection, channel assessment, and resource management for wireless multiple-input multiple-output (MIMO) systems are all examples of problems where machine learning (ML) can be successfully applied.

BIG-bench Machine Learning, Management

On the Memorization Properties of Contrastive Learning

no code implementations 21 Jul 2021 Ildus Sadrtdinov, Nadezhda Chirkova, Ekaterina Lobacheva

Memorization studies of deep neural networks (DNNs) help to understand what patterns DNNs learn and how they learn them, and motivate improvements to DNN training approaches.

Contrastive Learning, Self-Supervised Learning

On Power Laws in Deep Ensembles

1 code implementation NeurIPS 2020 Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry Vetrov

Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and lead to accuracy improvement.
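A quick sketch of how a deep ensemble is typically evaluated (the member count, batch size, and class count below are made up for illustration): member probabilities are averaged, and the predictive entropy of the averaged distribution serves as an uncertainty score.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
# logits from 3 hypothetical ensemble members on a batch of 4 examples, 5 classes
member_logits = rng.normal(size=(3, 4, 5))

# a deep ensemble averages the members' predicted probabilities
probs = softmax(member_logits).mean(axis=0)
preds = probs.argmax(axis=-1)

# predictive entropy is a common per-example uncertainty score
entropy = -(probs * np.log(probs)).sum(axis=-1)
```

The paper studies how metrics of such ensembles scale with the number of members, which this averaging step makes easy to vary.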

Deep Ensembles on a Fixed Memory Budget: One Wide Network or Several Thinner Ones?

no code implementations 14 May 2020 Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov

In this work, we consider a fixed memory budget setting and investigate what is more effective: to train a single wide network, or to perform a memory split -- to train an ensemble of several thinner networks with the same total number of parameters?
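The memory-split setup can be made concrete with a small sketch (the MLP parameter-count formula, widths, and split size below are illustrative assumptions, not taken from the paper): given the parameter budget of one wide network, find the largest width at which k thinner networks still fit the same budget.

```python
def mlp_params(width, depth, in_dim=784, out_dim=10):
    # parameter count (weights + biases) of a fully-connected net
    # with `depth` hidden layers of the given width
    sizes = [in_dim] + [width] * depth + [out_dim]
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

def memory_split_width(budget, k, depth=3):
    # largest width such that k thinner networks still fit the budget
    w = 1
    while k * mlp_params(w + 1, depth) <= budget:
        w += 1
    return w

budget = mlp_params(width=512, depth=3)   # one wide network
thin_w = memory_split_width(budget, k=4)  # width of each of 4 thinner nets
print(budget, thin_w)
```

Since parameter count grows roughly quadratically in width for the hidden layers, each of the k thinner networks ends up noticeably wider than width/k.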

Structured Sparsification of Gated Recurrent Neural Networks

no code implementations 13 Nov 2019 Ekaterina Lobacheva, Nadezhda Chirkova, Alexander Markovich, Dmitry Vetrov

Recently, many techniques have been developed to sparsify the weights of neural networks and to remove structural units, e.g. neurons, from them.
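To make "removing structural units" concrete, here is a minimal sketch of structured pruning; note the paper itself uses Bayesian sparsification, while this stand-in scores neurons by a simple magnitude criterion, and all sizes and the 50% pruning ratio are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 8))  # 16 neurons, each a row of incoming weights

# structured sparsification removes whole units: score each neuron by the
# L2 norm of its incoming weights and keep only the strongest half
norms = np.linalg.norm(W, axis=1)
keep = norms >= np.median(norms)
W_pruned = W[keep]

print(W.shape, "->", W_pruned.shape)
```

Dropping entire rows (neurons) shrinks the actual matrix dimensions, which is what makes structured sparsity useful for real memory and speed savings, unlike unstructured weight-level sparsity.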

Language Modelling, text-classification +1

Bayesian Sparsification of Gated Recurrent Neural Networks

1 code implementation NIPS Workshop CDNNRIA 2018 Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Bayesian methods have been successfully applied to sparsify weights of neural networks and to remove structural units, e.g. neurons, from the networks.

Bayesian Compression for Natural Language Processing

3 code implementations EMNLP 2018 Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov

In natural language processing, many tasks are successfully solved with recurrent neural networks, but such models have a huge number of parameters.

Semantic embeddings for program behavior patterns

no code implementations 10 Apr 2018 Alexander Chistyakov, Ekaterina Lobacheva, Arseny Kuznetsov, Alexey Romanenko

In this paper, we propose a new feature extraction technique for program execution logs.

Bayesian Sparsification of Recurrent Neural Networks

2 code implementations 31 Jul 2017 Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights.

Language Modelling, Sentiment Analysis

Joint Optimization of Segmentation and Color Clustering

no code implementations ICCV 2015 Ekaterina Lobacheva, Olga Veksler, Yuri Boykov

We propose to make clustering an integral part of segmentation, by including a new clustering term in the energy function.
