Search Results for author: Ekaterina Lobacheva

Found 12 papers, 5 papers with code

Large Learning Rates Improve Generalization: But How Large Are We Talking About?

no code implementations • 19 Nov 2023 • Ekaterina Lobacheva, Eduard Pockonechnyy, Maxim Kodryan, Dmitry Vetrov

Inspired by recent research that recommends starting neural network training with large learning rates (LRs) to achieve the best generalization, we explore this hypothesis in detail.

Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes

1 code implementation • 8 Sep 2022 • Maxim Kodryan, Ekaterina Lobacheva, Maksim Nakhodnov, Dmitry Vetrov

In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed ELR.
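
To make the setting concrete, here is a minimal sketch (an illustration under standard assumptions, not the authors' code) of projected SGD on the unit sphere, where a fixed step size acts as a fixed effective learning rate (ELR) for a scale-invariant network:

```python
import torch

# One projected-SGD step: take an ordinary gradient step, then project the
# weights back onto the unit sphere, so the step size `elr` keeps acting
# as a fixed effective learning rate.
def spherical_sgd_step(w: torch.Tensor, grad: torch.Tensor, elr: float) -> torch.Tensor:
    w = w - elr * grad
    return w / w.norm()

w = torch.randn(1000)
w = w / w.norm()              # start on the sphere
grad = torch.randn_like(w)    # stand-in for a real loss gradient
w = spherical_sgd_step(w, grad, elr=0.01)
print(w.norm())               # stays ~1.0 after every step
```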

Machine Learning Methods for Spectral Efficiency Prediction in Massive MIMO Systems

no code implementations • 29 Dec 2021 • Evgeny Bobrov, Sergey Troshin, Nadezhda Chirkova, Ekaterina Lobacheva, Sviatoslav Panchenko, Dmitry Vetrov, Dmitry Kropotov

Channel decoding, channel detection, channel assessment, and resource management for wireless multiple-input multiple-output (MIMO) systems are all examples of problems where machine learning (ML) can be successfully applied.

BIG-bench Machine Learning, Management

On the Memorization Properties of Contrastive Learning

no code implementations • 21 Jul 2021 • Ildus Sadrtdinov, Nadezhda Chirkova, Ekaterina Lobacheva

Memorization studies of deep neural networks (DNNs) help us understand what patterns DNNs learn and how they learn them, and motivate improvements to DNN training approaches.

Contrastive Learning, Memorization, +1

On Power Laws in Deep Ensembles

1 code implementation • NeurIPS 2020 • Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry Vetrov

Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and to improve accuracy.
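
For background, a deep ensemble averages the predictive distributions of several independently trained networks, and the paper's power laws describe how quality scales with the number of such members. A minimal sketch of the averaging step, with hypothetical inputs:

```python
import numpy as np

# Average the softmax outputs of independently trained ensemble members.
def ensemble_predict(member_probs: list) -> np.ndarray:
    return np.mean(member_probs, axis=0)  # shape: (n_samples, n_classes)

# hypothetical softmax outputs of 3 members for 2 samples and 3 classes
members = [np.random.dirichlet(np.ones(3), size=2) for _ in range(3)]
print(ensemble_predict(members))
```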

Deep Ensembles on a Fixed Memory Budget: One Wide Network or Several Thinner Ones?

no code implementations • 14 May 2020 • Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov

In this work, we consider a fixed memory budget setting and investigate what is more effective: to train a single wide network, or to perform a memory split -- training an ensemble of several thinner networks with the same total number of parameters.
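
The trade-off can be made concrete with back-of-the-envelope arithmetic. Assuming a dense layer's parameter count scales as width squared (a simplification, not the paper's exact accounting), a k-way memory split gives each member a width of roughly w / sqrt(k):

```python
# Assumption: parameter count of a dense layer scales as width ** 2, so a
# k-way split of a fixed budget gives each member width ~ w / sqrt(k).
def member_width(wide_width: int, k: int) -> int:
    return int(wide_width / k ** 0.5)

wide = 1024  # width of the single wide network
for k in (1, 2, 4, 8):
    w = member_width(wide, k)
    print(f"k={k}: member width {w}, total params ~ {k * w * w}")
```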

Structured Sparsification of Gated Recurrent Neural Networks

no code implementations • 13 Nov 2019 • Ekaterina Lobacheva, Nadezhda Chirkova, Alexander Markovich, Dmitry Vetrov

Recently, many techniques have been developed to sparsify the weights of neural networks and to remove structural units, e.g., neurons, from the networks.

Language Modelling, text-classification, +1
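
The paper's approach is Bayesian, but the structured-sparsity idea can be illustrated with a simpler, generic group-lasso penalty (a sketch, not the paper's method): penalizing the norm of each neuron's weight group drives whole neurons, rather than individual weights, to zero.

```python
import torch

# Generic group-lasso penalty: one group per output neuron of a dense layer.
def group_lasso(weight: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    # weight has shape (out_features, in_features)
    return lam * weight.norm(dim=1).sum()

layer = torch.nn.Linear(128, 64)
penalty = group_lasso(layer.weight)
# Add `penalty` to the task loss during training; neurons whose group norm
# shrinks to ~0 can then be removed from the network entirely.
print(penalty)
```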

Bayesian Sparsification of Gated Recurrent Neural Networks

1 code implementation • NIPS Workshop CDNNRIA 2018 • Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Bayesian methods have been successfully applied to sparsify the weights of neural networks and to remove structural units, e.g., neurons, from the networks.

Bayesian Compression for Natural Language Processing

3 code implementations • EMNLP 2018 • Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov

In natural language processing, many tasks are successfully solved with recurrent neural networks, but such models have a huge number of parameters.

Semantic embeddings for program behavior patterns

no code implementations • 10 Apr 2018 • Alexander Chistyakov, Ekaterina Lobacheva, Arseny Kuznetsov, Alexey Romanenko

In this paper, we propose a new feature extraction technique for program execution logs.

Bayesian Sparsification of Recurrent Neural Networks

2 code implementations • 31 Jul 2017 • Ekaterina Lobacheva, Nadezhda Chirkova, Dmitry Vetrov

Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights.

Language Modelling, Sentiment Analysis
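
This line of work applies sparse variational dropout (Molchanov et al., 2017) to recurrent weights; a minimal sketch of its pruning rule, with illustrative values rather than trained parameters:

```python
import torch

theta = torch.randn(256, 256)       # posterior means of the weights (illustrative)
log_sigma2 = torch.randn(256, 256)  # learned log-variances (illustrative)

# Per-weight dropout rate alpha = sigma^2 / theta^2; large log-alpha means
# the weight is effectively noise and can be pruned.
log_alpha = log_sigma2 - 2 * torch.log(theta.abs() + 1e-8)

mask = log_alpha < 3.0              # a commonly used threshold
sparse_theta = theta * mask
print(f"kept {mask.float().mean().item():.1%} of the weights")
```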

Joint Optimization of Segmentation and Color Clustering

no code implementations • ICCV 2015 • Ekaterina Lobacheva, Olga Veksler, Yuri Boykov

We propose to make clustering an integral part of segmentation by including a new clustering term in the energy function.

Clustering, Segmentation
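
In generic MRF notation (an assumed form, not the paper's exact formulation), such a joint energy over pixel labels f and color-cluster parameters c could be written as:

```latex
% Assumed generic form: data terms depend on the color clusters c, a
% smoothness term regularizes the labeling f, and f and c are optimized
% jointly rather than fixing c in advance.
E(f, c) = \sum_{p} D_p(f_p, c) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(f_p, f_q)
```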
