Search Results for author: Maxim Kodryan

Found 6 papers, 4 papers with code

Large Learning Rates Improve Generalization: But How Large Are We Talking About?

no code implementations • 19 Nov 2023 • Ekaterina Lobacheva, Eduard Pockonechnyy, Maxim Kodryan, Dmitry Vetrov

Inspired by recent research that recommends starting neural network training with large learning rates (LRs) to achieve the best generalization, we explore this hypothesis in detail.

Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes

1 code implementation • 8 Sep 2022 • Maxim Kodryan, Ekaterina Lobacheva, Maksim Nakhodnov, Dmitry Vetrov

In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed effective learning rate (ELR).
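The setup in the abstract can be sketched with a toy example; this is a hypothetical illustration of sphere-constrained SGD with a fixed ELR, not the paper's code, and the loss, function names, and hyperparameters are all invented for the sketch. For a scale-invariant loss only the direction of the weights matters, so each step uses the tangential gradient component and then projects back onto the unit sphere.

```python
import numpy as np

def sphere_sgd_step(w, grad, elr):
    """One toy SGD step restricted to the unit sphere (hypothetical sketch).

    For a scale-invariant loss only the direction of w matters, so the
    update keeps the tangential gradient component and then projects w
    back onto the sphere, with a fixed effective learning rate (ELR)."""
    grad = grad - np.dot(grad, w) * w   # keep only the tangential component
    w = w - elr * grad
    return w / np.linalg.norm(w)        # project back onto the unit sphere

# Toy direction-only loss L(w) = -<w, target>, optimized on the sphere.
rng = np.random.default_rng(0)
target = np.zeros(8)
target[0] = 1.0
w = rng.normal(size=8)
w /= np.linalg.norm(w)
for _ in range(100):
    grad = -target                      # gradient of the toy loss w.r.t. w
    w = sphere_sgd_step(w, grad, elr=0.1)
print(f"alignment with target direction: {np.dot(w, target):.3f}")
```

Because the iterate is renormalized every step, the ELR directly controls the angular step size on the sphere, which is the quantity the paper's regimes are phrased in terms of.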

On Power Laws in Deep Ensembles

1 code implementation • NeurIPS 2020 • Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry Vetrov

Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and lead to accuracy improvement.
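As a hedged toy illustration of where a power law in ensemble size can come from (a textbook variance-reduction argument, not the paper's experiments): averaging k independent noisy predictors shrinks the variance part of the error like 1/k.

```python
import numpy as np

# Toy model (hypothetical, not the paper's setup): k independent
# predictors each estimate a true value with Gaussian noise; averaging
# them shrinks the mean squared error like 1/k, a power law in k.
rng = np.random.default_rng(0)
TRUE_VALUE = 1.0

def ensemble_mse(k, trials=20000, noise=0.5):
    """Average k noisy predictions per trial and return the MSE."""
    preds = TRUE_VALUE + rng.normal(scale=noise, size=(trials, k))
    return float(np.mean((preds.mean(axis=1) - TRUE_VALUE) ** 2))

mses = {k: ensemble_mse(k) for k in (1, 2, 4, 8)}
for k, mse in mses.items():
    print(f"ensemble size {k}: MSE ≈ {mse:.4f}")
```

Real deep ensembles are correlated and biased, which is why the paper measures the actual exponents empirically rather than assuming the idealized 1/k decay above.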

MARS: Masked Automatic Ranks Selection in Tensor Decompositions

1 code implementation • 18 Jun 2020 • Maxim Kodryan, Dmitry Kropotov, Dmitry Vetrov

Tensor decomposition methods have proven effective in various applications, including compression and acceleration of neural networks.

Task: Tensor Decomposition
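A minimal sketch of the compression idea mentioned in the abstract: replace a dense weight matrix with a truncated low-rank factorization. This shows plain truncated SVD at a hand-picked rank, not the MARS masking mechanism (which selects ranks automatically), and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 256x512 "weight matrix" with effective rank 16 plus small noise.
W = rng.normal(size=(256, 16)) @ rng.normal(size=(16, 512))
W += 0.01 * rng.normal(size=(256, 512))

def low_rank_factors(W, rank):
    """Compress W into two factors A @ B of the given rank via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]   # A: (m, rank), B: (rank, n)

A, B = low_rank_factors(W, rank=16)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
ratio = W.size / (A.size + B.size)
print(f"compression ratio: {ratio:.1f}x, relative error: {rel_err:.4f}")
```

Picking the rank by hand is exactly the step that rank-selection methods automate: too small a rank loses accuracy, too large a rank loses the compression.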
