2 code implementations • 31 Jan 2022 • Daniel Bershatsky, Aleksandr Mikhalev, Alexandr Katrutsa, Julia Gusak, Daniil Merkulov, Ivan Oseledets
We also investigate the variance of the gradient estimate induced by the randomized matrix multiplication.
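A minimal sketch of the kind of randomized matrix multiplication the abstract refers to: sample column/row pairs with probabilities proportional to their norm products (the standard variance-minimizing choice) and rescale so the estimate is unbiased. The function name and sampling scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def randomized_matmul(A, B, k, seed=None):
    """Unbiased estimate of A @ B from k sampled column/row pairs.

    Sampling probabilities proportional to ||A[:, i]|| * ||B[i, :]||
    are the classical variance-minimizing choice; the paper's exact
    scheme may differ.
    """
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    scale = 1.0 / (k * p[idx])          # importance-sampling weights
    return (A[:, idx] * scale) @ B[idx, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
B = rng.standard_normal((200, 30))
# averaging many independent estimates approaches the exact product,
# which is what "unbiased" means here; a single estimate is noisy
est = np.mean([randomized_matmul(A, B, 100, seed=s) for s in range(200)], axis=0)
```

The variance mentioned in the abstract is exactly what controls how far a single `randomized_matmul` call strays from `A @ B`.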
1 code implementation • 15 Mar 2021 • Julia Gusak, Alexandr Katrutsa, Talgat Daulbaev, Andrzej Cichocki, Ivan Oseledets
Moreover, we show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
1 code implementation • 14 Jul 2020 • Alexandr Katrutsa, Daniil Merkulov, Nurislam Tursynbek, Ivan Oseledets
This descent direction is based on the normalized gradients of the individual losses.
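One simple rule consistent with the abstract is to normalize each individual loss gradient to unit length before combining, so that no single loss dominates the step merely because its gradient is larger in magnitude. The averaging rule and function name below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def normalized_gradient_direction(grads):
    """Combine per-loss gradients after normalizing each to unit norm.

    Illustrative sketch: averaging unit gradients yields a direction
    along which every individual loss decreases locally (its inner
    product with each gradient is positive), regardless of the raw
    gradient scales.
    """
    unit = [g / np.linalg.norm(g) for g in grads]
    return np.mean(unit, axis=0)

g1 = np.array([1000.0, 0.0])   # gradient of a large-scale loss
g2 = np.array([0.0, 1.0])      # gradient of a small-scale loss
d = normalized_gradient_direction([g1, g2])
# stepping along -d decreases both losses, even though the raw
# gradient magnitudes differ by three orders of magnitude
```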
1 code implementation • ICLR Workshop DeepDiffEq 2019 • Julia Gusak, Larisa Markeeva, Talgat Daulbaev, Alexandr Katrutsa, Andrzej Cichocki, Ivan Oseledets
Normalization is an important and vastly investigated technique in deep learning.
1 code implementation • NeurIPS 2020 • Talgat Daulbaev, Alexandr Katrutsa, Larisa Markeeva, Julia Gusak, Andrzej Cichocki, Ivan Oseledets
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
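The idea sketched from the abstract: instead of re-integrating or storing the full forward trajectory for the backward (adjoint) pass, keep the solution at a few well-chosen nodes and reuse a polynomial interpolant. The toy ODE, node choice, and use of NumPy's Chebyshev routines are assumptions for illustration; the paper's interpolation scheme may differ.

```python
import numpy as np

# Toy ODE: z'(t) = -z, z(0) = 1, with exact solution exp(-t).
# Store the forward solution only at Chebyshev-Lobatto nodes, then
# evaluate a polynomial interpolant anywhere during the backward pass.
n = 10
t = 0.5 * (1 - np.cos(np.pi * np.arange(n + 1) / n))  # nodes on [0, 1]
z = np.exp(-t)                                        # "stored" forward states

# degree-n fit through n + 1 points is exact interpolation
coeffs = np.polynomial.chebyshev.chebfit(t, z, deg=n)

t_query = np.linspace(0.0, 1.0, 7)
z_hat = np.polynomial.chebyshev.chebval(t_query, coeffs)
# for smooth solutions, interpolation at Chebyshev-type nodes is
# highly accurate, so few stored states suffice
```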
1 code implementation • 5 Mar 2019 • Alexandr Katrutsa, Ivan Oseledets
Therefore, to reduce this complexity, we use random sketching and compare it with the Kaczmarz method without preconditioning.
Numerical Analysis
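For reference, the Kaczmarz baseline mentioned in the abstract can be sketched as follows: each step projects the iterate onto the hyperplane of one randomly chosen row, with rows sampled proportionally to their squared norms (the Strohmer–Vershynin variant). This is a minimal illustration of the baseline, not the paper's implementation or its sketching-based alternative.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter, seed=None):
    """Randomized Kaczmarz for a consistent system A x = b.

    Each iteration projects x onto the solution hyperplane of one row,
    chosen with probability proportional to ||a_i||^2.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)    # squared row norms
    p = row_norms / row_norms.sum()
    x = np.zeros(n)
    for i in rng.choice(m, size=n_iter, p=p):
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true                       # consistent overdetermined system
x = randomized_kaczmarz(A, b, 5000, seed=1)
```

Note that each step touches only one row of `A`, which is exactly why per-iteration cost is low and why sketching-based alternatives compete on total work rather than per-step cost.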
1 code implementation • 10 Nov 2017 • Alexandr Katrutsa, Talgat Daulbaev, Ivan Oseledets
This paper proposes a method to optimize the restriction and prolongation operators in the two-grid method.
Numerical Analysis
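To show where the restriction and prolongation operators enter, here is a classical two-grid cycle for the 1D Poisson problem with the standard hand-picked operators: linear interpolation as prolongation `P` and its scaled transpose as restriction `R`. The paper optimizes `P` and `R` instead; everything below (problem, smoother, sizes) is a conventional textbook setup, not the paper's construction.

```python
import numpy as np

# 1D Poisson: -u'' = f on (0, 1), standard second-order finite differences.
nc, nf = 7, 15                       # coarse/fine interior points, nf = 2*nc + 1
h = 1.0 / (nf + 1)
A = (2 * np.eye(nf) - np.eye(nf, k=1) - np.eye(nf, k=-1)) / h**2

P = np.zeros((nf, nc))               # prolongation: linear interpolation
for j in range(nc):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
R = P.T / 2                          # restriction: full weighting
Ac = R @ A @ P                       # Galerkin coarse-grid operator

def two_grid_step(x, b, n_smooth=2, omega=2.0 / 3.0):
    D = np.diag(A)
    for _ in range(n_smooth):        # pre-smoothing: weighted Jacobi
        x = x + omega * (b - A @ x) / D
    # restrict the residual, solve on the coarse grid, prolongate back
    x = x + P @ np.linalg.solve(Ac, R @ (b - A @ x))
    for _ in range(n_smooth):        # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

rng = np.random.default_rng(0)
x_true = rng.standard_normal(nf)
b = A @ x_true
x = np.zeros(nf)
for _ in range(10):                  # a few cycles converge rapidly
    x = two_grid_step(x, b)
```

The convergence rate of this cycle depends directly on how well `P` and `R` transfer error between grids, which is the quantity the paper's optimization targets.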