no code implementations • 2 Feb 2024 • Daniel Bershatsky, Daria Cherniuk, Talgat Daulbaev, Aleksandr Mikhalev, Ivan Oseledets
In this paper, we generalize and extend the idea of low-rank adaptation (LoRA) of large language models (LLMs) based on the Transformer architecture.
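For background, here is a minimal sketch of the standard LoRA update that this work builds on, applied to a single linear layer; the class name, rank, and scaling below are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update.

    Generic LoRA illustration: W_eff = W + (alpha / r) * B @ A,
    with A of shape (r, in_features) and B of shape (out_features, r).
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # start from zero update
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap, e.g., an attention projection of a Transformer block.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
```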
1 code implementation • 15 Mar 2021 • Julia Gusak, Alexandr Katrutsa, Talgat Daulbaev, Andrzej Cichocki, Ivan Oseledets
Moreover, we show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
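As an illustration of what "solver parameterization" can mean, the sketch below implements the classical one-parameter family of explicit second-order Runge-Kutta schemes; it is a generic example, not necessarily the parameterization studied in the paper.

```python
import numpy as np

def rk2_family_step(f, t, y, h, u=0.5):
    """One step of the second-order Runge-Kutta family parameterized by u.

    u = 0.5 gives the midpoint rule, u = 1.0 gives Heun's method.
    """
    k1 = f(t, y)
    k2 = f(t + u * h, y + u * h * k1)
    return y + h * ((1.0 - 1.0 / (2.0 * u)) * k1 + (1.0 / (2.0 * u)) * k2)

# Example: integrate dy/dt = -y with two members of the solver family.
f = lambda t, y: -y
y_mid, y_heun = 1.0, 1.0
for i in range(10):
    y_mid = rk2_family_step(f, i * 0.1, y_mid, 0.1, u=0.5)
    y_heun = rk2_family_step(f, i * 0.1, y_heun, 0.1, u=1.0)
```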
1 code implementation • ICLR Workshop DeepDiffEq 2019 • Julia Gusak, Larisa Markeeva, Talgat Daulbaev, Alexandr Katrutsa, Andrzej Cichocki, Ivan Oseledets
Normalization is an important and extensively studied technique in deep learning.
1 code implementation • NeurIPS 2020 • Talgat Daulbaev, Alexandr Katrutsa, Larisa Markeeva, Julia Gusak, Andrzej Cichocki, Ivan Oseledets
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
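A hedged sketch of the general idea: sample the forward trajectory at a few Chebyshev nodes and reconstruct intermediate states by barycentric interpolation, so the backward (adjoint) pass does not need the full trajectory. The helper names and node counts below are illustrative, not taken from the paper.

```python
import numpy as np

def cheb_nodes(n, t0, t1):
    """Chebyshev nodes on [t0, t1], a common interpolation grid."""
    k = np.arange(n)
    x = np.cos(np.pi * (2 * k + 1) / (2 * n))       # nodes on [-1, 1]
    return 0.5 * (t0 + t1) + 0.5 * (t1 - t0) * x

def barycentric_eval(t, nodes, values):
    """Barycentric Lagrange interpolation of trajectory samples at time t."""
    w = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j))
                  for j in range(len(nodes))])
    d = t - nodes
    if np.any(d == 0):
        return values[np.argmax(d == 0)]
    c = w / d
    return (c[:, None] * values).sum(axis=0) / c.sum()

# Idea: store the forward neural-ODE solution only at Chebyshev nodes, then
# feed interpolated states z(t) to the backward (adjoint) solve instead of
# re-integrating or storing the full trajectory.
nodes = cheb_nodes(8, 0.0, 1.0)
values = np.stack([np.array([np.sin(t), np.cos(t)]) for t in nodes])  # toy trajectory
z_mid = barycentric_eval(0.37, nodes, values)
```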
1 code implementation • 29 Oct 2019 • Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan Oseledets, Zheng Zhang
Second, we propose analyzing the vulnerability of a neural network using the active subspace method and finding an additive universal adversarial attack vector that causes misclassification across a dataset with high probability.
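A minimal sketch of how an active-subspace direction can serve as a universal additive perturbation, assuming per-sample input gradients are already available; the function names and scaling are illustrative, not the paper's exact procedure.

```python
import numpy as np

def universal_direction(grads):
    """Top active-subspace direction from per-sample input gradients.

    grads: array of shape (num_samples, input_dim); each row is the gradient
    of the loss w.r.t. the (flattened) input for one sample. The dominant
    eigenvector of C = E[g g^T] gives the direction along which the loss is,
    on average, most sensitive; scaling it yields a candidate universal attack.
    """
    C = grads.T @ grads / grads.shape[0]      # empirical covariance of gradients
    eigvals, eigvecs = np.linalg.eigh(C)      # symmetric eigendecomposition (ascending)
    return eigvecs[:, -1]                     # eigenvector of the largest eigenvalue

# Usage (toy data standing in for real input gradients):
rng = np.random.default_rng(0)
g = rng.normal(size=(256, 32))
v = universal_direction(g)
x_adv = rng.normal(size=32) + 0.1 * v         # perturb an input along v
```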
no code implementations • 15 Oct 2019 • Julia Gusak, Talgat Daulbaev, Evgeny Ponomarev, Andrzej Cichocki, Ivan Oseledets
We introduce a new method for speeding up the inference of deep neural networks.
1 code implementation • 10 Nov 2017 • Alexandr Katrutsa, Talgat Daulbaev, Ivan Oseledets
This paper proposes a method to optimize the restriction and prolongation operators in the two-grid method.
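For context, the sketch below shows one classical two-grid cycle and where the restriction R and prolongation P enter; in the paper these operators are optimized, whereas here they are fixed to standard linear interpolation and weighted Jacobi smoothing, purely for illustration.

```python
import numpy as np

def two_grid_step(A, b, x, P, R, n_smooth=2, omega=2.0 / 3.0):
    """One two-grid cycle for A x = b with prolongation P and restriction R."""
    D_inv = 1.0 / np.diag(A)
    for _ in range(n_smooth):                 # pre-smoothing (weighted Jacobi)
        x = x + omega * D_inv * (b - A @ x)
    r = b - A @ x                             # fine-grid residual
    A_c = R @ A @ P                           # Galerkin coarse-grid operator
    e_c = np.linalg.solve(A_c, R @ r)         # exact coarse-grid solve
    x = x + P @ e_c                           # coarse-grid correction
    for _ in range(n_smooth):                 # post-smoothing
        x = x + omega * D_inv * (b - A @ x)
    return x

# Usage on a 1D Poisson problem with linear-interpolation prolongation.
n = 7                                         # fine-grid interior points (odd)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, (n - 1) // 2))
for j in range((n - 1) // 2):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]   # interpolate from coarse point j
R = 0.5 * P.T                                 # restriction as scaled transpose
x = two_grid_step(A, np.ones(n), np.zeros(n), P, R)
```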