no code implementations • 8 Nov 2022 • Uladzislau Yorsh, Alexander Kovalenko
In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches -- models attaining sub-quadratic attention complexity can exploit a notion of sparsity or a low-rank approximation of inputs to reduce the number of attended keys; other ways to reduce complexity include locality-sensitive hashing, key pooling, additional memory to store information in a compacted form, or hybridization with other architectures, such as CNNs.
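One of the sub-quadratic strategies named above, the low-rank approximation of inputs, can be sketched in a few lines. The NumPy snippet below shows a Linformer-style compression in which hypothetical learned projections `E` and `F` shrink the n keys and values down to k << n positions, cutting attention cost from O(n^2) to O(nk); it is an illustrative sketch of the general technique, not the method of this particular paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def lowrank_attention(Q, K, V, E, F):
    """Q, K, V: (n, d); E, F: (k, n) stand-ins for learned projections."""
    K_proj = E @ K  # (k, d): compress n keys into k
    V_proj = F @ V  # (k, d): compress n values into k
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])  # (n, k) instead of (n, n)
    return softmax(scores) @ V_proj               # (n, d)

rng = np.random.default_rng(0)
n, d, k = 512, 64, 32
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
print(lowrank_attention(Q, K, V, E, F).shape)  # (512, 64)
```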
no code implementations • 20 Oct 2022 • Alexander Kovalenko, Vitaliy Pozdnyakov, Ilya Makarov
In this work, we study the possibility of applying graph neural networks to the problem of fault diagnosis in a chemical process.
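To make the setting concrete, here is a minimal sketch of one graph-convolution step over a hypothetical sensor graph: nodes are process sensors, edges connect physically coupled units, and node features are sensor readings. All names and the toy topology are our illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style step: symmetric-normalized neighborhood averaging,
    a linear map, then ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# 4 sensors with 3 features each; a chain-like coupling between units
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))   # sensor readings
W = rng.standard_normal((3, 8))   # weights (random here, learned in practice)
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 8): per-sensor embeddings for a downstream fault classifier
```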
no code implementations • 23 Nov 2021 • Uladzislau Yorsh, Alexander Kovalenko, Vojtěch Vančura, Daniel Vašata, Pavel Kordík, Tomáš Mikolov
In this paper, we propose that the dot-product pairwise-matching attention layer, which is widely used in Transformer-based models, is redundant for model performance.
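To make the claim concrete, the sketch below contrasts standard dot-product attention with a generic attention-free alternative: a fixed learned token-mixing matrix in the spirit of MLP-Mixer. The mixer is a plainly named substitute chosen for illustration and is not necessarily the replacement the authors propose.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(X, Wq, Wk, Wv):
    """The pairwise-matching layer in question: O(n^2) token interactions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def linear_token_mixing(X, M, Wv):
    """Attention-free mixing: a fixed learned matrix M replaces the
    input-dependent softmax(QK^T) pattern."""
    return M @ (X @ Wv)

rng = np.random.default_rng(0)
n, d = 128, 64
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
M = softmax(rng.standard_normal((n, n)))  # row-stochastic mixing weights
print(dot_product_attention(X, Wq, Wk, Wv).shape)  # (128, 64)
print(linear_token_mixing(X, M, Wv).shape)         # (128, 64)
```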
no code implementations • 20 Sep 2021 • Alexander Kovalenko, Pavel Kordík, Magda Friedjungová
However, such models face several problems during training, mainly due to redundancy among individual neurons, which results in sub-optimal accuracy or the need for additional training steps.
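One way to quantify the neuron redundancy described above is to measure pairwise correlations between hidden-unit activations over a batch: highly correlated units carry overlapping information. The diagnostic below is our illustrative assumption, not a procedure taken from the paper.

```python
import numpy as np

def redundancy_score(H):
    """H: (batch, units) hidden activations. Returns the mean absolute
    off-diagonal correlation; values near 1 indicate redundant units."""
    C = np.corrcoef(H, rowvar=False)                 # (units, units)
    off_diag = C[~np.eye(C.shape[0], dtype=bool)]
    return np.abs(off_diag).mean()

rng = np.random.default_rng(0)
base = rng.standard_normal((256, 1))
redundant = base + 0.05 * rng.standard_normal((256, 8))  # near-duplicate units
diverse = rng.standard_normal((256, 8))                  # independent units
print(redundancy_score(redundant))  # close to 1.0
print(redundancy_score(diverse))    # close to 0.0
```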