Search Results for author: Alexander Kovalenko

Found 4 papers, 0 papers with code

Linear Self-Attention Approximation via Trainable Feedforward Kernel

no code implementations • 8 Nov 2022 • Uladzislau Yorsh, Alexander Kovalenko

In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches: models attaining sub-quadratic attention complexity can exploit sparsity or a low-rank approximation of the inputs to reduce the number of attended keys; other ways to reduce complexity include locality-sensitive hashing, key pooling, additional memory that stores information in a compacted form, or hybridization with other architectures, such as CNNs.
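The idea named in the title above, linear attention through a trainable feedforward kernel, can be illustrated with a short sketch. The feature map phi, its ReLU form, and all shapes below are assumptions chosen for the example rather than the paper's reported architecture; the point is only that mapping queries and keys through a kernel feature map lets the attention products be re-associated so that no N x N matrix is ever formed.

```python
# Illustrative sketch (assumptions, not the paper's exact method): linear attention
# obtained by replacing softmax(Q K^T) V with phi(Q) @ (phi(K)^T @ V), where phi is a
# small feedforward map standing in for the trainable kernel described above.
import numpy as np

def phi(x, W, b):
    """Feedforward feature map; a trainable kernel would learn W and b."""
    return np.maximum(x @ W + b, 0.0) + 1e-6   # ReLU plus a small offset keeps features positive

def linear_attention(Q, K, V, W, b):
    """Attention via phi(Q) @ (phi(K).T @ V): linear in sequence length N."""
    Qf, Kf = phi(Q, W, b), phi(K, W, b)        # (N, r) feature-mapped queries and keys
    KV = Kf.T @ V                              # (r, d_v), computed once for the whole sequence
    Z = Qf @ Kf.sum(axis=0)                    # (N,) normalizer, replaces the softmax denominator
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
N, d, r = 512, 64, 32                          # sequence length, head dim, kernel feature dim
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
W, b = 0.1 * rng.standard_normal((d, r)), np.zeros(r)
out = linear_attention(Q, K, V, W, b)
print(out.shape)                               # (512, 64), with no 512 x 512 attention matrix
```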

Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data

no code implementations • 20 Oct 2022 • Alexander Kovalenko, Vitaliy Pozdnyakov, Ilya Makarov

In this work, the possibility of applying graph neural networks to the problem of fault diagnosis in a chemical process is studied.

Chemical Process
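As a rough illustration of the trainable adjacency matrices in the title above, the sketch below builds a single message-passing layer whose adjacency matrix is itself a parameter (here, row-softmaxed logits), so the sensor graph is learned rather than fixed. The layer form, the softmax parametrization, and all dimensions are assumptions made for the example, not the architecture reported in the paper.

```python
# Illustrative sketch (assumptions, not the paper's exact architecture): a graph layer
# in which the adjacency matrix is a trainable parameter, so the sensor graph is
# learned from data rather than specified by hand.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_layer(H, A_logits, W):
    """One message-passing step: H' = ReLU(A_hat @ H @ W).

    A_hat comes from a row-softmax over trainable logits, so every node's incoming
    edge weights are learned and sum to one.
    """
    A_hat = softmax(A_logits, axis=1)           # (num_sensors, num_sensors) learned graph
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
num_sensors, f_in, f_out = 20, 16, 8            # e.g., 20 sensors of a chemical process
H = rng.standard_normal((num_sensors, f_in))    # node features, one row per sensor
A_logits = 0.1 * rng.standard_normal((num_sensors, num_sensors))  # trainable in practice
W = 0.1 * rng.standard_normal((f_in, f_out))
print(graph_layer(H, A_logits, W).shape)        # (20, 8)
```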

SimpleTRON: Simple Transformer with O(N) Complexity

no code implementations • 23 Nov 2021 • Uladzislau Yorsh, Alexander Kovalenko, Vojtěch Vančura, Daniel Vašata, Pavel Kordík, Tomáš Mikolov

In this paper, we propose that the dot-product pairwise matching attention layer, which is widely used in Transformer-based models, is redundant for model performance.

Text Classification
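One way to read the O(N) claim in the title above is that, once the softmax over pairwise dot products is removed, the remaining matrix products can be re-associated so the N x N score matrix never has to be materialized. The sketch below shows only that general re-association; it is an assumption-level illustration, not necessarily the exact SimpleTRON operator.

```python
# Illustrative sketch (an assumption about the general idea, not necessarily the exact
# SimpleTRON operator): without the softmax, (Q @ K^T) @ V can be re-associated as
# Q @ (K^T @ V), turning quadratic complexity in sequence length into linear.
import numpy as np

def linear_mixing(Q, K, V):
    """Q @ (K.T @ V): equal to (Q @ K.T) @ V by associativity, but avoids the N x N matrix."""
    return Q @ (K.T @ V)

rng = np.random.default_rng(0)
N, d = 1024, 64
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = linear_mixing(Q, K, V)                # (1024, 64) without building a 1024 x 1024 score matrix
assert np.allclose(out, (Q @ K.T) @ V)      # same values; O(N) rather than O(N^2) in sequence length
```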

Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks

no code implementations • 20 Sep 2021 • Alexander Kovalenko, Pavel Kordík, Magda Friedjungová

However, such models face several problems during the learning process, mainly due to the redundancy of the individual neurons, which results in sub-optimal accuracy or the need for additional training steps.

Efficient Neural Network
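The redundancy of individual neurons mentioned in the abstract above can be made concrete with a simple, assumption-level measure: the pairwise cosine similarity between a layer's weight vectors. The penalty below only illustrates how such redundancy might be quantified and discouraged; it is not the diversification mechanism reported in the paper.

```python
# Illustrative sketch (an assumption, not the paper's mechanism): quantify neuron
# redundancy as the mean squared off-diagonal cosine similarity between a layer's
# weight vectors; a diversification penalty would push this value toward zero.
import numpy as np

def diversity_penalty(W, eps=1e-8):
    """W has shape (num_neurons, fan_in); 0 means fully diverse, 1 means identical neurons."""
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)   # unit-norm weight vectors
    S = Wn @ Wn.T                                               # pairwise cosine similarities
    off_diag = S - np.diag(np.diag(S))
    n = W.shape[0]
    return np.sum(off_diag ** 2) / (n * (n - 1))

rng = np.random.default_rng(0)
redundant = np.tile(rng.standard_normal((1, 32)), (16, 1))      # 16 identical neurons
diverse = rng.standard_normal((16, 32))                         # independent random neurons
print(diversity_penalty(redundant))   # ~1.0: maximal redundancy
print(diversity_penalty(diverse))     # small: neurons already look at different directions
```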
