no code implementations • 23 Nov 2021 • Uladzislau Yorsh, Alexander Kovalenko, Vojtěch Vančura, Daniel Vašata, Pavel Kordík, Tomáš Mikolov
In this paper, we argue that the dot-product pairwise matching attention layer, which is widely used in Transformer-based models, is redundant for model performance.
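For context, the dot-product pairwise matching attention the abstract refers to can be sketched as follows. This is a minimal NumPy sketch of standard scaled dot-product attention, not the paper's proposed replacement; the shapes and variable names are illustrative:

```python
import numpy as np

def dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Every query is matched pairwise against every key — the layer
    the paper claims is redundant."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise matching scores, shape (n_q, n_k)
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```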
no code implementations • 3 Aug 2021 • Barbora Hudcová, Tomáš Mikolov
To show that our approach can be applied to many different computational systems, we demonstrate the results of classifying cellular automata, Turing machines, and random Boolean networks.
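As an illustration of the kind of system being classified, the following sketch measures the transient length of an elementary cellular automaton, i.e. the number of steps before a finite lattice enters a cycle. The rule number, lattice size, and initial condition are illustrative choices, not the paper's experimental setup:

```python
from itertools import count

def eca_step(state, rule):
    """One synchronous update of an elementary CA on a periodic lattice.

    Bit i of `rule` gives the new cell value for neighborhood code
    i = 4*left + 2*center + right (Wolfram rule numbering)."""
    n = len(state)
    return tuple(
        (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def transient_length(rule, init):
    """Iterate until a configuration repeats; return steps before the cycle."""
    seen = {}
    state = tuple(init)
    for t in count():
        if state in seen:
            return seen[state]  # first step at which the eventual cycle begins
        seen[state] = t
        state = eca_step(state, rule)

# Rule 110 from a single live cell on a ring of 8 cells (illustrative sizes)
init = [0] * 8
init[3] = 1
print(transient_length(110, init))
```

Since the lattice has finitely many configurations, the orbit must eventually cycle, so `transient_length` always terminates.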
no code implementations • 1 Aug 2021 • Barbora Hudcová, Tomáš Mikolov
We compute a graph showing which elementary cellular automata can be emulated by which others, and show that certain chaotic automata are the only ones that cannot non-trivially emulate any automaton.