In 2020-2021, we celebrated that many of the basic ideas behind the deep learning revolution were published three decades ago, within fewer than 12 months, during our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich.
UDRL generalizes to achieve high rewards or other goals through input commands such as: get lots of reward within at most so much time!
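The command-as-input idea can be illustrated with a minimal sketch (not the paper's implementation): the desired return and time horizon are simply appended to the observation before the policy scores actions. The policy weights, scaling constants, and two-action setup below are all hypothetical.

```python
# Minimal UDRL-style sketch: the command "achieve desired_return within
# desired_horizon steps" is fed to the policy as extra input features.
def udrl_policy(observation, desired_return, desired_horizon, weights):
    # Concatenate observation with the (scaled) command components.
    x = list(observation) + [desired_return / 100.0, desired_horizon / 100.0]
    # A tiny linear scorer over discrete actions (hypothetical setup).
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    # Greedy action selection: argmax of the scores.
    return max(range(len(scores)), key=lambda a: scores[a])

# Usage: 2-dim observation, command "return 50 within 20 steps".
weights = [[0.1, -0.2, 0.5, 0.0],   # action 0
           [-0.3, 0.4, 0.1, 0.2]]   # action 1
action = udrl_policy([1.0, 0.5], 50.0, 20.0, weights)
```

In training, such a policy would be fit by supervised learning on past episodes, relabelled with the returns and horizons actually achieved.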
I review unsupervised or self-supervised neural networks playing minimax games in game-theoretic settings: (i) Artificial Curiosity (AC, 1990) is based on two such networks.
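The two-network curiosity principle can be sketched in a toy setting (this is an illustration, not the original 1990 system): a predictor learns to model outcomes, while the controller is intrinsically rewarded by the predictor's error, so it seeks exactly what the model cannot yet predict. The three-armed environment and learning rate are invented for the example.

```python
# Toy sketch of artificial curiosity: controller reward = predictor error.
arm_means = [0.0, 1.0, 5.0]          # hypothetical environment outcomes
predictions = [0.0, 0.0, 0.0]        # predictor's current estimates
lr = 0.5

def curiosity_step():
    # Controller: greedily pick the arm where the model is most wrong.
    arm = max(range(3), key=lambda a: abs(arm_means[a] - predictions[a]))
    outcome = arm_means[arm]
    intrinsic_reward = abs(outcome - predictions[arm])   # model's surprise
    predictions[arm] += lr * (outcome - predictions[arm])  # predictor learns
    return arm, intrinsic_reward

rewards = [curiosity_step()[1] for _ in range(20)]
```

As the predictor improves, surprise (and hence intrinsic reward) decays toward zero, so the agent's interest shifts away from what has become predictable.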
The Differentiable Neural Computer (DNC) can learn algorithmic and question answering tasks.
Audiovisual speech recognition (AVSR) is a method to alleviate the adverse effect of noise in the acoustic signal.
Then ONE is retrained in PowerPlay style (2011) on stored input/output traces of (a) ONE's copy executing the new skill and (b) previous instances of ONE whose skills are still considered worth memorizing.
A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy.
We present a lipreading system, i.e., a speech recognition system using only visual features, which uses domain-adversarial training for speaker independence.
The basic ideas of this report can be applied to many other cases where one RNN-like system exploits the algorithmic information content of another.
In contrast, Multi-Dimensional Recurrent NNs (MD-RNNs) can perceive the entire spatio-temporal context of each pixel in a few sweeps through all pixels, especially when the RNN is a Long Short-Term Memory (LSTM).
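One sweep of such a 2D recurrent net can be sketched as follows (a plain tanh unit rather than LSTM, with made-up weights): the hidden state at each pixel depends on the pixel itself plus the hidden states of its left and top neighbours, so a single raster-order sweep gives every position a summary of its entire upper-left context.

```python
import math

# One raster-order sweep of a minimal 2D recurrent net (tanh unit, not LSTM).
def md_rnn_sweep(image, w_in=0.5, w_rec=0.4):
    rows, cols = len(image), len(image[0])
    h = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            left = h[i][j - 1] if j > 0 else 0.0   # context from the left
            up = h[i - 1][j] if i > 0 else 0.0     # context from above
            h[i][j] = math.tanh(w_in * image[i][j] + w_rec * (left + up))
    return h

h = md_rnn_sweep([[1, 0], [0, 1]])
```

A full MD-RNN runs four such sweeps, one starting from each corner, so every pixel receives context from all directions; MD-LSTM replaces the tanh unit with LSTM cells.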
It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters.
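The iterative-focus idea can be sketched like this (inspired by the described system, not its actual code): a softmax over per-filter scores rescales the filter activations at each step, and the rescaled activations feed back into the next step's scores. The feedback rule below is a hypothetical stand-in for the learned attention policy.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Sketch of iterative internal attention over convolutional filter outputs.
def iterative_filter_attention(filter_activations, steps=3):
    n = len(filter_activations)
    scores = [0.0] * n                        # start with uniform attention
    for _ in range(steps):
        weights = softmax(scores)
        gated = [w * a for w, a in zip(weights, filter_activations)]
        # Hypothetical feedback rule: filters whose gated activation is
        # above the mean get their score raised, sharpening the focus.
        mean = sum(gated) / n
        scores = [s + (g - mean) for s, g in zip(scores, gated)]
    return softmax(scores)

attn = iterative_filter_attention([0.1, 0.9, 0.3])
```

Over a few steps the attention concentrates on the most strongly responding filters, which is the sequential refinement the snippet describes.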
In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning.
Do two data samples come from different distributions?
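A classic baseline answer to this question (not necessarily the method of the paper this snippet summarizes) is a permutation test: pool the two samples, reshuffle the labels many times, and see how often the shuffled difference of means exceeds the observed one.

```python
import random

# Permutation two-sample test on the difference of sample means.
def permutation_test(xs, ys, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    pooled = list(xs) + list(ys)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # p-value with +1 smoothing

# Clearly separated samples should give a small p-value.
p = permutation_test([0.1, 0.2, 0.0, 0.15, 0.05] * 4,
                     [1.1, 1.3, 0.9, 1.2, 1.0] * 4)
```

More powerful variants replace the mean-difference statistic with a learned one, e.g. the accuracy of a classifier trained to tell the two samples apart.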
Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs.
Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the famous MNIST handwritten digits benchmark.
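The on-line backpropagation recipe for a plain MLP can be sketched in a few dozen lines. This toy version trains a one-hidden-layer network on XOR rather than MNIST (data loading, the large architecture, and the paper's training tricks are omitted); the architecture and hyperparameters are illustrative.

```python
import math, random

# Minimal on-line (per-example) backpropagation for a one-hidden-layer MLP.
random.seed(1)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR task
H = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    y = sig(sum(w * hj for w, hj in zip(w2, h)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(3000):
    for x, t in data:                       # on-line: update after each example
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)          # output delta (squared error)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # hidden delta
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy
loss_after = total_loss()
```

The MNIST result in the snippet comes from the same mechanism scaled up: more layers, more units, and many passes over (deformed) training images.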
I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful.
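The compression-progress notion of interestingness can be made concrete with a small sketch: data is interesting while a better model keeps shrinking its description length. Here the "learned model" is simple delta-coding of a ramp-like byte sequence, a stand-in for whatever regularity the observer discovers; the sequence itself is invented for the example.

```python
import zlib

# Interestingness as compression progress: measure how much shorter the
# data's description becomes once a regularity (here: constant steps) is
# modelled and only the residuals need encoding.
raw = bytes((3 * i) % 256 for i in range(1000))           # ramp-like data
residuals = bytes((raw[i] - raw[i - 1]) % 256 for i in range(1, len(raw)))

bits_before = len(zlib.compress(raw))        # description length, no model
bits_after = len(zlib.compress(residuals))   # description length, with model
progress = bits_before - bits_after          # positive => subjectively simpler
```

Once the regularity is fully captured, further observations yield no progress, and by this account the data stops being interesting.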
Recurrent neural networks (RNNs) have proved effective at one-dimensional sequence learning tasks, such as speech and online handwriting recognition.
Artificial Intelligence (AI) has recently become a real formal science: the new millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents.
Is the universe computable?
Subjects: Quantum Physics; Computational Complexity; Computers and Society; Computational Physics; Popular Physics