We train these networks to predict the dynamics of delay-dynamical and spatio-temporal systems at a single, fixed system size.
Reservoir computers are powerful tools for chaotic time series prediction.
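As a hedged illustration of this claim, the following minimal echo state network (one common form of reservoir computer) learns one-step-ahead prediction of a chaotic signal; all parameter values (reservoir size, leak rate, spectral radius, ridge strength) are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal echo state network (ESN) sketch: only the linear readout W_out
# is trained; the random reservoir weights stay fixed.
rng = np.random.default_rng(0)

n_res = 200            # reservoir size (assumed)
leak = 0.3             # leak rate (assumed)
spectral_radius = 0.9  # controls the reservoir's fading memory

# Random input and recurrent weights; rescale W to the target spectral radius.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

# Chaotic driver signal: the logistic map at r = 3.9.
u = np.empty(1200)
u[0] = 0.4
for t in range(1199):
    u[t + 1] = 3.9 * u[t] * (1 - u[t])

# Run the reservoir and collect states, discarding a washout transient.
x = np.zeros(n_res)
states = []
for t in range(1199):
    x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u[t] + W @ x)
    states.append(x.copy())
X = np.array(states[200:])   # states x(t) for t >= 200
y = u[201:]                  # targets u(t+1)

# Ridge-regression readout (the only trained part of the system).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = np.mean((pred - y) ** 2)
print(f"training MSE: {mse:.2e}")
```

Because training reduces to a single linear solve, this setup is cheap to fit even when the reservoir itself is a physical (e.g. photonic) system.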
In this work, we show that the effectiveness of the internal fading memory depends significantly on the properties of the signal to be processed.
We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
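The core idea of such a folding is time multiplexing: virtual network nodes are visited sequentially by one physical neuron, and connections are realized through delayed copies of its own signal. The toy sketch below is not the paper's scheme; it only demonstrates, under that assumption, that a serial single-neuron pass over one dense layer reproduces the parallel layer evaluation exactly.

```python
import numpy as np

# Toy illustration (not the authors' construction): emulate one dense layer
# y = tanh(W @ x) with a single scalar neuron that is time-multiplexed.
rng = np.random.default_rng(1)
n_in, n_out = 4, 3
W = rng.normal(size=(n_out, n_in))  # hypothetical layer weights
x = rng.normal(size=n_in)

# Conventional, fully parallel layer evaluation.
y_parallel = np.tanh(W @ x)

# Single-neuron emulation: at virtual time step j the neuron sees the input
# through weighted delayed taps (one delay per connection) and applies its
# nonlinearity once per step, producing one virtual node per time step.
y_serial = np.empty(n_out)
for j in range(n_out):
    a = sum(W[j, i] * x[i] for i in range(n_in))  # delayed-tap accumulation
    y_serial[j] = np.tanh(a)

assert np.allclose(y_parallel, y_serial)
print("serial single-neuron pass matches parallel layer")
```

Stacking such passes in time is what lets a network of, in principle, arbitrary width and depth collapse onto one neuron at the cost of serial processing time.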
Photonic neural networks with numerous nonlinear nodes, realized in fully parallel hardware with efficient learning, have so far been lacking.
We perform physical experiments demonstrating that the obtained input encodings work well in practice, and we show that the optimized systems significantly outperform the common Reservoir Computing approach.