no code implementations • 4 May 2018 • Bogdan Penkovsky, Laurent Larger, Daniel Brunner
In this work, we propose a new approach towards the efficient optimization and implementation of reservoir computing hardware, reducing the required domain-expert knowledge and optimization effort.
no code implementations • 14 Nov 2017 • Julian Bueno, Sheler Maktoobi, Luc Froehly, Ingo Fischer, Maxime Jacquot, Laurent Larger, Daniel Brunner
A realization of photonic neural networks with numerous nonlinear nodes in fully parallel and efficient learning hardware has so far been lacking.
no code implementations • 13 Oct 2015 • Lyudmila Grigoryeva, Julie Henriques, Laurent Larger, Juan-Pablo Ortega
This paper addresses the reservoir design problem in the context of delay-based reservoir computers for multidimensional input signals, parallel architectures, and real-time multitasking.
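As a rough illustration of the delay-based reservoir architecture this entry refers to, the following Python sketch time-multiplexes a single nonlinear node into virtual nodes and trains a standard linear readout by ridge regression. The sine nonlinearity, the parameter values, and the toy recall task are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

# Minimal sketch of a delay-based reservoir: one nonlinear node with delayed
# feedback, time-multiplexed into N "virtual" nodes. The nonlinearity and all
# parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)
N = 50                     # virtual nodes per delay loop (assumed)
eta, gamma = 0.5, 0.05     # feedback and input scaling (assumed)

def delay_reservoir(u, mask):
    """Map a 1-D input sequence u to reservoir states of shape (len(u), N)."""
    states = np.zeros((len(u), N))
    x = np.zeros(N)                      # virtual-node values over one delay span
    for t, u_t in enumerate(u):
        j = gamma * mask * u_t           # masked (time-multiplexed) input
        x = np.sin(eta * x + j)          # nonlinear node applied per virtual node
        states[t] = x
    return states

# Train the linear readout with ridge regression (the usual reservoir-computing readout).
u = rng.uniform(-1, 1, 500)
target = np.roll(u, 3)                   # toy task: recall the input 3 steps back
S = delay_reservoir(u, mask=rng.choice([-1.0, 1.0], N))
w = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ target)
print("train NMSE:", np.mean((S @ w - target) ** 2) / np.var(target))
```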
no code implementations • 21 Jul 2019 • Nadezhda Semenova, Xavier Porte, Louis Andreoli, Maxime Jacquot, Laurent Larger, Daniel Brunner
The system under study consists of noisy linear nodes, and we investigate the signal-to-noise ratio at the network's outputs, which sets the upper limit on such a system's computing accuracy.
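A minimal sketch of how such an output signal-to-noise ratio can be estimated numerically is given below; the single-layer topology, the Gaussian noise model, and the noise strength are assumptions for illustration, not the configuration studied in the paper.

```python
import numpy as np

# Assumed setup: one layer of noisy linear nodes feeding a linear readout.
# Each node adds Gaussian noise to its weighted sum; the output SNR is
# estimated by repeating the same input pattern many times.

rng = np.random.default_rng(1)
n_in, n_nodes, trials = 10, 100, 2000
W_in = rng.normal(size=(n_nodes, n_in)) / np.sqrt(n_in)
w_out = rng.normal(size=n_nodes) / np.sqrt(n_nodes)
sigma = 0.01                              # per-node noise strength (assumed)

x = rng.normal(size=n_in)                 # one fixed input pattern
outputs = []
for _ in range(trials):
    nodes = W_in @ x + sigma * rng.normal(size=n_nodes)   # noisy linear nodes
    outputs.append(w_out @ nodes)
outputs = np.array(outputs)

snr = np.mean(outputs) ** 2 / np.var(outputs)
print("output SNR estimate:", snr)
```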
no code implementations • 23 Jul 2019 • Xavier Porte, Louis Andreoli, Maxime Jacquot, Laurent Larger, Daniel Brunner
However, important questions regarding the impact of reservoir size and learning routines on convergence speed during learning remain unaddressed.
no code implementations • 17 Dec 2019 • Johnny Moughames, Xavier Porte, Michael Thiel, Gwenn Ulliac, Maxime Jacquot, Laurent Larger, Muamer Kadic, Daniel Brunner
Photonic waveguides are prime candidates for integrated and parallel photonic interconnects.
no code implementations • 27 Mar 2020 • Louis Andreoli, Xavier Porte, Stéphane Chrétien, Maxime Jacquot, Laurent Larger, Daniel Brunner
A highly efficient hardware integration of neural networks benefits from realizing nonlinearity, network connectivity, and learning fully in a physical substrate.
no code implementations • 12 Mar 2021 • Nadezhda Semenova, Laurent Larger, Daniel Brunner
Here, we determine for the first time how noise propagates in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
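The sketch below illustrates the kind of noise-propagation experiment this entry describes: the same input is passed repeatedly through a deep fully connected network whose neurons inject noise, and the spread of the outputs is measured. The tanh nonlinearity, the additive Gaussian noise model, and the random (rather than trained) weights are assumptions for illustration only.

```python
import numpy as np

# Assumed setup: tanh neurons, additive Gaussian noise injected at every neuron
# of every layer, random weights standing in for trained ones. The output
# variance over repeated passes of one fixed input indicates how noise
# accumulates with depth.

rng = np.random.default_rng(2)
widths = [20, 50, 50, 50, 1]     # layer sizes (illustrative)
Ws = [rng.normal(size=(m, n)) / np.sqrt(n) for n, m in zip(widths[:-1], widths[1:])]
sigma, trials = 0.01, 2000       # per-neuron noise strength and repetitions (assumed)

def noisy_forward(x):
    a = x
    for W in Ws:
        a = np.tanh(W @ a) + sigma * rng.normal(size=W.shape[0])  # noisy nonlinear neuron
    return a[0]

x = rng.normal(size=widths[0])
outs = np.array([noisy_forward(x) for _ in range(trials)])
print("output mean:", outs.mean(), " noise variance:", outs.var())
```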