no code implementations • 15 Mar 2023 • Federico Paredes-Vallés, Jesse Hagenaars, Julien Dupeyroux, Stein Stroobants, Yingfu Xu, Guido de Croon
Robotic experiments show a successful sim-to-real transfer of the fully learned neuromorphic pipeline.
no code implementations • 9 Mar 2023 • Federico Paredes-Vallés, Kirk Y. W. Scheper, Christophe De Wagter, Guido C. H. E. de Croon
Event cameras have recently gained significant traction since they open up new avenues for low-latency and low-power solutions to complex computer vision problems.
no code implementations • 24 Nov 2022 • YiLun Wu, Federico Paredes-Vallés, Guido C. H. E. de Croon
Inspired by frame-based methods, state-of-the-art event-based optical flow networks rely on the explicit construction of correlation volumes, which are expensive to compute and store, and which prevent these networks from estimating high-resolution flow.
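For context, a minimal sketch (not the authors' code) of how an all-pairs correlation volume between two feature maps is typically built; it shows why the memory footprint grows with the square of the spatial resolution, which is what makes high-resolution flow prohibitive:

```python
# Minimal sketch of an all-pairs correlation volume, as used in RAFT-style
# flow networks; illustrative only, not the paper's implementation.
import torch

def correlation_volume(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """f1, f2: feature maps of shape (B, C, H, W).
    Returns all-pairs correlations of shape (B, H, W, H, W)."""
    b, c, h, w = f1.shape
    f1 = f1.view(b, c, h * w)                    # (B, C, HW)
    f2 = f2.view(b, c, h * w)                    # (B, C, HW)
    corr = torch.einsum("bci,bcj->bij", f1, f2)  # dot product between every pair of locations
    return corr.view(b, h, w, h, w) / c ** 0.5   # memory cost scales as (H*W)^2
```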
no code implementations • 14 Sep 2022 • Rik J. Bouwmeester, Federico Paredes-Vallés, Guido C. H. E. de Croon
In this work, we present NanoFlowNet, a lightweight convolutional neural network for real-time dense optical flow estimation on edge computing hardware.
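Purely as an illustration of the kind of model meant by a "lightweight CNN for dense optical flow", here is a toy fully-convolutional network that maps a stacked frame pair to a 2-channel flow field; the layer widths and structure are assumptions, not NanoFlowNet's actual design:

```python
# Toy dense-flow regressor; architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class TinyFlowNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # two grayscale frames stacked
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),     # (u, v) flow per pixel
        )

    def forward(self, frame_pair: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(frame_pair))

# Usage: TinyFlowNet()(torch.randn(1, 2, 128, 160)) -> flow of shape (1, 2, 128, 160)
```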
no code implementations • 30 Sep 2021 • Christophe De Wagter, Federico Paredes-Vallés, Nilay Sheth, Guido de Croon
Robotics is the next frontier in the progress of Artificial Intelligence (AI), as the real world in which robots operate represents an enormous, complex, continuous state space with inherent real-time requirements.
no code implementations • NeurIPS 2021 • Jesse Hagenaars, Federico Paredes-Vallés, Guido de Croon
We focus on the complex task of learning to estimate optical flow from event-based camera inputs in a self-supervised manner, and modify the state-of-the-art ANN training pipeline to encode minimal temporal information in its inputs.
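One simple way to encode only minimal temporal information is to collapse the event stream into per-pixel, per-polarity event counts, discarding timestamps; the sketch below assumes this representation for illustration and is not necessarily the exact encoding used in the paper:

```python
# Sketch: event stream -> per-pixel, per-polarity count image (timestamps dropped).
# Assumed representation for illustration only.
import torch

def event_count_image(events: torch.Tensor, height: int, width: int) -> torch.Tensor:
    """events: (N, 4) tensor with columns (x, y, timestamp, polarity in {0, 1}).
    Returns a (2, H, W) tensor of event counts."""
    counts = torch.zeros(2, height, width)
    x = events[:, 0].long()
    y = events[:, 1].long()
    p = events[:, 3].long()
    counts.index_put_((p, y, x), torch.ones(events.shape[0]), accumulate=True)
    return counts
```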
no code implementations • 1 Nov 2020 • Julien Dupeyroux, Jesse Hagenaars, Federico Paredes-Vallés, Guido de Croon
However, a major challenge for using such processors on robotic platforms is the reality gap between simulation and the real world.
1 code implementation • 28 Jul 2018 • Federico Paredes-Vallés, Kirk Y. W. Scheper, Guido C. H. E. de Croon
Convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception, respectively, while global motion selectivity emerges in a final fully-connected layer.
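To make the multi-delay idea concrete, here is a minimal sketch of a convolution whose input synapses see the input at several past time steps (transmission delays), so spatio-temporal, i.e. local motion, patterns become detectable. This dense, non-spiking version is an assumption for clarity, not the paper's spiking implementation:

```python
# Sketch of a convolution with multiple synaptic transmission delays.
# Illustrative assumption; the paper's network is spiking.
import torch
import torch.nn as nn

class MultiDelayConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, delays=(0, 1, 2, 3)):
        super().__init__()
        self.delays = delays
        # One set of synaptic weights per transmission delay, fused into a single conv.
        self.conv = nn.Conv2d(in_channels * len(delays), out_channels, kernel_size=3, padding=1)

    def forward(self, history: list[torch.Tensor]) -> torch.Tensor:
        """history: past input maps, history[0] = current step, history[d] = d steps ago."""
        delayed = torch.cat([history[d] for d in self.delays], dim=1)
        return self.conv(delayed)
```

With a single delay (delays=(0,)) this reduces to a plain feature detector, whereas multiple delays let a unit respond to the direction and speed of local motion.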