no code implementations • 10 Jan 2024 • Camille Simon Chane, Ernst Niebur, Ryad Benosman, Sio-Hoi Ieng
The saliency map model, originally developed to understand the process of selective attention in the primate visual system, has also been extensively used in computer vision.
no code implementations • ICCV 2023 • Urbano Miguel Nunes, Laurent Udo Perrinet, Sio-Hoi Ieng
In this paper, we address the problem of time-to-contact (TTC) estimation using a single event camera.
1 code implementation • CVPR 2023 • Urbano Miguel Nunes, Ryad Benosman, Sio-Hoi Ieng
To achieve this, at least one of three main strategies is applied, namely: 1) constant temporal decay or fixed time window, 2) constant number of events, and 3) flow-based lifetime of events.
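The first two strategies listed above can be illustrated with a short sketch. This is a hypothetical example, not code from the paper: the event layout, field names, and window sizes are all illustrative assumptions.

```python
import numpy as np

# Hypothetical event stream: each event carries (x, y, timestamp, polarity).
# Field names and the microsecond timebase are illustrative assumptions.
rng = np.random.default_rng(0)
n = 1000
events = np.zeros(n, dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "i1")])
events["t"] = np.sort(rng.integers(0, 100_000, n))  # sorted timestamps (us)

def slice_fixed_window(events, window_us):
    """Strategy 1: group events into slices of constant duration."""
    bins = (events["t"] - events["t"][0]) // window_us
    return [events[bins == b] for b in range(int(bins.max()) + 1)]

def slice_fixed_count(events, count):
    """Strategy 2: group events into slices of a constant number of events."""
    return [events[i:i + count] for i in range(0, len(events), count)]

slices_t = slice_fixed_window(events, 10_000)  # 10 ms windows
slices_n = slice_fixed_count(events, 200)      # 200 events per slice
```

The third strategy (flow-based lifetime) assigns each event its own duration from the local optical flow, so it has no equally simple fixed-size analogue.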
1 code implementation • 13 May 2022 • Gregor Lenz, Serge Picaud, Sio-Hoi Ieng
We present the first publicly available Android framework to stream data from an event camera directly to a mobile phone.
no code implementations • 27 Nov 2018 • Himanshu Akolkar, Sio-Hoi Ieng, Ryad Benosman
Optical flow is a crucial component of the feature space for early visual processing of dynamic scenes, especially in new applications such as self-driving vehicles, drones, and autonomous robots.

no code implementations • 19 Nov 2018 • Laurent Dardelet, Sio-Hoi Ieng, Ryad Benosman
This paper presents a new event-based method for detecting and tracking features from the output of an event-based camera.
no code implementations • 19 Nov 2018 • Marco Macanovic, Fabian Chersi, Felix Rutard, Sio-Hoi Ieng, Ryad Benosman
In this paper we introduce the principle of Deep Temporal Networks, which add time to convolutional networks by applying deep integration principles not only to spatial information but also to increasingly large temporal windows.
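The idea of integrating over increasingly large temporal windows can be sketched as follows. This is a minimal stand-in, not the paper's architecture: average pooling substitutes for a learned temporal convolution, and all shapes and window sizes are illustrative assumptions.

```python
import numpy as np

def temporal_layer(frames, window):
    """Integrate over a sliding temporal window (a simple stand-in for a
    learned temporal convolution; here just an average over `window` steps)."""
    T = frames.shape[0]
    return np.stack([frames[t:t + window].mean(axis=0)
                     for t in range(T - window + 1)])

# Toy input: a stack of 16 frames of size 8x8, axes (time, H, W).
frames = np.random.default_rng(0).random((16, 8, 8))

h1 = temporal_layer(frames, 2)  # small temporal window at the first layer
h2 = temporal_layer(h1, 4)      # larger effective window deeper in the stack
```

Because each layer pools over its own window, the effective temporal receptive field grows with depth, which is the kind of increasingly large temporal integration the abstract describes.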
no code implementations • 27 Mar 2018 • Gregor Lenz, Sio-Hoi Ieng, Ryad Benosman
We rely on a feature that has never been used for such a task: the detection of eye blinks.