Search Results for author: Sio-Hoi Ieng

Found 8 papers, 2 papers with code

Adaptive Global Decay Process for Event Cameras

1 code implementation CVPR 2023 Urbano Miguel Nunes, Ryad Benosman, Sio-Hoi Ieng

To achieve this, at least one of three main strategies is applied, namely: 1) constant temporal decay or fixed time window, 2) constant number of events, and 3) flow-based lifetime of events.

Event-based vision
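
The three strategies named in the snippet above are easy to illustrate. Below is a minimal sketch of the first two, assuming events are (x, y, t, p) tuples with timestamps in microseconds; all class names and parameters are illustrative, not taken from the paper's code.

```python
from collections import deque

# Sketches of two event-retention strategies; the (x, y, t, p) event
# format and every name here are assumptions for illustration only.

class FixedTimeWindow:
    """Strategy 1: keep only events younger than a fixed time window."""
    def __init__(self, window_us=10_000):
        self.window_us = window_us
        self.events = deque()

    def push(self, event):
        x, y, t, p = event
        self.events.append(event)
        # Drop events older than the window relative to the newest timestamp.
        while self.events and t - self.events[0][2] > self.window_us:
            self.events.popleft()

class ConstantCount:
    """Strategy 2: keep a constant number of the most recent events."""
    def __init__(self, n=5_000):
        self.events = deque(maxlen=n)  # deque evicts the oldest automatically

    def push(self, event):
        self.events.append(event)
```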

A Framework for Event-based Computer Vision on a Mobile Device

1 code implementation 13 May 2022 Gregor Lenz, Serge Picaud, Sio-Hoi Ieng

We present the first publicly available Android framework to stream data from an event camera directly to a mobile phone.

Face Detection Gesture Recognition +2

Event-based Face Detection and Tracking in the Blink of an Eye

no code implementations 27 Mar 2018 Gregor Lenz, Sio-Hoi Ieng, Ryad Benosman

We rely on a new feature that has never been used for such a task: the detection of eye blinks.

Face Detection Position
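
The intuition behind the blink feature lends itself to a short sketch: a blink produces a brief, strong burst of events in each eye region, nearly simultaneously in both. The thresholds, region format, and function names below are assumptions for illustration, not the paper's detector.

```python
# Hypothetical sketch of blink detection on an event stream.
# Events are (x, y, t, p) tuples; timestamps in microseconds.

def blink_score(events, t_now, roi, window_us=50_000):
    """Count recent events inside a region of interest (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    return sum(1 for (x, y, t, p) in events
               if t_now - t <= window_us and x0 <= x < x1 and y0 <= y < y1)

def is_blink(events, t_now, left_eye_roi, right_eye_roi, threshold=200):
    # A blink fires both eye regions near-simultaneously, which helps
    # reject other local motion that affects only one region.
    return (blink_score(events, t_now, left_eye_roi) > threshold and
            blink_score(events, t_now, right_eye_roi) > threshold)
```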

Event-Based Features Selection and Tracking from Intertwined Estimation of Velocity and Generative Contours

no code implementations 19 Nov 2018 Laurent Dardelet, Sio-Hoi Ieng, Ryad Benosman

This paper presents a new event-based method for detecting and tracking features from the output of an event-based camera.

When Conventional machine learning meets neuromorphic engineering: Deep Temporal Networks (DTNets) a machine learning framework allowing to operate on Events and Frames and implantable on Tensor Flow Like Hardware

no code implementations 19 Nov 2018 Marco Macanovic, Fabian Chersi, Felix Rutard, Sio-Hoi Ieng, Ryad Benosman

We introduce in this paper the principle of Deep Temporal Networks, which add time to convolutional networks by applying deep integration principles not only to spatial information but also to increasingly large temporal windows.

BIG-bench Machine Learning
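
One plausible reading of "increasingly large temporal windows" is to rasterize per-pixel event counts over windows of growing duration and stack them as input channels for a convolutional network. The sketch below shows that reading only; it is our assumption, not the DTNet architecture, and all names are illustrative.

```python
import numpy as np

# Illustrative only: build a multi-scale temporal tensor from events,
# one channel per window size. Events are (x, y, t, p) tuples.

def temporal_window_tensor(events, t_now, height, width,
                           windows_us=(1_000, 10_000, 100_000)):
    """Return a (len(windows_us), H, W) tensor of per-pixel event counts."""
    tensor = np.zeros((len(windows_us), height, width), dtype=np.float32)
    for x, y, t, p in events:
        age = t_now - t
        for c, w in enumerate(windows_us):
            if age <= w:  # an event contributes to every window it fits in
                tensor[c, y, x] += 1.0
    return tensor
```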

Real-time high speed motion prediction using fast aperture-robust event-driven visual flow

no code implementations 27 Nov 2018 Himanshu Akolkar, Sio-Hoi Ieng, Ryad Benosman

Optical flow is a crucial component of the feature space for early visual processing of dynamic scenes, especially in new applications such as self-driving vehicles, drones, and autonomous robots.

Motion Estimation motion prediction +1
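
As a generic illustration of the idea behind flow-based motion prediction (not the paper's aperture-robust estimator), the sketch below uses the classic event-based flow construction: fit a local plane to recent event timestamps, take the plane gradient as the inverse velocity, then extrapolate linearly. The least-squares fit and all names are our assumptions.

```python
import numpy as np

# Generic event-based flow sketch: fit t = a*x + b*y + c over a local
# neighborhood of events; the velocity is grad(t) / |grad(t)|^2.

def local_flow(neighborhood):
    """neighborhood: list of (x, y, t) for recent events near a pixel."""
    pts = np.asarray(neighborhood, dtype=np.float64)
    if len(pts) < 3:
        return 0.0, 0.0  # not enough events for a plane fit
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return 0.0, 0.0  # flat timestamp surface: no measurable motion
    return a / g2, b / g2  # velocity in pixels per time unit

def predict(x, y, vx, vy, dt):
    # Short-horizon motion prediction by linear extrapolation.
    return x + vx * dt, y + vy * dt
```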

An event-based implementation of saliency-based visual attention for rapid scene analysis

no code implementations 10 Jan 2024 Camille Simon Chane, Ernst Niebur, Ryad Benosman, Sio-Hoi Ieng

The saliency map model, originally developed to understand the process of selective attention in the primate visual system, has also been extensively used in computer vision.
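
The center-surround principle at the heart of the classic saliency map model (Itti, Koch & Niebur) fits in a few lines: conspicuity arises where a fine-scale "center" response differs from a coarse-scale "surround" response. The sketch below is a simplification, assuming a single intensity channel and arbitrary scales; it is not the paper's event-based implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simplified center-surround saliency over one intensity channel.
# Scales and normalization are illustrative choices.

def saliency(intensity, center_sigmas=(1, 2), surround_factor=4):
    """intensity: 2-D float array; returns a saliency map in [0, 1]."""
    smap = np.zeros_like(intensity, dtype=np.float64)
    for sigma in center_sigmas:
        center = gaussian_filter(intensity, sigma)
        surround = gaussian_filter(intensity, sigma * surround_factor)
        smap += np.abs(center - surround)  # center-surround difference
    smap -= smap.min()
    peak = smap.max()
    return smap / peak if peak > 0 else smap
```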
