no code implementations • 9 Mar 2022 • Ahmed Rida Sekkat, Yohan Dupuis, Varun Ravi Kumar, Hazem Rashed, Senthil Yogamani, Pascal Vasseur, Paul Honeine
In this work, we release a synthetic version of the surround-view dataset that addresses many of its weaknesses and extends it.
no code implementations • 8 Nov 2021 • Sambit Mohapatra, Mona Hodaei, Senthil Yogamani, Stefan Milz, Heinrich Gotzig, Martin Simon, Hazem Rashed, Patrick Maeder
To the best of our knowledge, this is the first work directly performing motion segmentation in LiDAR BEV space.
no code implementations • 11 Jul 2021 • Hazem Rashed, Mariam Essam, Maha Mohamed, Ahmad El Sallab, Senthil Yogamani
In this work, we explore end-to-end Moving Object Detection (MOD) on the BEV map directly using monocular images as input.
no code implementations • 22 Apr 2021 • Hazem Rashed, Ahmad El Sallab, Senthil Yogamani
In this work, we aim to leverage vehicle motion information, feeding it into the model to provide an adaptation mechanism based on ego-motion.
1 code implementation • 15 Feb 2021 • Varun Ravi Kumar, Senthil Yogamani, Hazem Rashed, Ganesh Sistu, Christian Witt, Isabelle Leang, Stefan Milz, Patrick Mäder
We obtain the state-of-the-art results on KITTI for depth estimation and pose estimation tasks and competitive performance on the other tasks.
no code implementations • 3 Dec 2020 • Hazem Rashed, Eslam Mohamed, Ganesh Sistu, Varun Ravi Kumar, Ciaran Eising, Ahmad El-Sallab, Senthil Yogamani
To the best of our knowledge, this is the first detailed study of object detection on fisheye cameras for autonomous driving scenarios.
no code implementations • 16 Aug 2020 • Eslam Mohamed, Mahmoud Ewaisha, Mennatullah Siam, Hazem Rashed, Senthil Yogamani, Waleed Hamdy, Muhammad Helmi, Ahmad El-Sallab
Moving object segmentation is a crucial task for autonomous vehicles, as it can be used to segment objects in a class-agnostic manner based on their motion cues.
no code implementations • 23 Dec 2019 • Pullarao Maddu, Wayne Doherty, Ganesh Sistu, Isabelle Leang, Michal Uricar, Sumanth Chennupati, Hazem Rashed, Jonathan Horgan, Ciaran Hughes, Senthil Yogamani
We provide a holistic overview of an industrial system covering the embedded system, use cases and the deep learning architecture.
no code implementations • 4 Dec 2019 • Michal Uricar, Ganesh Sistu, Hazem Rashed, Antonin Vobecky, Varun Ravi Kumar, Pavel Krizek, Fabian Burger, Senthil Yogamani
We propose a novel GAN-based algorithm for generating unseen patterns of soiled images.
no code implementations • 1 Dec 2019 • Mohamed Ramzy, Hazem Rashed, Ahmad El Sallab, Senthil Yogamani
The trajectory of the ego-vehicle is planned based on the future states of detected moving objects.
no code implementations • 11 Oct 2019 • Hazem Rashed, Mohamed Ramzy, Victor Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors.
no code implementations • 30 Aug 2019 • Marie Yahiaoui, Hazem Rashed, Letizia Mariotti, Ganesh Sistu, Ian Clancy, Lucie Yahiaoui, Varun Ravi Kumar, Senthil Yogamani
In this work, we propose a CNN architecture for moving object detection using fisheye images captured in an autonomous driving environment.
no code implementations • 1 Jun 2019 • Khaled El Madawy, Hazem Rashed, Ahmad El Sallab, Omar Nasr, Hanan Kamel, Senthil Yogamani
Motivated by the maturity of semantic segmentation on image data, we explore sensor-fusion-based 3D segmentation.
1 code implementation • ICCV 2019 • Senthil Yogamani, Ciaran Hughes, Jonathan Horgan, Ganesh Sistu, Padraig Varley, Derek O'Dea, Michal Uricar, Stefan Milz, Martin Simon, Karl Amende, Christian Witt, Hazem Rashed, Sumanth Chennupati, Sanjaya Nayak, Saquib Mansoor, Xavier Perroton, Patrick Perez
Fisheye cameras are commonly employed for obtaining a large field of view in surveillance, augmented reality and in particular automotive applications.
no code implementations • 11 Jan 2019 • Hazem Rashed, Senthil Yogamani, Ahmad El-Sallab, Pavel Krizek, Mohamed El-Helw
We also make use of the ground truth optical flow in Virtual KITTI to serve as an ideal estimator and a standard Farneback optical flow algorithm to study the effect of noise.
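The last entry contrasts an ideal (ground-truth) optical-flow estimator with a noisy one (Farneback) for motion cues. As a toy illustration of that setup, and not the authors' actual pipeline, the sketch below thresholds the magnitude of a flow field to obtain a class-agnostic motion mask, then compares the mask from an ideal flow field against one from a noise-perturbed flow (the noise here is a stand-in for estimator error; all names and parameters are illustrative):

```python
import numpy as np

def motion_mask(flow, thresh=1.5):
    """Binary motion mask from an HxWx2 optical-flow field by thresholding its magnitude."""
    mag = np.linalg.norm(flow, axis=-1)
    return mag > thresh

rng = np.random.default_rng(0)

# Ideal flow: a 20x20 patch moving 3 px/frame horizontally on a static background.
gt_flow = np.zeros((64, 64, 2))
gt_flow[20:40, 20:40] = [3.0, 0.0]

# Noisy flow: simulates an imperfect estimator (e.g. Farneback) via additive Gaussian noise.
noisy_flow = gt_flow + rng.normal(scale=0.5, size=gt_flow.shape)

ideal = motion_mask(gt_flow)
noisy = motion_mask(noisy_flow)

# IoU between the two masks quantifies how much the noise degrades the motion cue.
iou = np.logical_and(ideal, noisy).sum() / np.logical_or(ideal, noisy).sum()
print(f"IoU between ideal and noisy motion masks: {iou:.3f}")
```

Raising the noise scale or lowering the threshold degrades the IoU, which is the kind of noise-sensitivity effect the entry describes studying.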