Search Results for author: Ganesh Sistu

Found 33 papers, 6 papers with code

Multi-stream CNN based Video Semantic Segmentation for Automated Driving

no code implementations · 8 Jan 2019 · Ganesh Sistu, Sumanth Chennupati, Senthil Yogamani

We propose two simple high-level architectures based on Recurrent FCN (RFCN) and Multi-Stream FCN (MSFCN) networks.

Semantic Segmentation · Video Semantic Segmentation

Real-time Joint Object Detection and Semantic Segmentation Network for Automated Driving

no code implementations · 12 Jan 2019 · Ganesh Sistu, Isabelle Leang, Senthil Yogamani

In this paper, we present a joint multi-task network design for learning object detection and semantic segmentation simultaneously.

Depth Estimation · Object +5

AuxNet: Auxiliary tasks enhanced Semantic Segmentation for Automated Driving

no code implementations · 17 Jan 2019 · Sumanth Chennupati, Ganesh Sistu, Senthil Yogamani, Samir Rawashdeh

Decision making in automated driving is highly specific to the environment, and thus semantic segmentation plays a key role in recognizing the objects in the environment around the car.

Decision Making · Depth Estimation +4

MultiNet++: Multi-Stream Feature Aggregation and Geometric Loss Strategy for Multi-Task Learning

no code implementations · 15 Apr 2019 · Sumanth Chennupati, Ganesh Sistu, Senthil Yogamani, Samir A Rawashdeh

In this work, we propose a multi-stream multi-task network to take advantage of using feature representations from preceding frames in a video sequence for joint learning of segmentation, depth, and motion.

Autonomous Driving · Multi-Task Learning
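The abstract above describes joint learning of segmentation, depth, and motion, and the title mentions a "geometric loss strategy". One common reading of such a strategy is combining per-task losses via their geometric mean rather than a weighted sum, so that tasks with very different loss scales do not dominate training. A minimal sketch of that idea (the paper's exact formulation may differ; the loss values here are illustrative):

```python
import numpy as np

def geometric_multitask_loss(task_losses):
    """Combine per-task losses via their geometric mean, computed in
    log space for numerical stability. A small epsilon guards against
    log(0) when a task loss vanishes."""
    losses = np.asarray(task_losses, dtype=float)
    return float(np.exp(np.mean(np.log(losses + 1e-12))))

# Example: segmentation, depth, and motion losses on very different
# scales; the geometric mean keeps all three tasks influential.
total = geometric_multitask_loss([0.9, 40.0, 0.05])
```

With a plain sum, the depth loss (40.0) would swamp the other tasks; the geometric mean makes the combined gradient sensitive to the relative change of every task loss.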

FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for robust low-light Autonomous Driving

no code implementations · 11 Oct 2019 · Hazem Rashed, Mohamed Ramzy, Victor Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani

In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors.

Autonomous Driving · Moving Object Detection +2

FisheyeSuperPoint: Keypoint Detection and Description Network for Fisheye Images

no code implementations · 27 Feb 2021 · Anna Konrad, Ciarán Eising, Ganesh Sistu, John McDonald, Rudi Villing, Senthil Yogamani

Keypoint detection and description is a commonly used building block in computer vision systems, particularly for robotics and autonomous driving.

Autonomous Driving · Homography Estimation +1

Woodscape Fisheye Semantic Segmentation for Autonomous Driving -- CVPR 2021 OmniCV Workshop Challenge

no code implementations · 17 Jul 2021 · Saravanabalagi Ramachandran, Ganesh Sistu, John McDonald, Senthil Yogamani

This challenge served as a medium to investigate the challenges and new methodologies for handling the complexities of perception on fisheye images.

Autonomous Driving · Semantic Segmentation

Neuroevolutionary Multi-objective approaches to Trajectory Prediction in Autonomous Vehicles

no code implementations · 4 May 2022 · Fergal Stapleton, Edgar Galván, Ganesh Sistu, Senthil Yogamani

The incentive for using Evolutionary Algorithms (EAs) for the automated optimization and training of deep neural networks (DNNs), a process referred to as neuroevolution, has gained momentum in recent years.

Autonomous Vehicles · Evolutionary Algorithms +1

ViT-BEVSeg: A Hierarchical Transformer Network for Monocular Birds-Eye-View Segmentation

1 code implementation · 31 May 2022 · Pramit Dutta, Ganesh Sistu, Senthil Yogamani, Edgar Galván, John McDonald

In this paper, we evaluate the use of vision transformers (ViT) as a backbone architecture to generate BEV maps.

Segmentation

Fast and Efficient Scene Categorization for Autonomous Driving using VAEs

no code implementations · 26 Oct 2022 · Saravanabalagi Ramachandran, Jonathan Horgan, Ganesh Sistu, John McDonald

We train a Variational Autoencoder in an unsupervised manner, mapping images to a constrained multi-dimensional latent space and using the latent vectors as compact embeddings that serve as global descriptors for the images.

Autonomous Driving · object-detection +4
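The snippet above describes using VAE latent vectors as compact global descriptors for scene categorization. A minimal sketch of that retrieval step, with a toy stand-in for the trained encoder (`encode_mu`, the 16-D latent size, and the random-projection "encoder" are all illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def descriptor(encode_mu, image):
    """Use the encoder's latent vector as a compact global descriptor,
    L2-normalised so that a dot product equals cosine similarity."""
    z = encode_mu(image)
    return z / (np.linalg.norm(z) + 1e-12)

def nearest_scene(query, gallery):
    """Index of the stored descriptor most similar to the query."""
    return int(np.argmax(gallery @ query))

# Toy stand-in encoder: a fixed random projection of 8x8 "images"
# into a 16-D latent space (a real system would use the trained VAE).
rng = np.random.default_rng(0)
proj = rng.standard_normal((16, 64))
encode_mu = lambda img: proj @ img.ravel()

gallery = np.stack([descriptor(encode_mu, rng.standard_normal((8, 8)))
                    for _ in range(10)])
idx = nearest_scene(descriptor(encode_mu, rng.standard_normal((8, 8))), gallery)
```

Because the descriptors are small fixed-length vectors, comparing a query against a large gallery reduces to a single matrix-vector product, which is what makes this kind of embedding fast for scene categorization.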

Revisiting Modality Imbalance In Multimodal Pedestrian Detection

no code implementations · 24 Feb 2023 · Arindam Das, Sudip Das, Ganesh Sistu, Jonathan Horgan, Ujjwal Bhattacharya, Edward Jones, Martin Glavin, Ciarán Eising

Multimodal learning, particularly for pedestrian detection, has recently received emphasis due to its capability to function equally well in several critical autonomous driving scenarios such as low-light, night-time, and adverse weather conditions.

Autonomous Driving · Pedestrian Detection

Near Field iToF LIDAR Depth Improvement from Limited Number of Shots

no code implementations · 14 Apr 2023 · Mena Nagiub, Thorsten Beuth, Ganesh Sistu, Heinrich Gotzig, Ciarán Eising

Indirect Time of Flight LiDARs can indirectly calculate the scene's depth from the phase shift angle between transmitted and received laser signals with amplitudes modulated at a predefined frequency.
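The phase-to-depth relation the abstract refers to is standard for amplitude-modulated iToF sensors: the round trip covers twice the depth, so d = c·Δφ / (4π·f_mod), with range unambiguous only until the phase wraps at 2π. A minimal sketch (the 20 MHz modulation frequency is an illustrative value, not taken from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(phase_shift_rad, mod_freq_hz):
    """Depth from the phase shift between transmitted and received
    signals: d = c * phi / (4 * pi * f). The factor 4*pi (not 2*pi)
    accounts for the light travelling to the target and back."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def max_unambiguous_range(mod_freq_hz):
    """Phase wraps at 2*pi, so depth is unambiguous only up to c/(2f)."""
    return C / (2.0 * mod_freq_hz)

d = itof_depth(math.pi / 2, 20e6)  # quarter-cycle shift at 20 MHz
r = max_unambiguous_range(20e6)
```

The wrap-around at c/(2f) is why near-field operation and the number of measurement shots matter: higher modulation frequencies give finer depth resolution but a shorter unambiguous range.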

Towards a performance analysis on pre-trained Visual Question Answering models for autonomous driving

1 code implementation · 18 Jul 2023 · Kaavya Rekanar, Ciarán Eising, Ganesh Sistu, Martin Hayes

This short paper presents a preliminary analysis of three popular Visual Question Answering (VQA) models, namely ViLBERT, ViLT, and LXMERT, in the context of answering questions relating to driving scenarios.

Autonomous Driving · Model Selection +2

Self-Supervised Online Camera Calibration for Automated Driving and Parking Applications

no code implementations · 16 Aug 2023 · Ciarán Hogan, Ganesh Sistu, Ciarán Eising

The framework is self-supervised and doesn't require any labelling or supervision to learn the calibration parameters.

Autonomous Vehicles · Camera Calibration

Fisheye Camera and Ultrasonic Sensor Fusion For Near-Field Obstacle Perception in Bird's-Eye-View

no code implementations · 1 Feb 2024 · Arindam Das, Sudarshan Paul, Niko Scholz, Akhilesh Kumar Malviya, Ganesh Sistu, Ujjwal Bhattacharya, Ciarán Eising

Therefore, we present, to our knowledge, the first end-to-end multimodal fusion model tailored for efficient obstacle perception in a bird's-eye-view (BEV) perspective, utilizing fisheye cameras and ultrasonic sensors.

Autonomous Driving · Sensor Fusion
