Search Results for author: Senthil Yogamani

Found 80 papers, 9 papers with code

LetsMap: Unsupervised Representation Learning for Semantic BEV Mapping

no code implementations • 29 May 2024 • Nikhil Gosala, Kürsat Petek, B Ravi Kiran, Senthil Yogamani, Paulo Drews-Jr, Wolfram Burgard, Abhinav Valada

Our approach pretrains the network to independently reason about scene geometry and scene semantics using two disjoint neural pathways in an unsupervised manner and then finetunes it for the task of semantic BEV mapping using only a small fraction of labels in the BEV.

Autonomous Driving Decision Making +1

FisheyeDetNet: 360° Surround view Fisheye Camera based Object Detection System for Autonomous Driving

no code implementations • 20 Apr 2024 • Ganesh Sistu, Senthil Yogamani

To the best of our knowledge, this is the first detailed study on object detection on fisheye cameras for autonomous driving scenarios.

Autonomous Driving Instance Segmentation +5

DaF-BEVSeg: Distortion-aware Fisheye Camera based Bird's Eye View Segmentation with Occlusion Reasoning

no code implementations • 9 Apr 2024 • Senthil Yogamani, David Unger, Venkatraman Narayanan, Varun Ravi Kumar

We implement a baseline by applying cylindrical rectification on the fisheye images and using a standard LSS-based BEV segmentation model.

Scene Understanding Segmentation +1
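The baseline above rests on cylindrical rectification: warping the fisheye image onto a cylindrical image plane so that vertical structures stay vertical before a standard BEV segmentation model is applied. Below is a minimal sketch of building such a warp map, assuming an idealized equidistant fisheye model (r = f·θ) with made-up focal lengths and principal point; the paper's actual camera model and calibration are not given here.

```python
import numpy as np

def cylindrical_rectify_map(out_w, out_h, f_cyl, f_fish, cx, cy):
    """Build (map_x, map_y) lookups that warp an equidistant fisheye
    image onto a cylindrical image plane. Assumes the ideal equidistant
    model r = f_fish * theta (an illustrative simplification)."""
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    phi = (u - out_w / 2.0) / f_cyl        # azimuth around the cylinder
    h = (v - out_h / 2.0) / f_cyl          # normalized height on the cylinder
    # 3D ray through each output pixel, and its angle from the optical axis.
    x, y, z = np.sin(phi), h, np.cos(phi)
    ratio = np.clip(z / np.sqrt(x**2 + y**2 + z**2), -1.0, 1.0)
    theta = np.arccos(ratio)
    r = f_fish * theta                     # equidistant projection radius
    norm = np.sqrt(x**2 + y**2) + 1e-12    # radial direction in the image plane
    map_x = (cx + r * x / norm).astype(np.float32)
    map_y = (cy + r * y / norm).astype(np.float32)
    return map_x, map_y
```

The resulting maps can then be fed to an interpolation routine such as `cv2.remap` to produce the rectified image.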

BEVCar: Camera-Radar Fusion for BEV Map and Object Segmentation

1 code implementation • 18 Mar 2024 • Jonas Schramm, Niclas Vödisch, Kürsat Petek, B Ravi Kiran, Senthil Yogamani, Wolfram Burgard, Abhinav Valada

Semantic scene segmentation from a bird's-eye-view (BEV) perspective plays a crucial role in facilitating planning and decision-making for mobile robots.

Decision Making Scene Segmentation +1

Neural Rendering based Urban Scene Reconstruction for Autonomous Driving

no code implementations • 9 Feb 2024 • Shihao Shen, Louis Kerofsky, Varun Ravi Kumar, Senthil Yogamani

In particular, our method estimates dense and accurate 3D structures and creates an implicit map representation based on signed distance fields, which can be further rendered into RGB images and depth maps.

3D Object Detection 3D Reconstruction +5
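An implicit map based on signed distance fields (SDFs) can be rendered into depth by sphere tracing: each ray is stepped forward by the SDF value, which safely bounds the distance to the nearest surface. A toy sketch with a hypothetical sphere scene follows; it illustrates the generic technique, not the paper's neural renderer.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to an illustrative sphere scene."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_depth=20.0):
    """March a ray through an SDF; return the hit depth or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d_surf = sdf(p)
        if d_surf < eps:
            return t          # surface reached: depth along the ray
        t += d_surf           # safe step: the SDF bounds distance to geometry
        if t > max_depth:
            return None
    return None
```

Rendering a full depth map amounts to running this per pixel with rays generated from the camera intrinsics.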

Multi-camera Bird's Eye View Perception for Autonomous Driving

no code implementations • 16 Sep 2023 • David Unger, Nikhil Gosala, Varun Ravi Kumar, Shubhankar Borse, Abhinav Valada, Senthil Yogamani

Surround-view vision systems, which are common in new vehicles, use the inverse perspective mapping (IPM) principle to generate a BEV image and show it on a display to the driver.

Autonomous Driving Sensor Fusion

LiDAR-BEVMTN: Real-Time LiDAR Bird's-Eye View Multi-Task Perception Network for Autonomous Driving

no code implementations • 17 Jul 2023 • Sambit Mohapatra, Senthil Yogamani, Varun Ravi Kumar, Stefan Milz, Heinrich Gotzig, Patrick Mäder

We achieve state-of-the-art results for two tasks, semantic and motion segmentation, and close to state-of-the-art performance for 3D object detection.

3D Object Detection Autonomous Driving +7

X-Align++: cross-modal cross-view alignment for Bird's-eye-view segmentation

no code implementations • 6 Jun 2023 • Shubhankar Borse, Senthil Yogamani, Marvin Klingner, Varun Ravi, Hong Cai, Abdulaziz Almuzairee, Fatih Porikli

The bird's-eye-view (BEV) grid is a typical representation of the perception of road components, e.g., the drivable area, in autonomous driving.

Autonomous Driving Segmentation
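A BEV grid discretizes the area around the ego vehicle into top-down cells. As a minimal illustration of the representation itself (not of this paper's method), ego-frame 3D points can be rasterized into an occupancy grid; the ranges and cell size below are arbitrary choices.

```python
import numpy as np

def points_to_bev_grid(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.5):
    """Rasterize ego-frame 3D points (N, 3) into a top-down BEV occupancy
    grid. Ranges and cell size are illustrative, not from any paper."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    # Map forward (x) and lateral (y) coordinates to integer cell indices.
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[keep], iy[keep]] = 1   # mark cells containing at least one point
    return grid
```

Semantic BEV segmentation assigns a class label per cell instead of a plain occupancy bit, but the grid geometry is the same.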

X$^3$KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection

no code implementations • 3 Mar 2023 • Marvin Klingner, Shubhankar Borse, Varun Ravi Kumar, Behnaz Rezaei, Venkatraman Narayanan, Senthil Yogamani, Fatih Porikli

Specifically, we propose cross-task distillation from an instance segmentation teacher (X-IS) in the PV feature extraction stage providing supervision without ambiguous error backpropagation through the view transformation.

3D Object Detection Instance Segmentation +3

X3KD: Knowledge Distillation Across Modalities, Tasks and Stages for Multi-Camera 3D Object Detection

no code implementations • CVPR 2023 • Marvin Klingner, Shubhankar Borse, Varun Ravi Kumar, Behnaz Rezaei, Venkatraman Narayanan, Senthil Yogamani, Fatih Porikli

Specifically, we propose cross-task distillation from an instance segmentation teacher (X-IS) in the PV feature extraction stage providing supervision without ambiguous error backpropagation through the view transformation.

3D Object Detection Instance Segmentation +3

X-Align: Cross-Modal Cross-View Alignment for Bird's-Eye-View Segmentation

no code implementations • 13 Oct 2022 • Shubhankar Borse, Marvin Klingner, Varun Ravi Kumar, Hong Cai, Abdulaziz Almuzairee, Senthil Yogamani, Fatih Porikli

The bird's-eye-view (BEV) grid is a common representation for the perception of road components, e.g., the drivable area, in autonomous driving.

Autonomous Driving Segmentation

SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving

no code implementations • 6 Jun 2022 • Sambit Mohapatra, Thomas Mesquida, Mona Hodaei, Senthil Yogamani, Heinrich Gotzig, Patrick Mader

Spiking neural networks are a recent neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency.

3D Object Detection Autonomous Driving +2
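Spiking networks replace continuous activations with discrete spike events emitted by neurons such as the leaky integrate-and-fire (LIF) model: the membrane potential leaks over time, integrates input current, and fires when it crosses a threshold. A toy single-neuron sketch with illustrative constants (not the paper's network) follows.

```python
def lif_spikes(currents, v_th=1.0, leak=0.9, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence of
    input currents; returns a 0/1 spike train. Constants are illustrative."""
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i       # leak the membrane, then integrate the input
        if v >= v_th:          # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset        # hard reset after firing
        else:
            spikes.append(0)
    return spikes
```

The sparsity of the resulting spike trains is what underlies the power-efficiency claims: computation happens only when a spike occurs.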

ViT-BEVSeg: A Hierarchical Transformer Network for Monocular Birds-Eye-View Segmentation

1 code implementation • 31 May 2022 • Pramit Dutta, Ganesh Sistu, Senthil Yogamani, Edgar Galván, John McDonald

In this paper, we evaluate the use of vision transformers (ViT) as a backbone architecture to generate BEV maps.

Decoder Segmentation

Neuroevolutionary Multi-objective approaches to Trajectory Prediction in Autonomous Vehicles

no code implementations • 4 May 2022 • Fergal Stapleton, Edgar Galván, Ganesh Sistu, Senthil Yogamani

The incentive for using Evolutionary Algorithms (EAs) for the automated optimization and training of deep neural networks (DNNs), a process referred to as neuroevolution, has gained momentum in recent years.

Autonomous Vehicles Evolutionary Algorithms +1

UnShadowNet: Illumination Critic Guided Contrastive Learning For Shadow Removal

no code implementations • 29 Mar 2022 • Subhrajyoti Dasgupta, Arindam Das, Senthil Yogamani, Sudip Das, Ciaran Eising, Andrei Bursuc, Ujjwal Bhattacharya

Shadows are frequently encountered natural phenomena that significantly hinder the performance of computer vision perception systems in practical settings, e.g., autonomous driving.

Autonomous Driving Contrastive Learning +1

Detecting Adversarial Perturbations in Multi-Task Perception

1 code implementation • 2 Mar 2022 • Marvin Klingner, Varun Ravi Kumar, Senthil Yogamani, Andreas Bär, Tim Fingscheidt

In this paper, we (i) propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks (i.e., depth estimation and semantic segmentation).

Adversarial Attack Depth Estimation +1

A Hybrid Sparse-Dense Monocular SLAM System for Autonomous Driving

1 code implementation • 17 Aug 2021 • Louis Gallagher, Varun Ravi Kumar, Senthil Yogamani, John B. McDonald

In this paper, we present a system for incrementally reconstructing a dense 3D model of the geometry of an outdoor environment using a single monocular camera attached to a moving vehicle.

Autonomous Driving Depth Estimation +3

Woodscape Fisheye Semantic Segmentation for Autonomous Driving -- CVPR 2021 OmniCV Workshop Challenge

no code implementations • 17 Jul 2021 • Saravanabalagi Ramachandran, Ganesh Sistu, John McDonald, Senthil Yogamani

This challenge served as a medium to investigate new methodologies for handling the complexities of perception on fisheye images.

Autonomous Driving Semantic Segmentation

An Online Learning System for Wireless Charging Alignment using Surround-view Fisheye Cameras

no code implementations • 26 May 2021 • Ashok Dahal, Varun Ravi Kumar, Senthil Yogamani, Ciaran Eising

In this work, we propose a system based on the surround-view camera architecture to detect, localize, and automatically align the vehicle with the inductive chargepad.

Semantic Segmentation

Spatio-Contextual Deep Network Based Multimodal Pedestrian Detection For Autonomous Driving

no code implementations • 26 May 2021 • Kinjal Dasgupta, Arindam Das, Sudip Das, Ujjwal Bhattacharya, Senthil Yogamani

Fusion of these two encoded features takes place inside a multimodal feature embedding module (MuFEm) consisting of several groups of a pair of Graph Attention Network and a feature fusion unit.

Autonomous Driving Graph Attention +1

Vision-based Driver Assistance Systems: Survey, Taxonomy and Advances

no code implementations • 26 Apr 2021 • Jonathan Horgan, Ciarán Hughes, John McDonald, Senthil Yogamani

Vision-based driver assistance is one of the most rapidly growing research areas of ITS, owing to factors such as increased safety requirements in automotive, greater computational power in embedded systems, and the desire to move closer to autonomous driving.

Autonomous Driving

Computer vision in automated parking systems: Design, implementation and challenges

no code implementations • 26 Apr 2021 • Markus Heimberger, Jonathan Horgan, Ciaran Hughes, John McDonald, Senthil Yogamani

In this paper, we discuss the design and implementation of an automated parking system from the perspective of computer vision algorithms.

3D Reconstruction Autonomous Driving +1

VM-MODNet: Vehicle Motion aware Moving Object Detection for Autonomous Driving

no code implementations • 22 Apr 2021 • Hazem Rashed, Ahmad El Sallab, Senthil Yogamani

In this work, we aim to leverage the vehicle motion information and feed it into the model to have an adaptation mechanism based on ego-motion.

Autonomous Driving Motion Compensation +3

Exploring 2D Data Augmentation for 3D Monocular Object Detection

no code implementations • 21 Apr 2021 • Sugirtha T, Sridevi M, Khailash Santhakumar, B Ravi Kiran, Thomas Gauthier, Senthil Yogamani

Extension of these data augmentations for 3D object detection requires adaptation of the 3D geometry of the input scene and synthesis of new viewpoints.

3D Object Detection Data Augmentation +3

BEVDetNet: Bird's Eye View LiDAR Point Cloud based Real-time 3D Object Detection for Autonomous Driving

no code implementations • 21 Apr 2021 • Sambit Mohapatra, Senthil Yogamani, Heinrich Gotzig, Stefan Milz, Patrick Mader

Most of the research is focused on achieving higher accuracy, and these models are not optimized for deployment on embedded systems with respect to latency and power efficiency.

3D Object Detection Autonomous Driving +2

Near-field Perception for Low-Speed Vehicle Automation using Surround-view Fisheye Cameras

no code implementations • 31 Mar 2021 • Ciaran Eising, Jonathan Horgan, Senthil Yogamani

In this work, we provide a detailed survey of such vision systems, setting up the survey in the context of an architecture that can be decomposed into four modular components namely Recognition, Reconstruction, Relocalization, and Reorganization.

FisheyeSuperPoint: Keypoint Detection and Description Network for Fisheye Images

no code implementations • 27 Feb 2021 • Anna Konrad, Ciarán Eising, Ganesh Sistu, John McDonald, Rudi Villing, Senthil Yogamani

Keypoint detection and description is a commonly used building block in computer vision systems particularly for robotics and autonomous driving.

Autonomous Driving Homography Estimation +1

Beyond Single Stage Encoder-Decoder Networks: Deep Decoders for Semantic Image Segmentation

no code implementations • 19 Jul 2020 • Gabriel L. Oliveira, Senthil Yogamani, Wolfram Burgard, Thomas Brox

In order to further improve the architecture we introduce a weight function which aims to re-balance classes to increase the attention of the networks to under-represented objects.

Decoder Image Segmentation +3

Deep Reinforcement Learning for Autonomous Driving: A Survey

no code implementations • 2 Feb 2020 • B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A. Al Sallab, Senthil Yogamani, Patrick Pérez

With the development of deep representation learning, the domain of reinforcement learning (RL) has become a powerful learning framework now capable of learning complex policies in high dimensional environments.

Autonomous Driving Imitation Learning +3

Trained Trajectory based Automated Parking System using Visual SLAM on Surround View Cameras

no code implementations • 7 Jan 2020 • Nivedita Tripathi, Senthil Yogamani

Existing parking systems build a local map in order to plan maneuvers towards a detected slot.

FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for robust low-light Autonomous Driving

no code implementations • 11 Oct 2019 • Hazem Rashed, Mohamed Ramzy, Victor Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani

In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors.

Autonomous Driving Moving Object Detection +2

RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving

no code implementations • 1 Jun 2019 • Khaled El Madawy, Hazem Rashed, Ahmad El Sallab, Omar Nasr, Hanan Kamel, Senthil Yogamani

Motivated by the fact that semantic segmentation is a mature algorithm on image data, we explore sensor fusion based 3D segmentation.

3D Semantic Segmentation Autonomous Driving +4

MultiNet++: Multi-Stream Feature Aggregation and Geometric Loss Strategy for Multi-Task Learning

no code implementations • 15 Apr 2019 • Sumanth Chennupati, Ganesh Sistu, Senthil Yogamani, Samir A Rawashdeh

In this work, we propose a multi-stream multi-task network to take advantage of using feature representations from preceding frames in a video sequence for joint learning of segmentation, depth, and motion.

Autonomous Driving Multi-Task Learning

Challenges in Designing Datasets and Validation for Autonomous Driving

no code implementations • 26 Jan 2019 • Michal Uricar, David Hurych, Pavel Krizek, Senthil Yogamani

There is a large gap between the academic and industrial settings, and moving from a research prototype built on public datasets to a deployable solution remains a challenging task.

Autonomous Driving

Design of Real-time Semantic Segmentation Decoder for Automated Driving

no code implementations • 19 Jan 2019 • Arindam Das, Saranya Kandan, Senthil Yogamani, Pavel Krizek

Semantic segmentation remains a computationally intensive algorithm for embedded deployment even with the rapid growth of computation power.

Decoder Image Classification +5

AuxNet: Auxiliary tasks enhanced Semantic Segmentation for Automated Driving

no code implementations • 17 Jan 2019 • Sumanth Chennupati, Ganesh Sistu, Senthil Yogamani, Samir Rawashdeh

Decision making in automated driving is highly specific to the environment and thus semantic segmentation plays a key role in recognizing the objects in the environment around the car.

Decision Making Depth Estimation +4

Real-time Joint Object Detection and Semantic Segmentation Network for Automated Driving

no code implementations • 12 Jan 2019 • Ganesh Sistu, Isabelle Leang, Senthil Yogamani

In this paper, we present a joint multi-task network design for learning object detection and semantic segmentation simultaneously.

Decoder Depth Estimation +6

Optical Flow augmented Semantic Segmentation networks for Automated Driving

no code implementations • 11 Jan 2019 • Hazem Rashed, Senthil Yogamani, Ahmad El-Sallab, Pavel Krizek, Mohamed El-Helw

We also make use of the ground truth optical flow in Virtual KITTI to serve as an ideal estimator and a standard Farneback optical flow algorithm to study the effect of noise.

Optical Flow Estimation Semantic Segmentation

Exploring Deep Spiking Neural Networks for Automated Driving Applications

no code implementations • 11 Jan 2019 • Sambit Mohapatra, Heinrich Gotzig, Senthil Yogamani, Stefan Milz, Raoul Zollner

Neural networks have become the standard model for various computer vision tasks in automated driving including semantic segmentation, moving object detection, depth estimation, visual odometry, etc.

Depth Estimation Moving Object Detection +3

Multi-stream CNN based Video Semantic Segmentation for Automated Driving

no code implementations • 8 Jan 2019 • Ganesh Sistu, Sumanth Chennupati, Senthil Yogamani

We propose two simple high-level architectures based on Recurrent FCN (RFCN) and Multi-Stream FCN (MSFCN) networks.

Decoder Semantic Segmentation +1

RTSeg: Real-time Semantic Segmentation Comparative Study

2 code implementations • 7 Mar 2018 • Mennatullah Siam, Mostafa Gamal, Moemen Abdel-Razek, Senthil Yogamani, Martin Jagersand

In this paper, we address this gap by presenting a real-time semantic segmentation benchmarking framework with a decoupled design for feature extraction and decoding methods.

Autonomous Driving Benchmarking +2

Rejection-Cascade of Gaussians: Real-time adaptive background subtraction framework

no code implementations • 25 May 2017 • B Ravi Kiran, Arindam Das, Senthil Yogamani

We achieve a good improvement in speed without compromising the accuracy with respect to the baseline GMM model.
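Gaussian background subtraction models each pixel's background intensity with a Gaussian and flags large deviations as foreground. The sketch below is a simplified single-Gaussian stand-in for the paper's mixture model with a rejection cascade; the learning rate and threshold are illustrative values.

```python
import numpy as np

class RunningGaussianBackground:
    """Minimal per-pixel single-Gaussian background model (a simplified
    stand-in for a full GMM; alpha and k are illustrative constants)."""
    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k
        self.mean = None
        self.var = None

    def apply(self, frame):
        f = frame.astype(np.float64)
        if self.mean is None:                 # first frame initializes the model
            self.mean = f
            self.var = np.full_like(f, 25.0)
            return np.zeros(frame.shape, dtype=bool)
        d = f - self.mean
        fg = d**2 > (self.k**2) * self.var    # Mahalanobis-style foreground test
        bg = ~fg                              # update only background pixels
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * d[bg]**2
        return fg
```

A rejection cascade speeds this up by ordering cheap tests first, so most background pixels are accepted before the full model is evaluated.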

Deep Reinforcement Learning framework for Autonomous Driving

1 code implementation • 8 Apr 2017 • Ahmad El Sallab, Mohammed Abdou, Etienne Perot, Senthil Yogamani

This is of particular relevance, as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment, including other vehicles, pedestrians, and roadworks.

Atari Games Autonomous Driving +3

End-to-End Deep Reinforcement Learning for Lane Keeping Assist

no code implementations • 13 Dec 2016 • Ahmad El Sallab, Mohammed Abdou, Etienne Perot, Senthil Yogamani

This is of particular interest, as it is difficult to pose autonomous driving as a supervised learning problem given its strong interaction with the environment, including other vehicles, pedestrians, and roadworks.

Autonomous Driving reinforcement-learning +1
