Search Results for author: Abhinav Valada

Found 54 papers, 15 papers with code

Panoptic Out-of-Distribution Segmentation

no code implementations18 Oct 2023 Rohit Mohan, Kiran Kumaraswamy, Juana Valeria Hurtado, Kürsat Petek, Abhinav Valada

Deep learning has led to remarkable strides in scene understanding with panoptic segmentation emerging as a key holistic scene interpretation task.

Data Augmentation Instance Segmentation +3

Compositional Servoing by Recombining Demonstrations

no code implementations6 Oct 2023 Max Argus, Abhijeet Nayak, Martin Büchner, Silvio Galesso, Abhinav Valada, Thomas Brox

In this work, we present a framework that formulates the visual servoing task as graph traversal.

Few-Shot Panoptic Segmentation With Foundation Models

1 code implementation19 Sep 2023 Markus Käppeler, Kürsat Petek, Niclas Vödisch, Wolfram Burgard, Abhinav Valada

Concurrently, recent breakthroughs in visual representation learning have sparked a paradigm shift leading to the advent of large foundation models that can be trained with completely unlabeled images.

Panoptic Segmentation Representation Learning +1

RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps

no code implementations18 Sep 2023 Abhijeet Nayak, Daniele Cattaneo, Abhinav Valada

RaLF is composed of radar and LiDAR feature encoders, a place recognition head that generates global descriptors, and a metric localization head that predicts the 3-DoF transformation between the radar scan and the map.

Metric Learning
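
The three-part structure described in the excerpt above (modality-specific feature encoders, a place-recognition head producing a global descriptor, and a metric head regressing a 3-DoF transform) can be sketched roughly as follows. All shapes, the linear "encoders", and the random readout are illustrative assumptions, not the RaLF model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(scan: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy feature encoder: a single linear map stands in for a CNN."""
    return np.tanh(scan @ weights)

def global_descriptor(features: np.ndarray) -> np.ndarray:
    """Place-recognition head: pool features into one L2-normalized vector."""
    pooled = features.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def metric_head(radar_feat: np.ndarray, lidar_feat: np.ndarray) -> np.ndarray:
    """Metric head: regress a 3-DoF (x, y, yaw) offset from pooled features."""
    joint = np.concatenate([radar_feat.mean(axis=0), lidar_feat.mean(axis=0)])
    w = rng.standard_normal((joint.size, 3)) * 0.1
    return joint @ w  # (x, y, yaw)

radar_scan = rng.standard_normal((128, 16))   # 128 points, 16-dim features
lidar_map = rng.standard_normal((256, 16))
w_radar = rng.standard_normal((16, 32)) * 0.1
w_lidar = rng.standard_normal((16, 32)) * 0.1

radar_feat = encode(radar_scan, w_radar)
lidar_feat = encode(lidar_map, w_lidar)
desc = global_descriptor(radar_feat)          # for place recognition
pose = metric_head(radar_feat, lidar_feat)    # for metric localization
```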

Multi-camera Bird's Eye View Perception for Autonomous Driving

no code implementations16 Sep 2023 David Unger, Nikhil Gosala, Varun Ravi Kumar, Shubhankar Borse, Abhinav Valada, Senthil Yogamani

Surround-view systems, which are common in new vehicles, use the IPM principle to generate a BEV image that is shown on a display to the driver.

Autonomous Driving Sensor Fusion

AmodalSynthDrive: A Synthetic Amodal Perception Dataset for Autonomous Driving

no code implementations12 Sep 2023 Ahmed Rida Sekkat, Rohit Mohan, Oliver Sawade, Elmar Matthes, Abhinav Valada

To address these limitations, we introduce AmodalSynthDrive, a synthetic multi-task multi-modal amodal perception dataset.

Autonomous Driving Benchmarking +2

A Smart Robotic System for Industrial Plant Supervision

no code implementations10 Aug 2023 D. Adriana Gómez-Rosal, Max Bergau, Georg K. J. Fischer, Andreas Wachaja, Johannes Gräter, Matthias Odenweller, Uwe Piechottka, Fabian Hoeflinger, Nikhil Gosala, Niklas Wetzel, Daniel Büscher, Abhinav Valada, Wolfram Burgard

In today's chemical plants, human field operators perform frequent integrity checks to guarantee high safety standards, and thus are possibly the first to encounter dangerous operating conditions.


Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation

no code implementations12 Jul 2023 Fabian Schmalstieg, Daniel Honerkamp, Tim Welschehold, Abhinav Valada

We present HIMOS, a hierarchical reinforcement learning approach that learns to compose exploration, navigation, and manipulation skills.

Decision Making Hierarchical Reinforcement Learning +1
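
The hierarchical decomposition described above can be illustrated with a toy loop: a high-level policy selects among discrete skills (exploration, navigation, manipulation) based on an abstract state, and each skill runs until it reports back. The skill names match the excerpt, but the state variables and success logic are invented for illustration and are not HIMOS itself.

```python
def run_skill(name: str, state: dict) -> dict:
    """Stand-in low-level skill: mutates the abstract state."""
    if name == "explore":
        state["known_objects"] += 1
    elif name == "navigate" and state["known_objects"] > 0:
        state["at_object"] = True
    elif name == "manipulate" and state["at_object"]:
        state["done"] = True
    return state

def high_level_policy(state: dict) -> str:
    """Pick the next skill from the current abstract state."""
    if state["known_objects"] == 0:
        return "explore"
    if not state["at_object"]:
        return "navigate"
    return "manipulate"

state = {"known_objects": 0, "at_object": False, "done": False}
steps = []
while not state["done"]:
    skill = high_level_policy(state)
    steps.append(skill)
    state = run_skill(skill, state)
# steps == ["explore", "navigate", "manipulate"]
```

In the paper this high-level choice is learned with hierarchical RL rather than hand-coded as here.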

The Treachery of Images: Bayesian Scene Keypoints for Deep Policy Learning in Robotic Manipulation

1 code implementation8 May 2023 Jan Ole von Hartz, Eugenio Chisari, Tim Welschehold, Wolfram Burgard, Joschka Boedecker, Abhinav Valada

We employ our method to learn challenging multi-object robot manipulation tasks from wrist camera observations and demonstrate superior utility for policy learning compared to other representation learning techniques.

Representation Learning Robot Manipulation

INoD: Injected Noise Discriminator for Self-Supervised Representation Learning in Agricultural Fields

no code implementations31 Mar 2023 Julia Hindel, Nikhil Gosala, Kevin Bregler, Abhinav Valada

Perception datasets for agriculture are limited both in quantity and diversity which hinders effective training of supervised learning approaches.

Instance Segmentation object-detection +4

EvCenterNet: Uncertainty Estimation for Object Detection using Evidential Learning

no code implementations6 Mar 2023 Monish R. Nallapareddy, Kshitij Sirohi, Paulo L. J. Drews-Jr, Wolfram Burgard, Chih-Hong Cheng, Abhinav Valada

In this work, we propose EvCenterNet, a novel uncertainty-aware 2D object detection framework using evidential learning to directly estimate both classification and regression uncertainties.

Decision Making object-detection +2
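
Evidential classification, as referenced above, can be sketched in a few lines: the network outputs non-negative per-class "evidence", interpreted as Dirichlet parameters, from which both a prediction and an uncertainty fall out in closed form. This follows the common Dirichlet formulation of evidential deep learning and is not EvCenterNet's exact head.

```python
import numpy as np

def evidential_prediction(evidence: np.ndarray):
    """evidence: non-negative per-class outputs, shape (K,)."""
    alpha = evidence + 1.0                 # Dirichlet parameters
    strength = alpha.sum()
    probs = alpha / strength               # expected class probabilities
    uncertainty = len(alpha) / strength    # vacuity: high when evidence is low
    return probs, float(uncertainty)

confident = np.array([9.0, 0.0, 0.0])      # lots of evidence for class 0
unsure = np.array([0.0, 0.0, 0.0])         # no evidence at all
p1, u1 = evidential_prediction(confident)  # u1 = 3/12 = 0.25
p2, u2 = evidential_prediction(unsure)     # u2 = 3/3  = 1.0
```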

Learning and Aggregating Lane Graphs for Urban Automated Driving

no code implementations CVPR 2023 Martin Büchner, Jannik Zürn, Ion-George Todoran, Abhinav Valada, Wolfram Burgard

To overcome these challenges, we propose a novel bottom-up approach to lane graph estimation from aerial imagery that aggregates multiple overlapping graphs into a single consistent graph.

SkyEye: Self-Supervised Bird's-Eye-View Semantic Mapping Using Monocular Frontal View Images

no code implementations CVPR 2023 Nikhil Gosala, Kürsat Petek, Paulo L. J. Drews-Jr, Wolfram Burgard, Abhinav Valada

Implicit supervision trains the model by enforcing spatial consistency of the scene over time based on FV semantic sequences, while explicit supervision exploits BEV pseudolabels generated from FV semantic annotations and self-supervised depth estimates.

Decision Making
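
The two supervision signals described in the excerpt above can be sketched as two loss terms: a temporal-consistency term between a prediction and the next frame's prediction warped into the current frame, and a cross-entropy term against BEV pseudolabels. The loss forms and the 0.5 weighting are assumptions for illustration, not SkyEye's exact objective.

```python
import numpy as np

def consistency_loss(pred_t: np.ndarray, pred_warped_t1: np.ndarray) -> float:
    """Implicit supervision: penalize disagreement between the current
    prediction and the next frame's prediction warped into this frame."""
    return float(np.mean((pred_t - pred_warped_t1) ** 2))

def pseudolabel_loss(pred_probs: np.ndarray, pseudolabels: np.ndarray) -> float:
    """Explicit supervision: cross-entropy against BEV pseudolabels
    derived from FV semantic annotations."""
    eps = 1e-8
    picked = pred_probs[np.arange(len(pseudolabels)), pseudolabels]
    return float(-np.mean(np.log(picked + eps)))

rng = np.random.default_rng(1)
logits = rng.standard_normal((10, 4))               # 10 BEV cells, 4 classes
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
labels = rng.integers(0, 4, size=10)

# Combine both signals (warped prediction stubbed with the same tensor).
total = consistency_loss(probs, probs) + 0.5 * pseudolabel_loss(probs, labels)
```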

Fairness and Bias in Robot Learning

no code implementations7 Jul 2022 Laura Londoño, Juana Valeria Hurtado, Nora Hertz, Philipp Kellmeyer, Silja Voeneky, Abhinav Valada

In this work, we present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges.

BIG-bench Machine Learning Fairness

N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments

1 code implementation17 Jun 2022 Daniel Honerkamp, Tim Welschehold, Abhinav Valada

Despite its importance in both industrial and service robotics, mobile manipulation remains a significant challenge, as it requires seamless integration of end-effector trajectory generation with navigation skills, as well as reasoning over long horizons.


Perceiving the Invisible: Proposal-Free Amodal Panoptic Segmentation

no code implementations29 May 2022 Rohit Mohan, Abhinav Valada

Amodal panoptic segmentation aims to connect the perception of the world to its cognitive understanding.

Amodal Panoptic Segmentation Panoptic Segmentation

3D Multi-Object Tracking Using Graph Neural Networks with Cross-Edge Modality Attention

no code implementations21 Mar 2022 Martin Büchner, Abhinav Valada

We evaluate our approach using various sensor modalities and model configurations on the challenging nuScenes and KITTI datasets.

3D Multi-Object Tracking

On Hyperbolic Embeddings in 2D Object Detection

no code implementations15 Mar 2022 Christopher Lang, Alexander Braun, Abhinav Valada

Object detection has, for the most part, been formulated in Euclidean space, where Euclidean or spherical geodesic distances measure the similarity of an image region to an object class prototype.

Classification object-detection +1

Amodal Panoptic Segmentation

no code implementations CVPR 2022 Rohit Mohan, Abhinav Valada

To enable robots to reason with this capability, we formulate and propose a novel task that we name amodal panoptic segmentation.

Amodal Panoptic Segmentation Instance Segmentation +2

Doing Right by Not Doing Wrong in Human-Robot Collaboration

no code implementations5 Feb 2022 Laura Londoño, Adrian Röfer, Tim Welschehold, Abhinav Valada

As robotic systems become increasingly capable of assisting humans in their everyday lives, we must consider the opportunities for these artificial agents to make their human collaborators feel unsafe or to treat them unfairly.

Decision Making Fairness +1

Contrastive Object Detection Using Knowledge Graph Embeddings

no code implementations21 Dec 2021 Christopher Lang, Alexander Braun, Abhinav Valada

Object recognition has, for the most part, been approached as a one-hot problem that treats classes as discrete and unrelated.

Knowledge Graph Embeddings Knowledge Graphs +3

7th AI Driving Olympics: 1st Place Report for Panoptic Tracking

no code implementations9 Dec 2021 Rohit Mohan, Abhinav Valada

In this technical report, we describe our EfficientLPT architecture that won the panoptic tracking challenge in the 7th AI Driving Olympics at NeurIPS 2021.

Benchmarking Panoptic Segmentation +1

Robot Skill Adaptation via Soft Actor-Critic Gaussian Mixture Models

no code implementations25 Nov 2021 Iman Nematollahi, Erick Rosete-Beas, Adrian Röfer, Tim Welschehold, Abhinav Valada, Wolfram Burgard

A core challenge for an autonomous agent acting in the real world is to adapt its repertoire of skills to cope with its noisy perception and dynamics.

Unsupervised Domain Adaptation for LiDAR Panoptic Segmentation

no code implementations30 Sep 2021 Borna Bešić, Nikhil Gosala, Daniele Cattaneo, Abhinav Valada

Unsupervised Domain Adaptation (UDA) techniques are thus essential to fill this domain gap and retain the performance of models on new sensor setups without the need for additional data labeling.

Autonomous Driving Navigate +3

Bird's-Eye-View Panoptic Segmentation Using Monocular Frontal View Images

1 code implementation6 Aug 2021 Nikhil Gosala, Abhinav Valada

Bird's-Eye-View (BEV) maps have emerged as one of the most powerful representations for scene understanding due to their ability to provide rich spatial context while being easy to interpret and process.

Depth Estimation Panoptic Segmentation +3

Multi-Perspective Anomaly Detection

no code implementations20 May 2021 Peter Jakob, Manav Madan, Tobias Schmid-Schirling, Abhinav Valada

Furthermore, we introduce the dices dataset, which consists of over 2000 grayscale images of falling dice captured from multiple perspectives, with 5% of the images containing rare anomalies (e.g., drill holes, sawing marks, or scratches).

Anomaly Detection Denoising

LCDNet: Deep Loop Closure Detection and Point Cloud Registration for LiDAR SLAM

1 code implementation8 Mar 2021 Daniele Cattaneo, Matteo Vaghi, Abhinav Valada

Loop closure detection is an essential component of Simultaneous Localization and Mapping (SLAM) systems, which reduces the drift accumulated over time.

Autonomous Driving Loop Closure Detection +2

There is More than Meets the Eye: Self-Supervised Multi-Object Detection and Tracking with Sound by Distilling Multimodal Knowledge

no code implementations CVPR 2021 Francisco Rivera Valverde, Juana Valeria Hurtado, Abhinav Valada

In this work, we present the novel self-supervised MM-DistillNet framework consisting of multiple teachers that leverage diverse modalities including RGB, depth and thermal images, to simultaneously exploit complementary cues and distill knowledge into a single audio student network.

Object Detection
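
Multi-teacher knowledge distillation, as described above, can be sketched minimally: several modality-specific teachers produce predictions, and a single student is trained to match an aggregate of them. The simple averaging and MSE objective here are simplifying assumptions, not MM-DistillNet's actual losses.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_target(teacher_logits: list) -> np.ndarray:
    """Aggregate the teachers' softened predictions into one target."""
    return np.mean([softmax(t) for t in teacher_logits], axis=0)

def distillation_loss(student_logits: np.ndarray, target: np.ndarray) -> float:
    """Train the student (e.g., the audio network) to match the target."""
    return float(np.mean((softmax(student_logits) - target) ** 2))

rng = np.random.default_rng(2)
# Stand-ins for the RGB, depth, and thermal teachers' outputs.
rgb, depth, thermal = (rng.standard_normal((5, 3)) for _ in range(3))
target = distillation_target([rgb, depth, thermal])
loss = distillation_loss(rng.standard_normal((5, 3)), target)
```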

EfficientLPS: Efficient LiDAR Panoptic Segmentation

no code implementations16 Feb 2021 Kshitij Sirohi, Rohit Mohan, Daniel Büscher, Wolfram Burgard, Abhinav Valada

Panoptic segmentation of point clouds is a crucial task that enables autonomous vehicles to comprehend their vicinity using their highly accurate and reliable LiDAR sensors.

Autonomous Vehicles Instance Segmentation +2

From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation

no code implementations7 Jan 2021 Juana Valeria Hurtado, Laura Londoño, Abhinav Valada

The exponentially increasing advances in robotics and machine learning are facilitating the transition of robots from being confined to controlled industrial spaces to performing novel everyday tasks in domestic and urban environments.

Fairness Social Navigation

Robust Vision Challenge 2020 -- 1st Place Report for Panoptic Segmentation

no code implementations23 Aug 2020 Rohit Mohan, Abhinav Valada

In this technical report, we present key details of our winning panoptic segmentation architecture EffPS_b1bs4_RVC.

Benchmarking Panoptic Segmentation +1

Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via Geometry-Aware Adversarial Learning

1 code implementation12 Aug 2020 Borna Bešić, Abhinav Valada

Dynamic objects have a significant impact on the robot's perception of the environment which degrades the performance of essential tasks such as localization and mapping.

Image-to-Image Translation Retrieval +2

CMRNet++: Map and Camera Agnostic Monocular Visual Localization in LiDAR Maps

2 code implementations20 Apr 2020 Daniele Cattaneo, Domenico Giorgio Sorrenti, Abhinav Valada

In this paper, we now take it a step further by introducing CMRNet++, which is a significantly more robust model that not only generalizes to new places effectively, but is also independent of the camera parameters.

Autonomous Driving Visual Localization

MOPT: Multi-Object Panoptic Tracking

no code implementations17 Apr 2020 Juana Valeria Hurtado, Rohit Mohan, Wolfram Burgard, Abhinav Valada

In this paper, we introduce a novel perception task denoted as multi-object panoptic tracking (MOPT), which unifies the conventionally disjoint tasks of semantic segmentation, instance segmentation, and multi-object tracking.

Instance Segmentation Multi-Object Tracking +3
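
The unified output implied by the task description above assigns each segment a semantic class and, for "thing" segments, an instance ID that stays consistent across frames. The greedy IoU matching below is a common tracking heuristic used purely for illustration, not the MOPT method.

```python
def iou(a, b) -> float:
    """Intersection-over-union of two pixel-index sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def track_instances(prev_tracks: dict, curr_instances: dict,
                    threshold: float = 0.5) -> dict:
    """Greedily carry track IDs forward when masks overlap enough;
    otherwise start a new track."""
    assignments = {}
    next_id = max(prev_tracks, default=0) + 1
    for inst_id, pixels in curr_instances.items():
        best = max(prev_tracks.items(),
                   key=lambda kv: iou(kv[1], pixels),
                   default=None)
        if best and iou(best[1], pixels) >= threshold:
            assignments[inst_id] = best[0]
        else:
            assignments[inst_id] = next_id
            next_id += 1
    return assignments

prev = {1: [0, 1, 2, 3], 2: [10, 11, 12]}   # frame t: track_id -> pixels
curr = {"a": [1, 2, 3, 4], "b": [20, 21]}   # frame t+1 detections
ids = track_instances(prev, curr)            # {'a': 1, 'b': 3}
```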

EfficientPS: Efficient Panoptic Segmentation

2 code implementations5 Apr 2020 Rohit Mohan, Abhinav Valada

Understanding the scene in which an autonomous robot operates is critical for its competent functioning.

Instance Segmentation Panoptic Segmentation +1

Self-Supervised Visual Terrain Classification from Unsupervised Acoustic Feature Learning

no code implementations6 Dec 2019 Jannik Zürn, Wolfram Burgard, Abhinav Valada

In this work, we propose a novel terrain classification framework leveraging an unsupervised proprioceptive classifier that learns from vehicle-terrain interaction sounds to self-supervise an exteroceptive classifier for pixel-wise semantic segmentation of images.

Classification General Classification +2
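
The self-supervision loop described above can be illustrated with a toy: unsupervised clustering of vehicle-terrain interaction sounds yields pseudo-labels that would then supervise the visual classifier. The tiny k-means below and the synthetic audio features are stand-ins, not the paper's models.

```python
import numpy as np

def kmeans_2(x: np.ndarray, iters: int = 20) -> np.ndarray:
    """Minimal two-cluster k-means; init with the first point and the
    point farthest from it (deterministic for this toy)."""
    c0 = x[0]
    c1 = x[np.argmax(((x - c0) ** 2).sum(-1))]
    centers = np.stack([c0, c1])
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(3)
# Two synthetic "terrain" sound clusters (2-D audio features).
sounds = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
pseudo_labels = kmeans_2(sounds)  # acoustic pseudo-labels, one per sample
# These pseudo-labels would then train the exteroceptive image classifier.
```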

Vision-Based Autonomous UAV Navigation and Landing for Urban Search and Rescue

no code implementations4 Jun 2019 Mayank Mittal, Rohit Mohan, Wolfram Burgard, Abhinav Valada

This problem is extremely challenging as pre-existing maps cannot be leveraged for navigation due to structural changes that may have occurred.


Vision-based Autonomous Landing in Catastrophe-Struck Environments

no code implementations15 Sep 2018 Mayank Mittal, Abhinav Valada, Wolfram Burgard

However, these UAVs have to be able to autonomously land on debris piles in order to accurately locate the survivors.


Multimodal Interaction-aware Motion Prediction for Autonomous Street Crossing

no code implementations21 Aug 2018 Noha Radwan, Wolfram Burgard, Abhinav Valada

Learned representations from the traffic light recognition stream are fused with the estimated trajectories from the motion prediction stream to learn the crossing decision.

motion prediction

Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

1 code implementation11 Aug 2018 Abhinav Valada, Rohit Mohan, Wolfram Burgard

To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location, and scene context in a self-supervised manner.

Scene Recognition Semantic Segmentation
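
Dynamically adapted fusion of modality-specific features, as described above, can be sketched with a small gating function that weights each modality per element before blending. The sigmoid gate and feature shapes are illustrative assumptions, not the paper's fusion module.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fusion(feat_a: np.ndarray, feat_b: np.ndarray,
                    w_gate: np.ndarray) -> np.ndarray:
    """Compute a per-element gate from both modalities, then blend them
    as a convex combination."""
    gate = sigmoid(np.concatenate([feat_a, feat_b], axis=-1) @ w_gate)
    return gate * feat_a + (1.0 - gate) * feat_b

rng = np.random.default_rng(4)
rgb_feat = rng.standard_normal((8, 16))    # 8 spatial locations, 16 channels
depth_feat = rng.standard_normal((8, 16))
w_gate = rng.standard_normal((32, 16)) * 0.1
fused = adaptive_fusion(rgb_feat, depth_feat, w_gate)
```

Because the gate lies in (0, 1), every fused value stays between the two modalities' values, so the network can lean on whichever modality is more informative at each location.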

VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry

no code implementations23 Apr 2018 Noha Radwan, Abhinav Valada, Wolfram Burgard

Semantic understanding and localization are fundamental enablers of robot autonomy that have for the most part been tackled as disjoint problems.

Outdoor Localization Scene Understanding +1

Deep Spatiotemporal Models for Robust Proprioceptive Terrain Classification

no code implementations2 Apr 2018 Abhinav Valada, Wolfram Burgard

Terrain classification is a critical component of any autonomous mobile robot system operating in unknown real-world environments.

Classification General Classification

Deep Auxiliary Learning for Visual Localization and Odometry

1 code implementation9 Mar 2018 Abhinav Valada, Noha Radwan, Wolfram Burgard

We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation.

Auxiliary Learning Visual Localization +1
