no code implementations • 18 Oct 2023 • Rohit Mohan, Kiran Kumaraswamy, Juana Valeria Hurtado, Kürsat Petek, Abhinav Valada
Deep learning has led to remarkable strides in scene understanding with panoptic segmentation emerging as a key holistic scene interpretation task.
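Panoptic segmentation assigns every pixel both a semantic class and, for countable "thing" classes, an instance id. As a minimal illustrative sketch (not the paper's method), the two label maps can be packed into a single panoptic map using the common class-times-offset encoding, e.g. as in the COCO panoptic format:

```python
import numpy as np

# Pack semantic class and instance id into one integer per pixel:
# panoptic_id = class_id * OFFSET + instance_id.
OFFSET = 1000

def merge_panoptic(semantic, instance, thing_classes):
    """Combine a semantic map and an instance map into one panoptic map.

    semantic: (H, W) array of class ids ("stuff" and "thing" classes).
    instance: (H, W) array of instance ids (0 = no instance).
    thing_classes: set of class ids that denote countable objects.
    """
    panoptic = semantic.astype(np.int64) * OFFSET
    is_thing = np.isin(semantic, list(thing_classes))
    panoptic[is_thing] += instance[is_thing]
    return panoptic

# Toy 2x2 scene: class 1 = road ("stuff"), class 2 = car ("thing").
semantic = np.array([[1, 1], [2, 2]])
instance = np.array([[0, 0], [1, 2]])  # two distinct cars
print(merge_panoptic(semantic, instance, thing_classes={2}))
# road pixels -> 1000; the two cars -> 2001 and 2002
```

The offset encoding keeps "stuff" regions and individual "thing" instances distinguishable in one array, which is how panoptic ground truth is commonly stored.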
no code implementations • 7 Jul 2022 • Laura Londoño, Juana Valeria Hurtado, Nora Hertz, Philipp Kellmeyer, Silja Voeneky, Abhinav Valada
In this work, we present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges.
1 code implementation • 8 Sep 2021 • Whye Kit Fong, Rohit Mohan, Juana Valeria Hurtado, Lubing Zhou, Holger Caesar, Oscar Beijbom, Abhinav Valada
Panoptic scene understanding and tracking of dynamic agents are essential for robots and automated vehicles to navigate in urban environments.
Ranked #1 on Panoptic Segmentation on Panoptic nuScenes test
no code implementations • CVPR 2021 • Francisco Rivera Valverde, Juana Valeria Hurtado, Abhinav Valada
In this work, we present the novel self-supervised MM-DistillNet framework, which consists of multiple teachers that leverage diverse modalities, including RGB, depth, and thermal images, to simultaneously exploit complementary cues and distill knowledge into a single audio student network.
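A generic multi-teacher distillation objective can be sketched as follows. This is an illustrative simplification, not MM-DistillNet's actual loss: the softened predictions of the modality teachers are averaged into an ensemble target, and the student is penalized for diverging from it.

```python
import numpy as np

def softmax(x, t=1.0):
    # Temperature-softened softmax, numerically stabilized.
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(teacher_logits, student_logits, temperature=4.0):
    """Average the softened teacher distributions and return the student's
    cross-entropy against that ensemble target (KL divergence up to a
    constant). Sketch only; the paper uses a different formulation."""
    target = np.mean([softmax(t, temperature) for t in teacher_logits], axis=0)
    log_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return float(-(target * log_student).sum(axis=-1).mean())

# Three modality teachers (e.g. RGB, depth, thermal) and one audio student,
# each producing logits for a batch of 8 examples over 10 classes.
rng = np.random.default_rng(0)
teachers = [rng.normal(size=(8, 10)) for _ in range(3)]
student = rng.normal(size=(8, 10))
loss = multi_teacher_kd_loss(teachers, student)
```

Averaging the teachers is the simplest way to fuse their complementary cues into one target; weighted or learned combinations are common refinements.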
no code implementations • 7 Jan 2021 • Juana Valeria Hurtado, Laura Londoño, Abhinav Valada
The rapid advances in robotics and machine learning are facilitating the transition of robots from being confined to controlled industrial spaces to performing novel everyday tasks in domestic and urban environments.
no code implementations • 17 Apr 2020 • Juana Valeria Hurtado, Rohit Mohan, Wolfram Burgard, Abhinav Valada
In this paper, we introduce a novel perception task denoted as multi-object panoptic tracking (MOPT), which unifies the conventionally disjoint tasks of semantic segmentation, instance segmentation, and multi-object tracking.
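A MOPT-style output attaches to each segmented instance a track id that stays consistent across frames. As a minimal sketch, assuming a naive greedy IoU association between consecutive frames (not the paper's tracking method):

```python
import numpy as np

def iou(mask_a, mask_b):
    # Intersection-over-union of two boolean masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def associate_tracks(prev_masks, curr_masks, prev_track_ids, next_id, thr=0.5):
    """Assign each current-frame instance mask a track id: reuse the
    previous frame's id when IoU exceeds thr, else start a new track."""
    track_ids = []
    for m in curr_masks:
        best, best_iou = None, thr
        for pm, tid in zip(prev_masks, prev_track_ids):
            s = iou(m, pm)
            if s > best_iou:
                best, best_iou = tid, s
        if best is None:
            best, next_id = next_id, next_id + 1
        track_ids.append(best)
    return track_ids, next_id

# Frame t: one car mask. Frame t+1: the same car plus a new object.
f0 = [np.zeros((4, 4), bool)]
f0[0][1:3, 1:3] = True
f1 = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
f1[0][1:3, 1:3] = True   # same car, same position
f1[1][0, 0] = True       # brand-new object
ids1, nxt = associate_tracks(f0, f1, prev_track_ids=[1], next_id=2)
print(ids1)  # the persisting car keeps id 1; the new object gets id 2
```

Combining such per-pixel panoptic labels with temporally consistent track ids is what unifies the three conventionally disjoint tasks into one output.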