Search Results for author: Niko Suenderhauf

Found 9 papers, 6 papers with code

Human-in-the-Loop Segmentation of Multi-species Coral Imagery

1 code implementation • 15 Apr 2024 • Scarlett Raine, Ross Marchant, Brano Kusy, Frederic Maire, Niko Suenderhauf, Tobias Fischer

For extremely sparsely labeled images, we propose a labeling regime based on human-in-the-loop principles, resulting in a significant improvement in annotation efficiency: if only 5 point labels per image are available, our proposed human-in-the-loop approach improves on the state of the art by 17.3% for pixel accuracy and 22.6% for mIoU, and by 10.6% and 19.1% respectively when 10 point labels per image are available.

Semantic Segmentation
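For reference, a minimal sketch of the two metrics quoted above (pixel accuracy and mean IoU), using their standard definitions; the toy label maps and class count are illustrative assumptions, not values from the paper.

```python
# Standard segmentation metrics; the 3-class toy example below is an
# assumption for illustration, not data from the coral imagery paper.
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == target))

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with 3 classes.
pred   = np.array([[0, 1], [2, 2]])
target = np.array([[0, 1], [1, 2]])
print(pixel_accuracy(pred, target))        # 0.75
print(mean_iou(pred, target, 3))           # (1.0 + 0.5 + 0.5) / 3 ≈ 0.667
```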

SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning

no code implementations • 12 Jul 2023 • Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf

To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner; and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures.

Robot Task Planning
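A rough sketch of the iterative-replanning loop described in the abstract. All interfaces here (query_llm, a scene-graph object with a collapsed() view, a simulator with verify()) are placeholder assumptions for illustration, not names from the SayPlan codebase.

```python
# Illustrative sketch only; every function and attribute name is assumed.
def plan_with_replanning(task, scene_graph, query_llm, simulator, max_iters=5):
    # (1) LLM runs a 'semantic search' over a collapsed view of the 3D scene
    #     graph to extract only the task-relevant subgraph.
    subgraph = query_llm("select nodes relevant to: " + task,
                         scene_graph.collapsed())

    # (2) Navigation between scene-graph nodes is assumed to be delegated to a
    #     classical path planner, keeping low-level motion out of the LLM plan.
    plan = query_llm("produce a high-level task plan for: " + task, subgraph)

    # (3) Iterative replanning: a scene-graph simulator checks the plan and
    #     reports infeasible actions, which are fed back to the LLM.
    for _ in range(max_iters):
        feedback = simulator.verify(plan, subgraph)
        if feedback.ok:
            return plan
        plan = query_llm("revise the plan given this feedback: " + feedback.text,
                         subgraph)
    return plan  # best effort after max_iters refinements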

The Need for Inherently Privacy-Preserving Vision in Trustworthy Autonomous Systems

no code implementations • 29 Mar 2023 • Adam K. Taras, Niko Suenderhauf, Peter Corke, Donald G. Dansereau

Vision is a popular and effective sensor for robotics, from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, gender, identity, activity, and even emotional state of humans within that scene.

Privacy Preserving

Critic Guided Segmentation of Rewarding Objects in First-Person Views

1 code implementation • 20 Jul 2021 • Andrew Melnik, Augustin Harter, Christian Limberg, Krishan Rana, Niko Suenderhauf, Helge Ritter

This work discusses a learning approach to mask rewarding objects in images using sparse reward signals from an imitation learning dataset.

Imitation Learning
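A minimal PyTorch sketch of the general idea in the abstract: a critic trained on sparse reward signals scores images, and a mask network is trained so that the critic's score is preserved when only the masked region is kept. The architectures, loss terms, and weighting below are assumptions for illustration, not the authors' implementation.

```python
# Sketch only: tiny stand-in networks and an assumed training interface.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
masker = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 1, 1), nn.Sigmoid())

# opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
# opt_m = torch.optim.Adam(masker.parameters(), lr=1e-3)

def training_step(images, sparse_rewards, opt_c, opt_m):
    # 1) Critic learns to predict the sparse (0/1) reward for each frame.
    loss_c = nn.functional.binary_cross_entropy_with_logits(
        critic(images).squeeze(1), sparse_rewards)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # 2) Mask network keeps only the pixels the critic needs to still predict
    #    a high reward, pushing masks onto the rewarding objects; a small
    #    sparsity penalty (assumed weight) discourages trivial full masks.
    mask = masker(images)                    # (B, 1, H, W) in [0, 1]
    kept = images * mask
    loss_m = -critic(kept).mean() + 0.01 * mask.mean()
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()
```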

Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation

1 code implementation • 20 Feb 2019 • Sourav Garg, Madhu Babu V, Thanuja Dharmasiri, Stephen Hausler, Niko Suenderhauf, Swagat Kumar, Tom Drummond, Michael Milford

Visual place recognition (VPR), the act of recognizing a familiar visual place, becomes difficult under extreme changes in environmental appearance or viewpoint.

Robotics

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

1 code implementation • 16 Apr 2018 • Sourav Garg, Niko Suenderhauf, Michael Milford

Human visual scene understanding is so remarkable that we are able to recognize a revisited place when entering it from the opposite direction to the one in which it was first visited, even in the presence of extreme variations in appearance.

Navigate, Scene Understanding

One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay

1 code implementation • 28 Nov 2017 • Jake Bruce, Niko Suenderhauf, Piotr Mirowski, Raia Hadsell, Michael Milford

Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment.

Navigate, reinforcement-learning
