Search Results for author: Niko Suenderhauf

Found 8 papers, 5 papers with code

One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay

1 code implementation • 28 Nov 2017 • Jake Bruce, Niko Suenderhauf, Piotr Mirowski, Raia Hadsell, Michael Milford

Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment.
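
As a point of reference for this model-free setting, here is a minimal, self-contained Q-learning sketch: a toy corridor environment (a hypothetical stand-in, not the paper's navigation task) in which the agent improves purely from sampled interaction, never from a learned model of the dynamics.

```python
# Minimal tabular Q-learning sketch: the agent learns only from (s, a, r, s')
# samples gathered by interacting with the environment. The GridWorld below
# is an illustrative toy, not the paper's setup.
import random
from collections import defaultdict

class GridWorld:
    """Toy corridor: start at cell 0, reward 1.0 on reaching cell `goal`."""
    def __init__(self, goal=5):
        self.goal, self.pos = goal, 0
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                  # action is -1 (left) or +1 (right)
        self.pos = max(0, min(self.goal, self.pos + action))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else 0.0), done

q = defaultdict(float)                       # Q[(state, action)] value table
alpha, gamma, eps = 0.1, 0.99, 0.1
env = GridWorld()

for episode in range(500):
    s = env.reset()
    for _ in range(100):                     # cap episode length
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice([-1, 1])
        else:
            a = max([-1, 1], key=lambda act: q[(s, act)])
        s2, r, done = env.step(a)
        # one-step temporal-difference update: the "model-free" part,
        # using only the sampled transition, never the dynamics
        best_next = max(q[(s2, -1)], q[(s2, 1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2
        if done:
            break
```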

Navigate • reinforcement-learning • +2

LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics

1 code implementation • 16 Apr 2018 • Sourav Garg, Niko Suenderhauf, Michael Milford

Human visual scene understanding is so remarkable that we are able to recognize a revisited place when entering it from the opposite direction to that in which it was first visited, even in the presence of extreme variations in appearance.

Navigate • Scene Understanding • +2

Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation

1 code implementation • 20 Feb 2019 • Sourav Garg, Madhu Babu V, Thanuja Dharmasiri, Stephen Hausler, Niko Suenderhauf, Swagat Kumar, Tom Drummond, Michael Milford

Visual place recognition (VPR) - the act of recognizing a familiar visual place - becomes difficult when there is extreme environmental appearance change or viewpoint change.
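
For reference, the retrieval step underlying most VPR pipelines can be sketched as nearest-neighbour matching of global image descriptors. The random vectors and the `recognize` helper below are illustrative stand-ins: real systems would use learned descriptors, and this paper additionally exploits single-view depth estimation and semantics to handle opposing viewpoints.

```python
# Generic VPR backbone: retrieve the database place whose global descriptor
# is most similar to the query's, by cosine similarity. Random vectors stand
# in for learned descriptors here, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 256))            # one global descriptor per place
db /= np.linalg.norm(db, axis=1, keepdims=True)

def recognize(query_desc, database, threshold=0.8):
    """Return (best_index, score); index is None if no confident match."""
    q = query_desc / np.linalg.norm(query_desc)
    scores = database @ q                    # cosine similarity to every place
    best = int(np.argmax(scores))
    return (best if scores[best] >= threshold else None), float(scores[best])

# Revisit of place 42 under mild appearance change, modelled here as noise:
query = db[42] + 0.02 * rng.normal(size=256)
idx, score = recognize(query, db)
print(idx, round(score, 3))                  # expect index 42, score near 1
```

The hard cases this paper targets are exactly those where such appearance-based descriptors break down: extreme appearance change or a 180-degree viewpoint flip.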

Robotics

Critic Guided Segmentation of Rewarding Objects in First-Person Views

1 code implementation • 20 Jul 2021 • Andrew Melnik, Augustin Harter, Christian Limberg, Krishan Rana, Niko Suenderhauf, Helge Ritter

This work discusses a learning approach to mask rewarding objects in images using sparse reward signals from an imitation learning dataset.
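
A hedged sketch of that general idea, under assumed architectures and losses (not the paper's exact ones): first fit a critic to predict the sparse reward of a frame, then train a masking network so that the pixels it keeps still score high under the frozen critic while the mask stays sparse.

```python
# Illustrative critic-guided masking sketch (architectures and losses are
# assumptions, not the paper's). One optimisation step of each phase is
# shown; a real training loop would iterate over a dataset.
import torch
import torch.nn as nn

critic = nn.Sequential(                      # frame -> logit of "rewarding"
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

masker = nn.Sequential(                      # frame -> per-pixel mask in [0, 1]
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

frames = torch.rand(8, 3, 64, 64)            # stand-in batch of frames
rewards = torch.randint(0, 2, (8, 1)).float()  # sparse binary reward labels

# Phase 1: fit the critic on (frame, sparse reward) pairs.
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
loss_c = nn.BCEWithLogitsLoss()(critic(frames), rewards)
opt_c.zero_grad()
loss_c.backward()
opt_c.step()

# Phase 2: train the masker against the frozen critic.
for p in critic.parameters():
    p.requires_grad_(False)
opt_m = torch.optim.Adam(masker.parameters(), lr=1e-3)
mask = masker(frames)
kept = frames * mask                         # image reduced to masked pixels
# Masked image should still explain the reward, while the mask stays sparse.
loss_m = nn.BCEWithLogitsLoss()(critic(kept), rewards) + 0.01 * mask.mean()
opt_m.zero_grad()
loss_m.backward()
opt_m.step()
```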

Imitation Learning

The Need for Inherently Privacy-Preserving Vision in Trustworthy Autonomous Systems

no code implementations • 29 Mar 2023 • Adam K. Taras, Niko Suenderhauf, Peter Corke, Donald G. Dansereau

Vision is a popular and effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, gender, identity, activity and even emotional state of humans within that scene.

Privacy Preserving

SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning

no code implementations • 12 Jul 2023 • Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf

To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner; and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures.
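
A minimal sketch of mechanisms (1) and (3), under assumed interfaces: `llm` as a prompt-to-text callable and `verify` as the scene-graph simulator check are hypothetical stand-ins, not SayPlan's actual API.

```python
def semantic_search(llm, full_graph, task):
    """(1) Expand only task-relevant rooms of a collapsed 3D scene graph."""
    expanded = set()                              # rooms with visible contents
    while True:
        view = {room: (objs if room in expanded else "<collapsed>")
                for room, objs in full_graph.items()}
        choice = llm(f"task={task}; graph={view}; room to expand, or DONE")
        if choice == "DONE":
            return {room: full_graph[room] for room in expanded}
        expanded.add(choice)

def iterative_replan(llm, subgraph, task, verify, max_rounds=5):
    """(3) Propose a plan, check it in a simulator, refine on failure."""
    feedback = "none"
    for _ in range(max_rounds):
        plan = llm(f"task={task}; graph={subgraph}; feedback={feedback}; plan?")
        ok, feedback = verify(subgraph, plan)     # scene-graph simulator check
        if ok:
            # (2) navigation between plan steps is left to a classical
            # path planner over the graph, shortening the LLM's horizon
            return plan
    raise RuntimeError("no feasible plan within the replanning budget")
```

The design choice common to all three mechanisms is to keep the LLM's context and horizon small, delegating graph traversal and feasibility checking to classical components.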

Robot Task Planning
