Search Results for author: Tetsunari Inamura

Found 5 papers, 3 papers with code

Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

1 code implementations • 18 Feb 2020 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

The aim of this study is to enable a mobile robot to perform navigational tasks with human speech instructions, such as "Go to the kitchen", via probabilistic inference on a Bayesian generative model using spatial concepts.

Decision Making
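The abstract above describes choosing a navigation goal by probabilistic inference over spatial concepts. A minimal toy sketch of that idea (not the authors' model; the place names, vocabulary, and probabilities below are all hypothetical) is Bayes' rule over places given a spoken word, P(place | word) ∝ P(word | place) P(place):

```python
# Toy illustration only: a categorical generative model over places and words.
# The robot picks the place with the highest posterior given the heard word.

def posterior_over_places(word, prior, likelihood):
    """Return P(place | word) via Bayes' rule over a small set of places."""
    unnorm = {place: prior[place] * likelihood[place].get(word, 1e-6)
              for place in prior}
    z = sum(unnorm.values())
    return {place: p / z for place, p in unnorm.items()}

# Hypothetical spatial concepts and their word distributions.
prior = {"kitchen": 0.5, "bedroom": 0.5}
likelihood = {
    "kitchen": {"kitchen": 0.7, "fridge": 0.2, "sink": 0.1},
    "bedroom": {"bedroom": 0.8, "bed": 0.2},
}

post = posterior_over_places("kitchen", prior, likelihood)
goal = max(post, key=post.get)  # the robot would navigate toward this place
```

The actual paper performs far richer inference (speech, position, and map are modeled jointly), but the goal-selection step reduces to a posterior of this general shape.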

Learning multimodal representations for sample-efficient recognition of human actions

no code implementations • 6 Mar 2019 • Miguel Vasco, Francisco S. Melo, David Martins de Matos, Ana Paiva, Tetsunari Inamura

In this work we present "motion concepts", a novel multimodal representation of human actions in a household environment.

Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

3 code implementations • 9 Mar 2018 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

We propose a novel online learning algorithm, called SpCoSLAM 2.0, for spatial concepts and lexical acquisition with high accuracy and scalability.

Online Learning
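The key property claimed above is scalable online learning: each new observation updates the model incrementally rather than reprocessing all past data. A minimal sketch of that property (a simple incremental count model with Dirichlet-style smoothing, not the SpCoSLAM 2.0 algorithm; class and variable names are hypothetical):

```python
from collections import defaultdict

class OnlinePlaceWordModel:
    """Toy incremental model of word-place co-occurrence.

    Each update is O(1) in the number of past observations, which is the
    essence of an online (as opposed to batch) learning scheme.
    """
    def __init__(self, alpha=1.0):
        self.alpha = alpha                       # Dirichlet smoothing weight
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, place, word):
        """Incorporate one new (place, word) observation."""
        self.counts[place][word] += 1.0

    def prob(self, word, place, vocab_size):
        """Smoothed estimate of P(word | place)."""
        c = self.counts[place]
        return (c[word] + self.alpha) / (sum(c.values()) + self.alpha * vocab_size)

model = OnlinePlaceWordModel()
for w in ["kitchen", "kitchen", "fridge"]:
    model.update("kitchen", w)
p = model.prob("kitchen", "kitchen", vocab_size=3)  # (2 + 1) / (3 + 3) = 0.5
```

The real algorithm additionally interleaves SLAM and lexical acquisition in a particle-filter framework; this sketch only illustrates why incremental sufficient statistics make the per-step cost independent of the data seen so far.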

Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

no code implementations • 3 Feb 2016 • Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura

In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from human continuous speech signals.
