Search Results for author: Akira Taniguchi

Found 26 papers, 6 papers with code

Hierarchical Path-planning from Speech Instructions with Spatial Concept-based Topometric Semantic Mapping

1 code implementation • 21 Mar 2022 • Akira Taniguchi, Shuya Ito, Tadahiro Taniguchi

Navigation experiments using speech instructions with a waypoint demonstrated improved path-planning performance, increasing WN-SPL by 0.589 and reducing computation time by 7.14 s compared to conventional methods.

Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

3 code implementations • 9 Mar 2018 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

We propose a novel online learning algorithm, called SpCoSLAM 2.0, for spatial concepts and lexical acquisition with high accuracy and scalability.

Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

no code implementations • 3 Feb 2016 • Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura

In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from human continuous speech signals.

Symbol Emergence as an Interpersonal Multimodal Categorization

no code implementations • 31 May 2019 • Yoshinobu Hagiwara, Hiroyoshi Kobayashi, Akira Taniguchi, Tadahiro Taniguchi

In this paper, we describe a new computational model that represents symbol emergence in a two-agent system based on a probabilistic generative model for multimodal categorization.

Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots

no code implementations • 10 Feb 2020 • Akira Taniguchi, Shota Isobe, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi

We evaluate the effectiveness of the proposed method by an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.

Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

1 code implementation • 18 Feb 2020 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

The aim of this study is to enable a mobile robot to perform navigational tasks with human speech instructions, such as "Go to the kitchen", via probabilistic inference on a Bayesian generative model using spatial concepts.

Decision Making
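
As a rough illustration of the inference described above, the following sketch assumes a toy model in which each learned spatial concept pairs a word distribution with a Gaussian over positions; the concept names, probabilities, and coordinates are made up and are not the paper's learned parameters.

```python
import numpy as np

# Illustrative spatial concepts (all names and numbers are made up):
# each concept pairs a categorical word model with a Gaussian position model.
concepts = [
    {"name": "kitchen",  "word_probs": {"kitchen": 0.8, "sink": 0.2},
     "mu": np.array([2.0, 5.0]), "cov": 0.3 * np.eye(2)},
    {"name": "entrance", "word_probs": {"entrance": 0.7, "door": 0.3},
     "mu": np.array([0.0, 0.5]), "cov": 0.2 * np.eye(2)},
    {"name": "bedroom",  "word_probs": {"bedroom": 0.9, "bed": 0.1},
     "mu": np.array([6.0, 1.0]), "cov": 0.4 * np.eye(2)},
]

rng = np.random.default_rng(0)

def infer_goal(word):
    """Posterior over spatial concepts given an instructed word, then a goal position."""
    # p(concept | word) is proportional to p(word | concept) p(concept), with a uniform prior.
    likelihoods = np.array([c["word_probs"].get(word, 1e-6) for c in concepts])
    posterior = likelihoods / likelihoods.sum()
    best = concepts[int(np.argmax(posterior))]                # MAP concept
    goal = rng.multivariate_normal(best["mu"], best["cov"])   # sample a goal from p(x | concept)
    return best["name"], posterior, goal

name, posterior, goal = infer_goal("kitchen")
print(name, posterior.round(3), goal.round(2))
```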

Hierarchical Bayesian Model for the Transfer of Knowledge on Spatial Concepts based on Multimodal Information

no code implementations • 11 Mar 2021 • Yoshinobu Hagiwara, Keishiro Taguchi, Satoshi Ishibushi, Akira Taniguchi, Tadahiro Taniguchi

This paper proposes a hierarchical Bayesian model based on spatial concepts that enables a robot to transfer the knowledge of places from experienced environments to a new environment.

Hippocampal formation-inspired probabilistic generative model

no code implementations • 12 Mar 2021 • Akira Taniguchi, Ayako Fukawa, Hiroshi Yamakawa

In building artificial intelligence (AI) agents, referring to how brains function in real environments can accelerate development by reducing the design space.

Hippocampus, Simultaneous Localization and Mapping

A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive Architectures for Developmental Robots

no code implementations • 15 Mar 2021 • Tadahiro Taniguchi, Hiroshi Yamakawa, Takayuki Nagai, Kenji Doya, Masamichi Sakagami, Masahiro Suzuki, Tomoaki Nakamura, Akira Taniguchi

This approach is based on two ideas: (1) brain-inspired AI, which learns human brain architecture to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based approach that develops a cognitive system for developmental robots by integrating PGMs.

Unsupervised Lexical Acquisition of Relative Spatial Concepts Using Spoken User Utterances

no code implementations • 16 Jun 2021 • Rikunari Sagara, Ryo Taguchi, Akira Taniguchi, Tadahiro Taniguchi, Koosuke Hattori, Masahiro Hoguro, Taizo Umezaki

The experimental results show that relative spatial concepts and a phoneme sequence representing each concept can be learned under the condition that the robot does not know which located object is the reference object.

Object

StarGAN-VC+ASR: StarGAN-based Non-Parallel Voice Conversion Regularized by Automatic Speech Recognition

no code implementations • 10 Aug 2021 • Shoki Sakamoto, Akira Taniguchi, Tadahiro Taniguchi, Hirokazu Kameoka

Although this method is powerful, it can fail to preserve the linguistic content of input speech when the number of available training samples is extremely small.

Automatic Speech Recognition, Automatic Speech Recognition (ASR), +3

Multiagent Multimodal Categorization for Symbol Emergence: Emergent Communication via Interpersonal Cross-modal Inference

no code implementations • 15 Sep 2021 • Yoshinobu Hagiwara, Kazuma Furukawa, Akira Taniguchi, Tadahiro Taniguchi

(2) A function to improve the categorization accuracy of an agent via semiotic communication with another agent, even when some sensory modalities of each agent are missing.

Unsupervised Multimodal Word Discovery based on Double Articulation Analysis with Co-occurrence cues

1 code implementation • 18 Jan 2022 • Akira Taniguchi, Hiroaki Murakami, Ryo Ozaki, Tadahiro Taniguchi

The proposed method can acquire words and phonemes from speech signals using unsupervised learning and utilize object information based on multiple modalities (vision, tactile, and auditory) simultaneously.

Brain-inspired probabilistic generative model for double articulation analysis of spoken language

no code implementations • 6 Jul 2022 • Akira Taniguchi, Maoko Muro, Hiroshi Yamakawa, Tadahiro Taniguchi

This study proposes a probabilistic generative model (PGM) for a double articulation analysis (DAA) hypothesis that can be realized in the brain, based on the outcomes of several neuroscientific surveys.

Anatomy, Sentence

Active Exploration based on Information Gain by Particle Filter for Efficient Spatial Concept Formation

no code implementations • 20 Nov 2022 • Akira Taniguchi, Yoshiki Tabuchi, Tomochika Ishikawa, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi

This study provides insights into the technical aspects of the proposed method, including active perception and exploration by the robot, and how the method can enable mobile robots to learn spatial concepts through active exploration.

Bayesian Inference, Efficient Exploration, +2
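
The snippet below is a minimal sketch of choosing where to explore by expected information gain over a particle filter, in the spirit of the approach above; the particle set, observation model, and all numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (not the paper's model): 3 candidate places, 4 place categories, and a
# particle set of joint category hypotheses with uniform weights.
N_PARTICLES, N_PLACES, N_CATEGORIES = 200, 3, 4
particles = rng.integers(0, N_CATEGORIES, size=(N_PARTICLES, N_PLACES))
particles[:, 0] = 0                      # pretend place 0 is already well identified
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
P_CORRECT = 0.8                          # sensor reports the true category with this probability

def entropy(w):
    w = w[w > 0]
    return -np.sum(w * np.log(w))

def expected_information_gain(place):
    """Expected drop in entropy of the particle weights after observing `place`."""
    h_before, eig = entropy(weights), 0.0
    for obs in range(N_CATEGORIES):                      # enumerate possible observations
        like = np.where(particles[:, place] == obs,
                        P_CORRECT, (1.0 - P_CORRECT) / (N_CATEGORIES - 1))
        p_obs = np.sum(weights * like)                   # predictive probability of this outcome
        posterior = weights * like / p_obs               # hypothetically reweighted particles
        eig += p_obs * (h_before - entropy(posterior))
    return eig

gains = [expected_information_gain(k) for k in range(N_PLACES)]
print("expected IG per candidate:", np.round(gains, 3),
      "-> explore place", int(np.argmax(gains)))
```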

Recursive Metropolis-Hastings Naming Game: Symbol Emergence in a Multi-agent System based on Probabilistic Generative Models

no code implementations • 31 May 2023 • Jun Inukai, Tadahiro Taniguchi, Akira Taniguchi, Yoshinobu Hagiwara

The main contributions of this paper are twofold: (1) we propose the recursive Metropolis-Hastings naming game (RMHNG) as an N-agent version of MHNG and demonstrate that RMHNG is an approximate Bayesian inference method for the posterior distribution over a latent variable shared by agents, similar to MHNG; and (2) we empirically evaluate the performance of RMHNG on synthetic and real image data, enabling multiple agents to develop and share a symbol system.

Bayesian Inference
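
For context, the following is a minimal two-agent sketch of the Metropolis-Hastings naming game acceptance step that RMHNG generalizes to N agents; the sign set and agent beliefs are toy values, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-agent Metropolis-Hastings naming game step. Each agent holds its own
# categorical belief p(sign | its perception of the object).
N_SIGNS = 5
speaker_belief = rng.dirichlet(np.ones(N_SIGNS))    # speaker's p(sign | its observation)
listener_belief = rng.dirichlet(np.ones(N_SIGNS))   # listener's p(sign | its observation)
current_sign = 2                                    # listener's current sign for the object

def mh_naming_game_step(current_sign):
    """One utterance: the speaker proposes a sign; the listener accepts it MH-style."""
    proposal = rng.choice(N_SIGNS, p=speaker_belief)  # speaker samples from its own belief
    accept_prob = min(1.0, listener_belief[proposal] / listener_belief[current_sign])
    accepted = rng.random() < accept_prob
    return (proposal if accepted else current_sign), proposal, accept_prob

new_sign, proposal, a = mh_naming_game_step(current_sign)
print(f"proposed sign {proposal}, acceptance probability {a:.2f}, listener now uses {new_sign}")
```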

Metropolis-Hastings algorithm in joint-attention naming game: Experimental semiotics study

no code implementations • 31 May 2023 • Ryota Okumura, Tadahiro Taniguchi, Yoshinobu Hagiwara, Akira Taniguchi

By comparing human acceptance decisions of a partner's naming with acceptance probabilities computed in the MHNG, we tested whether human behavior is consistent with the MHNG theory.

Bayesian Inference
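
A toy sketch of the kind of consistency check described above, assuming made-up acceptance probabilities and human decisions; it simply scores the human accept/reject choices against the model's predicted acceptance probabilities and a chance baseline.

```python
import numpy as np

# Illustrative data only: model-predicted MHNG acceptance probabilities
# and observed human decisions (1 = accepted the partner's naming).
mh_accept_prob = np.array([0.9, 0.4, 1.0, 0.2, 0.7])
human_accept   = np.array([1,   0,   1,   0,   1])

# Bernoulli log-likelihood of the human decisions under the MHNG predictions,
# compared with a chance-level baseline that accepts with probability 0.5.
eps = 1e-9
loglik_mhng = np.sum(human_accept * np.log(mh_accept_prob + eps)
                     + (1 - human_accept) * np.log(1 - mh_accept_prob + eps))
loglik_chance = len(human_accept) * np.log(0.5)
print(f"MHNG log-likelihood {loglik_mhng:.2f} vs chance {loglik_chance:.2f}")
```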

Symbol emergence as interpersonal cross-situational learning: the emergence of lexical knowledge with combinatoriality

no code implementations • 27 Jun 2023 • Yoshinobu Hagiwara, Kazuma Furukawa, Takafumi Horie, Akira Taniguchi, Tadahiro Taniguchi

We present a computational model for a symbol emergence system that enables the emergence of lexical knowledge with combinatoriality among agents through a Metropolis-Hastings naming game and cross-situational learning.

Real-world Instance-specific Image Goal Navigation for Service Robots: Bridging the Domain Gap with Contrastive Learning

no code implementations • 15 Apr 2024 • Taichi Sakaguchi, Akira Taniguchi, Yoshinobu Hagiwara, Lotfi El Hafi, Shoichi Hasegawa, Tadahiro Taniguchi

To address this, we propose a novel method called Few-shot Cross-quality Instance-aware Adaptation (CrossIA), which employs contrastive learning with an instance classifier to align features between a large set of low-quality images and a few high-quality images.

Contrastive Learning, Deblurring, +2
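
As a rough sketch of contrastive alignment across image qualities, the snippet below computes a supervised-contrastive-style loss in which embeddings sharing an instance ID (whether from low- or high-quality images) act as positives; it is not the CrossIA implementation, and all names and values are illustrative.

```python
import numpy as np

def supervised_contrastive_loss(features, instance_ids, temperature=0.1):
    """InfoNCE-style loss: embeddings sharing an instance ID (regardless of image
    quality) are pulled together; all other pairs are pushed apart."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                          # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = (instance_ids[:, None] == instance_ids[None, :]) & ~np.eye(len(z), dtype=bool)
    pos_log_prob = np.where(same, log_prob, 0.0).sum(axis=1)
    pos_counts = same.sum(axis=1)
    valid = pos_counts > 0                                   # anchors with at least one positive
    return float(np.mean(-pos_log_prob[valid] / pos_counts[valid]))

# Toy batch: 6 embeddings of 3 object instances, mixing low- and high-quality views.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 16))
instance_ids = np.array([0, 0, 1, 1, 2, 2])
print(supervised_contrastive_loss(features, instance_ids))
```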

DEQ-MCL: Discrete-Event Queue-based Monte-Carlo Localization

no code implementations • 22 Apr 2024 • Akira Taniguchi, Ayako Fukawa, Hiroshi Yamakawa

Spatial cognition in hippocampal formation is posited to play a crucial role in the development of self-localization techniques for robots.
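
For background, here is a textbook Monte-Carlo localization step (predict, weight, resample) in a toy 1-D corridor; it does not reproduce the paper's discrete-event-queue formulation, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D corridor: the robot measures its range to a wall at x = 10.
N = 500
particles = rng.uniform(0.0, 10.0, size=N)   # particle positions
WALL, MOTION, MOTION_STD, SENSOR_STD = 10.0, 1.0, 0.1, 0.3

def mcl_step(particles, measured_range):
    # 1) Predict: propagate particles through the (noisy) motion model.
    particles = particles + MOTION + rng.normal(0.0, MOTION_STD, size=particles.size)
    # 2) Weight: likelihood of the range measurement under each particle.
    expected = WALL - particles
    weights = np.exp(-0.5 * ((measured_range - expected) / SENSOR_STD) ** 2)
    weights /= weights.sum()
    # 3) Resample: draw a new particle set proportionally to the weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx]

particles = mcl_step(particles, measured_range=6.0)   # robot should be near x = 4
print("estimated position:", particles.mean().round(2))
```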
