Search Results for author: Akira Taniguchi

Found 20 papers, 5 papers with code

Brain-inspired probabilistic generative model for double articulation analysis of spoken language

no code implementations • 6 Jul 2022 • Akira Taniguchi, Maoko Muro, Hiroshi Yamakawa, Tadahiro Taniguchi

This study proposes a probabilistic generative model (PGM) for a double articulation analysis (DAA) hypothesis that can be realized in the brain, based on the outcomes of several neuroscientific surveys.

Anatomy

Symbol Emergence as Inter-personal Categorization with Head-to-head Latent Word

no code implementations • 24 May 2022 • Kazuma Furukawa, Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi

On the basis of the H2H-type Inter-MDM, we propose a naming game in the same way as for the conventional Inter-MDM.

Emergent Communication through Metropolis-Hastings Naming Game with Deep Generative Models

no code implementations • 24 May 2022 • Tadahiro Taniguchi, Yuto Yoshida, Akira Taniguchi, Yoshinobu Hagiwara

The MH naming game is a variant of the MH algorithm for an integrative probabilistic generative model that combines two agents playing the naming game.

Bayesian Inference • Representation Learning
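The exchange described above can be sketched as a toy simulation. Everything below (vocabulary, belief values, function names) is hypothetical, not taken from the paper: each agent holds a categorical belief p(name | its own observation), the speaker proposes a name drawn from its belief, and the listener accepts with the MH ratio of its own beliefs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two agents share a vocabulary but observe the world
# differently, so their name beliefs disagree (values are made up).
vocab = ["sign-A", "sign-B", "sign-C"]
speaker_belief = np.array([0.7, 0.2, 0.1])    # speaker's p(name | z_speaker)
listener_belief = np.array([0.1, 0.6, 0.3])   # listener's p(name | z_listener)

def mh_naming_step(current, speaker_belief, listener_belief):
    """One MH naming-game step: the speaker samples a name from its
    belief; the listener accepts it with probability
    min(1, p_listener(proposal) / p_listener(current))."""
    proposal = rng.choice(len(speaker_belief), p=speaker_belief)
    accept = min(1.0, listener_belief[proposal] / listener_belief[current])
    return proposal if rng.random() < accept else current

# Iterating the exchange yields names distributed, in the long run, in
# proportion to the product of both agents' beliefs, i.e. samples from
# the integrative model that combines the two agents.
name, counts = 0, np.zeros(len(vocab))
for _ in range(20000):
    name = mh_naming_step(name, speaker_belief, listener_belief)
    counts[name] += 1
print(counts / counts.sum())
```

Detailed balance holds for the target proportional to `speaker_belief * listener_belief`, which is why this simple accept/reject exchange amounts to sampling from the combined generative model.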

Spatial Concept-based Topometric Semantic Mapping for Hierarchical Path-planning from Speech Instructions

1 code implementation • 21 Mar 2022 • Akira Taniguchi, Shuya Ito, Tadahiro Taniguchi

Navigating to destinations using human speech instructions is an important task for autonomous mobile robots that operate in the real world.

Unsupervised Multimodal Word Discovery based on Double Articulation Analysis with Co-occurrence cues

1 code implementation • 18 Jan 2022 • Akira Taniguchi, Hiroaki Murakami, Ryo Ozaki, Tadahiro Taniguchi

Human infants acquire their verbal lexicon from minimal prior knowledge of language based on the statistical properties of phonological distributions and the co-occurrence of other sensory stimuli.

Multiagent Multimodal Categorization for Symbol Emergence: Emergent Communication via Interpersonal Cross-modal Inference

no code implementations • 15 Sep 2021 • Yoshinobu Hagiwara, Kazuma Furukawa, Akira Taniguchi, Tadahiro Taniguchi

(2) A function to improve an agent's categorization accuracy via semiotic communication with another agent, even when some sensory modalities of each agent are missing.

StarGAN-VC+ASR: StarGAN-based Non-Parallel Voice Conversion Regularized by Automatic Speech Recognition

no code implementations • 10 Aug 2021 • Shoki Sakamoto, Akira Taniguchi, Tadahiro Taniguchi, Hirokazu Kameoka

Although this method is powerful, it can fail to preserve the linguistic content of input speech when the number of available training samples is extremely small.

Automatic Speech Recognition • Speech Recognition +1

Unsupervised Lexical Acquisition of Relative Spatial Concepts Using Spoken User Utterances

no code implementations • 16 Jun 2021 • Rikunari Sagara, Ryo Taguchi, Akira Taniguchi, Tadahiro Taniguchi, Koosuke Hattori, Masahiro Hoguro, Taizo Umezaki

The experimental results show that relative spatial concepts and a phoneme sequence representing each concept can be learned under the condition that the robot does not know which located object is the reference object.

A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive Architectures for Developmental Robots

no code implementations • 15 Mar 2021 • Tadahiro Taniguchi, Hiroshi Yamakawa, Takayuki Nagai, Kenji Doya, Masamichi Sakagami, Masahiro Suzuki, Tomoaki Nakamura, Akira Taniguchi

This approach is based on two ideas: (1) brain-inspired AI, learning human brain architecture to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based cognitive system to develop a cognitive system for developmental robots by integrating PGMs.

Hippocampal formation-inspired probabilistic generative model

no code implementations • 12 Mar 2021 • Akira Taniguchi, Ayako Fukawa, Hiroshi Yamakawa

In building artificial intelligence (AI) agents, referring to how brains function in real environments can accelerate development by reducing the design space.

Hippocampus • Simultaneous Localization and Mapping

Hierarchical Bayesian Model for the Transfer of Knowledge on Spatial Concepts based on Multimodal Information

no code implementations • 11 Mar 2021 • Yoshinobu Hagiwara, Keishiro Taguchi, Satoshi Ishibushi, Akira Taniguchi, Tadahiro Taniguchi

This paper proposes a hierarchical Bayesian model based on spatial concepts that enables a robot to transfer the knowledge of places from experienced environments to a new environment.

Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

1 code implementation • 18 Feb 2020 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

The aim of this study is to enable a mobile robot to perform navigational tasks with human speech instructions, such as "Go to the kitchen", via probabilistic inference on a Bayesian generative model using spatial concepts.

Decision Making
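The inference described above can be illustrated with a small sketch. All names, numbers, and the helper function below are hypothetical, not the paper's model: each learned spatial concept pairs a name distribution with a position estimate, and the goal is chosen by Bayes' rule given the instruction word.

```python
import numpy as np

# Illustrative spatial concepts (made-up values): a name distribution,
# a representative map position, and a prior for each concept.
concepts = [
    {"names": {"kitchen": 0.8, "sink": 0.2}, "pos": np.array([2.0, 1.0]), "prior": 0.5},
    {"names": {"bedroom": 0.9, "bed": 0.1},  "pos": np.array([5.0, 4.0]), "prior": 0.5},
]

def infer_goal(word):
    """Return the position of the concept maximizing
    p(concept | word) ∝ p(word | concept) * p(concept)."""
    posterior = [c["names"].get(word, 1e-6) * c["prior"] for c in concepts]
    return concepts[int(np.argmax(posterior))]["pos"]

print(infer_goal("kitchen"))  # -> position of the kitchen concept, [2. 1.]
```

A planner would then treat the inferred position as the navigation goal; the point of the sketch is only that the instruction word selects the goal through posterior inference, not through a hand-written keyword map.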

Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots

no code implementations • 10 Feb 2020 • Akira Taniguchi, Shota Isobe, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi

We evaluate the effectiveness of the proposed method by an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.

Symbol Emergence as an Interpersonal Multimodal Categorization

no code implementations • 31 May 2019 • Yoshinobu Hagiwara, Hiroyoshi Kobayashi, Akira Taniguchi, Tadahiro Taniguchi

In this paper, we describe a new computational model that represents symbol emergence in a two-agent system based on a probabilistic generative model for multimodal categorization.

Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

3 code implementations • 9 Mar 2018 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

We propose a novel online learning algorithm, called SpCoSLAM 2.0, for spatial concepts and lexical acquisition with high accuracy and scalability.

Online Learning
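The core idea of online learning here can be shown with a minimal sketch that is not SpCoSLAM 2.0 itself (the vocabulary and counts are hypothetical): a Dirichlet-categorical model over place names is updated one utterance at a time, so past data never has to be revisited.

```python
import numpy as np

# Dirichlet-categorical model over a toy place-name vocabulary.
vocab = ["kitchen", "bedroom", "hall"]
alpha = np.ones(len(vocab))  # symmetric Dirichlet prior counts

def online_update(alpha, word):
    """Fold a single observed word into the posterior counts,
    without storing or reprocessing earlier observations."""
    new = alpha.copy()
    new[vocab.index(word)] += 1.0
    return new

# Stream of observed words, processed sequentially.
for w in ["kitchen", "kitchen", "hall"]:
    alpha = online_update(alpha, w)

posterior_mean = alpha / alpha.sum()
print(posterior_mean)  # counts [3, 1, 2] / 6 -> [0.5, 0.166..., 0.333...]
```

The constant-time, constant-memory update per observation is what makes this kind of model scalable for a robot that keeps learning while it operates.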

Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

no code implementations • 3 Feb 2016 • Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura

In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from human continuous speech signals.
