Search Results for author: Tadahiro Taniguchi

Found 37 papers, 9 papers with code

Brain-inspired probabilistic generative model for double articulation analysis of spoken language

no code implementations6 Jul 2022 Akira Taniguchi, Maoko Muro, Hiroshi Yamakawa, Tadahiro Taniguchi

This study proposes a probabilistic generative model (PGM) for a double articulation analysis (DAA) hypothesis that can be realized in the brain, based on the findings of several neuroscientific surveys.

Anatomy

Speak Like a Dog: Human to Non-human creature Voice Conversion

1 code implementation9 Jun 2022 Kohei Suzuki, Shoki Sakamoto, Tadahiro Taniguchi, Hirokazu Kameoka

This paper proposes a new voice conversion (VC) task from human speech to dog-like speech while preserving linguistic information as an example of human to non-human creature voice conversion (H2NH-VC) tasks.

Voice Conversion

Symbol Emergence as Inter-personal Categorization with Head-to-head Latent Word

no code implementations24 May 2022 Kazuma Furukawa, Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi

On the basis of the head-to-head (H2H)-type Inter-MDM, we propose a naming game in the same way as for the conventional Inter-MDM.

Emergent Communication through Metropolis-Hastings Naming Game with Deep Generative Models

no code implementations24 May 2022 Tadahiro Taniguchi, Yuto Yoshida, Akira Taniguchi, Yoshinobu Hagiwara

The MH naming game is a form of Metropolis-Hastings (MH) algorithm for an integrative probabilistic generative model that combines two agents playing the naming game.

Bayesian Inference, Representation Learning
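
For intuition, the MH acceptance step at the heart of such a naming game can be sketched in a few lines; the sign set, categorical distributions, and agent models below are toy placeholders rather than the deep generative models used in the paper.

```python
# Toy sketch of one Metropolis-Hastings naming-game exchange (all names,
# distributions, and probabilities below are hypothetical placeholders).
import numpy as np

rng = np.random.default_rng(0)
SIGNS = ["A", "B", "C"]

def propose(speaker_posterior):
    """Speaker samples a sign for the current observation from its own posterior."""
    return rng.choice(SIGNS, p=speaker_posterior)

def mh_accept(proposed, current, listener_likelihood):
    """Listener accepts the proposed sign with an MH ratio computed from its
    own likelihood of each sign given the same observation."""
    ratio = listener_likelihood[proposed] / max(listener_likelihood[current], 1e-12)
    return proposed if rng.random() < min(1.0, ratio) else current

# One communication step: agent 1 speaks, agent 2 listens.
speaker_posterior = np.array([0.7, 0.2, 0.1])         # P_speaker(sign | observation)
listener_likelihood = {"A": 0.5, "B": 0.3, "C": 0.2}  # P_listener(observation | sign)
current_sign = "B"
current_sign = mh_accept(propose(speaker_posterior), current_sign, listener_likelihood)
print(current_sign)
```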

Self-Supervised Representation Learning as Multimodal Variational Inference

no code implementations22 Mar 2022 Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi

The proposed extension makes SimSiam uncertainty-aware by treating SimSiam as a generative model of augmented views and learning it via variational inference.

Representation Learning, Self-Supervised Learning +1
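
As background for this reinterpretation, a minimal sketch of the standard SimSiam objective (negative cosine similarity with a stop-gradient on the target branch) is shown below; the random tensors are stand-ins for encoder outputs, and the paper's variational, uncertainty-aware extension is not reproduced here.

```python
# Minimal sketch of the standard SimSiam loss that the paper reinterprets as
# variational inference over augmented views (tensors are random stand-ins).
import torch
import torch.nn.functional as F

def simsiam_loss(p1, z2, p2, z1):
    """Symmetrized negative cosine similarity with stop-gradient targets."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

# Toy usage with projector (z) and predictor (p) outputs of two augmented views.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
p1, p2 = torch.randn(8, 128), torch.randn(8, 128)
print(simsiam_loss(p1, z2, p2, z1))
```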

Spatial Concept-based Topometric Semantic Mapping for Hierarchical Path-planning from Speech Instructions

1 code implementation21 Mar 2022 Akira Taniguchi, Shuya Ito, Tadahiro Taniguchi

Navigating to destinations using human speech instructions is an important task for autonomous mobile robots that operate in the real world.

Multi-View Dreaming: Multi-View World Model with Contrastive Learning

no code implementations15 Mar 2022 Akira Kinose, Masashi Okada, Ryo Okumura, Tadahiro Taniguchi

In this paper, we propose Multi-View Dreaming, a novel reinforcement learning agent for integrated recognition and control from multi-view observations by extending Dreaming.

Contrastive Learning, reinforcement-learning

Tactile-Sensitive NewtonianVAE for High-Accuracy Industrial Connector Insertion

no code implementations10 Mar 2022 Ryo Okumura, Nobuki Nishio, Tadahiro Taniguchi

An industrial connector insertion task requires submillimeter positioning and grasp pose compensation for a plug.

DreamingV2: Reinforcement Learning with Discrete World Models without Reconstruction

no code implementations1 Mar 2022 Masashi Okada, Tadahiro Taniguchi

The present paper proposes a novel reinforcement learning method with world models, DreamingV2, a collaborative extension of DreamerV2 and Dreaming.

Contrastive Learning, Model-based Reinforcement Learning +1

Unsupervised Multimodal Word Discovery based on Double Articulation Analysis with Co-occurrence cues

1 code implementation18 Jan 2022 Akira Taniguchi, Hiroaki Murakami, Ryo Ozaki, Tadahiro Taniguchi

Human infants acquire their verbal lexicon from minimal prior knowledge of language based on the statistical properties of phonological distributions and the co-occurrence of other sensory stimuli.

Multiagent Multimodal Categorization for Symbol Emergence: Emergent Communication via Interpersonal Cross-modal Inference

no code implementations15 Sep 2021 Yoshinobu Hagiwara, Kazuma Furukawa, Akira Taniguchi, Tadahiro Taniguchi

(2) a function to improve the categorization accuracy of an agent via semiotic communication with another agent, even when some sensory modalities of each agent are missing.

StarGAN-VC+ASR: StarGAN-based Non-Parallel Voice Conversion Regularized by Automatic Speech Recognition

no code implementations10 Aug 2021 Shoki Sakamoto, Akira Taniguchi, Tadahiro Taniguchi, Hirokazu Kameoka

Although this method is powerful, it can fail to preserve the linguistic content of input speech when the number of available training samples is extremely small.

Automatic Speech Recognition, speech-recognition +1

Unsupervised Lexical Acquisition of Relative Spatial Concepts Using Spoken User Utterances

no code implementations16 Jun 2021 Rikunari Sagara, Ryo Taguchi, Akira Taniguchi, Tadahiro Taniguchi, Koosuke Hattori, Masahiro Hoguro, Taizo Umezaki

The experimental results show that relative spatial concepts and a phoneme sequence representing each concept can be learned under the condition that the robot does not know which located object is the reference object.

StarGAN-based Emotional Voice Conversion for Japanese Phrases

no code implementations5 Apr 2021 Asuka Moritani, Ryo Ozaki, Shoki Sakamoto, Hirokazu Kameoka, Tadahiro Taniguchi

Through subjective evaluation experiments, we evaluated the performance of our StarGAN-EVC system in terms of its ability to achieve EVC for Japanese phrases.

Voice Conversion

Double Articulation Analyzer with Prosody for Unsupervised Word and Phoneme Discovery

1 code implementation15 Mar 2021 Yasuaki Okuda, Ryo Ozaki, Tadahiro Taniguchi

The main contributions of this study are as follows: 1) we develop a probabilistic generative model for time-series data, including prosody, that potentially has a double articulation structure; 2) we propose the Prosodic DAA by deriving the inference procedure for the Prosodic HDP-HLM and show that the Prosodic DAA can discover words directly from continuous human speech signals, using statistical and prosodic information, in an unsupervised manner; and 3) we show that prosodic cues contribute more to word segmentation when words are naturally distributed, i.e., when they follow Zipf's law.

Language Modelling, Time Series

A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive Architectures for Developmental Robots

no code implementations15 Mar 2021 Tadahiro Taniguchi, Hiroshi Yamakawa, Takayuki Nagai, Kenji Doya, Masamichi Sakagami, Masahiro Suzuki, Tomoaki Nakamura, Akira Taniguchi

This approach is based on two ideas: (1) brain-inspired AI, which learns from the architecture of the human brain to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based cognitive system for developmental robots, developed by integrating PGMs.

Hierarchical Bayesian Model for the Transfer of Knowledge on Spatial Concepts based on Multimodal Information

no code implementations11 Mar 2021 Yoshinobu Hagiwara, Keishiro Taguchi, Satoshi Ishibushi, Akira Taniguchi, Tadahiro Taniguchi

This paper proposes a hierarchical Bayesian model based on spatial concepts that enables a robot to transfer the knowledge of places from experienced environments to a new environment.

Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction

no code implementations29 Jul 2020 Masashi Okada, Tadahiro Taniguchi

In the present paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method from pixels.

Contrastive Learning, Data Augmentation +2

PlaNet of the Bayesians: Reconsidering and Improving Deep Planning Network by Incorporating Bayesian Inference

no code implementations1 Mar 2020 Masashi Okada, Norio Kosaka, Tadahiro Taniguchi

In this paper, we extend VI-MPC and PaETS, which were originally introduced in previous literature, to address partially observable cases.

Bayesian Inference, Continuous Control +2

Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

1 code implementation18 Feb 2020 Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

The aim of this study is to enable a mobile robot to perform navigational tasks with human speech instructions, such as "Go to the kitchen", via probabilistic inference on a Bayesian generative model using spatial concepts.

Decision Making

Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots

no code implementations10 Feb 2020 Akira Taniguchi, Shota Isobe, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi

We evaluate the effectiveness of the proposed method by an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.

Domain-Adversarial and Conditional State Space Model for Imitation Learning

no code implementations31 Jan 2020 Ryo Okumura, Masashi Okada, Tadahiro Taniguchi

We experimentally evaluated the model predictive control performance via imitation learning for continuous control of sparse-reward tasks in simulators and compared it with the performance of the existing SRL method.

Continuous Control, Imitation Learning +1

Multi-person Pose Tracking using Sequential Monte Carlo with Probabilistic Neural Pose Predictor

no code implementations16 Sep 2019 Masashi Okada, Shinji Takenaka, Tadahiro Taniguchi

An important component of SMC, i.e., the proposal distribution, is designed as a probabilistic neural pose predictor, which can propose diverse and plausible hypotheses by incorporating epistemic uncertainty and heteroscedastic aleatoric uncertainty.

Pose Tracking
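
The generic SMC update that such a learned proposal plugs into can be sketched as follows; the Gaussian densities and the toy proposal below are assumptions standing in for the paper's probabilistic neural pose predictor and observation model.

```python
# Generic sequential Monte Carlo update with a learned proposal, in plain NumPy
# (the densities and the toy proposal are hypothetical stand-ins).
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mean, std):
    return -0.5 * np.sum(((x - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2), axis=-1)

def smc_step(particles, weights, obs, proposal_mean_fn,
             proposal_std=0.1, motion_std=0.2, obs_std=0.3):
    """Propose new poses from the (learned) proposal, then reweight with the
    importance ratio  p(obs | x) * p(x | x_prev) / q(x | x_prev, obs)."""
    means = proposal_mean_fn(particles, obs)
    new_particles = means + proposal_std * rng.standard_normal(particles.shape)
    log_w = (np.log(weights + 1e-12)
             + gaussian_logpdf(obs, new_particles, obs_std)           # likelihood
             + gaussian_logpdf(new_particles, particles, motion_std)  # transition
             - gaussian_logpdf(new_particles, means, proposal_std))   # proposal
    w = np.exp(log_w - log_w.max())
    return new_particles, w / w.sum()

# Toy usage: 100 two-dimensional "poses"; the proposal pulls towards the observation.
particles = rng.standard_normal((100, 2))
weights = np.full(100, 1 / 100)
obs = np.array([0.5, -0.2])
particles, weights = smc_step(particles, weights, obs, lambda x, o: 0.5 * (x + o))
print(weights.max())
```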

Variational Inference MPC for Bayesian Model-based Reinforcement Learning

no code implementations8 Jul 2019 Masashi Okada, Tadahiro Taniguchi

Probabilistic ensembles with trajectory sampling (PETS) is a leading type of MBRL, which applies Bayesian inference to dynamics modeling and performs model predictive control (MPC) with stochastic optimization via the cross-entropy method (CEM).

Bayesian Inference, Model-based Reinforcement Learning +3
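
A minimal sketch of the CEM-based MPC loop that PETS builds on is given below; the toy dynamics and reward are hypothetical placeholders for the Bayesian neural-network ensemble described in the paper.

```python
# Minimal sketch of MPC with the cross entropy method (CEM) over a toy model
# (toy_rollout_return is a placeholder, not the PETS ensemble).
import numpy as np

rng = np.random.default_rng(0)

def toy_rollout_return(state, actions):
    """Placeholder for model-based return estimation along an action sequence."""
    total, s = 0.0, state.copy()
    for a in actions:
        s = s + a                 # toy "dynamics"
        total += -np.sum(s ** 2)  # toy reward: stay near the origin
    return total

def cem_plan(state, horizon=10, action_dim=2, iters=5, pop=200, elite=20):
    mean, std = np.zeros((horizon, action_dim)), np.ones((horizon, action_dim))
    for _ in range(iters):
        candidates = mean + std * rng.standard_normal((pop, horizon, action_dim))
        returns = np.array([toy_rollout_return(state, c) for c in candidates])
        elites = candidates[np.argsort(returns)[-elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean[0]  # execute only the first action (receding horizon)

print(cem_plan(np.array([1.0, -1.0])))
```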

Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Graphical Model

no code implementations3 Jul 2019 Akira Kinose, Tadahiro Taniguchi

In this paper, we present a new theory for integrating reinforcement and imitation learning by extending the probabilistic generative model framework for reinforcement learning, "plan by inference".

General Knowledge, Imitation Learning +1

Symbol Emergence as an Interpersonal Multimodal Categorization

no code implementations31 May 2019 Yoshinobu Hagiwara, Hiroyoshi Kobayashi, Akira Taniguchi, Tadahiro Taniguchi

In this paper, we describe a new computational model that represents symbol emergence in a two-agent system based on a probabilistic generative model for multimodal categorization.

Towards Understanding Language through Perception in Situated Human-Robot Interaction: From Word Grounding to Grammar Induction

no code implementations12 Dec 2018 Amir Aly, Tadahiro Taniguchi

Robots widely collaborate with human users on different tasks that require high-level cognitive functions enabling them to discover the surrounding environment.

Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

3 code implementations9 Mar 2018 Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

We propose a novel online learning algorithm, called SpCoSLAM 2.0, for spatial concepts and lexical acquisition with high accuracy and scalability.

online learning

Symbol Emergence in Cognitive Developmental Systems: a Survey

no code implementations26 Jan 2018 Tadahiro Taniguchi, Emre Ugur, Matej Hoffmann, Lorenzo Jamone, Takayuki Nagai, Benjamin Rosman, Toshihiko Matsuka, Naoto Iwahashi, Erhan Oztop, Justus Piater, Florentin Wörgötter

However, the symbol grounding problem was originally posed to connect symbolic AI and sensorimotor information, and it did not consider the many interdisciplinary phenomena in human communication and dynamic symbol systems in our society that semiotics has considered.

SERKET: An Architecture for Connecting Stochastic Models to Realize a Large-Scale Cognitive Model

1 code implementation4 Dec 2017 Tomoaki Nakamura, Takayuki Nagai, Tadahiro Taniguchi

Experimental results demonstrated that the model can be constructed by connecting modules, that its parameters can be optimized as a whole, and that the resulting models are comparable with the original models we previously proposed.

Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

no code implementations3 Feb 2016 Akira Taniguchi, Tadahiro Taniguchi, Tetsunari Inamura

In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from human continuous speech signals.

Multimodal Hierarchical Dirichlet Process-based Active Perception

1 code implementation1 Oct 2015 Tadahiro Taniguchi, Toshiaki Takano, Ryo Yoshino

We propose an MHDP-based active perception method that uses the information gain (IG) maximization criterion and lazy greedy algorithm.
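
A minimal sketch of the lazy greedy selection loop is shown below; the coverage-style objective is a toy surrogate for the MHDP-based information gain, which would require the full generative model from the paper.

```python
# Lazy greedy maximization sketch: re-evaluate marginal gains only when needed,
# which is valid when the objective is (approximately) submodular.
import heapq

def lazy_greedy(candidates, objective, budget):
    selected, current_value = [], objective([])
    heap = [(-float("inf"), c) for c in candidates]  # stale upper bounds on gains
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        _, best = heapq.heappop(heap)
        gain = objective(selected + [best]) - current_value  # fresh marginal gain
        if not heap or -heap[0][0] <= gain:
            selected.append(best)    # fresh gain still beats all stale bounds
            current_value += gain
        else:
            heapq.heappush(heap, (-gain, best))  # push back with updated bound
    return selected

# Toy usage: pick observation actions with diminishing coverage-style gains.
coverage = {"look": {1, 2, 3}, "touch": {3, 4}, "listen": {5}}
objective = lambda s: len(set().union(*(coverage[a] for a in s)))
print(lazy_greedy(list(coverage), objective, budget=2))
```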

Symbol Emergence in Robotics: A Survey

no code implementations29 Sep 2015 Tadahiro Taniguchi, Takayuki Nagai, Tomoaki Nakamura, Naoto Iwahashi, Tetsuya OGATA, Hideki Asoh

Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people.

Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals

no code implementations22 Jun 2015 Tadahiro Taniguchi, Ryo Nakashima, Shogo Nagasaka

In this paper, we develop a novel machine learning method called nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous speech signals.

Automatic Speech Recognition, Language Acquisition +2
