Search Results for author: Yoshinobu Hagiwara

Found 14 papers, 4 papers with code

Real-world Instance-specific Image Goal Navigation for Service Robots: Bridging the Domain Gap with Contrastive Learning

no code implementations • 15 Apr 2024 • Taichi Sakaguchi, Akira Taniguchi, Yoshinobu Hagiwara, Lotfi El Hafi, Shoichi Hasegawa, Tadahiro Taniguchi

To address this, we propose a novel method called Few-shot Cross-quality Instance-aware Adaptation (CrossIA), which employs contrastive learning with an instance classifier to align features between a large number of low-quality images and a few high-quality images.
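The excerpt only names the technique, so the following is a rough, hypothetical sketch (not the CrossIA implementation) of how a contrastive objective can pull the embedding of a low-quality view of an instance toward the embedding of a high-quality view of the same instance; the function names, temperature value, and tensor shapes are illustrative assumptions.

```python
# Minimal InfoNCE-style contrastive loss aligning features of low-quality images
# with features of high-quality images of the same instance (generic sketch only).
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(z_low, z_high, temperature=0.1):
    """z_low, z_high: (N, D) embeddings; row i of each tensor is the same instance."""
    z_low = F.normalize(z_low, dim=1)
    z_high = F.normalize(z_high, dim=1)
    logits = z_low @ z_high.t() / temperature      # (N, N) similarity matrix
    targets = torch.arange(z_low.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 instances with 128-dim features from a shared encoder
loss = contrastive_alignment_loss(torch.randn(8, 128), torch.randn(8, 128))
```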

Symbol emergence as interpersonal cross-situational learning: the emergence of lexical knowledge with combinatoriality

no code implementations • 27 Jun 2023 • Yoshinobu Hagiwara, Kazuma Furukawa, Takafumi Horie, Akira Taniguchi, Tadahiro Taniguchi

We present a computational model for a symbol emergence system that enables the emergence of lexical knowledge with combinatoriality among agents through a Metropolis-Hastings naming game and cross-situational learning.
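As a generic illustration of the cross-situational learning component (not the paper's interpersonal Bayesian model), word-referent mappings can be estimated from co-occurrence counts accumulated across situations; all names and the toy data below are hypothetical.

```python
# Simple cross-situational learning sketch: count word-object co-occurrences
# across situations and normalize to estimate p(object | word).
from collections import defaultdict

cooccur = defaultdict(lambda: defaultdict(int))

def observe_situation(words, objects):
    """Record one situation in which the given words and objects co-occur."""
    for w in words:
        for o in objects:
            cooccur[w][o] += 1

def object_given_word(word):
    counts = cooccur[word]
    total = sum(counts.values())
    return {o: c / total for o, c in counts.items()}

# Toy usage: after two situations, "cup" maps most strongly to the mug
observe_situation(["cup", "red"], ["mug", "table"])
observe_situation(["cup"], ["mug"])
print(object_given_word("cup"))   # {'mug': ~0.67, 'table': ~0.33}
```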

Recursive Metropolis-Hastings Naming Game: Symbol Emergence in a Multi-agent System based on Probabilistic Generative Models

no code implementations • 31 May 2023 • Jun Inukai, Tadahiro Taniguchi, Akira Taniguchi, Yoshinobu Hagiwara

The main contributions of this paper are twofold: (1) we propose the recursive Metropolis-Hastings naming game (RMHNG) as an N-agent version of MHNG and demonstrate that RMHNG is an approximate Bayesian inference method for the posterior distribution over a latent variable shared by agents, similar to MHNG; and (2) we empirically evaluate the performance of RMHNG on synthetic and real image data, enabling multiple agents to develop and share a symbol system.

Bayesian Inference
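For readers unfamiliar with the Metropolis-Hastings naming game, the sketch below shows the flavor of its acceptance step, assuming a listener that accepts a speaker's proposed sign with the usual MH ratio computed under its own model; this is a simplification for illustration, not the RMHNG implementation, and all names are hypothetical.

```python
# Sketch of a Metropolis-Hastings acceptance step in a naming game: the listener
# accepts the speaker's proposed sign with probability min(1, likelihood ratio)
# evaluated under the listener's own generative model.
import numpy as np

def mh_accept(proposed_sign, current_sign, listener_likelihood, rng=np.random):
    """listener_likelihood(sign) -> p(listener's observation | sign)."""
    ratio = listener_likelihood(proposed_sign) / listener_likelihood(current_sign)
    return rng.random() < min(1.0, ratio)

# Toy usage: the listener's model favors sign 1 over sign 0
likelihood = lambda sign: [0.2, 0.8][sign]
accepted = mh_accept(proposed_sign=1, current_sign=0, listener_likelihood=likelihood)
```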

Active Exploration based on Information Gain by Particle Filter for Efficient Spatial Concept Formation

no code implementations • 20 Nov 2022 • Akira Taniguchi, Yoshiki Tabuchi, Tomochika Ishikawa, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi

This study provides insights into the technical aspects of the proposed method, including active perception and exploration by the robot, and how the method can enable mobile robots to learn spatial concepts through active exploration.

Bayesian Inference · Efficient Exploration +2
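The general idea of information-gain-driven exploration with a particle filter can be sketched as follows: score a candidate viewpoint by the expected reduction in belief entropy after a simulated observation update. This is a generic illustration under assumed distributions, not the paper's exact criterion; all names and the toy numbers are hypothetical.

```python
# Particle-filter information gain sketch: expected entropy drop of the particle
# weights after reweighting by each possible observation at a candidate location.
import numpy as np

def entropy(weights):
    w = weights / weights.sum()
    return -np.sum(w * np.log(w + 1e-12))

def expected_information_gain(weights, likelihoods_per_observation, obs_probs):
    """weights: (P,) particle weights; likelihoods_per_observation: (O, P);
    obs_probs: (O,) predicted probability of each observation at the candidate."""
    h_before = entropy(weights)
    h_after = 0.0
    for p_obs, lik in zip(obs_probs, likelihoods_per_observation):
        h_after += p_obs * entropy(weights * lik)   # reweight particles, then entropy
    return h_before - h_after

# Toy usage: 4 particles, 2 possible observations at a candidate viewpoint
w = np.array([0.25, 0.25, 0.25, 0.25])
lik = np.array([[0.9, 0.1, 0.5, 0.5], [0.1, 0.9, 0.5, 0.5]])
ig = expected_information_gain(w, lik, obs_probs=np.array([0.5, 0.5]))
```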

Multiagent Multimodal Categorization for Symbol Emergence: Emergent Communication via Interpersonal Cross-modal Inference

no code implementations • 15 Sep 2021 • Yoshinobu Hagiwara, Kazuma Furukawa, Akira Taniguchi, Tadahiro Taniguchi

The model also provides a function to improve the categorization accuracy of an agent via semiotic communication with another agent, even when some of each agent's sensory modalities are missing.

Hierarchical Bayesian Model for the Transfer of Knowledge on Spatial Concepts based on Multimodal Information

no code implementations • 11 Mar 2021 • Yoshinobu Hagiwara, Keishiro Taguchi, Satoshi Ishibushi, Akira Taniguchi, Tadahiro Taniguchi

This paper proposes a hierarchical Bayesian model based on spatial concepts that enables a robot to transfer the knowledge of places from experienced environments to a new environment.

Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model

1 code implementation • 18 Feb 2020 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

The aim of this study is to enable a mobile robot to perform navigational tasks with human speech instructions, such as "Go to the kitchen", via probabilistic inference on a Bayesian generative model using spatial concepts.

Decision Making
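A toy illustration of this kind of inference (not the paper's model) is to choose a goal position by marginalizing over spatial concepts, p(goal | word) ∝ Σ_c p(word | c) p(goal | c) p(c); the distributions and dimensions below are made-up assumptions.

```python
# Sketch of goal selection by Bayesian inference over spatial concepts given one
# instruction word: mix concept-conditional position distributions by how well
# each concept explains the word.
import numpy as np

def goal_posterior(word_idx, p_word_given_concept, p_goal_given_concept, p_concept):
    """p_word_given_concept: (C, W); p_goal_given_concept: (C, G); p_concept: (C,)."""
    weights = p_word_given_concept[:, word_idx] * p_concept   # (C,)
    posterior = weights @ p_goal_given_concept                # (G,)
    return posterior / posterior.sum()

# Toy usage: 2 concepts (e.g. "kitchen", "bedroom"), 3 candidate goal cells
p_w_c = np.array([[0.9, 0.1], [0.2, 0.8]])           # word likelihood per concept
p_g_c = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]) # position model per concept
post = goal_posterior(0, p_w_c, p_g_c, p_concept=np.array([0.5, 0.5]))
goal = int(np.argmax(post))                           # most probable goal cell
```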

Autonomous Planning Based on Spatial Concepts to Tidy Up Home Environments with Service Robots

no code implementations • 10 Feb 2020 • Akira Taniguchi, Shota Isobe, Lotfi El Hafi, Yoshinobu Hagiwara, Tadahiro Taniguchi

We evaluate the effectiveness of the proposed method through an experimental simulation that reproduces the conditions of the Tidy Up Here task of the World Robot Summit 2018 international robotics competition.

Symbol Emergence as an Interpersonal Multimodal Categorization

no code implementations • 31 May 2019 • Yoshinobu Hagiwara, Hiroyoshi Kobayashi, Akira Taniguchi, Tadahiro Taniguchi

In this paper, we describe a new computational model that represents symbol emergence in a two-agent system based on a probabilistic generative model for multimodal categorization.

Improved and Scalable Online Learning of Spatial Concepts and Language Models with Mapping

3 code implementations • 9 Mar 2018 • Akira Taniguchi, Yoshinobu Hagiwara, Tadahiro Taniguchi, Tetsunari Inamura

We propose a novel online learning algorithm, called SpCoSLAM 2.0, for spatial concepts and lexical acquisition with high accuracy and scalability.
