Search Results for author: Masashi Okada

Found 10 papers, 0 papers with code

Representation Synthesis by Probabilistic Many-Valued Logic Operation in Self-Supervised Learning

no code implementations · 8 Sep 2023 · Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi

Moreover, image-retrieval experiments on MNIST and PascalVOC showed that the representations learned by our method can be manipulated with OR and AND operations.

Image Classification · Image Generation · +5
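The OR/AND operations on representations mentioned above can be illustrated with a minimal sketch, assuming each representation is a vector of independent probabilities combined under product logic; the function names here are hypothetical and this is not the paper's actual operation:

```python
import numpy as np

# Hypothetical sketch: each entry of a representation is treated as an
# independent probability, so logical operations become element-wise algebra.
def p_and(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Probabilistic conjunction: P(A and B) = P(A) * P(B)."""
    return a * b

def p_or(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Probabilistic disjunction: P(A or B) = P(A) + P(B) - P(A)P(B)."""
    return a + b - a * b

a = np.array([0.9, 0.2, 0.5])
b = np.array([0.8, 0.7, 0.5])
print(p_and(a, b))  # [0.72 0.14 0.25]
print(p_or(a, b))   # [0.98 0.76 0.75]
```

Under this reading, combining two image representations with `p_or` would yield a representation matching either image's content, which is the kind of retrieval behavior the abstract describes.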

Representation Uncertainty in Self-Supervised Learning as Variational Inference

no code implementations · ICCV 2023 · Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi

In this study, a novel self-supervised learning (SSL) method is proposed that frames SSL as variational inference, learning not only representations but also their uncertainties.

Representation Learning · Self-Supervised Learning · +1

Multi-View Dreaming: Multi-View World Model with Contrastive Learning

no code implementations · 15 Mar 2022 · Akira Kinose, Masashi Okada, Ryo Okumura, Tadahiro Taniguchi

In this paper, we propose Multi-View Dreaming, a novel reinforcement learning agent for integrated recognition and control from multi-view observations by extending Dreaming.

Contrastive Learning · reinforcement-learning · +1

DreamingV2: Reinforcement Learning with Discrete World Models without Reconstruction

no code implementations · 1 Mar 2022 · Masashi Okada, Tadahiro Taniguchi

The present paper proposes a novel reinforcement learning method with world models, DreamingV2, a collaborative extension of DreamerV2 and Dreaming.

Contrastive Learning · Model-based Reinforcement Learning · +2

Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction

no code implementations · 29 Jul 2020 · Masashi Okada, Tadahiro Taniguchi

In the present paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method from pixels.

Contrastive Learning · Data Augmentation · +3

PlaNet of the Bayesians: Reconsidering and Improving Deep Planning Network by Incorporating Bayesian Inference

no code implementations · 1 Mar 2020 · Masashi Okada, Norio Kosaka, Tadahiro Taniguchi

In this paper, we extend VI-MPC and PaETS, originally introduced in prior work, to address partially observable cases.

Bayesian Inference · Continuous Control · +3

Domain-Adversarial and Conditional State Space Model for Imitation Learning

no code implementations · 31 Jan 2020 · Ryo Okumura, Masashi Okada, Tadahiro Taniguchi

We experimentally evaluated model predictive control performance via imitation learning on continuous-control, sparse-reward tasks in simulation and compared it with that of an existing SRL method.

Continuous Control · Imitation Learning · +2

Multi-person Pose Tracking using Sequential Monte Carlo with Probabilistic Neural Pose Predictor

no code implementations · 16 Sep 2019 · Masashi Okada, Shinji Takenaka, Tadahiro Taniguchi

An important component of SMC, i.e., the proposal distribution, is designed as a probabilistic neural pose predictor that can propose diverse and plausible hypotheses by incorporating epistemic and heteroscedastic aleatoric uncertainty.

Pose Tracking

Variational Inference MPC for Bayesian Model-based Reinforcement Learning

no code implementations · 8 Jul 2019 · Masashi Okada, Tadahiro Taniguchi

Probabilistic ensembles with trajectory sampling (PETS) is a leading MBRL approach that applies Bayesian inference to dynamics modeling and performs model predictive control (MPC) with stochastic optimization via the cross-entropy method (CEM).

Bayesian Inference · Model-based Reinforcement Learning · +5
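CEM as used for MPC-style planning can be sketched as follows. This is a generic illustration on a toy quadratic cost, not the PETS implementation; the function name, parameters, and cost are assumptions:

```python
import numpy as np

def cem_plan(cost_fn, horizon=10, action_dim=2, iters=10,
             pop=500, n_elites=50, seed=0):
    """Cross-entropy method: repeatedly sample action sequences from a
    Gaussian, keep the lowest-cost elites, and refit the Gaussian to them."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((pop, horizon, action_dim))
        costs = np.array([cost_fn(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elites]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # planned action sequence; MPC would execute only the first step

# Toy cost: prefer every action in the sequence close to 1.0.
plan = cem_plan(lambda seq: float(((seq - 1.0) ** 2).sum()))
```

In PETS-style MBRL, `cost_fn` would instead roll the candidate action sequence through the learned probabilistic dynamics ensemble and return the predicted trajectory cost.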
