The proposed extension makes SimSiam uncertainty-aware by treating SimSiam as a generative model of augmented views and training it via variational inference.
In this paper, we propose Multi-View Dreaming, a novel reinforcement learning agent for integrated recognition and control from multi-view observations by extending Dreaming.
The present paper proposes a novel reinforcement learning method with world models, DreamingV2, a collaborative extension of DreamerV2 and Dreaming.
In the present paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method from pixels.
In this paper, we extend VI-MPC and PaETS, originally introduced in prior work, to address partially observable cases.
We experimentally evaluated model predictive control performance via imitation learning on continuous-control, sparse-reward tasks in simulators, comparing it against an existing SRL method.
An important component of SMC, i.e., the proposal distribution, is designed as a probabilistic neural pose predictor, which can propose diverse and plausible hypotheses by incorporating epistemic uncertainty and heteroscedastic aleatoric uncertainty.
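To make the role of the proposal distribution in SMC concrete, below is a minimal, generic sequential-importance-resampling sketch (not the paper's neural pose predictor): each particle is a pose hypothesis, a proposal perturbs it, an observation likelihood reweights it, and resampling keeps the diverse but plausible hypotheses. The Gaussian proposal and likelihood here are illustrative placeholders.

```python
import numpy as np

def smc_step(particles, weights, observation, propose, likelihood, rng):
    """One SMC iteration: propose new hypotheses, reweight, resample."""
    # Proposal: each particle proposes a new pose hypothesis.
    new_particles = propose(particles, rng)
    # Reweight by how well each hypothesis explains the observation.
    weights = weights * likelihood(new_particles, observation)
    weights /= weights.sum()
    # Resample to concentrate particles on plausible hypotheses.
    idx = rng.choice(len(new_particles), size=len(new_particles), p=weights)
    n = len(new_particles)
    return new_particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 2.0, size=500)     # initial 1-D pose hypotheses
weights = np.full(500, 1.0 / 500)
true_pose = 1.0                                 # toy ground truth
for _ in range(20):
    particles, weights = smc_step(
        particles, weights, true_pose,
        propose=lambda p, r: p + r.normal(0.0, 0.2, size=p.shape),  # diffusive proposal
        likelihood=lambda p, z: np.exp(-0.5 * ((p - z) / 0.3) ** 2),
        rng=rng,
    )
estimate = particles.mean()  # posterior mean pose estimate
```

In the paper's setting, the hand-written Gaussian `propose` would be replaced by the learned probabilistic pose predictor, whose predictive variance encodes both epistemic and aleatoric uncertainty.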
Probabilistic ensembles with trajectory sampling (PETS) is a leading type of MBRL, which applies Bayesian inference to dynamics modeling and performs model predictive control (MPC) with stochastic optimization via the cross-entropy method (CEM).
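The CEM optimizer at the heart of PETS-style MPC can be sketched generically: sample candidate action sequences from a Gaussian, score them under a cost function (in PETS, expected cost over trajectories sampled from the probabilistic ensemble), and refit the Gaussian to the lowest-cost elites. The quadratic toy objective below stands in for the learned-model rollout cost.

```python
import numpy as np

def cem_minimize(cost_fn, dim, n_iters=10, pop_size=100, n_elites=10, seed=0):
    """Cross-entropy method: iteratively refit a Gaussian to the elite samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(pop_size, dim))
        costs = np.array([cost_fn(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elites]]       # lowest-cost candidates
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy planning objective: drive each of 5 actions toward 0.5.
best = cem_minimize(lambda a: np.sum((a - 0.5) ** 2), dim=5)
```

In an MPC loop, `cost_fn` would roll the candidate action sequence through the ensemble dynamics model, only the first action of `best` would be executed, and the optimization would repeat at the next time step.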
We also show that PI-Net is able to learn the dynamics and cost models implicit in the demonstrations.