Efficient Stochastic Optimal Control through Approximate Bayesian Input Inference

17 May 2021  ·  Joe Watson, Hany Abdulsamad, Rolf Findeisen, Jan Peters

Optimal control under uncertainty is a prevailing challenge for many reasons. One of the critical difficulties lies in producing tractable solutions for the underlying stochastic optimization problem. We show how advanced approximate inference techniques can be used to handle the statistical approximations in a principled and practical manner by framing the control problem as a problem of input estimation. Analyzing the Gaussian setting, we present an inference-based solver that is effective in both stochastic and deterministic settings and outperforms popular baselines on nonlinear simulated tasks. We draw connections that relate this inference formulation to previous approaches for stochastic optimal control and outline several advantages that this inference view brings due to its statistical nature.
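To make the input-estimation framing concrete, the sketch below is a minimal single-step toy example, not the paper's solver: the control input is given a Gaussian prior, reaching a goal state is treated as a noisy pseudo-observation of the next state, and the posterior over the input follows from standard Gaussian conditioning. All matrices, values, and variable names are illustrative assumptions.

```python
import numpy as np

# Toy illustration of control as Bayesian input inference (Gaussian, one step).
# Dynamics:           x_next = A x + B u + w,  w ~ N(0, Q)
# Prior over input:   u ~ N(0, S_u)            (plays the role of a control cost)
# Pseudo-observation: x_next ~ N(goal, R)      (plays the role of a state cost)
# The posterior p(u | x, goal) is Gaussian and available in closed form.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # state transition (double integrator, dt = 0.1)
B = np.array([[0.005],
              [0.1]])           # input matrix
Q = 1e-4 * np.eye(2)            # process noise covariance
S_u = 1.0 * np.eye(1)           # Gaussian prior covariance on the input
R = 1e-2 * np.eye(2)            # pseudo-observation covariance on the goal

x = np.array([0.0, 0.0])        # current state
goal = np.array([1.0, 0.0])     # desired next state

# Predictive distribution of x_next given x and the input prior:
#   x_next ~ N(A x, B S_u B^T + Q)
mean_pred = A @ x
cov_pred = B @ S_u @ B.T + Q

# Joint Gaussian over (u, x_next); condition on x_next = goal (with noise R).
cross = S_u @ B.T                           # Cov(u, x_next)
K = cross @ np.linalg.inv(cov_pred + R)     # Gaussian conditioning gain
u_mean = K @ (goal - mean_pred)             # posterior mean input
u_cov = S_u - K @ cross.T                   # posterior input covariance

print("posterior mean input:", u_mean)
print("posterior input covariance:", u_cov)
print("predicted next state:", A @ x + B @ u_mean)
```

The posterior mean acts as the control, and the prior and pseudo-observation covariances play the role of control and state cost weights; extending this conditioning over a full trajectory is where the approximate inference machinery discussed in the paper comes in.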
