Exploratory Control with Tsallis Entropy for Latent Factor Models

14 Nov 2022 · Ryan Donnelly, Sebastian Jaimungal

We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis Entropy and derive the optimal distribution over states, which we prove is $q$-Gaussian distributed with location characterized through the solution of an FBS$\Delta$E and FBSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft $Q$-learning. The approach may be applied in, e.g., developing more robust statistical arbitrage trading strategies.

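For readers unfamiliar with the two objects named in the abstract, the sketch below gives a rough, illustrative implementation (not the paper's derivation or code) of the discrete Tsallis entropy used as an exploration reward and of a numerically normalised $q$-Gaussian density built from the $q$-exponential. The function names and the parameter choices (q = 1.5, beta, mu) are illustrative assumptions only.

```python
import numpy as np

def tsallis_entropy(p, q=1.5):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1) of a discrete
    distribution p; recovers the Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        nz = p[p > 0]
        return float(-np.sum(nz * np.log(nz)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def q_exponential(x, q=1.5):
    """q-exponential e_q(x) = [1 + (1 - q) x]_+^(1 / (1 - q))."""
    base = np.maximum(1.0 + (1.0 - q) * np.asarray(x, dtype=float), 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian_pdf(x, mu=0.0, beta=1.0, q=1.5):
    """q-Gaussian density proportional to e_q(-beta (x - mu)^2),
    normalised numerically over a uniform grid x."""
    unnorm = q_exponential(-beta * (x - mu) ** 2, q=q)
    dx = x[1] - x[0]  # assumes a uniform grid
    return unnorm / (unnorm.sum() * dx)

if __name__ == "__main__":
    # Tsallis entropy rewards spread-out (exploratory) action distributions.
    print(tsallis_entropy([0.25] * 4), tsallis_entropy([0.97, 0.01, 0.01, 0.01]))
    # Heavy-tailed q-Gaussian exploration density around a hypothetical location mu.
    grid = np.linspace(-5.0, 5.0, 2001)
    pdf = q_gaussian_pdf(grid, mu=0.4, beta=2.0, q=1.5)
    print(pdf.sum() * (grid[1] - grid[0]))  # ~1.0
```

The comparison in the `__main__` block shows that a uniform action distribution attains a higher Tsallis entropy than a peaked one, which is the sense in which the entropy term rewards exploration; for q > 1 the q-Gaussian has heavier tails than a Gaussian.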