Model Based Reinforcement Learning with Non-Gaussian Environment Dynamics and its Application to Portfolio Optimization

23 Jan 2023 · Huifang Huang, Ting Gao, Pengbo Li, Jin Guo, Peng Zhang, Nan Du

With the rapid development of quantitative portfolio optimization in financial engineering, many AI-based algorithmic trading strategies have demonstrated promising results, among which reinforcement learning is beginning to show competitive advantages. However, the environment of real financial markets is complex and hard to simulate fully, given abrupt transitions, unpredictable hidden causal factors, heavy-tailed properties, and so on. Thus, in this paper, first, we adopt a heavy-tail-preserving normalizing flow to simulate the high-dimensional joint probability of the complex trading environment and develop a model-based reinforcement learning framework to better understand the intrinsic mechanisms of quantitative online trading. Second, we experiment with various stocks from three different financial markets (Dow, NASDAQ and S&P) and show that, among these three, Dow achieves the best performance on various evaluation metrics under our back-testing system. In particular, our proposed method is able to mitigate the impact of the unpredictable financial market crisis during the COVID-19 pandemic period, resulting in a lower maximum drawdown. Third, we explore the explainability of our RL algorithm: (1) we utilize the pattern causality method to study the interactive relations among different stocks in the environment; (2) we analyze the dynamics loss and actor loss to ensure the convergence of our strategies; (3) by visualizing high-dimensional state-transition data from the real and virtual buffers with t-SNE, we uncover some effective patterns for better portfolio optimization strategies; and (4) we utilize eigenvalue analysis to study the convergence properties of the environment's model.
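To make the environment-modeling step concrete, the sketch below shows one way a heavy-tail-preserving normalizing flow can be built: affine coupling layers on top of a Student-t base distribution, so that sampled market states retain heavy tails. This is an illustrative assumption, not the paper's architecture; the layer widths, number of layers, and degrees of freedom are placeholders.

```python
# Minimal sketch (not the authors' implementation): a normalizing flow with a
# Student-t base distribution so that generated samples keep heavy tails.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: rescales and shifts one half of the
    dimensions conditioned on the other half; invertible with a tractable
    log-determinant."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):                       # x -> z, plus log|det J|
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                       # bound the scales for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)

    def inverse(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)

class HeavyTailFlow(nn.Module):
    """Flow whose base distribution is Student-t, preserving heavy tails."""
    def __init__(self, dim, n_layers=4, df=3.0):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])
        self.base = torch.distributions.StudentT(df)
        self.dim = dim

    def log_prob(self, x):
        log_det = 0.0
        for layer in self.layers:
            x, ld = layer(x)
            x = x.flip(1)                       # permute so the next layer updates the other half
            log_det = log_det + ld
        return self.base.log_prob(x).sum(dim=1) + log_det

    def sample(self, n):
        z = self.base.sample((n, self.dim))
        for layer in reversed(self.layers):
            z = z.flip(1)
            z = layer.inverse(z)
        return z
```

Training such a model would amount to maximizing `log_prob` on observed market transitions, after which `sample` can generate the virtual rollouts used by a model-based RL agent.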
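Similarly, the buffer comparison in point (3) can be illustrated with a joint t-SNE embedding of transitions drawn from the real replay buffer and from the learned model's virtual buffer. The array names, shapes, and plotting choices below are hypothetical placeholders, not the paper's code.

```python
# Minimal sketch: embed real and model-generated transitions together with
# t-SNE and color them by origin to compare the two buffers visually.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def compare_buffers(real_transitions, virtual_transitions, perplexity=30):
    """Both inputs: arrays of shape (n, d) holding flattened
    (state, action, next_state) vectors."""
    data = np.vstack([real_transitions, virtual_transitions])
    labels = np.array([0] * len(real_transitions) + [1] * len(virtual_transitions))
    emb = TSNE(n_components=2, perplexity=perplexity, init="pca",
               random_state=0).fit_transform(data)
    plt.scatter(emb[labels == 0, 0], emb[labels == 0, 1], s=5, label="real buffer")
    plt.scatter(emb[labels == 1, 0], emb[labels == 1, 1], s=5, label="virtual buffer")
    plt.legend()
    plt.title("t-SNE of real vs. model-generated state transitions")
    plt.show()
```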
