no code implementations • 9 Apr 2025 • Minshuo Chen, Renyuan Xu, Yumin Xu, Ruixun Zhang
This work presents the first theoretical integration of factor structure with diffusion models, offering a principled approach for high-dimensional financial simulation with limited data.
no code implementations • 24 Dec 2024 • Yinbin Han, Meisam Razaviyayn, Renyuan Xu
To bridge this gap, we propose a stochastic control framework for fine-tuning diffusion models.
no code implementations • 18 Oct 2024 • Adel Javanmard, Jingwei Ji, Renyuan Xu
We show that our policy achieves lower regret than both the policy that treats each security individually and the policy that treats all securities identically.
no code implementations • 18 Aug 2024 • Yufan Chen, Lan Wu, Renyuan Xu, Ruixun Zhang
Motivated by recent empirical findings on the periodic phenomenon of aggregated market volumes in equity markets, we aim to understand the causes and consequences of periodic trading activities through a game-theoretic perspective, examining market interactions among different types of participants.
no code implementations • 18 Aug 2024 • Jodi Dianetti, Giorgio Ferrari, Renyuan Xu
To encourage exploration and facilitate learning, we introduce a regularized version of the problem by penalizing it with the cumulative residual entropy of the randomized stopping time.
no code implementations • 24 May 2024 • Haoyang Cao, Zhengqi Wu, Renyuan Xu
This paper introduces a novel stochastic control framework to enhance the capabilities of automated investment managers, or robo-advisors, by accurately inferring clients' investment preferences from past activities.
no code implementations • 28 Jan 2024 • Yinbin Han, Meisam Razaviyayn, Renyuan Xu
Our analysis is grounded in a novel parametric form of the neural network and an innovative connection between score matching and regression analysis, facilitating the application of advanced statistical and optimization techniques.
no code implementations • 23 Nov 2023 • Xin Guo, Xinyu Li, Renyuan Xu
This paper proposes and analyzes two new policy learning methods: regularized policy gradient (RPG) and iterative policy optimization (IPO), for a class of discounted linear-quadratic control (LQC) problems over an infinite time horizon with entropy regularization.
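As a toy illustration of a policy-gradient iteration on a discounted LQ problem, the sketch below runs plain gradient descent on the exact cost of a scalar linear policy and compares the result to a grid search. It omits the entropy-regularization term and is not the paper's RPG/IPO method; all parameter values are assumptions.

```python
import numpy as np

# Scalar discounted LQR: x_{t+1} = a x_t + b u_t, stage cost q x^2 + r u^2,
# linear policy u = -k x.  When gamma * (a - b k)^2 < 1, the discounted
# cost from x_0 = 1 has the closed form below.
a, b, q, r, gamma = 1.0, 1.0, 1.0, 1.0, 0.9

def cost(k):
    c = a - b * k                      # closed-loop coefficient
    assert gamma * c**2 < 1, "policy must be discounted-stabilizing"
    return (q + r * k**2) / (1 - gamma * c**2)

def grad(k, eps=1e-6):                 # central finite difference
    return (cost(k + eps) - cost(k - eps)) / (2 * eps)

k, lr = 0.2, 0.05                      # stabilizing initial policy
for _ in range(200):
    k -= lr * grad(k)                  # policy-gradient step

# Compare against a fine grid search for the optimal gain.
grid = np.linspace(0.0, 1.5, 20001)
feasible = grid[gamma * (a - b * grid)**2 < 1]
k_star = feasible[np.argmin([(q + r * g**2) / (1 - gamma * (a - b * g)**2)
                             for g in feasible])]
print(round(k, 3), round(k_star, 3))
```

The gradient iteration converges to the same gain as the exhaustive search, which is the kind of global-convergence behavior the paper analyzes in far greater generality.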
no code implementations • 22 Nov 2023 • Zhengqi Wu, Renyuan Xu
In this paper, we consider a scenario where the decision-maker seeks to optimize a general utility function of the cumulative reward in the framework of a Markov decision process (MDP).
no code implementations • 15 Mar 2023 • Yinbin Han, Meisam Razaviyayn, Renyuan Xu
Nonlinear control systems with partial information available to the decision maker are prevalent in a variety of applications.
no code implementations • 15 Dec 2022 • Rama Cont, Alain Rossier, Renyuan Xu
We investigate the asymptotic properties of deep residual networks (ResNets) as the number of layers increases.
no code implementations • 4 Aug 2022 • Jingwei Ji, Renyuan Xu, Ruihao Zhu
Then, we rigorously analyze their near-optimal regret upper bounds to show that, by leveraging the linear structure, our algorithms can dramatically reduce the regret when compared to existing methods.
no code implementations • 14 Apr 2022 • Rama Cont, Alain Rossier, Renyuan Xu
We prove linear convergence of gradient descent to a global optimum for the training of deep residual networks with constant layer width and smooth activation function.
no code implementations • 3 Mar 2022 • Rama Cont, Mihai Cucuringu, Renyuan Xu, Chao Zhang
The estimation of loss distributions for dynamic portfolios requires the simulation of scenarios representing realistic joint dynamics of their components, with particular importance devoted to the simulation of tail risk scenarios.
no code implementations • 8 Dec 2021 • Ben Hambly, Renyuan Xu, Huining Yang
In contrast to classical stochastic control theory and other analytical approaches to financial decision-making that rely heavily on model assumptions, recent developments in reinforcement learning (RL) can make full use of large amounts of financial data with fewer model assumptions and improve decisions in complex financial environments.
no code implementations • 5 Aug 2021 • Haotian Gu, Xin Guo, Xiaoli Wei, Renyuan Xu
This paper proposes a framework of localized training and decentralized execution to study MARL with a network of states.
no code implementations • 27 Jul 2021 • Ben Hambly, Renyuan Xu, Huining Yang
We consider a general-sum N-player linear-quadratic game with stochastic dynamics over a finite horizon and prove the global convergence of the natural policy gradient method to the Nash equilibrium.
1 code implementation • 25 May 2021 • Alain-Sam Cohen, Rama Cont, Alain Rossier, Renyuan Xu
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs).
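The ResNet–neural ODE link can be made concrete in a few lines: a residual block $x_{k+1} = x_k + \frac{1}{L} f(x_k)$ is a forward-Euler step of the ODE $\dot{x} = f(x)$, so as depth $L$ grows the network output should approach the ODE flow. The sketch below checks this for $f(x) = -x$, whose exact flow at time 1 is $e^{-1} x_0$; it is a toy illustration, not the paper's model.

```python
import numpy as np

def resnet_forward(x0, L, f=lambda x: -x):
    # Depth-L residual stack with 1/L scaling: one Euler step per layer
    # of dx/dt = f(x) on the time interval [0, 1].
    x = x0
    for _ in range(L):
        x = x + f(x) / L
    return x

exact = np.exp(-1.0)                   # exact flow of dx/dt = -x at t = 1
for L in (4, 64, 1024):
    print(L, abs(resnet_forward(1.0, L) - exact))
```

The error shrinks roughly like $1/L$, consistent with viewing the deep network as a discretization of a continuous-time dynamical system.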
no code implementations • 20 Nov 2020 • Ben Hambly, Renyuan Xu, Huining Yang
In particular, we consider the convergence of policy gradient methods in the setting of known and unknown parameters.
no code implementations • 5 Nov 2020 • Anna Ananova, Rama Cont, Renyuan Xu
We introduce a model-free approach for analyzing the risk and return for a broad class of dynamic trading strategies, including pairs trading, mean-reversion trading and other statistical arbitrage strategies, in terms of excursions of a trading signal away from a reference level.
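The excursion view of statistical arbitrage can be illustrated with a simulated mean-reverting signal: enter when the signal strays a fixed distance from its reference level, exit when it returns, and book the excursion's amplitude as P&L. The sketch below uses an Ornstein–Uhlenbeck signal; it is a generic illustration, not the paper's model-free framework, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an OU trading signal dX = -kappa * X dt + sigma dW by Euler steps.
kappa, sigma, dt, T, band = 2.0, 0.5, 1e-3, 200_000, 0.3
noise = rng.normal(size=T)
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = x[t - 1] * (1 - kappa * dt) + sigma * np.sqrt(dt) * noise[t]

# Trade each excursion away from the reference level 0: enter when
# |X| >= band, betting on reversion, and exit at the next zero crossing.
pos, pnl, n_excursions = 0, 0.0, 0
for t in range(1, T):
    if pos == 0 and abs(x[t]) >= band:
        pos = -np.sign(x[t])           # short above 0, long below 0
        entry = x[t]
    elif pos != 0 and np.sign(x[t]) != np.sign(entry):
        pnl += pos * (x[t] - entry)    # excursion completed at the level
        pos, n_excursions = 0, n_excursions + 1

print(n_excursions, round(pnl, 2))
```

Each completed excursion contributes at least the band width to P&L by construction, which is why describing such strategies through excursions of the signal away from a reference level is natural.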
no code implementations • 30 Sep 2020 • Xin Guo, Renyuan Xu, Thaleia Zariphopoulou
In addition, this study leads to a policy-gradient algorithm for exploration in MFG.
no code implementations • 13 Mar 2020 • Xin Guo, Anran Hu, Renyuan Xu, Junzi Zhang
This paper presents a general mean-field game (GMFG) framework for simultaneous learning and decision-making in stochastic games with a large population.
no code implementations • 11 Mar 2020 • Jose Blanchet, Renyuan Xu, Zhengyuan Zhou
In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed.
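A minimal way to see the delayed-feedback difficulty: the reward of the action chosen at round $t$ only arrives at round $t + \text{delay}$, so the learner must act on a stale estimate while buffering unresolved rounds. The sketch below does this for a linear-reward contextual bandit with an epsilon-greedy ridge-regression learner; it is a generic illustration, not the paper's algorithm, and `theta_star`, the delay, and the greedy rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T, delay = 5, 4, 2000, 50
theta_star = rng.normal(size=d) / np.sqrt(d)   # unknown reward parameter

A = np.eye(d)                          # ridge-regression statistics
b = np.zeros(d)
buffer = []                            # (arrival_round, context, reward)
total, oracle = 0.0, 0.0

for t in range(T):
    contexts = rng.normal(size=(K, d))
    theta_hat = np.linalg.solve(A, b)  # current (stale) estimate
    arm = int(np.argmax(contexts @ theta_hat))
    if rng.random() < 0.1:             # epsilon-greedy exploration
        arm = int(rng.integers(K))
    x = contexts[arm]
    reward = x @ theta_star + 0.1 * rng.normal()
    buffer.append((t + delay, x, reward))      # reward observed later
    total += x @ theta_star
    oracle += np.max(contexts @ theta_star)
    while buffer and buffer[0][0] <= t:        # fold in arriving rewards
        _, xa, ra = buffer.pop(0)
        A += np.outer(xa, xa)
        b += ra * xa

print(round(oracle - total, 1))        # cumulative pseudo-regret
```

Despite updating only on delayed observations, the estimator converges and the pseudo-regret grows far more slowly than under uniformly random play.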
no code implementations • 10 Feb 2020 • Haotian Gu, Xin Guo, Xiaoli Wei, Renyuan Xu
Multi-agent reinforcement learning (MARL), despite its popularity and empirical success, suffers from the curse of dimensionality.
no code implementations • NeurIPS 2019 • Zhengyuan Zhou, Renyuan Xu, Jose Blanchet
In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed.
no code implementations • 21 Mar 2019 • Xin Guo, Charles-Albert Lehalle, Renyuan Xu
This part operates on the time scale of individual transactions in liquid corporate bonds, applying a transient impact model to estimate the price impact kernel with a non-parametric method.
no code implementations • 10 Sep 2018 • Xin Guo, Wenpin Tang, Renyuan Xu
In this paper we propose and analyze a class of $N$-player stochastic games that include finite fuel stochastic games as a special case.