Search Results for author: Igor Halperin

Found 11 papers, 2 papers with code

Model-Free Market Risk Hedging Using Crowding Networks

no code implementations • 13 Jun 2023 • Vadim Zlotnikov, Jiayu Liu, Igor Halperin, Fei He, Lisa Huang

Crowding is widely regarded as one of the most important risk factors in designing portfolio strategies.

Phases of MANES: Multi-Asset Non-Equilibrium Skew Model of a Strongly Non-Linear Market with Phase Transitions

no code implementations • 14 Mar 2022 • Igor Halperin

This paper presents an analytically tractable and practically oriented model of non-linear dynamics of a multi-asset market in the limit of a large number of assets.

Combining Reinforcement Learning and Inverse Reinforcement Learning for Asset Allocation Recommendations

no code implementations • 6 Jan 2022 • Igor Halperin, Jiayu Liu, Xiao Zhang

We suggest a simple, practical method that combines human and artificial intelligence to both learn the best investment practices of fund managers and provide recommendations to improve them.

Tasks: Reinforcement Learning (RL)

Distributional Offline Continuous-Time Reinforcement Learning with Neural Physics-Informed PDEs (SciPhy RL for DOCTR-L)

no code implementations • 2 Apr 2021 • Igor Halperin

A data-driven solution of the soft HJB equation uses methods of Neural PDEs and Physics-Informed Neural Networks developed in the field of Scientific Machine Learning (SciML).
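The paper defines its own soft HJB equation; purely as an illustration of the PINN recipe it points to, the sketch below (a minimal PyTorch example with placeholder dynamics, reward, and terminal condition, not the DOCTR-L equations) trains a neural value function by penalizing the residual of a generic entropy-regularized HJB-type PDE at random collocation points.

    import math
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Neural value function V(t, x): input (t, x), output a scalar value.
    value_net = nn.Sequential(
        nn.Linear(2, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 1),
    )
    optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)

    T, sigma, beta = 1.0, 0.2, 5.0           # horizon, noise level, inverse temperature
    actions = torch.linspace(-1.0, 1.0, 21)  # crude grid over a 1-d action space

    def terminal_value(x):
        return -x ** 2                       # placeholder terminal condition V(T, x)

    for step in range(2000):
        # Interior collocation points (t, x) where the PDE residual is penalized.
        t = (T * torch.rand(256, 1)).requires_grad_(True)
        x = torch.randn(256, 1, requires_grad=True)
        V = value_net(torch.cat([t, x], dim=1))
        V_t = torch.autograd.grad(V.sum(), t, create_graph=True)[0]
        V_x = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
        V_xx = torch.autograd.grad(V_x.sum(), x, create_graph=True)[0]

        # "Soft" Hamiltonian: log-sum-exp over the action grid (uniform prior),
        # with placeholder reward r(x, a) = -x^2 - 0.1 a^2 and drift f(x, a) = a.
        scores = beta * (-x ** 2 - 0.1 * actions ** 2 + actions * V_x)
        soft_ham = (torch.logsumexp(scores, dim=1, keepdim=True)
                    - math.log(len(actions))) / beta

        pde_residual = V_t + soft_ham + 0.5 * sigma ** 2 * V_xx

        # Terminal condition V(T, x) = terminal_value(x).
        x_T = torch.randn(256, 1)
        V_T = value_net(torch.cat([torch.full_like(x_T, T), x_T], dim=1))

        loss = (pde_residual ** 2).mean() + ((V_T - terminal_value(x_T)) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()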

Non-Equilibrium Skewness, Market Crises, and Option Pricing: Non-Linear Langevin Model of Markets with Supersymmetry

no code implementations • 3 Nov 2020 • Igor Halperin

Borrowing ideas from supersymmetric quantum mechanics (SUSY QM), the model takes a parameterized ground-state wave function (WF) of this QM system as a direct input, which in turn fixes a non-linear Langevin potential.
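For reference, in textbook SUSY QM (units $\hbar = 2m = 1$, ground-state energy shifted to zero; the paper's conventions may differ), a chosen ground-state wave function $\psi_0(x)$ determines both the superpotential and the Schrodinger potential:

\[
W(x) = -\frac{\psi_0'(x)}{\psi_0(x)}, \qquad
V(x) = W(x)^2 - W'(x) = \frac{\psi_0''(x)}{\psi_0(x)}, \qquad
\left(-\frac{d^2}{dx^2} + V(x)\right)\psi_0(x) = 0,
\]

which is the sense in which parameterizing $\psi_0$ also fixes the potential driving the dynamics.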

The Inverted Parabola World of Classical Quantitative Finance: Non-Equilibrium and Non-Perturbative Finance Perspective

no code implementations • 9 Aug 2020 • Igor Halperin

Classical quantitative finance models such as Geometric Brownian Motion and its later extensions, such as local or stochastic volatility models, do not make sense when seen from a physics-based perspective, as they are all equivalent to a negative-mass oscillator with noise.
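A quick way to see the "inverted parabola" reading (a heuristic sketch only; the paper's own potential and conventions may differ): write the GBM drift as a gradient flow,

\[
dS_t = \mu S_t\,dt + \sigma S_t\,dW_t, \qquad
\mu S = -\frac{\partial U(S)}{\partial S} \;\Longrightarrow\; U(S) = -\tfrac{1}{2}\,\mu S^2,
\]

so for $\mu > 0$ the drift descends an inverted parabolic potential, i.e. the unstable oscillator (equivalently, an oscillator with negative mass) referred to above.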

G-Learner and GIRL: Goal Based Wealth Management with Reinforcement Learning

no code implementations • 25 Feb 2020 • Matthew Dixon, Igor Halperin

Our approach is based on G-learning, a probabilistic extension of the Q-learning method of reinforcement learning.
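For reference, in the standard G-learning formulation this builds on (Fox, Pakman and Tishby; shown here in reward form, up to sign and convention differences with the paper), the optimal G-function satisfies a soft Bellman equation with a reference policy $\pi_0$ and an inverse-temperature parameter $\beta$:

\[
G(s,a) = r(s,a) + \gamma\, \mathbb{E}_{s' \mid s,a}\!\left[
\frac{1}{\beta}\,\log \sum_{a'} \pi_0(a' \mid s')\, e^{\beta\, G(s',a')}
\right],
\]

which recovers the usual Q-learning backup in the limit $\beta \to \infty$.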

Tasks: Management, Q-Learning +2

Market Self-Learning of Signals, Impact and Optimal Trading: Invisible Hand Inference with Free Energy

1 code implementation • 16 May 2018 • Igor Halperin, Ilya Feldshteyn

In particular, it represents, within a simple modeling framework, market views of common predictive signals, market impacts, and implied optimal dynamic portfolio allocations, and it can be used to assess the value of private signals.

Tasks: Self-Learning

The QLBS Q-Learner Goes NuQLear: Fitted Q Iteration, Inverse RL, and Option Portfolios

no code implementations • 17 Jan 2018 • Igor Halperin

It combines the well-known Q-Learning method of RL with the Black-Scholes(-Merton) idea of reducing the problem of option pricing and hedging to the problem of optimally rebalancing a dynamic replicating portfolio for the option, made up of a stock and cash.
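As a schematic of that reduction (the notation here is illustrative rather than the paper's): with a replicating portfolio $\Pi_t = u_t S_t + B_t$ rebalanced at discrete dates, the hedge that minimizes the conditional variance of the hedged one-step change is the familiar local risk-minimizing ratio

\[
u_t^{*} = \arg\min_{u_t} \operatorname{Var}_t\!\left[\Pi_{t+1} - u_t\, \Delta S_t\right]
= \frac{\operatorname{Cov}_t\!\left(\Pi_{t+1},\, \Delta S_t\right)}{\operatorname{Var}_t\!\left(\Delta S_t\right)},
\]

with fitted Q iteration then used to learn such hedges and the corresponding option price directly from data rather than from a closed-form model.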

Tasks: Q-Learning, Reinforcement Learning +1

QLBS: Q-Learner in the Black-Scholes(-Merton) Worlds

1 code implementation • 13 Dec 2017 • Igor Halperin

This paper presents a discrete-time option pricing model that is rooted in Reinforcement Learning (RL), and more specifically in the famous Q-Learning method of RL.

Tasks: Benchmarking, Model-based Reinforcement Learning +3

Inverse Reinforcement Learning for Marketing

no code implementations • 13 Dec 2017 • Igor Halperin

Learning customer preferences from observed behaviour is an important topic in the marketing literature.

Tasks: Marketing, Reinforcement Learning +1
