Search Results for author: Xiao Qi Shi

Found 5 papers, 2 papers with code

Information Content Exploration

no code implementations · 10 Oct 2023 · Jacob Chmura, Hasham Burhani, Xiao Qi Shi

We expand on this topic and propose a new intrinsic reward that systematically quantifies exploratory behavior and promotes state coverage by maximizing the information content of a trajectory taken by an agent.

Efficient Exploration · reinforcement-learning
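As a rough illustration of the idea in that abstract (not the paper's actual reward), here is a minimal count-based sketch in which a state's surprisal under the empirical visitation distribution serves as the intrinsic bonus, so rarely visited states earn a larger reward. The `InformationBonus` class and the tabular-state assumption are illustrative.

```python
# Hypothetical sketch of an information-content intrinsic reward.
# The paper's exact formulation is not shown on this page.
from collections import Counter
import math

class InformationBonus:
    """Rewards -log p(s) under the empirical state-visitation
    distribution, so novel states earn a larger bonus."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def reward(self, state) -> float:
        self.counts[state] += 1
        self.total += 1
        # Surprisal of the visited state under the running distribution.
        p = self.counts[state] / self.total
        return -math.log(p)

bonus = InformationBonus()
trajectory = ["s0", "s1", "s1", "s2"]
print([round(bonus.reward(s), 3) for s in trajectory])
```

A revisited state ("s1" above) earns a smaller bonus than a fresh one, which is the state-coverage pressure the abstract describes.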

Scope Loss for Imbalanced Classification and RL Exploration

no code implementations · 8 Aug 2023 · Hasham Burhani, Xiao Qi Shi, Jonathan Jaegerman, Daniel Balicki

From our analysis of the aforementioned problems, we derive a novel loss function for reinforcement learning and supervised classification.

Classification · imbalanced classification · +1
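The Scope Loss itself is not reproduced in this snippet (its form isn't given on this page); as a point of reference for the imbalanced-classification setting the paper targets, below is the standard inverse-frequency-weighted cross-entropy baseline. The function name and the toy data are illustrative.

```python
# Standard baseline for class imbalance: weight each sample's
# cross-entropy by the inverse frequency of its class.
import numpy as np

def weighted_cross_entropy(probs, labels, class_counts):
    """probs: (N, C) predicted probabilities; labels: (N,) class ids;
    class_counts: (C,) training-set frequency of each class."""
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(weights[labels] * per_sample))

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
counts = np.array([900, 100])  # heavy class imbalance
print(weighted_cross_entropy(probs, labels, counts))
```

Errors on the rare class (here class 1) dominate the loss, which is the failure mode such losses are designed to counteract.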

TradeR: Practical Deep Hierarchical Reinforcement Learning for Trade Execution

no code implementations · 16 Feb 2021 · Karush Suri, Xiao Qi Shi, Konstantinos Plataniotis, Yuri Lawryshyn

We present Trade Execution using Reinforcement Learning (TradeR) which aims to address two such practical challenges of catastrophe and surprise minimization by formulating trading as a real-world hierarchical RL problem.

Hierarchical Reinforcement Learning · reinforcement-learning · +1
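As a hedged sketch of what a two-level hierarchy for trade execution can look like (not TradeR's actual architecture): a high-level policy splits a parent order across time buckets, and a low-level policy places child orders within each bucket. The TWAP-style even split and the random stand-in policy are assumptions for illustration.

```python
# Hypothetical two-level trade-execution skeleton.
import random

def high_level_schedule(total_qty: int, n_buckets: int) -> list[int]:
    """Split a parent order evenly across time buckets (TWAP-style)."""
    base, rem = divmod(total_qty, n_buckets)
    return [base + (1 if i < rem else 0) for i in range(n_buckets)]

def low_level_execute(bucket_qty: int, n_steps: int = 5) -> list[int]:
    """Randomly stagger child orders within a bucket (stand-in for a
    learned low-level policy)."""
    fills, remaining = [], bucket_qty
    for step in range(n_steps):
        qty = remaining if step == n_steps - 1 else random.randint(0, remaining)
        fills.append(qty)
        remaining -= qty
    return fills

schedule = high_level_schedule(total_qty=1000, n_buckets=4)
print(schedule, [sum(low_level_execute(q)) for q in schedule])
```

In a learned system both levels would be RL policies; the point of the hierarchy is that scheduling and order placement operate on different timescales.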

Energy-based Surprise Minimization for Multi-Agent Value Factorization

1 code implementation · 16 Sep 2020 · Karush Suri, Xiao Qi Shi, Konstantinos Plataniotis, Yuri Lawryshyn

(2) EMIX highlights a practical use of energy functions in MARL with theoretical guarantees and experimental validation of the energy operator.

Multi-agent Reinforcement Learning · Q-Learning · +2
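EMIX's energy operator is not shown on this page; the generic sketch below illustrates the broader "surprise as energy" idea: a dynamics model assigns high energy to unexpected transitions, and that energy is subtracted from the reward so agents favor predictable states. The quadratic energy and the `beta` coefficient are illustrative assumptions, not the paper's formulation.

```python
# Generic surprise-minimization shaping, not EMIX itself.
import numpy as np

def transition_energy(pred_next: np.ndarray, actual_next: np.ndarray) -> float:
    """Squared prediction error as an energy: higher = more surprising."""
    return float(np.sum((pred_next - actual_next) ** 2))

def shaped_reward(env_reward: float, energy: float, beta: float = 0.1) -> float:
    """Penalize surprising transitions in proportion to their energy."""
    return env_reward - beta * energy

pred = np.array([0.0, 1.0])    # model's predicted next state
actual = np.array([0.5, 1.5])  # observed next state
e = transition_energy(pred, actual)
print(e, shaped_reward(env_reward=1.0, energy=e))
```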

Maximum Mutation Reinforcement Learning for Scalable Control

2 code implementations · 24 Jul 2020 · Karush Suri, Xiao Qi Shi, Konstantinos N. Plataniotis, Yuri A. Lawryshyn

Advances in Reinforcement Learning (RL) have demonstrated data efficiency and optimal control over large state spaces at the cost of scalable performance.

reinforcement-learning · Reinforcement Learning (RL)
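As a generic illustration of mutation-based policy search (not the paper's algorithm), the sketch below runs a random-mutation hill-climber over parameters: perturb, evaluate, keep the better candidate. The toy `fitness` function stands in for an RL return.

```python
# Minimal mutation-based search over policy parameters.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params: np.ndarray) -> float:
    """Toy objective standing in for an episode return."""
    return -float(np.sum((params - 3.0) ** 2))

params = np.zeros(4)
for _ in range(200):
    mutant = params + rng.normal(scale=0.5, size=params.shape)  # mutate
    if fitness(mutant) > fitness(params):                        # select
        params = mutant
print(params.round(2), fitness(params))
```

Mutation-style updates like this scale well across workers because candidates are evaluated independently, which is the scalability angle the abstract alludes to.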
