Search Results for author: Hongyang Yang

Found 12 papers, 11 papers with code

Practical Deep Reinforcement Learning Approach for Stock Trading

9 code implementations · 19 Nov 2018 · Xiao-Yang Liu, Zhuoran Xiong, Shan Zhong, Hongyang Yang, Anwar Walid

We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return.

reinforcement-learning · Reinforcement Learning (RL)
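
The paper frames trading as a Markov decision process: the state captures prices and current holdings, the action is to buy, hold, or sell, and the reward is the change in portfolio value. Below is a minimal sketch of such an environment in the Gymnasium style; the class name, feature layout, and reward shaping are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal single-stock trading environment in the Gymnasium style.
# State, action, and reward definitions are illustrative assumptions,
# not the exact MDP used in the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class SingleStockTradingEnv(gym.Env):
    def __init__(self, prices, initial_cash=10_000.0):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)
        self.initial_cash = initial_cash
        # Action: 0 = sell one share, 1 = hold, 2 = buy one share
        self.action_space = spaces.Discrete(3)
        # Observation: [current price, shares held, cash]
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.shares = 0
        self.cash = self.initial_cash
        return self._obs(), {}

    def _obs(self):
        return np.array([self.prices[self.t], self.shares, self.cash], dtype=np.float32)

    def _portfolio_value(self):
        return self.cash + self.shares * self.prices[self.t]

    def step(self, action):
        value_before = self._portfolio_value()
        price = self.prices[self.t]
        if action == 2 and self.cash >= price:    # buy one share
            self.shares += 1
            self.cash -= price
        elif action == 0 and self.shares > 0:     # sell one share
            self.shares -= 1
            self.cash += price
        self.t += 1
        terminated = self.t >= len(self.prices) - 1
        # Reward: change in total portfolio value after the price moves
        reward = self._portfolio_value() - value_before
        return self._obs(), reward, terminated, False, {}
```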

DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News

4 code implementations · 20 Dec 2019 · Xinyi Li, Yinchuan Li, Hongyang Yang, Liuqing Yang, Xiao-Yang Liu

In this paper, we propose a novel deep neural network, DP-LSTM, for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through the differential privacy mechanism.

Stock Prediction · Stock Price Prediction
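
The abstract describes an LSTM predictor that treats sentiment from several news sources as extra input features and perturbs them with a differential-privacy-style noise mechanism before fusing them with price data. A rough PyTorch sketch of that idea follows; the layer sizes, Laplace noise scale, and feature layout are assumptions for illustration rather than the paper's architecture.

```python
# Sketch of a DP-inspired LSTM: price features plus per-source news
# sentiment features, with Laplace noise added to the news features
# before they enter the recurrent model. Hyperparameters and the exact
# fusion scheme are illustrative, not the paper's.
import torch
import torch.nn as nn

class DPLSTMPredictor(nn.Module):
    def __init__(self, n_news_sources=3, hidden_size=32, noise_scale=0.1):
        super().__init__()
        self.noise_scale = noise_scale
        # Input per time step: 1 price feature + one sentiment score per news source
        self.lstm = nn.LSTM(input_size=1 + n_news_sources,
                            hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # next-step price prediction

    def forward(self, prices, news_sentiment):
        # prices: (batch, seq_len, 1); news_sentiment: (batch, seq_len, n_sources)
        if self.training and self.noise_scale > 0:
            # Laplace perturbation of the news features (DP-inspired noise)
            laplace = torch.distributions.Laplace(0.0, self.noise_scale)
            news_sentiment = news_sentiment + laplace.sample(news_sentiment.shape)
        x = torch.cat([prices, news_sentiment], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last hidden state

# Usage with random tensors, just to show the shapes:
model = DPLSTMPredictor()
pred = model(torch.randn(8, 10, 1), torch.randn(8, 10, 3))
print(pred.shape)  # torch.Size([8, 1])
```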

FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance

6 code implementations · 19 Nov 2020 · Xiao-Yang Liu, Hongyang Yang, Qian Chen, Runjia Zhang, Liuqing Yang, Bowen Xiao, Christina Dan Wang

In this paper, we introduce FinRL, a DRL library that helps beginners get started in quantitative finance and develop their own stock trading strategies.

reinforcement-learning · Reinforcement Learning (RL) · +1
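
FinRL wraps market data, trading environments, and DRL agents behind one library; its actual API is not reproduced here. As a hedged stand-in, the sketch below shows the kind of train-then-backtest loop such a library automates, using stable-baselines3's PPO on the toy SingleStockTradingEnv sketched under the 2018 trading paper above. The synthetic price series and hyperparameters are placeholders.

```python
# Illustrative train/backtest loop in the spirit of a DRL trading library.
# Uses stable-baselines3 PPO on the toy SingleStockTradingEnv sketched above;
# this is not FinRL's actual API.
import numpy as np
from stable_baselines3 import PPO

# Synthetic geometric-random-walk prices standing in for real market data
rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0, 0.01, size=500)))

env = SingleStockTradingEnv(prices)
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# "Backtest" by rolling the trained policy through the same environment
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, truncated, info = env.step(action)
print("final portfolio value:", env._portfolio_value())
```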

FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance

no code implementations · 7 Nov 2021 · Xiao-Yang Liu, Hongyang Yang, Jiechao Gao, Christina Dan Wang

In this paper, we present the first open-source framework, FinRL, as a full pipeline to help quantitative traders overcome the steep learning curve.

Friction · reinforcement-learning · +1

FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning

4 code implementations · 6 Nov 2022 · Xiao-Yang Liu, Ziyi Xia, Jingyang Rui, Jiechao Gao, Hongyang Yang, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo

However, establishing high-quality market environments and benchmarks for financial reinforcement learning is challenging due to three major factors, namely, low signal-to-noise ratio of financial data, survivorship bias of historical data, and model overfitting in the backtesting stage.

reinforcement-learning · Reinforcement Learning (RL)
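
One of the three factors called out in the abstract, overfitting in the backtesting stage, is commonly mitigated by evaluating on data strictly later than the training window. Below is a small hedged illustration of a rolling walk-forward split over a daily price table; the column names and window lengths are assumptions, not FinRL-Meta's implementation.

```python
# Walk-forward (rolling) train/test splits over a time-indexed dataset,
# one common guard against backtest overfitting. Window sizes and the
# "date" column name are illustrative assumptions.
import pandas as pd

def walk_forward_splits(df, train_days=252, test_days=63):
    """Yield (train, test) frames where every test window lies strictly
    after its training window in calendar time."""
    df = df.sort_values("date").reset_index(drop=True)
    start = 0
    while start + train_days + test_days <= len(df):
        train = df.iloc[start:start + train_days]
        test = df.iloc[start + train_days:start + train_days + test_days]
        yield train, test
        start += test_days  # roll the whole window forward by one test period

# Example with a synthetic daily index:
dates = pd.date_range("2018-01-01", periods=800, freq="B")
df = pd.DataFrame({"date": dates, "close": range(len(dates))})
for train, test in walk_forward_splits(df):
    print(train["date"].iloc[-1].date(), "->", test["date"].iloc[-1].date())
```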

Dynamic Datasets and Market Environments for Financial Reinforcement Learning

4 code implementations · 25 Apr 2023 · Xiao-Yang Liu, Ziyi Xia, Hongyang Yang, Jiechao Gao, Daochen Zha, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo

The financial market is a particularly challenging playground for deep reinforcement learning due to its unique feature of dynamic datasets.

reinforcement-learning
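
The "dynamic datasets" point is that the training data must be re-curated as new market data arrives rather than fixed once. A minimal sketch of a rolling data window that always exposes only the most recent bars to the agent is shown below; the schema and window length are assumptions, not the paper's pipeline.

```python
# Rolling data window: keep only the most recent `window_days` bars so the
# training set tracks the live market. Schema and window size are
# illustrative assumptions.
from collections import deque
from dataclasses import dataclass

@dataclass
class Bar:
    date: str
    ticker: str
    close: float

class RollingMarketData:
    def __init__(self, window_days=252):
        self.window = deque(maxlen=window_days)

    def ingest(self, bar: Bar):
        # Newest bar in; the oldest bar is silently dropped once the window is full
        self.window.append(bar)

    def training_frame(self):
        # Snapshot of the current window, oldest to newest
        return list(self.window)

feed = RollingMarketData(window_days=3)
for i, price in enumerate([10.0, 10.5, 10.2, 10.8]):
    feed.ingest(Bar(date=f"2023-01-0{i + 1}", ticker="AAPL", close=price))
print([b.close for b in feed.training_frame()])  # [10.5, 10.2, 10.8]
```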

FinGPT: Open-Source Financial Large Language Models

2 code implementations · 9 Jun 2023 · Hongyang Yang, Xiao-Yang Liu, Christina Dan Wang

While proprietary models like BloombergGPT have taken advantage of their unique data accumulation, such privileged access calls for an open-source alternative to democratize Internet-scale financial data.

Algorithmic Trading · Language Modelling · +1

Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models

1 code implementation · 22 Jun 2023 · Boyu Zhang, Hongyang Yang, Xiao-Yang Liu

Sentiment analysis is a vital tool for uncovering insights from financial articles, news, and social media, shaping our understanding of market movements.

Sentiment Analysis
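
Instruction tuning here means casting sentiment classification as natural-language instruction/response pairs and fine-tuning a general-purpose LLM on them. A hedged sketch of the data-formatting side is given below; the prompt template and label set are assumptions, not the paper's exact template.

```python
# Formatting financial sentiment examples as instruction/response pairs,
# the data side of instruction tuning. The prompt wording and label set
# are illustrative assumptions.
import json

def to_instruction_example(headline: str, label: str) -> dict:
    return {
        "instruction": ("What is the sentiment of this financial news? "
                        "Answer with negative, neutral, or positive."),
        "input": headline,
        "output": label,
    }

examples = [
    to_instruction_example("Company X beats Q3 earnings estimates", "positive"),
    to_instruction_example("Regulator opens probe into Company Y", "negative"),
]

# The resulting JSON-lines file can then be fed to any standard supervised
# fine-tuning pipeline (e.g. a Hugging Face Trainer over prompt + output text).
with open("finsent_instructions.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```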

FinGPT: Democratizing Internet-scale Data for Financial Large Language Models

1 code implementation · 19 Jul 2023 · Xiao-Yang Liu, Guoxuan Wang, Hongyang Yang, Daochen Zha

In light of this, we aim to democratize Internet-scale financial data for LLMs, which is an open challenge due to diverse data sources, low signal-to-noise ratio, and high time-validity.

Algorithmic Trading · Sentiment Analysis
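
The challenge named in the abstract, diverse sources with low signal-to-noise ratio and short time-validity, is largely a data-engineering one: pull heterogeneous feeds into a single time-stamped schema so they can be filtered and aged out. A small hedged sketch of such a normalization step follows; the source names and field mappings are invented for illustration.

```python
# Normalizing records from heterogeneous financial data sources into one
# time-stamped schema. Source and field names are invented for illustration;
# real connectors would replace the raw dicts below.
from datetime import datetime, timedelta

def normalize(source: str, record: dict) -> dict:
    """Map a source-specific record onto a shared (timestamp, ticker, text) schema."""
    if source == "news_api":
        return {"timestamp": datetime.fromisoformat(record["publishedAt"]),
                "ticker": record["symbol"], "text": record["headline"]}
    if source == "social":
        return {"timestamp": datetime.fromtimestamp(record["created_utc"]),
                "ticker": record["cashtag"], "text": record["body"]}
    raise ValueError(f"unknown source: {source}")

def still_valid(item: dict, now: datetime, max_age_hours=24) -> bool:
    # Crude time-validity filter: drop items older than a fixed horizon
    return now - item["timestamp"] <= timedelta(hours=max_age_hours)

now = datetime(2023, 7, 19, 12, 0)
raw = [("news_api", {"publishedAt": "2023-07-19T09:30:00", "symbol": "AAPL",
                     "headline": "Apple unveils new product"}),
       ("social", {"created_utc": 1689600000, "cashtag": "TSLA",
                   "body": "watching $TSLA into earnings"})]
items = [normalize(src, rec) for src, rec in raw]
fresh = [it for it in items if still_valid(it, now)]
print(len(fresh))
```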

FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets

1 code implementation · 7 Oct 2023 · Neng Wang, Hongyang Yang, Christina Dan Wang

This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models, specifically adapted for financial contexts.

Benchmarking · named-entity-recognition · +3
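
The benchmark's tasks (the tags list named-entity recognition among them) are framed the same way as above: each task instance becomes an instruction, an input passage, and a target string. A hedged sketch for the NER case is shown below; the prompt wording and entity tag set are assumptions.

```python
# Casting a financial NER example into an instruction-tuning format.
# Prompt wording and the entity tag set are illustrative assumptions.
def ner_to_instruction(sentence: str, entities: list[tuple[str, str]]) -> dict:
    target = "; ".join(f"{span} -> {tag}" for span, tag in entities)
    return {
        "instruction": ("List the named entities in the sentence and their types "
                        "(ORG, PER, LOC)."),
        "input": sentence,
        "output": target,
    }

example = ner_to_instruction(
    "Goldman Sachs hired Jane Doe in London.",
    [("Goldman Sachs", "ORG"), ("Jane Doe", "PER"), ("London", "LOC")],
)
print(example["output"])  # Goldman Sachs -> ORG; Jane Doe -> PER; London -> LOC
```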
