no code implementations • 26 Mar 2025 • Yupeng Cao, Haohang Li, Yangyang Yu, Shashidhar Reddy Javaji, Yueru He, Jimin Huang, Zining Zhu, Qianqian Xie, Xiao-Yang Liu, Koduvayur Subbalakshmi, Meikang Qiu, Sophia Ananiadou, Jian-Yun Nie
We first define three tasks based on the unique characteristics of the financial domain: 1) ASR for short financial audio, 2) ASR for long financial audio, and 3) summarization of long financial audio.
Automatic Speech Recognition (ASR)
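The entry above does not state how its ASR tasks are scored, but such benchmarks are conventionally evaluated with word error rate (WER). The following is a minimal, self-contained WER computation, offered only as an illustration of that standard metric, not as the benchmark's own evaluation script.

```python
# Minimal word error rate (WER) via word-level Levenshtein distance.
# Illustrative only; the benchmark's actual scoring may differ.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("net income rose ten percent", "net income rose two percent"))  # 0.2
```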
no code implementations • 17 Feb 2025 • Guojun Xiong, Zhiyang Deng, Keyi Wang, Yupeng Cao, Haohang Li, Yangyang Yu, Xueqing Peng, Mingquan Lin, Kaleb E Smith, Xiao-Yang Liu, Jimin Huang, Sophia Ananiadou, Qianqian Xie
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks.
no code implementations • 27 Jan 2025 • Zhiyuan Wang, Chunlin Feng, Christopher Poon, Lijian Huang, Xingjian Zhao, Yao Ma, Tianfan Fu, Xiao-Yang Liu
Quantum computing promises advantages over classical computing.
no code implementations • 16 Dec 2024 • Dannong Wang, Daniel Kim, Bo Jin, Xingjian Zhao, Tianfan Fu, Steve Yang, Xiao-Yang Liu
Finetuned large language models (LLMs) have shown remarkable performance in financial tasks, such as sentiment analysis and information retrieval.
no code implementations • 13 Sep 2024 • Dannong Wang, Jintai Chen, Zhiding Liang, Tianfan Fu, Xiao-Yang Liu
To address this issue, in this paper we introduce a novel approach that uses reinforcement learning with a quantum-inspired simulated annealing policy network to intelligently navigate the vast discrete space of chemical structures.
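The snippet above names a quantum-inspired simulated annealing policy network without detailing it; the sketch below shows only the classical simulated-annealing acceptance rule that such a search builds on. `propose_neighbor` and `score` are hypothetical placeholders, not components from the paper.

```python
import math
import random

def simulated_annealing_search(init_structure, propose_neighbor, score,
                               t_start=1.0, t_end=0.01, steps=1000):
    """Generic simulated-annealing search over discrete structures.

    propose_neighbor(x) -> a nearby candidate structure (hypothetical helper)
    score(x)            -> fitness of a structure, higher is better (hypothetical helper)
    """
    current, current_score = init_structure, score(init_structure)
    best, best_score = current, current_score
    for k in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / max(steps - 1, 1))
        candidate = propose_neighbor(current)
        cand_score = score(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_score >= current_score or random.random() < math.exp((cand_score - current_score) / t):
            current, current_score = candidate, cand_score
        if current_score > best_score:
            best, best_score = current, current_score
    return best, best_score
```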
no code implementations • 20 Aug 2024 • Jimin Huang, Mengxi Xiao, Dong Li, Zihao Jiang, Yuzhe Yang, Yifei Zhang, Lingfei Qian, Yan Wang, Xueqing Peng, Yang Ren, Ruoyu Xiang, Zhengyu Chen, Xiao Zhang, Yueru He, Weiguang Han, Shunian Chen, Lihang Shen, Daniel Kim, Yangyang Yu, Yupeng Cao, Zhiyang Deng, Haohang Li, Duanyu Feng, Yongfu Dai, VijayaSai Somasundaram, Peng Lu, Guojun Xiong, Zhiwei Liu, Zheheng Luo, Zhiyuan Yao, Ruey-Ling Weng, Meikang Qiu, Kaleb E Smith, Honghai Yu, Yanzhao Lai, Min Peng, Jian-Yun Nie, Jordan W. Suchow, Xiao-Yang Liu, Benyou Wang, Alejandro Lopez-Lira, Qianqian Xie, Sophia Ananiadou, Junichi Tsujii
Financial LLMs hold promise for advancing financial tasks and domain-specific applications.
no code implementations • 21 Feb 2024 • Xiao-Yang Liu, Jie Zhang, Guoxuan Wang, Weiqing Tong, Anwar Walid
However, the resulting model still consumes a large amount of GPU memory.
2 code implementations • 20 Feb 2024 • Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang
Our evaluation of 15 representative LLMs, including GPT-4, ChatGPT, and the latest Gemini, reveals several key findings: While LLMs excel in IE and textual analysis, they struggle with advanced reasoning and complex tasks like text generation and forecasting.
1 code implementation • 12 Feb 2024 • Xiao Zhang, Ruoyu Xiang, Chenhan Yuan, Duanyu Feng, Weiguang Han, Alejandro Lopez-Lira, Xiao-Yang Liu, Sophia Ananiadou, Min Peng, Jimin Huang, Qianqian Xie
We evaluate our model and existing LLMs using FLARE-ES, the first comprehensive bilingual evaluation benchmark with 21 datasets covering 9 tasks.
1 code implementation • 29 Dec 2023 • Xiao-Yang Liu, Rongyi Zhu, Daochen Zha, Jiechao Gao, Shan Zhong, Matt White, Meikang Qiu
The surge in interest and application of large language models (LLMs) has sparked a drive to fine-tune these models to suit specific applications, such as finance and medical science.
no code implementations • 27 Nov 2023 • Haoqiang Kang, Xiao-Yang Liu
In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks.
2 code implementations • 6 Oct 2023 • Boyu Zhang, Hongyang Yang, Tianyu Zhou, Ali Babar, Xiao-Yang Liu
Financial sentiment analysis is critical for valuation and investment decision-making.
1 code implementation • 19 Jul 2023 • Xiao-Yang Liu, Guoxuan Wang, Hongyang Yang, Daochen Zha
In light of this, we aim to democratize Internet-scale financial data for LLMs, which is an open challenge due to diverse data sources, low signal-to-noise ratio, and high time-validity.
1 code implementation • 22 Jun 2023 • Boyu Zhang, Hongyang Yang, Xiao-Yang Liu
Sentiment analysis is a vital tool for uncovering insights from financial articles, news, and social media, shaping our understanding of market movements.
2 code implementations • 9 Jun 2023 • Hongyang Yang, Xiao-Yang Liu, Christina Dan Wang
While proprietary models like BloombergGPT have taken advantage of their unique data accumulation, such privileged access calls for an open-source alternative to democratize Internet-scale financial data.
4 code implementations • 25 Apr 2023 • Xiao-Yang Liu, Ziyi Xia, Hongyang Yang, Jiechao Gao, Daochen Zha, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
The financial market is a particularly challenging playground for deep reinforcement learning due to its unique feature of dynamic datasets.
no code implementations • 4 Feb 2023 • Xiao-Yang Liu, Ming Zhu, Sem Borst, Anwar Walid
In this paper, we investigate deep reinforcement learning for traffic light control. Both theoretical analysis and numerical experiments show that the intelligent behavior "greenwave" (i.e., a vehicle sees a progressive cascade of green lights and never has to brake at any intersection) emerges naturally in a grid road network, and we prove it to be the optimal policy in an avenue with multiple cross streets.
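As a back-of-the-envelope illustration of the greenwave pattern (assumptions mine, not the paper's): offsetting each signal's green start by the travel time between intersections lets a vehicle at cruising speed meet every light on green.

```python
# Hypothetical greenwave timing: offset each intersection's green start by travel time.
spacing_m = 300.0      # distance between consecutive intersections (assumed)
speed_mps = 15.0       # target cruising speed (assumed)
num_intersections = 5

offset_s = spacing_m / speed_mps  # 20 s between successive green starts
green_starts = [i * offset_s for i in range(num_intersections)]
print(green_starts)  # [0.0, 20.0, 40.0, 60.0, 80.0]
```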
4 code implementations • 6 Nov 2022 • Xiao-Yang Liu, Ziyi Xia, Jingyang Rui, Jiechao Gao, Hongyang Yang, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
However, establishing high-quality market environments and benchmarks for financial reinforcement learning is challenging due to three major factors, namely, low signal-to-noise ratio of financial data, survivorship bias of historical data, and model overfitting in the backtesting stage.
1 code implementation • 12 Sep 2022 • Berend Jelmer Dirk Gort, Xiao-Yang Liu, Xinghang Sun, Jiechao Gao, Shuaiyu Chen, Christina Dan Wang
Designing profitable and reliable trading strategies is challenging in the highly volatile cryptocurrency market.
1 code implementation • 13 Dec 2021 • Xiao-Yang Liu, Jingyang Rui, Jiechao Gao, Liuqing Yang, Hongyang Yang, Zhaoran Wang, Christina Dan Wang, Jian Guo
In this paper, we present a FinRL-Meta framework that builds a universe of market environments for data-driven financial reinforcement learning.
1 code implementation • 11 Dec 2021 • Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan
In this paper, we present a scalable and elastic library ElegantRL-podracer for cloud-native deep reinforcement learning, which efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels.
no code implementations • 7 Nov 2021 • Xiao-Yang Liu, Hongyang Yang, Jiechao Gao, Christina Dan Wang
In this paper, we present the first open-source framework, FinRL, as a full pipeline to help quantitative traders overcome the steep learning curve.
no code implementations • 7 Nov 2021 • Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo
Unfortunately, the steep learning curve and the difficulty in quick modeling and agile development are impeding finance researchers from using deep reinforcement learning in quantitative trading.
no code implementations • 7 Nov 2021 • Mao Guan, Xiao-Yang Liu
In particular, we quantify the prediction power by calculating the linear correlations between the feature weights of a DRL agent and the reference feature weights, and similarly for machine learning methods.
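A minimal sketch of the correlation described above, using NumPy's Pearson correlation; the feature-weight vectors here are random stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
reference_weights = rng.normal(size=10)  # stand-in for reference feature weights
drl_feature_weights = reference_weights + 0.3 * rng.normal(size=10)  # stand-in for a DRL agent's feature weights

# Pearson correlation between the two weight vectors quantifies their agreement.
corr = np.corrcoef(reference_weights, drl_feature_weights)[0, 1]
print(f"linear correlation: {corr:.3f}")
```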
no code implementations • CVPR 2021 • Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan
Although various prior works have been proposed to reduce the RNN model sizes, executing RNN models in resource-restricted environments is still a very challenging problem.
no code implementations • 7 Mar 2021 • Xiao-Yang Liu, Ming Zhu
Graph data completion is a fundamentally important issue as data generally has a graph structure, e.g., social networks, recommendation systems, and the Internet of Things.
1 code implementation • 8 Jan 2021 • Fanjie Kong, Xiao-Yang Liu, Ricardo Henao
In the end, our experimental results indicate that tensor network models are effective for the tiny object classification problem and can potentially outperform the state of the art.
no code implementations • 10 Dec 2020 • Daizong Liu, Shuangjie Xu, Xiao-Yang Liu, Zichuan Xu, Wei Wei, Pan Zhou
To capture temporal information from previous frames, we use a memory network to refine the mask of the current frame by retrieving historic masks in a temporal graph.
6 code implementations • 19 Nov 2020 • Xiao-Yang Liu, Hongyang Yang, Qian Chen, Runjia Zhang, Liuqing Yang, Bowen Xiao, Christina Dan Wang
In this paper, we introduce a DRL library, FinRL, that helps beginners gain exposure to quantitative finance and develop their own stock trading strategies.
1 code implementation • 4 Aug 2020 • Daizong Liu, Xiaoye Qu, Xiao-Yang Liu, Jianfeng Dong, Pan Zhou, Zichuan Xu
To this end, we propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as a process of iterative message passing over a joint graph.
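The CSMGAN architecture is not spelled out in this snippet; the code below shows only a generic, single attention-weighted message-passing step over a toy graph, to illustrate the kind of iterative update the description refers to.

```python
import numpy as np

def attention_message_passing(node_feats: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    """One generic attention-weighted message-passing step (illustrative, not CSMGAN).

    node_feats: (num_nodes, dim) node features
    adjacency:  (num_nodes, num_nodes) 0/1 adjacency with self-loops
    """
    scores = node_feats @ node_feats.T                      # dot-product affinities
    scores = np.where(adjacency > 0, scores, -1e9)          # mask non-neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)  # softmax over neighbors
    return weights @ node_feats                             # aggregate neighbor messages

feats = np.random.default_rng(1).normal(size=(4, 8))
adj = np.ones((4, 4))                                       # fully connected toy graph
print(attention_message_passing(feats, adj).shape)          # (4, 8)
```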
no code implementations • 9 May 2020 • Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan
Recurrent Neural Networks (RNNs) have been widely used in sequence analysis and modeling.
4 code implementations • 20 Dec 2019 • Xinyi Li, Yinchuan Li, Hongyang Yang, Liuqing Yang, Xiao-Yang Liu
In this paper, we propose a novel deep neural network, DP-LSTM, for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through a differential privacy mechanism.
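The exact privacy mechanism is not given in this snippet; the sketch below applies the standard Laplace mechanism to hypothetical per-source sentiment scores, purely as an assumption about what integrating news sources under differential privacy could look like.

```python
import numpy as np

def laplace_perturb(values: np.ndarray, sensitivity: float, epsilon: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Standard Laplace mechanism: add Laplace(sensitivity / epsilon) noise."""
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=values.shape)

rng = np.random.default_rng(42)
# Hypothetical per-source daily sentiment scores in [-1, 1] (stand-in data).
sentiment_by_source = np.array([0.2, -0.5, 0.7])
noisy = laplace_perturb(sentiment_by_source, sensitivity=2.0, epsilon=1.0, rng=rng)
combined = noisy.mean()  # one simple way to integrate sources; the paper's scheme may differ
print(noisy, combined)
```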
no code implementations • 14 Dec 2019 • Wenhang Bao, Xiao-Yang Liu
We demonstrate three types of directed communications to show the effect of directions of social influence on the entire network utility and individual utility.
no code implementations • 16 Oct 2019 • Wenhao Zhang, Wentian Bao, Xiao-Yang Liu, Keping Yang, Quan Lin, Hong Wen, Ramin Ramezani
In addition, our methods are based on the multi-task learning framework and mitigate the data sparsity issue.
no code implementations • 3 Aug 2019 • Xinyi Li, Yinchuan Li, Xiao-Yang Liu, Christina Dan Wang
In this paper, we propose a novel deep neural network Mid-LSTM for midterm stock prediction, which incorporates the market trend as hidden states.
5 code implementations • 24 Jun 2019 • Wenhang Bao, Xiao-Yang Liu
Liquidation is the process of selling a large number of shares of one stock sequentially within a given time frame, taking into consideration the costs arising from market impact and a trader's risk aversion.
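A toy numerical illustration of that definition (the uniform schedule and linear impact model are assumptions of mine, and risk aversion is ignored): selling 100,000 shares evenly over 10 periods with a linear temporary-impact cost gives the execution shortfall below.

```python
# Toy liquidation schedule with a linear temporary-impact cost (illustrative only).
total_shares = 100_000
periods = 10
eta = 1e-6          # price concession in $ per share, per share traded (assumed)
price = 50.0        # unaffected price in $ (assumed)

q = total_shares / periods            # shares sold in each period (uniform schedule)
impact_cost = periods * eta * q ** 2  # sum of per-period costs eta * q^2
proceeds = total_shares * price - impact_cost
print(f"per-period clip: {q:.0f} shares, impact cost: ${impact_cost:,.2f}, proceeds: ${proceeds:,.2f}")
```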
no code implementations • 21 Jun 2019 • Xinyi Li, Yinchuan Li, Yuancheng Zhan, Xiao-Yang Liu
Portfolio allocation is crucial for investment companies.
Statistical Finance
no code implementations • 12 Jun 2019 • Ming Zhu, Xiao-Yang Liu, Xiaodong Wang
Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities.
1 code implementation • 28 Jan 2019 • Zihan Ding, Xiao-Yang Liu, Miao Yin, Linghe Kong
Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions.
no code implementations • 3 Dec 2018 • Xiao-Yang Liu, Zihan Ding, Sem Borst, Anwar Walid
Intelligent Transportation Systems (ITSs) are envisioned to play a critical role in improving traffic flow and reducing congestion, which is a pervasive issue impacting urban areas around the globe.
9 code implementations • 19 Nov 2018 • Xiao-Yang Liu, Zhuoran Xiong, Shan Zhong, Hongyang Yang, Anwar Walid
We explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return.
no code implementations • 18 Nov 2018 • Weijun Lu, Xiao-Yang Liu, Qingwei Wu, Yue Sun, Anwar Walid
We propose a novel multilinear dynamical system (MLDS) in a transform domain, named $\mathcal{L}$-MLDS, to model tensor time series.
no code implementations • 13 Feb 2018 • Xiao-Yang Liu
In this paper, we propose a novel information scaling law scheme that can interpret the network's inner organization by information theory.
no code implementations • 27 Dec 2017 • Ming Zhu, Xiao-Yang Liu, Xiaodong Wang
As efficient traffic-management platforms, public vehicle (PV) systems are envisioned as a promising approach to alleviating traffic congestion and pollution in future smart cities.
no code implementations • 13 Dec 2017 • Tao Deng, Xiao-Yang Liu, Feng Qian, Anwar Walid
The recently proposed transform-based tensor model is more appropriate for sensory data processing, as it helps exploit the geometric structures of the three-dimensional target and improve the recovery precision.
no code implementations • 7 Nov 2017 • Chenxiao Zhu, Lingqing Xu, Xiao-Yang Liu, Feng Qian
The Global Positioning System (GPS) is unreliable in indoor environments due to the non-line-of-sight issue, so there is a pressing need for a real-time, high-accuracy localization approach for smartphones.
4 code implementations • 3 May 2017 • Xiao-Yang Liu, Xiaodong Wang
The multidimensional nature and huge volume of big data place urgent demands on the development of multilinear modeling tools and efficient algorithms.
Numerical Analysis
Information Theory
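For readers unfamiliar with the transform-based (t-product) tensor model that recurs in these papers, the sketch below computes the standard t-product of two third-order tensors via FFTs along the third mode; it is background on the general technique, not code from any of the papers listed here.

```python
import numpy as np

def t_product(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Computed in the Fourier domain: FFT along the third mode, multiply the
    frontal slices pairwise, then inverse FFT back.
    """
    n3 = A.shape[2]
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]
    return np.real(np.fft.ifft(C_hat, axis=2))

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4, 5))
B = rng.normal(size=(4, 2, 5))
print(t_product(A, B).shape)  # (3, 2, 5)
```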
no code implementations • 8 Apr 2017 • Ming-Jun Su, Jingbo Chang, Feng Qian, Guangmin Hu, Xiao-Yang Liu
Seismic data denoising is vital to geophysical applications and the transform-based function method is one of the most widely used techniques.
no code implementations • 28 Mar 2017 • Fei Jiang, Xiao-Yang Liu, Hongtao Lu, Ruimin Shen
Sparse coding (SC) is an automatic feature extraction and selection technique that is widely used in unsupervised learning.
no code implementations • 27 Mar 2017 • Fei Jiang, Xiao-Yang Liu, Hongtao Lu, Ruimin Shen
Sparse coding (SC) is an unsupervised learning scheme that has received an increasing amount of interests in recent years.
no code implementations • 5 Oct 2016 • Xiao-Yang Liu, Shuchin Aeron, Vaneet Aggarwal, Xiaodong Wang
The low-tubal-rank tensor model has been recently proposed for real-world multidimensional data.
no code implementations • 10 Aug 2015 • Xiao-Yang Liu, Shuchin Aeron, Vaneet Aggarwal, Xiaodong Wang, Min-You Wu
In contrast to several existing works that rely on random sampling, this paper shows that adaptivity in sampling can lead to significant improvements in localization accuracy.