1 code implementation • 16 May 2023 • Tong Wu, Zhihao Fan, Xiao Liu, Yeyun Gong, Yelong Shen, Jian Jiao, Hai-Tao Zheng, Juntao Li, Zhongyu Wei, Jian Guo, Nan Duan, Weizhu Chen
Diffusion models have gained significant attention in the realm of image generation due to their exceptional performance.
1 code implementation • 25 Apr 2023 • Xiao-Yang Liu, Ziyi Xia, Hongyang Yang, Jiechao Gao, Daochen Zha, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
The financial market is a particularly challenging playground for deep reinforcement learning due to the dynamic nature of its datasets.
1 code implementation • 23 Apr 2023 • Jiashuo Sun, Yi Luo, Yeyun Gong, Chen Lin, Yelong Shen, Jian Guo, Nan Duan
By utilizing iterative bootstrapping, our approach enables LLMs to autonomously rectify errors, resulting in more precise and comprehensive reasoning chains.
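A minimal sketch of what such an iterative self-correction loop could look like; the `llm` and `verifier` callables and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of iterative bootstrapping for self-correcting reasoning
# chains: the model critiques and rewrites its own chain until a check passes.
from typing import Callable

def iterative_bootstrap(llm: Callable[[str], str],
                        verifier: Callable[[str, str], bool],
                        question: str,
                        max_rounds: int = 3) -> str:
    """Repeatedly ask the model to critique and rewrite its own reasoning."""
    chain = llm(f"Question: {question}\nLet's think step by step.")
    for _ in range(max_rounds):
        if verifier(question, chain):          # stop once the chain passes a check
            return chain
        critique = llm(f"Find the errors in this reasoning:\n{chain}")
        chain = llm(
            f"Question: {question}\n"
            f"Previous reasoning: {chain}\n"
            f"Identified errors: {critique}\n"
            "Write a corrected, complete reasoning chain."
        )
    return chain
```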
1 code implementation • 14 Dec 2022 • Jiashuo Sun, Hang Zhang, Chen Lin, Yeyun Gong, Jian Guo, Nan Duan
For the retriever, we adopt a number-aware negative sampling strategy to enable the retriever to be more discriminative on key numerical facts.
Ranked #1 on Conversational Question Answering on ConvFinQA
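A rough illustration of number-aware negative sampling under simple assumptions: candidates that share the most numbers with the positive passage are kept as hard negatives, forcing the retriever to discriminate on exact numerical facts. The regex heuristic and top-k selection are hypothetical, not the paper's actual procedure.

```python
# Hypothetical sketch of number-aware negative sampling: candidates that share
# the most numbers with the positive passage become hard negatives, so the
# retriever must attend to exact numerical facts. Heuristics are illustrative.
import re

def numbers(text: str) -> set:
    """Extract the numeric tokens appearing in a passage."""
    return set(re.findall(r"-?\d+(?:\.\d+)?", text))

def sample_negatives(positive: str, candidates: list, k: int = 4) -> list:
    pos_nums = numbers(positive)
    pool = [c for c in candidates if c != positive]
    # Rank candidates by how many numerical facts they share with the positive.
    pool.sort(key=lambda c: len(numbers(c) & pos_nums), reverse=True)
    return pool[:k]          # top-k overlap -> number-aware hard negatives
```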
no code implementations • 13 Dec 2022 • Jian Guo, Saizhuo Wang, Lionel M. Ni, Heung-Yeung Shum
Quant has become one of the mainstream investment methodologies over the past decades and has experienced three generations: Quant 1.0, trading by mathematical modeling to discover mis-priced assets in markets; Quant 2.0, shifting the quant research pipeline from small "strategy workshops" to large "alpha factories"; and Quant 3.0, applying deep learning techniques to discover complex nonlinear pricing rules.
4 code implementations • 6 Nov 2022 • Xiao-Yang Liu, Ziyi Xia, Jingyang Rui, Jiechao Gao, Hongyang Yang, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
However, establishing high-quality market environments and benchmarks for financial reinforcement learning is challenging due to three major factors, namely, low signal-to-noise ratio of financial data, survivorship bias of historical data, and model overfitting in the backtesting stage.
1 code implementation • 18 Oct 2022 • Shuai Fan, Chen Lin, Haonan Li, Zhenghao Lin, Jinsong Su, Hang Zhang, Yeyun Gong, Jian Guo, Nan Duan
Most existing pre-trained language representation models (PLMs) are sub-optimal for sentiment analysis tasks, as they capture sentiment information at the word level while under-utilizing sentence-level information.
1 code implementation • 15 Sep 2022 • Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, Bolei Zhou
We validate our insight on a range of RL tasks and show its improvement over baselines: (1) In offline RL, the conservative exploitation leads to improved performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to tackle the exploration-exploitation dilemma for better sample efficiency; (3) In discrete control tasks, a negative reward shifting yields an improvement over the curiosity-based exploration method.
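The underlying idea, adding a constant to every reward so the learned value function starts out pessimistic (negative shift) or optimistic (positive shift) relative to the true returns, can be expressed as a small environment wrapper. A minimal sketch assuming the gymnasium `RewardWrapper` interface; the environment and shift value are arbitrary examples.

```python
# Minimal sketch of linear reward shifting: r' = r + c. A negative c yields a
# conservative (pessimistic) value estimate, a positive c an optimistic one.
import gymnasium as gym

class RewardShift(gym.RewardWrapper):
    def __init__(self, env: gym.Env, shift: float):
        super().__init__(env)
        self.shift = shift

    def reward(self, reward: float) -> float:
        return reward + self.shift   # constant shift leaves the optimal policy unchanged

# Example: a pessimistic shift for conservative exploitation (values are illustrative).
env = RewardShift(gym.make("Pendulum-v1"), shift=-1.0)
```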
no code implementations • 6 Sep 2022 • Jian Guo, Jiaxiang Tu, Hengyi Ren, Chong Han, Lijuan Sun
In this paper, we propose a multimodal biometric fusion recognition algorithm based on fingerprints and finger veins (Fingerprint Finger Veins-Channel Spatial Attention Fusion Module, FPV-CSAFM).
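The paper's exact FPV-CSAFM design is not reproduced here; the block below is a generic channel-plus-spatial attention fusion module in PyTorch, in the spirit of the module's name, with all layer sizes chosen purely for illustration.

```python
# Hypothetical channel + spatial attention fusion block for two feature maps
# (e.g. fingerprint and finger-vein features). Layer sizes are illustrative and
# not the paper's actual FPV-CSAFM design.
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: re-weight locations from pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)  # fuse back to C channels

    def forward(self, fp_feat: torch.Tensor, fv_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([fp_feat, fv_feat], dim=1)              # (B, 2C, H, W)
        x = x * self.channel_mlp(x)                           # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        x = x * self.spatial_conv(pooled)                     # spatial attention
        return self.project(x)
```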
no code implementations • 3 Mar 2022 • Feng Li, Hao Zhang, Yi-Fan Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Pengchuan Zhang, Lei Zhang
This survey is inspired by the remarkable progress in both computer vision and natural language processing, and recent trends shifting from single modality processing to multiple modality comprehension.
11 code implementations • CVPR 2022 • Feng Li, Hao Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang
Our method is universal and can be easily plugged into any DETR-like methods by adding dozens of lines of code to achieve a remarkable improvement.
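As a hedged sketch of the denoising-query idea (not DN-DETR's actual code): noised copies of ground-truth boxes are appended as extra decoder queries that must reconstruct the originals, while an attention mask keeps them from leaking labels to the ordinary matching queries. The noise model and scales below are assumptions.

```python
# Rough sketch of building denoising queries: noised copies of ground-truth
# boxes are fed to the decoder as extra queries that must reconstruct the
# originals. Noise scales and grouping are illustrative, not DN-DETR's settings.
import torch

def make_denoising_queries(gt_boxes: torch.Tensor, box_noise: float = 0.4,
                           groups: int = 5) -> torch.Tensor:
    """gt_boxes: (N, 4) normalized (cx, cy, w, h). Returns (groups * N, 4) noised boxes."""
    boxes = gt_boxes.repeat(groups, 1)
    # Jitter centers proportionally to box size, and rescale width/height.
    jitter = (torch.rand_like(boxes) * 2 - 1) * box_noise
    noised = boxes.clone()
    noised[:, :2] += jitter[:, :2] * boxes[:, 2:]
    noised[:, 2:] *= (1 + jitter[:, 2:])
    return noised.clamp(0.0, 1.0)
```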
1 code implementation • 13 Dec 2021 • Xiao-Yang Liu, Jingyang Rui, Jiechao Gao, Liuqing Yang, Hongyang Yang, Zhaoran Wang, Christina Dan Wang, Jian Guo
In this paper, we present a FinRL-Meta framework that builds a universe of market environments for data-driven financial reinforcement learning.
1 code implementation • 11 Dec 2021 • Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan
In this paper, we present a scalable and elastic library ElegantRL-podracer for cloud-native deep reinforcement learning, which efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels.
no code implementations • 7 Nov 2021 • Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo
Unfortunately, the steep learning curve and the difficulty of rapid modeling and agile development are impeding finance researchers from using deep reinforcement learning in quantitative trading.
no code implementations • 29 Sep 2021 • Hao Sun, Lei Han, Jian Guo, Bolei Zhou
We verify our insight on a range of tasks: (1) In offline RL, the conservative exploitation leads to improved learning performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to trade-off between exploration and exploitation thus improving learning efficiency; (3) In online RL with discrete action space, a negative reward shifting brings an improvement over the previous curiosity-based exploration method.
no code implementations • 22 Jul 2021 • Huyen N. Nguyen, Jake Gonzalez, Jian Guo, Ngan V. T. Nguyen, Tommy Dang
This paper presents VisMCA, an interactive visual analytics system that supports a deeper understanding of ML results, augments users' ability to correct misclassifications, and provides an analysis of underlying patterns, in response to the VAST Challenge 2020 Mini-Challenge 2.
1 code implementation • 9 Mar 2021 • Mario Coppola, Jian Guo, Eberhard Gill, Guido C. H. E. de Croon
The framework is based on the automatic extraction of two distinct models: 1) a neural network model trained to estimate the relationship between the robots' sensor readings and the global performance of the swarm, and 2) a probabilistic state transition model that explicitly models the local state transitions (i.e., transitions in observations from the perspective of a single robot in the swarm) given a policy.
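An illustrative sketch of the two extracted models under simple assumptions: a small neural surrogate from a robot's local observation to the estimated global performance, and an empirical local state-transition matrix built from logged transitions under a fixed policy. Shapes and training details are not the paper's.

```python
# Illustrative sketch of the two extracted models: a neural surrogate mapping a
# robot's local observation to predicted swarm performance, and an empirical
# local state-transition matrix estimated from logged (state, next_state) pairs.
# Shapes and training details are assumptions, not the paper's exact setup.
import numpy as np
import torch.nn as nn

def performance_surrogate(obs_dim: int) -> nn.Module:
    # Small MLP: local sensor reading -> estimated global performance score.
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

def transition_model(transitions, num_states: int) -> np.ndarray:
    """Estimate P(s' | s) from logged local-state transitions under a fixed policy."""
    counts = np.zeros((num_states, num_states))
    for s, s_next in transitions:
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```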
1 code implementation • 21 May 2020 • Hao Sun, Zhenghao Peng, Bo Dai, Jian Guo, Dahua Lin, Bolei Zhou
In problem-solving, we humans can come up with multiple novel solutions to the same problem.
4 code implementations • 9 Jul 2019 • Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng, Yi Zhu
We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating).
no code implementations • 18 Apr 2018 • Mario Coppola, Jian Guo, Eberhard K. A. Gill, Guido C. H. E. de Croon
We then formally show that these local states can only coexist when the global desired pattern is achieved and that, until this occurs, there is always a sequence of actions that will lead from the current pattern to the desired pattern.
no code implementations • 24 Jun 2015 • Jian Guo, Stephen Gould
We report on the methods used in our recent DeepEnsembleCoco submission to the PASCAL VOC 2012 challenge, which achieves state-of-the-art performance on the object detection task.