no code implementations • 5 Sep 2024 • Zhengzhuo Xu, Bowen Qu, Yiyan Qi, Sinan Du, Chengjin Xu, Chun Yuan, Jian Guo
Combined with the vanilla connector, we initialize different experts in four distinct ways and adopt high-quality knowledge learning to further refine the MoE connector and LLM parameters.
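The MoE connector described above can be pictured with a small sketch. Below is a minimal, hypothetical PyTorch version of a mixture-of-experts connector between a vision encoder and an LLM; the class name, dimensions, expert count, and soft routing are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an MoE connector for a multimodal LLM.
# Names, dimensions, and routing are illustrative; not the paper's code.
import torch
import torch.nn as nn


class MoEConnector(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096, num_experts=4):
        super().__init__()
        # Each expert is a small MLP projecting vision tokens into LLM space.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(vision_dim, llm_dim), nn.GELU(),
                          nn.Linear(llm_dim, llm_dim))
            for _ in range(num_experts)
        )
        # Router assigns each vision token a soft weight over the experts.
        self.router = nn.Linear(vision_dim, num_experts)

    def forward(self, vision_tokens):                                # (B, N, vision_dim)
        weights = self.router(vision_tokens).softmax(dim=-1)         # (B, N, E)
        outputs = torch.stack([e(vision_tokens) for e in self.experts], dim=-1)  # (B, N, llm_dim, E)
        return (outputs * weights.unsqueeze(2)).sum(dim=-1)          # (B, N, llm_dim)


tokens = torch.randn(2, 16, 1024)
print(MoEConnector()(tokens).shape)  # torch.Size([2, 16, 4096])
```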
1 code implementation • 2 Sep 2024 • Zewen Chen, Sunhan Xu, Yun Zeng, Haochen Guo, Jian Guo, Shuai Liu, Juan Wang, Bing Li, Weiming Hu, Dehua Liu, Hesong Li
With the rising demand for high-resolution (HR) images, No-Reference Image Quality Assessment (NR-IQA) is gaining more attention, as it can evaluate image quality in real time on mobile devices and enhance the user experience.
no code implementations • 12 Aug 2024 • Jian Guo, Heung-Yeung Shum
Traditional quantitative investment research is encountering diminishing returns alongside rising labor and time costs.
1 code implementation • 31 Jul 2024 • Zhanpeng Chen, Chengjin Xu, Yiyan Qi, Jian Guo
This noise hinders accurate retrieval and generation.
no code implementations • 15 Jul 2024 • Shengjie Ma, Chengjin Xu, Xuhui Jiang, Muzhi Li, Huaren Qu, Jian Guo
Retrieval-augmented generation (RAG) has significantly advanced large language models (LLMs) by enabling dynamic information retrieval to mitigate knowledge gaps and hallucinations in generated content.
no code implementations • 11 Jul 2024 • Yuxing Tian, Yiyan Qi, Aiwen Jiang, Qi Huang, Jian Guo
Continuous-Time Dynamic Graph (CTDG) precisely models evolving real-world relationships, drawing heightened interest in dynamic graph learning across academia and industry.
no code implementations • 17 Jun 2024 • Chengjin Xu, Muzhi Li, Cehao Yang, Xuhui Jiang, Lumingyuan Tang, Yiyan Qi, Jian Guo
Knowledge Graphs (KGs) are foundational structures in many AI applications, representing entities and their interrelations through triples.
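For readers new to the triple format: a knowledge graph stores each fact as a (head entity, relation, tail entity) triple. A toy illustration (the entities and relations here are made up):

```python
# Toy knowledge graph as a set of (head, relation, tail) triples.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "member_of", "European Union"),
}

# Simple lookup: which tails are connected to "Paris" via "capital_of"?
print({t for h, r, t in triples if h == "Paris" and r == "capital_of"})
```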
no code implementations • 24 May 2024 • Jianyuan Zhong, Zhijian Xu, Saizhuo Wang, Xiangyu Wen, Jian Guo, Qiang Xu
In quantitative investment, constructing characteristic-sorted portfolios is a crucial strategy for asset allocation.
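As a generic illustration of characteristic-sorted portfolio construction (a standard textbook recipe, not the paper's method), assets can be ranked on a characteristic each period, bucketed into quantiles, and combined into a high-minus-low spread:

```python
# Generic quantile-sort illustration (not the paper's method):
# rank assets by a characteristic and form equal-weighted quintile portfolios.
import pandas as pd

df = pd.DataFrame({
    "asset": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    "book_to_market": [0.3, 1.2, 0.8, 0.5, 2.1, 0.9, 1.5, 0.2, 0.7, 1.1],
    "next_month_return": [0.01, 0.03, 0.02, -0.01, 0.05, 0.00, 0.04, -0.02, 0.01, 0.02],
})

df["quintile"] = pd.qcut(df["book_to_market"], 5, labels=False) + 1
portfolio_returns = df.groupby("quintile")["next_month_return"].mean()
long_short = portfolio_returns.loc[5] - portfolio_returns.loc[1]  # high-minus-low spread
print(portfolio_returns, long_short)
```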
no code implementations • 20 Apr 2024 • Hengyu Mu, Jian Guo, Chong Han, Lijuan Sun
Based on these experimental results, the impact of non-IID finger vein data on the performance of federated learning is analyzed, and the superiority of PAFedFV in accuracy and robustness is demonstrated.
1 code implementation • 18 Mar 2024 • Yi Luo, Zhenghao Lin, Yuhao Zhang, Jiashuo Sun, Chen Lin, Chengjin Xu, Xiangdong Su, Yelong Shen, Jian Guo, Yeyun Gong
Subsequently, the retrieval model correlates new inputs with relevant guidelines, which guide LLMs in response generation to ensure safe and high-quality outputs, thereby aligning with human values.
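The retrieve-then-guide step described above can be sketched as follows; the embed() function, the example guidelines, and the prompt template are placeholders, not the paper's retrieval model or guideline corpus.

```python
# Minimal retrieve-then-guide sketch (hypothetical; not the paper's system).
# Guidelines and the query are embedded, the closest guideline is retrieved,
# and it is prepended to the prompt handed to the LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: character-frequency vector. A real system would
    # use a trained retrieval model here.
    v = np.zeros(128)
    for ch in text.lower():
        v[ord(ch) % 128] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

guidelines = [
    "Do not provide medical dosage advice; refer users to a professional.",
    "Decline requests for instructions that facilitate self-harm.",
]
query = "What dose of this medicine should I take?"

scores = [float(embed(g) @ embed(query)) for g in guidelines]
best = guidelines[int(np.argmax(scores))]
prompt = f"Guideline: {best}\nUser: {query}\nAssistant:"
print(prompt)
```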
no code implementations • 23 Feb 2024 • Xuhui Jiang, Yinghan Shen, Zhichao Shi, Chengjin Xu, Wei Li, Zixuan Li, Jian Guo, HuaWei Shen, Yuanzhuo Wang
To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module that translates KG structures into a format understandable by LLMs, thereby allowing LLMs to utilize their extensive background knowledge to improve EA accuracy.
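To give a feel for the general idea of rendering KG structure as code for an LLM, here is a hypothetical serialization of an entity's neighborhood as a class-like snippet; the format is invented for illustration and is not ChatEA's actual translation module.

```python
# Hypothetical KG-to-code rendering (illustrative only, not ChatEA's format):
# serialize an entity's outgoing relations as a class-like text block for an LLM prompt.
def entity_to_code(entity, triples):
    lines = [f"class {entity.replace(' ', '_')}:"]
    for h, r, t in triples:
        if h == entity:
            lines.append(f"    {r} = '{t}'")
    return "\n".join(lines) if len(lines) > 1 else lines[0] + "\n    pass"

triples = [
    ("Leonardo da Vinci", "born_in", "Vinci"),
    ("Leonardo da Vinci", "created", "Mona Lisa"),
]
print(entity_to_code("Leonardo da Vinci", triples))
```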
no code implementations • 15 Feb 2024 • Hang Yuan, Saizhuo Wang, Jian Guo
Recently, we introduced a new paradigm for alpha mining in the realm of quantitative investment, developing a new interactive alpha mining system framework, Alpha-GPT.
no code implementations • 6 Feb 2024 • Saizhuo Wang, Hang Yuan, Lionel M. Ni, Jian Guo
Autonomous agents based on Large Language Models (LLMs) that devise plans and tackle real-world challenges have gained prominence. However, tailoring these agents for specialized domains like quantitative investment remains a formidable task.
no code implementations • 2 Feb 2024 • Xuhui Jiang, Yuxing Tian, Fengrui Hua, Chengjin Xu, Yuanzhuo Wang, Jian Guo
Hallucinations in large language models (LLMs) are typically seen as limitations.
no code implementations • 26 Dec 2023 • Zhengzhuo Xu, Sinan Du, Yiyan Qi, Chengjin Xu, Chun Yuan, Jian Guo
Multimodal Large Language Models (MLLMs) have shown impressive capabilities in image understanding and generation.
no code implementations • 18 Nov 2023 • Saizhuo Wang, Zhihan Liu, Zhaoran Wang, Jian Guo
Large Language Models (LLMs) are versatile, yet they often falter in tasks requiring deep and reliable reasoning due to issues like hallucinations, limiting their applicability in critical scenarios.
no code implementations • 7 Nov 2023 • Hang Zhang, Yeyun Gong, Xingwei He, Dayiheng Liu, Daya Guo, Jiancheng Lv, Jian Guo
Most dense retrieval models contain an implicit assumption: the training query-document pairs are exactly matched.
no code implementations • 7 Oct 2023 • Xuhui Jiang, Chengjin Xu, Yinghan Shen, Xun Sun, Lumingyuan Tang, Saizhuo Wang, Zhongwu Chen, Yuanzhuo Wang, Jian Guo
Knowledge graphs (KGs) are structured representations of diversified knowledge.
no code implementations • 17 Aug 2023 • Hui Niu, Siyuan Li, Jiahao Zheng, Zhouchi Lin, Jian Li, Jian Guo, Bo An
Market making (MM) has attracted significant attention in financial trading owing to its essential function in ensuring market liquidity.
no code implementations • 31 Jul 2023 • Saizhuo Wang, Hang Yuan, Leon Zhou, Lionel M. Ni, Heung-Yeung Shum, Jian Guo
One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors).
3 code implementations • 15 Jul 2023 • Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M. Ni, Heung-Yeung Shum, Jian Guo
Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning.
no code implementations • 25 Jun 2023 • Haohan Zhang, Fengrui Hua, Chengjin Xu, Hao Kong, Ruiting Zuo, Jian Guo
The rapid advancement of Large Language Models (LLMs) has spurred discussions about their potential to enhance quantitative trading strategies.
1 code implementation • NeurIPS 2023 • Tong Wu, Zhihao Fan, Xiao Liu, Yeyun Gong, Yelong Shen, Jian Jiao, Hai-Tao Zheng, Juntao Li, Zhongyu Wei, Jian Guo, Nan Duan, Weizhu Chen
Diffusion models have gained significant attention in the realm of image generation due to their exceptional performance.
4 code implementations • 25 Apr 2023 • Xiao-Yang Liu, Ziyi Xia, Hongyang Yang, Jiechao Gao, Daochen Zha, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
The financial market is a particularly challenging playground for deep reinforcement learning due to its unique feature of dynamic datasets.
1 code implementation • 23 Apr 2023 • Jiashuo Sun, Yi Luo, Yeyun Gong, Chen Lin, Yelong Shen, Jian Guo, Nan Duan
By utilizing iterative bootstrapping, our approach enables LLMs to autonomously rectify errors, resulting in more precise and comprehensive reasoning chains.
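A minimal sketch of iterative self-correction of a reasoning chain is shown below; call_llm() is a hypothetical stub and the prompts are illustrative, not the paper's iterative bootstrapping procedure.

```python
# Iterative self-correction sketch (illustrative; call_llm is a hypothetical stub).
# The model drafts a reasoning chain, checks its own answer, and revises until
# the check passes or a round budget is exhausted.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def bootstrap_reasoning(question: str, max_rounds: int = 3) -> str:
    chain = call_llm(f"Answer step by step:\n{question}")
    for _ in range(max_rounds):
        verdict = call_llm(
            f"Question: {question}\nReasoning: {chain}\n"
            "Does this reasoning contain an error? Reply 'OK' or describe the error."
        )
        if verdict.strip().upper().startswith("OK"):
            break
        chain = call_llm(
            f"Question: {question}\nPrevious reasoning: {chain}\n"
            f"Identified issue: {verdict}\nRewrite the reasoning, fixing the issue."
        )
    return chain
```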
1 code implementation • 7 Apr 2023 • Xuhui Jiang, Chengjin Xu, Yinghan Shen, Yuanzhuo Wang, Fenglong Su, Fei Sun, Zixuan Li, Zhichao Shi, Jian Guo, HuaWei Shen
Firstly, we address the oversimplified heterogeneity settings of current datasets and propose two new HHKG datasets that closely mimic practical EA scenarios.
2 code implementations • 14 Dec 2022 • Jiashuo Sun, Hang Zhang, Chen Lin, Xiangdong Su, Yeyun Gong, Jian Guo
For the retriever, we adopt a number-aware negative sampling strategy to enable the retriever to be more discriminative on key numerical facts.
Ranked #1 on Conversational Question Answering on ConvFinQA
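A simplified sketch of the number-aware idea (an assumption-laden illustration, not the paper's sampler): when choosing negatives for a numerically grounded positive, prefer candidates that also contain numbers, so the retriever must discriminate on the numerical facts rather than on topic alone.

```python
# Simplified number-aware negative sampling sketch (not the paper's code).
import random
import re

NUM = re.compile(r"\d")

def sample_negatives(candidates, positive, k=2, seed=0):
    rng = random.Random(seed)
    pool = [c for c in candidates if c != positive]
    numeric = [c for c in pool if NUM.search(c)]      # hard negatives with numbers
    other = [c for c in pool if not NUM.search(c)]    # fallback negatives
    picked = numeric[:k] if len(numeric) >= k else numeric + other[: k - len(numeric)]
    rng.shuffle(picked)
    return picked

candidates = [
    "Revenue grew 12% year over year to $4.2 billion.",
    "The company is headquartered in Seattle.",
    "Operating margin was 18% in Q3.",
]
print(sample_negatives(candidates, positive="Revenue grew 12% year over year to $4.2 billion."))
```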
no code implementations • 13 Dec 2022 • Jian Guo, Saizhuo Wang, Lionel M. Ni, Heung-Yeung Shum
Quant has become one of the mainstream investment methodologies over the past decades, and has experienced three generations: Quant 1.0, trading by mathematical modeling to discover mis-priced assets in markets; Quant 2.0, shifting the quant research pipeline from small "strategy workshops" to large "alpha factories"; Quant 3.0, applying deep learning techniques to discover complex nonlinear pricing rules.
4 code implementations • 6 Nov 2022 • Xiao-Yang Liu, Ziyi Xia, Jingyang Rui, Jiechao Gao, Hongyang Yang, Ming Zhu, Christina Dan Wang, Zhaoran Wang, Jian Guo
However, establishing high-quality market environments and benchmarks for financial reinforcement learning is challenging due to three major factors, namely, low signal-to-noise ratio of financial data, survivorship bias of historical data, and model overfitting in the backtesting stage.
1 code implementation • 18 Oct 2022 • Shuai Fan, Chen Lin, Haonan Li, Zhenghao Lin, Jinsong Su, Hang Zhang, Yeyun Gong, Jian Guo, Nan Duan
Most existing pre-trained language representation models (PLMs) are sub-optimal in sentiment analysis tasks, as they capture sentiment information at the word level while under-considering sentence-level information.
1 code implementation • 15 Sep 2022 • Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, Bolei Zhou
We validate our insight on a range of RL tasks and show its improvement over baselines: (1) In offline RL, the conservative exploitation leads to improved performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to tackle the exploration-exploitation dilemma for better sample efficiency; (3) In discrete control tasks, a negative reward shifting yields an improvement over the curiosity-based exploration method.
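The shifting discussed above amounts to adding a constant to every environment reward. A minimal wrapper sketch using gymnasium (the shift value and environment are illustrative; a negative shift makes value estimates more conservative, a positive one more optimistic):

```python
# Minimal constant reward-shifting wrapper (illustrative values; not the paper's code).
import gymnasium as gym

class RewardShift(gym.RewardWrapper):
    def __init__(self, env, shift=-1.0):
        super().__init__(env)
        self.shift = shift

    def reward(self, r):
        # Add a constant to every reward; the optimal policy is unchanged,
        # but value estimates (and hence exploration behavior) shift.
        return r + self.shift

env = RewardShift(gym.make("CartPole-v1"), shift=-1.0)
obs, info = env.reset(seed=0)
obs, r, terminated, truncated, info = env.step(env.action_space.sample())
print(r)  # original reward of 1.0 shifted to 0.0
```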
no code implementations • 6 Sep 2022 • Jian Guo, Jiaxiang Tu, Hengyi Ren, Chong Han, Lijuan Sun
In this paper, we propose a multimodal biometric fusion recognition algorithm based on fingerprints and finger veins (Fingerprint Finger Veins-Channel Spatial Attention Fusion Module, FPV-CSAFM).
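A rough PyTorch sketch of a channel-plus-spatial attention fusion block in the same spirit is given below; the shapes, reduction ratio, and overall structure are CBAM-style assumptions, not the paper's FPV-CSAFM.

```python
# Rough channel + spatial attention fusion sketch (CBAM-style assumptions;
# not the paper's FPV-CSAFM implementation).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        # Channel attention over the concatenated fingerprint + vein features.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(2 * channels // reduction, 2 * channels, 1), nn.Sigmoid(),
        )
        # Spatial attention computed from channel-pooled maps.
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, fingerprint_feat, vein_feat):    # both (B, C, H, W)
        x = torch.cat([fingerprint_feat, vein_feat], dim=1)
        x = x * self.channel(x)
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        x = x * self.spatial(pooled)
        return self.project(x)                          # fused (B, C, H, W)

fp, fv = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(AttentionFusion()(fp, fv).shape)  # torch.Size([1, 64, 32, 32])
```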
no code implementations • 3 Mar 2022 • Feng Li, Hao Zhang, Yi-Fan Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Pengchuan Zhang, Lei Zhang
This survey is inspired by the remarkable progress in both computer vision and natural language processing, and recent trends shifting from single modality processing to multiple modality comprehension.
16 code implementations • CVPR 2022 • Feng Li, Hao Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang
Our method is universal and can be easily plugged into any DETR-like methods by adding dozens of lines of code to achieve a remarkable improvement.
Ranked #82 on Object Detection on COCO minival
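For intuition on the denoising idea (a hypothetical sketch, not the paper's code): ground-truth boxes are perturbed and fed to the decoder as extra queries whose training target is the original clean box.

```python
# Hypothetical box-noising sketch for denoising training (not the paper's code):
# perturb ground-truth boxes (cx, cy, w, h in [0, 1]) to create extra decoder
# queries whose training target is the original clean box.
import torch

def noise_boxes(gt_boxes, center_jitter=0.1, scale_jitter=0.2):
    cxcy, wh = gt_boxes[:, :2], gt_boxes[:, 2:]
    cxcy = cxcy + (torch.rand_like(cxcy) * 2 - 1) * center_jitter * wh
    wh = wh * (1 + (torch.rand_like(wh) * 2 - 1) * scale_jitter)
    return torch.cat([cxcy, wh], dim=1).clamp(0.0, 1.0)

gt = torch.tensor([[0.5, 0.5, 0.2, 0.3]])
print(noise_boxes(gt))  # noisy query; the model is trained to recover `gt`
```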
1 code implementation • 13 Dec 2021 • Xiao-Yang Liu, Jingyang Rui, Jiechao Gao, Liuqing Yang, Hongyang Yang, Zhaoran Wang, Christina Dan Wang, Jian Guo
In this paper, we present a FinRL-Meta framework that builds a universe of market environments for data-driven financial reinforcement learning.
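As a generic illustration of what a data-driven market environment looks like (a toy example, not FinRL-Meta's API): the state is a window of prices, the action is a target position, and the reward is the position times the next return.

```python
# Toy gym-style market environment sketch (illustrative; not FinRL-Meta's API).
import numpy as np

class ToyMarketEnv:
    def __init__(self, prices, window=5):
        self.prices, self.window = np.asarray(prices, dtype=float), window

    def reset(self):
        self.t = self.window
        return self.prices[self.t - self.window:self.t]   # price window as state

    def step(self, action):
        ret = self.prices[self.t] / self.prices[self.t - 1] - 1.0
        reward = float(np.clip(action, -1, 1) * ret)       # position times next return
        self.t += 1
        done = self.t >= len(self.prices)
        obs = self.prices[self.t - self.window:self.t] if not done else None
        return obs, reward, done, {}

env = ToyMarketEnv(prices=[100, 101, 99, 102, 103, 105, 104])
obs = env.reset()
print(env.step(1.0))  # go long one unit into the next bar
```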
1 code implementation • 11 Dec 2021 • Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, Michael I. Jordan
In this paper, we present a scalable and elastic library ElegantRL-podracer for cloud-native deep reinforcement learning, which efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels.
no code implementations • 7 Nov 2021 • Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo
Unfortunately, the steep learning curve and the difficulty in quick modeling and agile development are impeding finance researchers from using deep reinforcement learning in quantitative trading.
no code implementations • 29 Sep 2021 • Hao Sun, Lei Han, Jian Guo, Bolei Zhou
We verify our insight on a range of tasks: (1) In offline RL, the conservative exploitation leads to improved learning performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to trade-off between exploration and exploitation thus improving learning efficiency; (3) In online RL with discrete action space, a negative reward shifting brings an improvement over the previous curiosity-based exploration method.
no code implementations • 22 Jul 2021 • Huyen N. Nguyen, Jake Gonzalez, Jian Guo, Ngan V. T. Nguyen, Tommy Dang
This paper presents VisMCA, an interactive visual analytics system that supports deepening understanding in ML results, augmenting users' capabilities in correcting misclassification, and providing an analysis of underlying patterns, in response to the VAST Challenge 2020 Mini-Challenge 2.
1 code implementation • 9 Mar 2021 • Mario Coppola, Jian Guo, Eberhard Gill, Guido C. H. E. de Croon
The framework is based on the automatic extraction of two distinct models: 1) a neural network model trained to estimate the relationship between the robots' sensor readings and the global performance of the swarm, and 2) a probabilistic state transition model that explicitly models the local state transitions (i.e., transitions in observations from the perspective of a single robot in the swarm) given a policy.
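For the second model, a count-based sketch shows how local state transition probabilities could be estimated from logged observations; this is an illustration under simplifying assumptions, not the paper's implementation.

```python
# Count-based sketch of a local state transition model (illustrative only):
# estimate P(next_observation | observation, action) from logged robot data.
from collections import Counter, defaultdict

def estimate_transitions(trajectories):
    counts = defaultdict(Counter)
    for traj in trajectories:
        for (obs, action, next_obs) in traj:
            counts[(obs, action)][next_obs] += 1
    return {
        key: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
        for key, ctr in counts.items()
    }

logged = [[("s0", "move", "s1"), ("s1", "stay", "s1"), ("s0", "move", "s2")]]
print(estimate_transitions(logged))  # e.g. P(s1 | s0, move) = 0.5
```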
1 code implementation • 21 May 2020 • Hao Sun, Zhenghao Peng, Bo Dai, Jian Guo, Dahua Lin, Bolei Zhou
In problem-solving, we humans can come up with multiple novel solutions to the same problem.
3 code implementations • 9 Jul 2019 • Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, Shuai Zheng, Yi Zhu
We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating).
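A minimal usage sketch of loading a pretrained model from GluonCV's model zoo follows; the model name and exact availability may vary across GluonCV versions.

```python
# Minimal GluonCV usage sketch; model names and availability may vary by version.
import mxnet as mx
from gluoncv import model_zoo

net = model_zoo.get_model("resnet50_v1b", pretrained=True)  # downloads weights
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))            # dummy image batch
logits = net(x)
print(logits.shape)                                         # (1, 1000)
```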
no code implementations • 18 Apr 2018 • Mario Coppola, Jian Guo, Eberhard K. A. Gill, Guido C. H. E. de Croon
We then formally show that these local states can only coexist when the global desired pattern is achieved and that, until this occurs, there is always a sequence of actions that will lead from the current pattern to the desired pattern.
Robotics
no code implementations • 24 Jun 2015 • Jian Guo, Stephen Gould
We report on the methods used in our recent DeepEnsembleCoco submission to the PASCAL VOC 2012 challenge, which achieves state-of-the-art performance on the object detection task.