no code implementations • 6 Feb 2024 • Saizhuo Wang, Hang Yuan, Lionel M. Ni, Jian Guo
Autonomous agents based on Large Language Models (LLMs) that devise plans and tackle real-world challenges have gained prominence. However, tailoring these agents for specialized domains like quantitative investment remains a formidable task.
no code implementations • 31 Jul 2023 • Saizhuo Wang, Hang Yuan, Leon Zhou, Lionel M. Ni, Heung-Yeung Shum, Jian Guo
One of the most important tasks in quantitative investment research is mining new alphas (effective trading signals or factors).
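To make "alpha" concrete: an alpha is a formula that maps raw market data to a trading signal. A minimal, generic example (simple price momentum; illustrative only, not an alpha mined by this paper) could look like:

```python
import numpy as np

# Hypothetical example of a formulaic alpha: trailing price momentum.
# Positive values suggest upward momentum over the lookback window.
def momentum_alpha(close, lookback=20):
    close = np.asarray(close, dtype=float)
    # return over the past `lookback` periods, one value per recent bar
    return close[lookback:] / close[:-lookback] - 1.0
```

Alpha mining searches over a large space of such expressions for signals with genuine predictive power.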
3 code implementations • 15 Jul 2023 • Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M. Ni, Heung-Yeung Shum, Jian Guo
Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning.
1 code implementation • 13 Mar 2023 • Feng Li, Ailing Zeng, Shilong Liu, Hao Zhang, Hongyang Li, Lei Zhang, Lionel M. Ni
Recent DEtection TRansformer-based (DETR) models have obtained remarkable performance.
1 code implementation • CVPR 2023 • Hao Zhang, Feng Li, Huaizhe Xu, Shijia Huang, Shilong Liu, Lionel M. Ni, Lei Zhang
We present a mask-piloted Transformer which improves masked-attention in Mask2Former for image segmentation.
no code implementations • 18 Feb 2023 • Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel M. Ni, Yi Ma
Our method is arguably the first to demonstrate that a concatenation of multiple convolution sparse coding/decoding layers leads to an interpretable and effective autoencoder for modeling the distribution of large-scale natural image datasets.
no code implementations • 13 Dec 2022 • Jian Guo, Saizhuo Wang, Lionel M. Ni, Heung-Yeung Shum
Quant has become one of the mainstream investment methodologies over the past decades and has experienced three generations: Quant 1.0, trading by mathematical modeling to discover mispriced assets in markets; Quant 2.0, shifting the quant research pipeline from small "strategy workshops" to large "alpha factories"; and Quant 3.0, applying deep learning techniques to discover complex nonlinear pricing rules.
9 code implementations • CVPR 2023 • Feng Li, Hao Zhang, Huaizhe Xu, Shilong Liu, Lei Zhang, Lionel M. Ni, Heung-Yeung Shum
In this paper we present Mask DINO, a unified object detection and segmentation framework.
Ranked #1 on Panoptic Segmentation on COCO test-dev
15 code implementations • 7 Mar 2022 • Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum
Compared to other models on the leaderboard, DINO significantly reduces its model size and pre-training data size while achieving better results.
Ranked #1 on Real-Time Object Detection on COCO 2017 val
no code implementations • 3 Mar 2022 • Feng Li, Hao Zhang, Yi-Fan Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Pengchuan Zhang, Lei Zhang
This survey is inspired by the remarkable progress in both computer vision and natural language processing, and recent trends shifting from single modality processing to multiple modality comprehension.
16 code implementations • CVPR 2022 • Feng Li, Hao Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang
Our method is universal and can be easily plugged into any DETR-like methods by adding dozens of lines of code to achieve a remarkable improvement.
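The plug-in idea in this line of work is to feed perturbed ground-truth boxes into the decoder as extra queries and train it to reconstruct the originals. A rough sketch of the box-noising step (function name, noise scheme, and scale are illustrative assumptions, not the paper's exact code):

```python
import numpy as np

# Hedged sketch of denoising-style training for DETR-like models:
# jitter ground-truth boxes (cx, cy, w, h) so the decoder can be
# supervised to map noisy queries back to the clean boxes.
def noise_boxes(gt_boxes, box_noise_scale=0.4, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    boxes = np.asarray(gt_boxes, dtype=float)   # shape (N, 4)
    wh = boxes[:, 2:]                           # per-box width/height
    # uniform noise in [-scale, scale], proportional to box size
    noise = (rng.random(boxes.shape) * 2 - 1) * box_noise_scale
    noised = boxes.copy()
    noised[:, :2] += noise[:, :2] * wh / 2      # shift centers
    noised[:, 2:] *= 1 + noise[:, 2:]           # rescale sizes
    return noised
```

The noised boxes become additional decoder inputs with a reconstruction loss, which stabilizes bipartite matching during training.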
4 code implementations • 10 Apr 2019 • Yaqing Wang, Quanming Yao, James Kwok, Lionel M. Ni
Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small.
no code implementations • 8 Mar 2019 • Yaqing Wang, James T. Kwok, Lionel M. Ni
However, existing CSC methods can only model noise drawn from a Gaussian distribution, which is restrictive and unrealistic.
no code implementations • ICML 2018 • Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni
Convolutional sparse coding (CSC) has been popularly used for the learning of shift-invariant dictionaries in image and signal processing.
no code implementations • 21 Jun 2017 • Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni
Convolutional sparse coding (CSC) improves sparse coding by learning a shift-invariant dictionary from the data.
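As context for these CSC entries, the standard convolutional sparse coding objective (standard notation, not taken verbatim from these papers) learns filters $d_k$ and sparse code maps $z_k$ by solving:

```latex
\min_{\{d_k\},\{z_k\}} \; \frac{1}{2} \Big\| x - \sum_{k=1}^{K} d_k \ast z_k \Big\|_2^2
\;+\; \lambda \sum_{k=1}^{K} \| z_k \|_1
\quad \text{s.t.} \quad \| d_k \|_2 \le 1 .
```

The convolution $\ast$ makes the dictionary shift-invariant, and the squared-error data term implicitly assumes Gaussian noise — the restriction noted in the 8 Mar 2019 entry above.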