no code implementations • 9 Apr 2024 • YanJie Li, Weijun Li, Lina Yu, Min Wu, Jingyi Liu, Wenqiang Li, Meilan Hao, Shu Wei, Yusong Deng
However, its performance depends heavily on the training data and degrades on data outside the training set, which leads to poor noise robustness and limited versatility in such methods.
no code implementations • 28 Feb 2024 • YanJie Li, Jingyi Liu, Weijun Li, Lina Yu, Min Wu, Wenqiang Li, Meilan Hao, Su Wei, Yusong Deng
The SR problem is treated as a purely multimodal problem, and contrastive learning is introduced during training for modal alignment to facilitate subsequent modal feature fusion.
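The modal-alignment idea can be illustrated with a standard symmetric InfoNCE-style contrastive loss (a common choice for aligning paired embeddings from two modalities, e.g. data encodings and expression encodings); this is a minimal NumPy sketch, not the paper's actual implementation, and `temperature` is an assumed hyperparameter:

```python
import numpy as np

def contrastive_alignment_loss(data_emb, expr_emb, temperature=0.1):
    """Symmetric InfoNCE-style loss aligning paired embeddings from two
    modalities. Row i of data_emb and expr_emb is assumed to be a matched
    pair; all other rows in the batch serve as negatives."""
    # L2-normalize so dot products become cosine similarities.
    d = data_emb / np.linalg.norm(data_emb, axis=1, keepdims=True)
    e = expr_emb / np.linalg.norm(expr_emb, axis=1, keepdims=True)
    logits = d @ e.T / temperature          # (batch, batch) similarity matrix
    labels = np.arange(len(d))              # matched pairs sit on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the data->expression and expression->data directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls matched data/expression embeddings together and pushes mismatched ones apart, which is what makes later feature fusion across the two modalities easier.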
1 code implementation • 25 Jan 2024 • Min Wu, Weijun Li, Lina Yu, Wenqiang Li, Jingyi Liu, YanJie Li, Meilan Hao
Therefore, a greedy pruning algorithm is proposed to prune the network to a subnetwork while preserving the accuracy of the data fit.
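One common greedy-pruning scheme, sketched here under assumptions (the paper's exact criterion is not given in this snippet): try removing connections one at a time, smallest magnitude first, and keep each removal only if the data-fitting loss stays within a tolerance of the baseline. The `tol` parameter and the flat weight vector are illustrative simplifications:

```python
import numpy as np

def greedy_prune(weights, eval_loss, tol=1e-3):
    """Greedily zero out weights one at a time, keeping each removal only
    if the fitting loss rises by no more than `tol` over the baseline.

    weights:   1-D array of connection weights (a stand-in for network edges)
    eval_loss: callable mapping a weight vector to a scalar fitting loss
    """
    w = weights.copy()
    base = eval_loss(w)
    # Try the smallest-magnitude weights first: cheapest candidates to drop.
    for i in np.argsort(np.abs(w)):
        saved = w[i]
        w[i] = 0.0
        if eval_loss(w) > base + tol:  # removal hurt the fit: undo it
            w[i] = saved
    return w
```

On a linear toy problem this keeps the truly informative weights and zeros the near-zero ones, which is the subnetwork-extraction behavior the sentence above describes.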
no code implementations • 24 Jan 2024 • YanJie Li, Weijun Li, Lina Yu, Min Wu, Jingyi Liu, Wenqiang Li, Meilan Hao, Shu Wei, Yusong Deng
To optimize the trade-off between efficiency and versatility, we introduce SR-GPT, a novel algorithm for symbolic regression that integrates Monte Carlo Tree Search (MCTS) with a Generative Pre-Trained Transformer (GPT).
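A standard way to let a generative model guide MCTS, as in AlphaZero-style search, is the PUCT selection rule, where the model's token probabilities act as priors over child nodes. This is a hedged sketch of that generic rule, not SR-GPT's actual selection formula; `c_puct` and the dict layout are assumptions:

```python
import math

def puct_select(children, c_puct=1.5):
    """PUCT rule: pick the child maximizing Q + U, where the exploration
    bonus U is weighted by a learned prior (the role a GPT policy plays).

    children: list of dicts with keys
      'prior'  -- policy probability from the generative model
      'value'  -- total backed-up value from rollouts
      'visits' -- visit count
    """
    total = sum(ch['visits'] for ch in children)
    best_i, best_score = 0, -math.inf
    for i, ch in enumerate(children):
        q = ch['value'] / ch['visits'] if ch['visits'] else 0.0
        u = c_puct * ch['prior'] * math.sqrt(total + 1) / (1 + ch['visits'])
        if q + u > best_score:
            best_i, best_score = i, q + u
    return best_i
```

Unvisited children with a high prior get explored first, while repeatedly visited children are judged mainly by their backed-up value, which is the efficiency/versatility trade-off the sentence above refers to.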
no code implementations • 3 Jan 2024 • YanJie Li, Weijun Li, Lina Yu, Min Wu, Jinyi Liu, Wenqiang Li, Meilan Hao
(1) The activation functions are of a single, fixed type, which limits the network's per-unit representational ability, so very complex networks are often needed to solve simple problems; (2) the network structure is not adaptive, which easily leads to a redundant or insufficient structure.
no code implementations • 13 Nov 2023 • YanJie Li, Weijun Li, Lina Yu, Min Wu, Jinyi Liu, Wenqiang Li, Meilan Hao, Shu Wei, Yusong Deng
To address these issues, we propose MetaSymNet, a novel neural network that dynamically adjusts its structure in real-time, allowing for both expansion and contraction.
no code implementations • 24 Sep 2023 • Wenqiang Li, Weijun Li, Lina Yu, Min Wu, Jingyi Liu, YanJie Li
Instead of searching for expressions within a large search space, we explore DySymNet with various structures and optimize them to identify expressions that better fit the data.
no code implementations • 8 Apr 2021 • Wenqiang Li, YM Tang, Ziyang Wang, KM Yu, Sandy To
The proposed model is based on an encoder-decoder structure and uses layer normalization to improve mini-batch training performance.
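Layer normalization itself is simple to state: each sample is normalized over its own feature axis, so the statistics do not depend on mini-batch composition (unlike batch normalization). A minimal NumPy sketch of the standard operation, with scalar `gamma`/`beta` for brevity:

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Layer normalization: normalize each sample over its feature axis.
    Because statistics are per-sample, training is insensitive to
    mini-batch size and composition."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

After the transform, every row has approximately zero mean and unit variance regardless of its original scale, which is what stabilizes mini-batch training in the encoder-decoder setting described above.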
no code implementations • 6 May 2020 • Juan Li, Wenqiang Li, Gechun Liang
This paper studies an optimal forward investment problem in an incomplete market with model uncertainty, in which the underlying stocks depend on correlated stochastic factors.