1 code implementation • 11 Jan 2025 • Xiangru Tang, Tianyu Hu, Muyang Ye, Yanjun Shao, Xunjian Yin, Siru Ouyang, Wangchunshu Zhou, Pan Lu, Zhuosheng Zhang, Yilun Zhao, Arman Cohan, Mark Gerstein
To address these challenges, we present ChemAgent, a novel framework designed to improve the performance of LLMs through a dynamic, self-updating library.
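The entry names a dynamic, self-updating library as the core mechanism. Below is a minimal sketch of that idea under stated assumptions: verified (task, solution) pairs are stored and retrieved by similarity for reuse. The class and method names (`Library`, `retrieve`, `update`) and the token-overlap similarity are hypothetical illustrations, not ChemAgent's actual API.

```python
# Minimal sketch of a dynamic, self-updating library for LLM reasoning.
# All names here are hypothetical, not ChemAgent's actual interface.
from dataclasses import dataclass, field


@dataclass
class Library:
    """Stores solved (task, solution) pairs and retrieves similar ones."""
    entries: list = field(default_factory=list)  # [(task, solution), ...]

    def retrieve(self, task: str, k: int = 3):
        # Rank stored tasks by token overlap with the new task (toy similarity).
        query = set(task.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(query & set(e[0].lower().split())),
            reverse=True,
        )
        return scored[:k]

    def update(self, task: str, solution: str, verified: bool):
        # Only verified solutions enter the library, so it improves over time.
        if verified:
            self.entries.append((task, solution))


lib = Library()
lib.update("compute molar mass of H2O", "18.02 g/mol", verified=True)
print(lib.retrieve("molar mass of CO2"))
```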
no code implementations • 2 Oct 2024 • Kangsheng Wang, Xiao Zhang, Hao Liu, Songde Han, Huimin Ma, Tianyu Hu
Large language models (LLMs) have demonstrated limitations in handling combinatorial optimization problems involving long-range reasoning, partly due to causal hallucinations and the huge search space.
no code implementations • 20 Sep 2024 • Kangsheng Wang, Xiao Zhang, Zizheng Guo, Tianyu Hu, Huimin Ma
Chain-based reasoning methods such as chain of thought (CoT) play an increasingly important role in solving reasoning tasks with large language models (LLMs).
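For readers unfamiliar with chain-based prompting, here is a toy illustration of the CoT pattern this entry builds on: the model is asked to emit intermediate steps before a final answer. `call_llm` is a placeholder for any text-completion backend (it returns a canned completion here so the sketch runs), not a specific API from the paper.

```python
# Toy chain-of-thought (CoT) prompting sketch; `call_llm` is a stand-in
# backend, not an API from the paper.
def call_llm(prompt: str) -> str:
    # Placeholder backend; swap in a real model call. Returns a canned
    # completion so the example runs end to end.
    return "Step 1: 21 + 21 = 42.\nAnswer: 42"


def cot_answer(question: str) -> str:
    prompt = (
        f"Question: {question}\n"
        "Let's think step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )
    completion = call_llm(prompt)
    # Keep only the final answer line; the preceding lines are the chain.
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return completion.strip()


print(cot_answer("What is 21 + 21?"))  # -> 42
```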
no code implementations • 26 Jun 2024 • Yixin Jin, Wenjing Zhou, Meiqi Wang, Meng Li, Xintao Li, Tianyu Hu, Xingyuan Bu
This paper examines an online multi-task learning (OMTL) method, which processes data sequentially to predict labels across related tasks.
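To make the sequential setup concrete, the sketch below shows one common OMTL scheme under stated assumptions: examples arrive one at a time tagged with a task id, and a shared weight vector couples the related tasks while per-task vectors capture task-specific structure. This additive shared/specific decomposition is a generic illustration, not necessarily the paper's exact formulation.

```python
# Minimal online multi-task learning sketch (mistake-driven perceptron
# with an additive shared + task-specific decomposition).
import numpy as np


def omtl_perceptron(stream, n_tasks, dim, lr=0.1):
    shared = np.zeros(dim)
    specific = np.zeros((n_tasks, dim))
    for task, x, y in stream:          # labels y in {-1, +1}
        score = (shared + specific[task]) @ x
        if y * score <= 0:             # update only on a mistake
            shared += lr * y * x       # knowledge shared across tasks
            specific[task] += lr * y * x  # knowledge kept per task
    return shared, specific


rng = np.random.default_rng(0)
stream = [(t % 2, rng.normal(size=4), 1 if t % 3 else -1) for t in range(20)]
w_shared, w_task = omtl_perceptron(stream, n_tasks=2, dim=4)
```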
no code implementations • 19 May 2023 • Hao Liu, Huimin Ma, Tianyu Hu
In this paper, we propose GFST-WSF, a Graph-attentive Frequency-enhanced Spatial-Temporal Wind Speed Forecasting model based on graph attention and frequency-enhanced mechanisms, to improve the accuracy of short-term wind speed forecasting.
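The "frequency-enhanced" idea can be illustrated with a generic spectral filter, sketched below: transform a wind-speed series to the frequency domain, keep the dominant modes, and transform back. This is an assumption-laden stand-in for the concept, not GFST-WSF's actual block, and `frequency_enhance` is a hypothetical name.

```python
# Generic spectral filtering sketch of a frequency-enhanced mechanism;
# not the exact GFST-WSF architecture.
import numpy as np


def frequency_enhance(series: np.ndarray, k: int = 8) -> np.ndarray:
    spec = np.fft.rfft(series)
    # Zero out all but the k largest-magnitude frequency components.
    keep = np.argsort(np.abs(spec))[-k:]
    mask = np.zeros_like(spec)
    mask[keep] = spec[keep]
    return np.fft.irfft(mask, n=len(series))


t = np.linspace(0, 10, 256)
wind = 8 + 2 * np.sin(2 * np.pi * 0.5 * t) \
       + np.random.default_rng(1).normal(0, 0.5, t.size)
smoothed = frequency_enhance(wind)  # dominant periodic structure retained
```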
1 code implementation • 8 Feb 2023 • Kun Song, Yuchen Wu, Jiansheng Chen, Tianyu Hu, Huimin Ma
Due to the scarcity of available data, deep learning does not perform well on few-shot learning tasks.