no code implementations • 28 May 2024 • Xiaocheng Yang, Bingsen Chen, Yik-Cheung Tam
We hypothesize that an LLM should focus on extracting predicates and generating symbolic formulas from the math problem description so that the underlying calculation can be done via an external code interpreter.
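A minimal sketch of the pipeline this hypothesis suggests, assuming a SymPy-based interpreter and a hypothetical `call_llm` helper standing in for the model (neither is the paper's actual implementation): the LLM only emits symbolic equations, and the external interpreter performs the calculation.

```python
# Minimal sketch: the LLM emits symbolic equations; an external
# interpreter (here SymPy) performs the actual calculation.
from sympy import Eq, solve, symbols, sympify

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion API; assume the model
    # was prompted to output one equation per line.
    return "x + y = 10\nx - y = 4"

problem = "The sum of two numbers is 10 and their difference is 4."
raw = call_llm(f"Extract equations from: {problem}")

x, y = symbols("x y")
equations = []
for line in raw.splitlines():
    lhs, rhs = line.split("=")
    equations.append(Eq(sympify(lhs), sympify(rhs)))

print(solve(equations, [x, y]))  # {x: 7, y: 3}
```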
no code implementations • 7 Sep 2023 • Xiaocheng Yang, Yik-Cheung Tam
Consequently, we fine-tune LLaMA7B with chain-of-thought as a baseline model, and develop further fine-tuned LLaMA7B models that generate Prolog code, Prolog code followed by chain-of-thought, and chain-of-thought followed by Prolog code.
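As a rough illustration of how the four fine-tuning targets could be assembled (the `build_target` helper, field names, and example strings are assumptions for illustration, not the paper's data format):

```python
# Minimal sketch of the four fine-tuning target formats compared above.
def build_target(variant: str, cot: str, prolog: str) -> str:
    if variant == "cot":          # chain-of-thought baseline
        return cot
    if variant == "prolog":       # Prolog code only
        return prolog
    if variant == "prolog+cot":   # Prolog code, then rationale
        return f"{prolog}\n{cot}"
    if variant == "cot+prolog":   # rationale, then Prolog code
        return f"{cot}\n{prolog}"
    raise ValueError(variant)

example = {
    "question": "Tom has 3 apples and buys 2 more. How many now?",
    "cot": "Tom starts with 3 apples and buys 2, so 3 + 2 = 5.",
    "prolog": "answer(X) :- X is 3 + 2.",
}
for v in ("cot", "prolog", "prolog+cot", "cot+prolog"):
    print(v, "->", build_target(v, example["cot"], example["prolog"]))
```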
2 code implementations • 6 Jul 2022 • Xiaocheng Yang, Mingyu Yan, Shirui Pan, Xiaochun Ye, Dongrui Fan
Heterogeneous graph neural networks (HGNNs) can embed the rich structural and semantic information of a heterogeneous graph into node representations.
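For intuition, here is a generic RGCN-style sketch of relation-specific aggregation, the basic mechanism by which an HGNN layer folds structure and semantics into node embeddings; this plain-PyTorch illustration is an assumption, not the specific architecture studied in the paper.

```python
# Generic sketch: each edge type (relation) gets its own projection
# before neighbor aggregation, so node embeddings capture both graph
# structure and relation semantics.
import torch
import torch.nn as nn

class TinyHeteroLayer(nn.Module):
    def __init__(self, relations, in_dim, out_dim):
        super().__init__()
        # One weight matrix per relation captures semantic information.
        self.rel_proj = nn.ModuleDict(
            {r: nn.Linear(in_dim, out_dim) for r in relations})
        self.self_proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, edges_by_rel):
        # x: [num_nodes, in_dim]
        # edges_by_rel: relation -> (src, dst) index tensors
        out = self.self_proj(x)
        for rel, (src, dst) in edges_by_rel.items():
            msg = self.rel_proj[rel](x[src])   # transform per relation
            out = out.index_add(0, dst, msg)   # aggregate into targets
        return torch.relu(out)

layer = TinyHeteroLayer(["writes", "cites"], in_dim=8, out_dim=8)
x = torch.randn(5, 8)
edges = {"writes": (torch.tensor([0, 1]), torch.tensor([2, 3])),
         "cites": (torch.tensor([2]), torch.tensor([4]))}
print(layer(x, edges).shape)  # torch.Size([5, 8])
```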
no code implementations • 18 Apr 2022 • Haiyang Lin, Mingyu Yan, Xiaocheng Yang, Mo Zou, WenMing Li, Xiaochun Ye, Dongrui Fan
Graph neural networks (GNNs) have been demonstrated to be powerful models in many domains, owing to their effectiveness in learning over graphs.
no code implementations • 9 Aug 2021 • Mingfeng Jiang, Minghao Zhi, Liying Wei, Xiaocheng Yang, Jucheng Zhang, Yongming Li, Pin Wang, Jiahao Huang, Guang Yang
High-resolution magnetic resonance images can provide fine-grained anatomical information, but acquiring such data requires a long scanning time.