no code implementations • 18 Aug 2023 • Shuhui Wu, Zengming Tang, Zongyi Guo, Weiwei Zhang, Baoliang Cui, Haihong Tang, Weiming Lu
Simultaneously, we utilize open-domain datasets during training to improve both the performance and the generalization ability of PUMGPT.
no code implementations • 6 Oct 2022 • Tao Chen, Luxin Liu, Xuepeng Jia, Baoliang Cui, Haihong Tang, Siliang Tang
Specifically, we use recent prompt-based language models as the knowledge expert to yield initial seed rules; then, with the resulting high-quality instance pool serving as an intermediary, we iteratively teach the expert to fit our task while learning task-specific logical rules.
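The loop sketched in this abstract — an expert proposes seed rules, high-confidence matches form an instance pool, and the pool is fed back to refine the rules — can be illustrated as follows. This is a minimal toy sketch under assumed names and logic; `propose_rules`, `build_instance_pool`, the substring matching, and the `toy_expert` are all hypothetical stand-ins, not the paper's actual method or API.

```python
# Toy sketch of an expert-driven rule-learning loop.
# Every name and matching heuristic here is an illustrative assumption.

def propose_rules(expert, seed_prompts):
    """Stand-in for prompting a language model; maps prompts to candidate rules."""
    return {p: expert(p) for p in seed_prompts}

def build_instance_pool(rules, corpus):
    """Keep corpus items that some rule matches (toy substring matching)."""
    pool = []
    for text in corpus:
        for prompt, pattern in rules.items():
            if pattern in text:  # real systems would apply learned logical rules
                pool.append((text, prompt))
                break
    return pool

def iterate(expert, seed_prompts, corpus, rounds=1):
    """Alternate between pooling instances and re-prompting the expert."""
    rules = propose_rules(expert, seed_prompts)
    for _ in range(rounds):
        pool = build_instance_pool(rules, corpus)
        # Feed the pool back: naively derive new prompts from pooled instances.
        new_prompts = [text for text, _ in pool]
        rules.update(propose_rules(expert, new_prompts))
    return rules

# Toy "expert": maps a prompt to its first whitespace-separated token.
toy_expert = lambda prompt: prompt.split()[0]
rules = iterate(toy_expert,
                ["brand of product", "color of item"],
                ["brand: Acme", "color: red"])
```

After one round the rule set contains the two seed rules plus one rule per pooled instance; in a real system the feedback step would instead fine-tune or re-prompt the expert on the pooled instances.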