no code implementations • 19 Feb 2024 • Zhihao Wen, Jie Zhang, Yuan Fang
Fine-tuning all parameters of large language models (LLMs) demands substantial computational power and long training times.
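This abstract motivates parameter-efficient alternatives to full fine-tuning. As a rough illustration (not the paper's method), the LoRA-style sketch below freezes a pre-trained linear layer and trains only a low-rank update; `rank` and `alpha` are assumed hyperparameters.

```python
# Minimal LoRA-style sketch (illustrative, not this paper's method):
# freeze the base weights and train only a small low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # frozen pre-trained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path + scaled low-rank trainable path
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Only `A` and `B` receive gradients, so the trainable parameter count drops from `in_features * out_features` to `rank * (in_features + out_features)`.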
no code implementations • 2 Feb 2024 • Xingtong Yu, Yuan Fang, Zemin Liu, Yuxia Wu, Zhihao Wen, Jianyuan Bo, Xinming Zhang, Steven C. H. Hoi
Finally, we outline promising future directions for few-shot learning on graphs to catalyze continued innovation in this field.
no code implementations • 19 Aug 2023 • Zhihao Wen, Yuan Fang, Yihan Liu, Yang Guo, Shuji Hao
We design a novel graph prompting function to reformulate the downstream task into the same template as the pretext task used in pre-training, thereby narrowing the gap between their objectives.
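To make the reformulation concrete, here is a minimal sketch assuming a similarity-based template: both the pretext task (e.g., link prediction) and downstream node classification score a pair of embeddings with the same function, while a learnable prompt vector adapts the readout per task. The names (`prompted_readout`, `task_score`) are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def prompted_readout(node_emb: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
    # Element-wise reweighting of the embedding by the task-specific prompt.
    return node_emb * prompt

def task_score(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    # Shared template: the same similarity scores both pretext pairs
    # (e.g., linked vs. unlinked nodes) and downstream (node, class) pairs.
    return F.cosine_similarity(z_a, z_b, dim=-1)

def classify(node_emb: torch.Tensor, class_protos: torch.Tensor,
             prompt: torch.Tensor) -> torch.Tensor:
    # Downstream: pick the class whose prompted prototype is most similar.
    z = prompted_readout(node_emb, prompt)                        # (d,)
    scores = task_score(z.unsqueeze(0),
                        prompted_readout(class_protos, prompt))   # (C,)
    return scores.argmax()
```

Because both stages share one scoring template, only the small prompt vector needs tuning downstream.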
1 code implementation • 15 Jul 2023 • Zhihao Wen, Yuan Fang
During pre-training, we propose three graph-interaction-based contrastive strategies to jointly pre-train a graph-text model; during downstream classification, we explore handcrafted discrete prompts and continuous prompt tuning for the jointly pre-trained model to achieve zero- and few-shot classification, respectively.
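A minimal sketch of the two stages, with assumed details: one contrastive objective could take the form of a symmetric InfoNCE loss aligning node and text embeddings (the paper proposes three interaction-based strategies; only one generic form is shown), and a continuous prompt is simply a small set of learnable vectors optimized while the pre-trained model stays frozen.

```python
import torch
import torch.nn.functional as F

def infonce(graph_emb: torch.Tensor, text_emb: torch.Tensor, tau: float = 0.07):
    """Symmetric InfoNCE over a batch of aligned (node, text) pairs."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / tau                 # (B, B) similarity matrix
    labels = torch.arange(g.size(0))       # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2

# Continuous prompt tuning: only these vectors are trained; they would be
# prepended to (frozen) class-name token embeddings before text encoding.
prompt_tokens = torch.nn.Parameter(torch.randn(4, 128) * 0.02)  # 4 tokens, dim 128
```

In the zero-shot case the discrete prompt is a handcrafted text template; in the few-shot case the continuous prompt vectors are fit on the handful of labeled examples.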
no code implementations • 5 May 2023 • Zhihao Wen, Yuan Fang
Text classification is a fundamental problem in information retrieval with many real-world applications, such as predicting the topics of online articles and the categories of e-commerce product descriptions.
no code implementations • 14 May 2021 • Zhihao Wen, Yuan Fang, Zemin Liu
That is, MI-GNN does not directly learn an inductive model; rather, it learns the general knowledge of how to train a model for semi-supervised node classification on new graphs.
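In spirit this is meta-learning; below is a MAML-style sketch of the learn-to-train pattern the abstract describes, assuming a hypothetical functional interface `model.loss(batch, params=...)` and per-graph support/query splits. It is not MI-GNN's exact procedure.

```python
import torch

def meta_step(model, graphs, meta_opt, inner_lr: float = 0.01):
    """One meta-update: adapt to each graph's support set, evaluate on its query set."""
    meta_opt.zero_grad()
    for g in graphs:                            # each g is one training graph
        # Inner loop: one gradient step on the graph's labeled support nodes.
        fast = dict(model.named_parameters())
        support_loss = model.loss(g.support, params=fast)   # hypothetical interface
        grads = torch.autograd.grad(support_loss, list(fast.values()),
                                    create_graph=True)
        fast = {k: v - inner_lr * gr
                for (k, v), gr in zip(fast.items(), grads)}
        # Outer loop: the adapted parameters are judged on the query nodes;
        # gradients flow back through the inner step to the shared initialization.
        model.loss(g.query, params=fast).backward()
    meta_opt.step()
```

The meta-parameters are thus optimized not to classify any single graph well, but so that a few adaptation steps succeed on a previously unseen graph.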