1 code implementation • 21 Feb 2024 • Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng Guo, Amrita Bhattacharjee, Bohan Jiang, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu
Furthermore, the paper includes an in-depth taxonomy of methodologies employing LLMs for data annotation, a comprehensive review of learning strategies for models incorporating LLM-generated annotations, and a detailed discussion on primary challenges and limitations associated with using LLMs for data annotation.
1 code implementation • 23 Dec 2021 • Zhen Tan, Kaize Ding, Ruocheng Guo, Huan Liu
The ability to incrementally learn new classes is vital to all real-world artificial intelligence systems.
1 code implementation • 11 Dec 2022 • Zhen Tan, Song Wang, Kaize Ding, Jundong Li, Huan Liu
More recently, inspired by the development of graph self-supervised learning, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning, but this direction remains underexplored.
1 code implementation • 22 Feb 2024 • Lichi Li, Zainul Abi Din, Zhen Tan, Sam London, Tianlong Chen, Ajay Daptardar
In the evolving field of e-commerce, recommendation systems play a crucial role in shaping user experience and engagement.
1 code implementation • 27 Jun 2023 • Song Wang, Zhen Tan, Huan Liu, Jundong Li
First, we propose to enhance the intra-class generalizability by involving a contrastive two-step optimization in each episode to explicitly align node embeddings in the same classes.
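The idea of explicitly aligning same-class node embeddings can be illustrated with a supervised contrastive loss. The sketch below is a hypothetical illustration using numpy; the function name, temperature value, and loss form are assumptions, not the paper's exact formulation.

```python
import numpy as np

def intra_class_contrastive_loss(embeddings, labels, temperature=0.5):
    """Toy supervised contrastive loss: for each node, treat other nodes
    of the same class as positives and pull their embeddings together.
    (Hypothetical sketch, not the paper's implementation.)"""
    # L2-normalize so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss = 0.0
    for i in range(n):
        # positives: other nodes sharing node i's class
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        others = [j for j in range(n) if j != i]
        # log-sum-exp over all other nodes forms the softmax denominator
        log_denom = np.log(np.exp(sim[i, others]).sum())
        for j in pos:
            loss -= (sim[i, j] - log_denom) / len(pos)
    return loss / n
```

Minimizing this loss within each episode pushes embeddings of the same class together relative to all other nodes, which is the intra-class alignment effect described above.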
1 code implementation • 20 Feb 2024 • Zhen Tan, Chengshuai Zhao, Raha Moraffah, YiFan Li, Yu Kong, Tianlong Chen, Huan Liu
Unlike direct harmful output generation for MLLMs, our research demonstrates how a single MLLM agent can be subtly influenced to generate prompts that, in turn, induce other MLLM agents in the society to output malicious content.
1 code implementation • 25 May 2020 • Weixin Zeng, Xiang Zhao, Wei Wang, Jiuyang Tang, Zhen Tan
Entity alignment (EA) aims to discover equivalent entities in knowledge graphs (KGs), bridging heterogeneous sources of information and facilitating the integration of knowledge.
2 code implementations • 22 Dec 2023 • Zhen Tan, Tianlong Chen, Zhenyu Zhang, Huan Liu
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
1 code implementation • 8 Nov 2023 • Zhen Tan, Lu Cheng, Song Wang, Yuan Bo, Jundong Li, Huan Liu
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
no code implementations • AAAI 2019 • Zhen Tan, Xiang Zhao, Wei Wang, Weidong Xiao
Triplet extraction is an essential step in automatic knowledge base construction, as it captures structural information from unstructured text corpora.
no code implementations • LREC 2020 • Weixin Zeng, Xiang Zhao, Jiuyang Tang, Zhen Tan, Xuqian Huang
Moreover, we devise a measure to evaluate the difficulty of documents with respect to entity linking, which is then used to characterize the corpus.
no code implementations • COLING 2020 • Peixin Huang, Xiang Zhao, Ryuichi Takanobu, Zhen Tan, Weidong Xiao
Most existing work on event extraction (EE) either follows a pipelined manner or uses a joint structure that is pipelined in essence.
no code implementations • CoNLL (EMNLP) 2021 • Junxing Wang, Xinyi Li, Zhen Tan, Xiang Zhao, Weidong Xiao
A bidirectional attention mechanism is applied between the question sequence and the paths that connect entities, which provides us with transparent interpretability.
no code implementations • 29 Mar 2022 • Zhen Tan, Kaize Ding, Ruocheng Guo, Huan Liu
Graphs are present in many real-world applications, such as financial fraud detection, commercial recommendation, and social network analysis.
no code implementations • 9 Jun 2023 • Zhen Tan, Ruocheng Guo, Kaize Ding, Huan Liu
Our approach utilizes a pretrained graph transformer as the encoder and injects virtual nodes as soft prompts in the embedding space, which can be optimized with few-shot labels in novel classes to modulate node embeddings for each specific FSNC task.
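The prompting scheme described here, keeping the pretrained encoder frozen and tuning only a small prompt plus a classification head on few-shot labels, can be sketched in miniature. The following numpy code is an assumed simplification: it represents the frozen encoder's output as a fixed embedding matrix and models the virtual-node prompt as a single additive vector, which is not necessarily how the paper injects it.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def tune_prompt(frozen_emb, labels, n_classes, lr=0.5, steps=200):
    """Tune only a soft-prompt vector and a linear head on few-shot
    labels; the encoder embeddings stay frozen. (Hypothetical sketch.)"""
    n, d = frozen_emb.shape
    prompt = np.zeros(d)            # learnable virtual-node prompt
    W = np.zeros((d, n_classes))    # linear classification head
    Y = np.eye(n_classes)[labels]   # one-hot few-shot labels
    for _ in range(steps):
        z = frozen_emb + prompt     # prompt modulates node embeddings
        g = (softmax(z @ W) - Y) / n  # cross-entropy gradient
        W -= lr * (z.T @ g)
        prompt -= lr * (g @ W.T).sum(axis=0)
    return prompt, W

def predict(frozen_emb, prompt, W):
    return np.argmax((frozen_emb + prompt) @ W, axis=1)
```

Because only `prompt` and `W` receive gradient updates, the number of tuned parameters is tiny compared to the encoder, which is what makes this style of prompting practical in the few-shot regime.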
no code implementations • 14 Jun 2023 • Hirthik Mathavan, Zhen Tan, Nivedh Mudiam, Huan Liu
Meta-learning has emerged as a powerful training strategy for few-shot node classification, demonstrating its effectiveness in the transductive setting.
no code implementations • 25 Sep 2023 • Bohan Jiang, Zhen Tan, Ayushi Nirmal, Huan Liu
A holistic exploration of the formation and detection of disinformation is conducted to foster this line of research.
no code implementations • 2 Nov 2023 • Song Wang, Zhen Tan, Ruocheng Guo, Jundong Li
Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advancements in the field of natural language processing.
no code implementations • 20 Nov 2023 • YiFan Li, Zhen Tan, Kai Shu, Zongsheng Cao, Yu Kong, Huan Liu
Graph Neural Networks (GNNs) have emerged as a powerful tool for representation learning on graphs, but they often suffer from overfitting and label noise issues, especially when the data is scarce or imbalanced.
no code implementations • 28 Jan 2024 • Dawei Li, Zhen Tan, Tianlong Chen, Huan Liu
While textual information significantly enhances the performance of pre-trained language models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing corpora collected from Wikipedia articles or synsets definitions often limits the potential of PLM-based KGC models.
no code implementations • 2 Mar 2024 • Song Wang, Zhen Tan, Xinyu Zhao, Tianlong Chen, Huan Liu, Jundong Li
In contrast, in this work, we propose a novel self-conditioned graph generation framework designed to explicitly model graph distributions and employ these distributions to guide the generation process.
no code implementations • 6 Mar 2024 • Bohan Jiang, Lu Cheng, Zhen Tan, Ruocheng Guo, Huan Liu
News media has been utilized as a political tool to stray from facts, presenting biased claims without evidence.
no code implementations • 8 Mar 2024 • Zhen Tan, Jie Peng, Tianlong Chen, Huan Liu
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks through few-shot or zero-shot prompting, bypassing the need for parameter tuning.
no code implementations • 11 Mar 2024 • Chi-Yang Hsu, Kyle Cox, Jiawei Xu, Zhen Tan, Tianhua Zhai, Mengzhou Hu, Dexter Pratt, Tianlong Chen, Ziniu Hu, Ying Ding
We present the Thought Graph as a novel framework to support complex reasoning and use gene set analysis as an example to uncover semantic relationships between biological processes.