no code implementations • 12 Jul 2024 • Ke Ji, Peng Wang, Wenjun Ke, Guozheng Li, Jiajun Liu, Jingsheng Gao, Ziyu Shang
The main challenge is how to transfer the unstructured semantic space in PLMs to the downstream domain hierarchy.
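The entry gives no implementation details, but one minimal way to picture the idea is to score a document against verbalized label *paths*, so the flat PLM embedding space is compared against the downstream hierarchy. Everything below (model choice, label set, path verbalization) is an illustrative assumption, not the paper's method.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # [CLS] embeddings, L2-normalized so dot products are cosine similarities.
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state[:, 0]
    return torch.nn.functional.normalize(out, dim=-1)

# Hypothetical hierarchy: each leaf label is verbalized as its root-to-leaf
# path, injecting hierarchy into an otherwise unstructured embedding space.
label_paths = {
    "cs.AI": "computer science / artificial intelligence",
    "cs.CL": "computer science / computation and language",
    "math.ST": "mathematics / statistics theory",
}

doc_vec = embed(["Large language models for relation extraction."])
label_vecs = embed(list(label_paths.values()))
scores = (doc_vec @ label_vecs.T).squeeze(0)
print(dict(zip(label_paths, scores.tolist())))
```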
1 code implementation • 8 Jul 2024 • Jiajun Liu, Wenjun Ke, Peng Wang, Jiahao Wang, Jinhua Gao, Ziyu Shang, Guozheng Li, Zijie Xu, Ke Ji, Yining Li
To address this issue, we propose a fast CKGE framework that incorporates an incremental low-rank adapter mechanism to efficiently acquire new knowledge while preserving old knowledge.
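As a rough sketch of the general mechanism named here (incremental low-rank adapters over frozen base embeddings), not the paper's released architecture, a PyTorch module like the following captures the idea: the base embedding is frozen to preserve old knowledge, and each new KG snapshot trains only a small low-rank update.

```python
import torch
import torch.nn as nn

class IncrementalLowRankEmbedding(nn.Module):
    """Frozen base entity embeddings plus per-snapshot low-rank adapters."""

    def __init__(self, num_entities, dim, rank=8):
        super().__init__()
        self.base = nn.Embedding(num_entities, dim)
        self.base.weight.requires_grad_(False)  # old knowledge stays fixed
        self.rank = rank
        self.adapters = nn.ModuleList()

    def add_snapshot(self, num_entities):
        # New trainable factors: A (num_entities x rank), B (rank x dim).
        # Only these parameters are updated on the new snapshot's triples.
        self.adapters.append(nn.Sequential(
            nn.Embedding(num_entities, self.rank),
            nn.Linear(self.rank, self.base.embedding_dim, bias=False),
        ))

    def forward(self, entity_ids):
        h = self.base(entity_ids)
        for adapter in self.adapters:
            h = h + adapter(entity_ids)  # accumulate low-rank corrections
        return h

model = IncrementalLowRankEmbedding(num_entities=1000, dim=200)
model.add_snapshot(num_entities=1000)        # simplification: fixed entity set
print(model(torch.tensor([1, 2, 3])).shape)  # torch.Size([3, 200])
```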
1 code implementation • 7 May 2024 • Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li, Ke Ji, Yanhe Liu
On the one hand, existing methods usually learn new triples in a random order, destroying the inner structure of new KGs.
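One plausible way to respect that inner structure, sketched below under my own assumptions rather than the paper's exact strategy, is to learn new triples layer by layer, expanding outward from entities the old KG already knows instead of in random order.

```python
def layered_triples(new_triples, known_entities):
    """Group (head, relation, tail) triples into BFS-style layers that grow
    outward from entities already present in the old KG."""
    remaining = set(new_triples)
    frontier = set(known_entities)
    layers = []
    while remaining:
        layer = {t for t in remaining if t[0] in frontier or t[2] in frontier}
        if not layer:           # disconnected remainder: take it in one layer
            layer = set(remaining)
        layers.append(sorted(layer))
        for head, _, tail in layer:
            frontier.update((head, tail))
        remaining -= layer
    return layers

print(layered_triples(
    [("a", "r1", "x"), ("x", "r2", "y"), ("y", "r3", "z")],
    known_entities={"a"},
))  # [[('a', 'r1', 'x')], [('x', 'r2', 'y')], [('y', 'r3', 'z')]]
```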
no code implementations • 27 Apr 2024 • Guozheng Li, Peng Wang, Wenjun Ke, Yikai Guo, Ke Ji, Ziyu Shang, Jiajun Liu, Zijie Xu
On the one hand, retrieving good demonstrations is a non-trivial process in RE, which easily results in low relevance regarding entities and relations.
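A minimal sketch of the kind of remedy this points at (my assumption, not the paper's retriever): rank candidate demonstrations by blending text similarity with entity-type overlap, so relevance is judged on entities rather than wording alone.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def demo_score(text_sim, query_types, cand_types, alpha=0.5):
    # Blend surface similarity with entity-type overlap; alpha controls how
    # much entity relevance outweighs plain text similarity.
    return (1 - alpha) * text_sim + alpha * jaccard(query_types, cand_types)

# Toy candidates: (description, precomputed text similarity, entity types).
candidates = [
    ("ORG acquires ORG", 0.9, {"ORG"}),
    ("PER born in LOC", 0.4, {"PER", "LOC"}),
]
query_types = {"PER", "LOC"}
ranked = sorted(candidates,
                key=lambda c: demo_score(c[1], query_types, c[2]),
                reverse=True)
print([name for name, _, _ in ranked])  # ['PER born in LOC', 'ORG acquires ORG']
```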
no code implementations • 27 Apr 2024 • Guozheng Li, Peng Wang, Jiajun Liu, Yikai Guo, Ke Ji, Ziyu Shang, Zijie Xu
To this end, we introduce Micre (Meta In-Context learning of LLMs for Relation Extraction), a new meta-training framework for zero- and few-shot RE in which an LLM is tuned to perform ICL on a diverse collection of RE datasets (i.e., learning to learn in context for RE).
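The released code will have its own format; the sketch below only illustrates the meta-training recipe as described: pack k demonstrations plus a query from a sampled RE dataset into one sequence, then tune the LLM with its usual language-modeling loss on such sequences.

```python
import random

def build_meta_example(dataset, k=4):
    # Sample k demonstrations and one query from the same RE dataset and
    # serialize them into a single in-context sequence.
    sample = random.sample(dataset, k + 1)
    demos, query = sample[:-1], sample[-1]
    blocks = [f"Sentence: {d['text']}\nRelation: {d['relation']}"
              for d in demos + [query]]
    return "\n\n".join(blocks)

toy_dataset = [
    {"text": "Paris is the capital of France.", "relation": "capital_of"},
    {"text": "Bell invented the telephone.", "relation": "inventor_of"},
    {"text": "Amazon acquired Twitch.", "relation": "acquired_by"},
    {"text": "Curie was born in Warsaw.", "relation": "born_in"},
    {"text": "Jobs co-founded Apple.", "relation": "founder_of"},
]
print(build_meta_example(toy_dataset, k=2))
```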
no code implementations • 27 Apr 2024 • Guozheng Li, Zijie Xu, Ziyu Shang, Jiajun Liu, Ke Ji, Yikai Guo
However, existing DRE methods still suffer from two serious issues: (1) difficulty capturing long and sparse multi-turn information, and (2) difficulty extracting gold relations from partial dialogues. These issues motivate us to look for more effective methods that can alleviate them.
no code implementations • 21 Feb 2024 • Guozheng Li, Wenjun Ke, Peng Wang, Zijie Xu, Ke Ji, Jiajun Liu, Ziyu Shang, Qiqing Luo
The in-context learning (ICL) for relational triple extraction (RTE) has achieved promising performance, but still encounters two key challenges: (1) how to design effective prompts and (2) how to select proper demonstrations.
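To make the two challenges concrete, here is a hypothetical RTE prompt template with slot-in demonstrations; the wording is illustrative, not the paper's prompt.

```python
def rte_prompt(demos, query):
    # Fixed instruction + selected demonstrations + query sentence.
    lines = ["Extract all (head, relation, tail) triples from each sentence."]
    for text, triples in demos:
        lines.append(f"Sentence: {text}\nTriples: {triples}")
    lines.append(f"Sentence: {query}\nTriples:")
    return "\n\n".join(lines)

demos = [
    ("Sundar Pichai is the CEO of Google.", "[(Sundar Pichai, ceo_of, Google)]"),
    ("Oslo is the capital of Norway.", "[(Oslo, capital_of, Norway)]"),
]
print(rte_prompt(demos, "Tim Cook leads Apple."))
```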
no code implementations • 8 Oct 2023 • Guozheng Li, Peng Wang, Wenjun Ke
On the one hand, we analyze the drawbacks of existing RE prompts and attempt to incorporate recent prompt techniques such as chain-of-thought (CoT) to improve zero-shot RE.
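As a concrete (and assumed) example of what incorporating CoT into zero-shot RE can look like in practice:

```python
def zero_shot_cot_re(sentence, head, tail, relations):
    # Ask for step-by-step reasoning before committing to one relation label.
    return (
        f"Sentence: {sentence}\n"
        f"Question: what is the relation between '{head}' and '{tail}'?\n"
        f"Candidate relations: {', '.join(relations)}\n"
        "Let's think step by step about how the two entities are connected, "
        "then answer with exactly one candidate relation."
    )

print(zero_shot_cot_re(
    "Marie Curie was born in Warsaw.",
    "Marie Curie", "Warsaw",
    ["born_in", "capital_of", "works_for"],
))
```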
1 code implementation • 5 May 2022 • Guozheng Li, Xu Chen, Peng Wang, Jiafeng Xie, Qiqing Luo
Recent work on extracting relations from text has achieved excellent performance.
no code implementations • 19 Aug 2020 • Lu Duan, Haoyuan Hu, Zili Wu, Guozheng Li, Xinhang Zhang, Yu Gong, Yinghui Xu
In this paper, rather than designing heuristics, we propose an end-to-end learning and optimization framework named Balanced Task-orientated Graph Clustering Network (BTOGCN) to solve the BOBP by reducing it to a balanced graph clustering optimization problem.
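BTOGCN itself is a learned model; the toy baseline below only illustrates the reduction the sentence describes, i.e. what a balanced graph clustering objective looks like: assign each node to the cluster where its edge-weight gain is largest, subject to a per-cluster capacity.

```python
def balanced_greedy_clusters(similarity, k, capacity):
    # similarity: node -> list of (neighbor, edge_weight).
    # Assumes k * capacity >= number of nodes, so every node fits somewhere.
    clusters = [[] for _ in range(k)]
    for node in sorted(similarity):
        best, best_gain = None, float("-inf")
        for i, members in enumerate(clusters):
            if len(members) >= capacity:
                continue  # enforce the balance constraint
            gain = sum(w for nb, w in similarity[node] if nb in members)
            if gain > best_gain:
                best, best_gain = i, gain
        clusters[best].append(node)
    return clusters

sim = {
    "a": [("b", 3.0)], "b": [("a", 3.0)],
    "c": [("d", 2.0)], "d": [("c", 2.0)],
}
print(balanced_greedy_clusters(sim, k=2, capacity=2))  # [['a', 'b'], ['c', 'd']]
```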
no code implementations • COLING 2018 • Ziqing Liu, Enwei Peng, Shixing Yan, Guozheng Li, Tianyong Hao
T-Know is a knowledge service system based on the constructed knowledge graph of Traditional Chinese Medicine (TCM).
6 code implementations • 8 Jan 2018 • Han Zhu, Xiang Li, Pengye Zhang, Guozheng Li, Jie He, Han Li, Kun Gai
In systems with a large corpus, however, the computational cost for the learned model to predict all user-item preferences is tremendous, which makes full-corpus retrieval extremely difficult.
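A toy stand-in for why a tree index tames that cost (this illustrates beam search over a tree, not the paper's tree-based deep model): internal nodes carry scores that approximate the best item beneath them, so retrieval touches O(beam × depth × fanout) nodes instead of the whole corpus.

```python
import heapq

def tree_beam_search(root, score, beam=2):
    # Assumes a uniform-depth tree; leaves carry items, internal nodes don't.
    frontier = [root]
    while frontier[0]["children"]:
        candidates = [c for n in frontier for c in n["children"]]
        frontier = heapq.nlargest(beam, candidates, key=score)
    return [n["item"] for n in frontier]

leaf = lambda item, s: {"item": item, "score": s, "children": []}
node = lambda s, *ch: {"item": None, "score": s, "children": list(ch)}
tree = node(1.0,
            node(0.9, leaf("A", 0.8), leaf("B", 0.3)),
            node(0.4, leaf("C", 0.6), leaf("D", 0.1)))
print(tree_beam_search(tree, score=lambda n: n["score"]))  # ['A', 'C']
```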