Search Results for author: Guozheng Li

Found 12 papers, 4 papers with code

Fast and Continual Knowledge Graph Embedding via Incremental LoRA

1 code implementation · 8 Jul 2024 · Jiajun Liu, Wenjun Ke, Peng Wang, Jiahao Wang, Jinhua Gao, Ziyu Shang, Guozheng Li, Zijie Xu, Ke Ji, Yining Li

To address this issue, we propose a fast CKGE framework (\model), incorporating an incremental low-rank adapter (\mec) mechanism to efficiently acquire new knowledge while preserving old knowledge.

Knowledge Graph Embedding · Knowledge Graphs · +1
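The incremental low-rank adapter mechanism described above can be illustrated with a minimal sketch: a frozen base weight matrix plus one trainable rank-r pair per knowledge snapshot. This is a generic illustration of the idea, not the paper's actual model; the class and method names are hypothetical.

```python
import numpy as np

class IncrementalLoRA:
    """Hypothetical sketch of an incremental low-rank adapter: a frozen
    base weight W plus one rank-r pair (A, B) per knowledge snapshot,
    so y = x @ (W + sum_i A_i @ B_i). Only the newest pair would be
    trained; earlier ones stay fixed, preserving old knowledge while
    new facts are absorbed cheaply."""

    def __init__(self, in_dim, out_dim, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(in_dim, out_dim))  # frozen base weights
        self.rank = rank
        self.adapters = []  # one (A, B) pair per knowledge snapshot
        self._rng = rng

    def add_snapshot(self):
        """Attach a fresh adapter for a new KG snapshot."""
        in_dim, out_dim = self.W.shape
        A = self._rng.normal(size=(in_dim, self.rank))
        B = np.zeros((self.rank, out_dim))  # zero-init: a no-op until trained
        self.adapters.append((A, B))

    def forward(self, x):
        y = x @ self.W
        for A, B in self.adapters:  # accumulate all low-rank updates
            y = y + x @ A @ B
        return y
```

Because the up-projection `B` starts at zero, adding a new adapter leaves all existing predictions unchanged; training then only touches the newest pair.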

Towards Continual Knowledge Graph Embedding via Incremental Distillation

1 code implementation · 7 May 2024 · Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu Shang, Jinhua Gao, Guozheng Li, Ke Ji, Yanhe Liu

On the one hand, existing methods usually learn new triples in a random order, destroying the inner structure of new KGs.

Knowledge Graph Embedding

Recall, Retrieve and Reason: Towards Better In-Context Relation Extraction

no code implementations · 27 Apr 2024 · Guozheng Li, Peng Wang, Wenjun Ke, Yikai Guo, Ke Ji, Ziyu Shang, Jiajun Liu, Zijie Xu

On the one hand, retrieving good demonstrations is a non-trivial process in RE, which easily results in low relevance regarding entities and relations.

In-Context Learning · Language Modelling · +4

Meta In-Context Learning Makes Large Language Models Better Zero and Few-Shot Relation Extractors

no code implementations · 27 Apr 2024 · Guozheng Li, Peng Wang, Jiajun Liu, Yikai Guo, Ke Ji, Ziyu Shang, Zijie Xu

To this end, we introduce MICRE (Meta In-Context learning of LLMs for Relation Extraction), a new meta-training framework for zero- and few-shot RE where an LLM is tuned to do ICL on a diverse collection of RE datasets (i.e., learning to learn in context for RE).

Few-Shot Learning · In-Context Learning · +2
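The meta-training setup described above packs demonstrations and a query from an RE dataset into a single in-context prompt. A minimal sketch of how such an episode could be serialized is shown below; the field names and prompt template are illustrative assumptions, not the paper's actual format.

```python
import random

def build_icl_episode(dataset, k=3, seed=0):
    """Hypothetical sketch of meta in-context learning data for relation
    extraction: sample k demonstrations plus one query example and
    serialize them into a single prompt. Meta-training would tune the
    LLM on many such episodes drawn from diverse RE datasets, so it
    learns to learn in context. Field names are illustrative."""
    rng = random.Random(seed)
    picked = rng.sample(dataset, k + 1)
    query = picked.pop()  # last sampled example becomes the query
    blocks = []
    for ex in picked:  # k labeled demonstrations
        blocks.append(
            f"Sentence: {ex['sentence']}\n"
            f"Head: {ex['head']}  Tail: {ex['tail']}\n"
            f"Relation: {ex['relation']}"
        )
    # the query ends with an empty label slot for the model to fill
    blocks.append(
        f"Sentence: {query['sentence']}\n"
        f"Head: {query['head']}  Tail: {query['tail']}\n"
        f"Relation:"
    )
    prompt = "\n\n".join(blocks)
    return prompt, query["relation"]  # target for the meta-training loss
```

At meta-training time, the returned target would supply the supervision signal; at test time the same template is applied to an unseen dataset with no further tuning.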

Empirical Analysis of Dialogue Relation Extraction with Large Language Models

no code implementations · 27 Apr 2024 · Guozheng Li, Zijie Xu, Ziyu Shang, Jiajun Liu, Ke Ji, Yikai Guo

However, existing DRE methods still suffer from two serious issues: (1) they struggle to capture long and sparse multi-turn information, and (2) they struggle to extract gold relations from partial dialogues, which motivates us to explore more effective methods that alleviate these issues.

Relation · Relation Extraction

Unlocking Instructive In-Context Learning with Tabular Prompting for Relational Triple Extraction

no code implementations · 21 Feb 2024 · Guozheng Li, Wenjun Ke, Peng Wang, Zijie Xu, Ke Ji, Jiajun Liu, Ziyu Shang, Qiqing Luo

The in-context learning (ICL) for relational triple extraction (RTE) has achieved promising performance, but still encounters two key challenges: (1) how to design effective prompts and (2) how to select proper demonstrations.

Blocking · In-Context Learning · +1

Revisiting Large Language Models as Zero-shot Relation Extractors

no code implementations · 8 Oct 2023 · Guozheng Li, Peng Wang, Wenjun Ke

On the one hand, we analyze the drawbacks of existing RE prompts and attempt to incorporate recent prompt techniques such as chain-of-thought (CoT) to improve zero-shot RE.

Question Answering · Relation · +1

Balanced Order Batching with Task-Oriented Graph Clustering

no code implementations · 19 Aug 2020 · Lu Duan, Haoyuan Hu, Zili Wu, Guozheng Li, Xinhang Zhang, Yu Gong, Yinghui Xu

In this paper, rather than designing heuristics, we propose an end-to-end learning and optimization framework named Balanced Task-orientated Graph Clustering Network (BTOGCN) to solve the BOBP by reducing it to a balanced graph clustering optimization problem.

Clustering · Deep Clustering · +1

Learning Tree-based Deep Model for Recommender Systems

6 code implementations · 8 Jan 2018 · Han Zhu, Xiang Li, Pengye Zhang, Guozheng Li, Jie He, Han Li, Kun Gai

In systems with a large corpus, however, the computational cost for the learnt model to predict all user-item preferences is tremendous, which makes full-corpus retrieval extremely difficult.

Recommendation Systems · Retrieval
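The tree-based idea above sidesteps full-corpus scoring by arranging items at the leaves of a tree and retrieving top-down with beam search, so only a logarithmic number of nodes are ever scored. The sketch below illustrates this retrieval pattern under assumed data structures; `score` stands in for a learned user-node preference model and the node layout is hypothetical, not the paper's implementation.

```python
def beam_search_tree(root, score, beam_width=2):
    """Sketch of tree-based retrieval: instead of scoring every item in
    the corpus, walk the tree top-down, keeping only the beam_width
    best-scoring children at each level. With a balanced binary tree
    this needs O(beam_width * log N) scoring calls instead of O(N).
    Nodes are dicts: {"item": ..., "children": [...]}, children empty
    at leaves; internal nodes carry no item of their own here."""
    frontier = [root]
    while any(n["children"] for n in frontier):
        candidates = []
        for n in frontier:
            # expand internal nodes; carry leaves forward unchanged
            candidates.extend(n["children"] or [n])
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:beam_width]
    return [n["item"] for n in frontier]
```

For this pruning to be safe, an internal node's score must upper-bound its descendants' scores (a max-heap-like property); the learned model in such systems is trained so that this approximately holds.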
