Search Results for author: Ke Ji

Found 6 papers, 2 papers with code

Recall, Retrieve and Reason: Towards Better In-Context Relation Extraction

no code implementations • 27 Apr 2024 • Guozheng Li, Peng Wang, Wenjun Ke, Yikai Guo, Ke Ji, Ziyu Shang, Jiajun Liu, Zijie Xu

Retrieving good demonstrations is a non-trivial process in RE, and easily yields demonstrations with low relevance to the target entities and relations.
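The snippet above concerns demonstration retrieval for in-context RE. As a minimal, generic sketch of similarity-based demonstration retrieval (not the paper's Recall-Retrieve-Reason pipeline; the scoring function and the example pool below are illustrative assumptions), one might rank a labeled pool by lexical overlap with the query sentence:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_demonstrations(query: str, pool: list, k: int = 3) -> list:
    """Rank labeled RE examples by lexical similarity to the query sentence."""
    q = Counter(query.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: cosine(q, Counter(ex["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical demonstration pool: each entry holds a sentence and its gold relation triple.
pool = [
    {"text": "Marie Curie was born in Warsaw.", "triple": ("Marie Curie", "born_in", "Warsaw")},
    {"text": "Apple was founded by Steve Jobs.", "triple": ("Apple", "founded_by", "Steve Jobs")},
    {"text": "The Seine flows through Paris.", "triple": ("Seine", "flows_through", "Paris")},
]

demos = retrieve_demonstrations("Nikola Tesla was born in Smiljan.", pool, k=2)
prompt = "\n".join(f"Sentence: {d['text']}\nTriple: {d['triple']}" for d in demos)
print(prompt)
```

In practice a dense sentence encoder would replace the bag-of-words scorer, and relevance would be judged on entities and relation types rather than surface overlap, which is exactly the gap the snippet points out.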

Meta In-Context Learning Makes Large Language Models Better Zero and Few-Shot Relation Extractors

no code implementations • 27 Apr 2024 • Guozheng Li, Peng Wang, Jiajun Liu, Yikai Guo, Ke Ji, Ziyu Shang, Zijie Xu

To this end, we introduce MICRE (Meta In-Context learning of LLMs for Relation Extraction), a new meta-training framework for zero- and few-shot RE in which an LLM is tuned to do ICL on a diverse collection of RE datasets (i.e., learning to learn in context for RE).
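The snippet describes the core idea of meta in-context learning: fine-tune an LLM on many RE datasets, with each training example already packaged as an ICL prompt containing demonstrations. A minimal illustration of that data-construction step (the field names, prompt template, and toy datasets below are assumptions, not the MICRE implementation) might look like:

```python
import random

def format_icl_instance(examples: list, query: dict) -> tuple:
    """Pack k demonstrations plus a query into one (prompt, target) pair,
    so tuning on it teaches the model to 'learn in context' for RE."""
    demo_block = "\n\n".join(
        f"Sentence: {ex['text']}\nHead: {ex['head']}\nTail: {ex['tail']}\nRelation: {ex['relation']}"
        for ex in examples
    )
    prompt = (
        f"{demo_block}\n\n"
        f"Sentence: {query['text']}\nHead: {query['head']}\nTail: {query['tail']}\nRelation:"
    )
    return prompt, " " + query["relation"]

def build_meta_training_data(datasets: dict, k: int = 4, n_per_dataset: int = 2) -> list:
    """Sample ICL-formatted instances from a diverse collection of RE datasets."""
    instances = []
    for _name, data in datasets.items():
        for _ in range(n_per_dataset):
            sampled = random.sample(data, k + 1)
            instances.append(format_icl_instance(sampled[:k], sampled[k]))
    return instances

# Hypothetical toy datasets standing in for the "diverse collection of RE datasets".
toy = {
    "dataset_a": [{"text": f"s{i}", "head": "X", "tail": "Y", "relation": "rel_a"} for i in range(10)],
    "dataset_b": [{"text": f"t{i}", "head": "P", "tail": "Q", "relation": "rel_b"} for i in range(10)],
}
for p, target in build_meta_training_data(toy, k=2, n_per_dataset=1):
    print(p, "->", target)
```

The intent of such meta-training is that, at inference time, new relation types can be conveyed purely through in-context demonstrations rather than further parameter updates.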

Empirical Analysis of Dialogue Relation Extraction with Large Language Models

no code implementations • 27 Apr 2024 • Guozheng Li, Zijie Xu, Ziyu Shang, Jiajun Liu, Ke Ji, Yikai Guo

However, existing DRE methods still suffer from two serious issues: (1) they struggle to capture long, sparse multi-turn information, and (2) they struggle to extract the gold relations from partial dialogues, which motivates us to explore more effective methods that alleviate these issues.

Unlocking Instructive In-Context Learning with Tabular Prompting for Relational Triple Extraction

no code implementations • 21 Feb 2024 • Guozheng Li, Wenjun Ke, Peng Wang, Zijie Xu, Ke Ji, Jiajun Liu, Ziyu Shang, Qiqing Luo

In-context learning (ICL) for relational triple extraction (RTE) has achieved promising performance, but still faces two key challenges: (1) how to design effective prompts and (2) how to select proper demonstrations.

Blocking, In-Context Learning, +1
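For the entry above, the tabular-prompting idea can be illustrated schematically: serialize the demonstrations as a table and ask the model to complete the row for the query sentence. The column names and template below are assumptions for illustration, not the paper's exact format:

```python
def tabular_rte_prompt(demos: list, query_sentence: str) -> str:
    """Render RTE demonstrations as a markdown-style table and leave the
    query row open for the model to fill in the head/relation/tail cells."""
    header = "| sentence | head | relation | tail |"
    sep = "|---|---|---|---|"
    rows = [
        f"| {d['sentence']} | {d['head']} | {d['relation']} | {d['tail']} |"
        for d in demos
    ]
    query_row = f"| {query_sentence} | "
    return "\n".join([header, sep, *rows, query_row])

# Hypothetical demonstrations with gold triples.
demos = [
    {"sentence": "Alan Turing worked at Bletchley Park.", "head": "Alan Turing",
     "relation": "work_location", "tail": "Bletchley Park"},
    {"sentence": "Toyota is headquartered in Aichi.", "head": "Toyota",
     "relation": "headquarters", "tail": "Aichi"},
]
print(tabular_rte_prompt(demos, "Grace Hopper joined the US Navy."))
```

A tabular layout makes the expected output structure explicit, which is one way to approach the prompt-design challenge; demonstration selection remains a separate retrieval problem.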

LAMM: Label Alignment for Multi-Modal Prompt Learning

1 code implementation • 13 Dec 2023 • Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu

We conduct experiments on 11 downstream vision datasets and demonstrate that our method significantly improves the performance of existing multi-modal prompt learning models in few-shot scenarios, exhibiting an average accuracy improvement of 2.31% compared to the state-of-the-art methods on 16 shots.

Continual Learning

Hierarchical Verbalizer for Few-Shot Hierarchical Text Classification

1 code implementation • 26 May 2023 • Ke Ji, Yixin Lian, Jingsheng Gao, Baoyuan Wang

Due to the complex label hierarchy and the intensive labeling cost in practice, hierarchical text classification (HTC) suffers from poor performance, especially in low-resource or few-shot settings.

Contrastive Learning, Few-shot HTC, +2
