Search Results for author: Fengyu Cai

Found 6 papers, 4 papers with code

$\textit{GeoHard}$: Towards Measuring Class-wise Hardness through Modelling Class Semantics

no code implementations17 Jul 2024 Fengyu Cai, Xinran Zhao, Hongming Zhang, Iryna Gurevych, Heinz Koeppl

Recent advances in measuring the hardness of data guide language models in sample selection within low-resource scenarios.

Natural Language Understanding

$\texttt{MixGR}$: Enhancing Retriever Generalization for Scientific Domain through Complementary Granularity

1 code implementation15 Jul 2024 Fengyu Cai, Xinran Zhao, Tong Chen, Sihao Chen, Hongming Zhang, Iryna Gurevych, Heinz Koeppl

Recent studies show the growing significance of document retrieval in LLM generation, i.e., retrieval-augmented generation (RAG), within the scientific domain, where it bridges the models' knowledge gap.

Question Answering RAG +1

Finetuning Large Language Model for Personalized Ranking

1 code implementation25 May 2024 Zhuoxi Bai, Ning Wu, Fengyu Cai, Xinyi Zhu, Yun Xiong

Large Language Models (LLMs) have demonstrated remarkable performance across various domains, motivating researchers to investigate their potential use in recommendation systems.

Explainable Recommendation Language Modeling +3

A Survey of Confidence Estimation and Calibration in Large Language Models

no code implementations14 Nov 2023 Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, Iryna Gurevych

Assessing the confidence of LLMs and calibrating it across different tasks can help mitigate risks and enable LLMs to produce better generations.

Language Modelling

Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems

1 code implementation EMNLP 2021 Fei Mi, Wanhao Zhou, Fengyu Cai, Lingjing Kong, Minlie Huang, Boi Faltings

In this paper, we devise a self-training approach to utilize the abundant unlabeled dialog data to further improve state-of-the-art pre-trained models in few-shot learning scenarios for ToD systems.

dialog state tracking Few-Shot Learning +5

SLIM: Explicit Slot-Intent Mapping with BERT for Joint Multi-Intent Detection and Slot Filling

1 code implementation26 Aug 2021 Fengyu Cai, Wanhao Zhou, Fei Mi, Boi Faltings

Utterance-level intent detection and token-level slot filling are two key tasks for natural language understanding (NLU) in task-oriented systems.

Intent Detection Natural Language Understanding +3
