Search Results for author: Xiaohan Xu

Found 7 papers, 6 papers with code

A Survey on Knowledge Distillation of Large Language Models

1 code implementation • 20 Feb 2024 Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, Tianyi Zhou

In the era of Large Language Models (LLMs), Knowledge Distillation (KD) emerges as a pivotal methodology for transferring advanced capabilities from leading proprietary LLMs, such as GPT-4, to their open-source counterparts like LLaMA and Mistral.

Data Augmentation Knowledge Distillation +1

Leveraging Large Language Models for NLG Evaluation: A Survey

1 code implementation • 13 Jan 2024 Zhen Li, Xiaohan Xu, Tao Shen, Can Xu, Jia-Chen Gu, Chongyang Tao

In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, the introduction of Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.

NLG Evaluation Specificity +1

Re-Reading Improves Reasoning in Large Language Models

1 code implementation • 12 Sep 2023 Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-Guang Lou

To enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs), we introduce a simple, yet general and effective prompting method, Re2, i.e., Re-Reading the question as input.
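The core idea of Re2 can be sketched as a prompt template that simply includes the question twice before eliciting an answer. The exact template wording below is an assumption for illustration, not necessarily the one used in the paper:

```python
def re2_prompt(question: str) -> str:
    """Build a Re2-style prompt: the question is presented twice so the
    model processes it a second time before answering. The template text
    here is a hypothetical sketch of the re-reading idea."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

prompt = re2_prompt("A train travels 60 miles in 1.5 hours. What is its average speed?")
```

The resulting string can be sent to any off-the-shelf LLM; the method requires no fine-tuning or model access beyond prompting.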

Cross-modal Contrastive Learning for Multimodal Fake News Detection

1 code implementation • 25 Feb 2023 Longzheng Wang, Chuang Zhang, Hongbo Xu, Yongxiu Xu, Xiaohan Xu, Siqi Wang

An attention mechanism with an attention guidance module is implemented to help effectively and interpretably aggregate the aligned unimodal representations and the cross-modality correlations.
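The cross-modal contrastive objective behind such methods is typically an InfoNCE-style loss that pulls matched image–text pairs together and pushes mismatched ones apart. A minimal dependency-free sketch, assuming precomputed similarity scores (the paper's actual loss and architecture may differ):

```python
import math

def infonce_loss(sims, pos_idx, temperature=0.07):
    """InfoNCE over one anchor: `sims` holds similarity scores between one
    image embedding and all candidate text embeddings; `pos_idx` marks the
    matching (positive) text. Lower loss means the positive pair is scored
    higher relative to the negatives."""
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_idx] / sum(exps))

# A well-aligned positive pair yields a smaller loss than a misaligned one.
aligned = infonce_loss([0.9, 0.1, 0.2], pos_idx=0)
misaligned = infonce_loss([0.1, 0.9, 0.2], pos_idx=0)
```

In practice the loss is averaged over both directions (image-to-text and text-to-image) across a batch.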

Contrastive Learning Fake News Detection +1

PoKE: Prior Knowledge Enhanced Emotional Support Conversation with Latent Variable

no code implementations • 23 Oct 2022 Xiaohan Xu, Xuying Meng, Yequan Wang

Further experiments prove that abundant prior knowledge is conducive to high-quality emotional support, and a well-learned latent variable is critical to the diversity of generations.

Subgraph Neighboring Relations Infomax for Inductive Link Prediction on Knowledge Graphs

1 code implementation • 28 Jul 2022 Xiaohan Xu, Peng Zhang, Yongquan He, Chengpeng Chao, Chaoyang Yan

Inductive link prediction for knowledge graphs aims at predicting missing links between unseen entities, i.e., entities not present during the training stage.

Inductive Link Prediction Knowledge Graphs
