Search Results for author: Linmei Hu

Found 18 papers, 5 papers with code

Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information

no code implementations • 3 Sep 2024 • Xinyu Zhang, Linmei Hu, Luhao Zhang, Dandan Song, Heyan Huang, Liqiang Nie

In Laser, the prefix incorporates user-item collaborative information and adapts the LLM to the recommendation task, while the suffix converts the LLM's output embeddings from the language space to the recommendation space for follow-up item recommendation.

Large Language Model · Sequential Recommendation
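The prefix/suffix bi-tuning idea above can be illustrated with a minimal sketch: a frozen LLM is framed by two small trainable parts, a prefix that injects collaborative information and a suffix projection that maps the LLM's output embedding into the item-recommendation space. All names, shapes, and the stand-in LLM are illustrative assumptions, not the paper's implementation.

```python
def frozen_llm(token_embs):
    """Stand-in for a frozen LLM: mean-pools input embeddings.
    (Hypothetical placeholder for the real, frozen backbone.)"""
    dim = len(token_embs[0])
    return [sum(t[d] for t in token_embs) / len(token_embs) for d in range(dim)]

def matvec(matrix, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def recommend_scores(history_embs, prefix_embs, suffix_proj, item_embs):
    # 1) Prepend trainable prefix embeddings carrying collaborative signal.
    hidden = frozen_llm(prefix_embs + history_embs)
    # 2) Suffix projection: language space -> recommendation space.
    user_vec = matvec(suffix_proj, hidden)
    # 3) Score candidate items by inner product in the recommendation space.
    return [sum(u * i for u, i in zip(user_vec, item)) for item in item_embs]
```

Only the prefix embeddings and the suffix projection would be updated during training; the backbone stays frozen, which is what makes the approach parameter-efficient.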

SeaKR: Self-aware Knowledge Retrieval for Adaptive Retrieval Augmented Generation

1 code implementation • 27 Jun 2024 • Zijun Yao, Weijian Qi, Liangming Pan, Shulin Cao, Linmei Hu, Weichuan Liu, Lei Hou, Juanzi Li

This paper introduces Self-aware Knowledge Retrieval (SeaKR), a novel adaptive RAG model that extracts self-aware uncertainty of LLMs from their internal states.

Question Answering · RAG +1
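The adaptive-retrieval loop described above can be sketched as follows: sample several generations, estimate the model's self-aware uncertainty from its internal states, and only retrieve when that uncertainty is high. The variance-based uncertainty proxy and all function names here are simplifying assumptions, not SeaKR's actual measure.

```python
import statistics

def self_aware_uncertainty(hidden_states):
    """Dispersion of final-layer hidden states across sampled generations.
    A simple variance proxy -- the paper derives its measure from the LLM's
    internal states, not necessarily this statistic."""
    dims = zip(*hidden_states)
    return sum(statistics.pvariance(d) for d in dims) / len(hidden_states[0])

def adaptive_generate(question, llm_sample, retrieve, threshold=0.1, n_samples=4):
    # Sample several answers and their hidden states without retrieval.
    answers, states = zip(*(llm_sample(question, None) for _ in range(n_samples)))
    if self_aware_uncertainty(list(states)) <= threshold:
        return answers[0]               # model is self-confident: answer directly
    docs = retrieve(question)           # uncertain: fall back to retrieval
    answer, _ = llm_sample(question, docs)
    return answer
```

`llm_sample` and `retrieve` are assumed callables standing in for the LLM and the retriever; the point is the gating logic, not the components.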

KB-Plugin: A Plug-and-play Framework for Large Language Models to Induce Programs over Low-resourced Knowledge Bases

1 code implementation • 2 Feb 2024 • Jiajie Zhang, Shulin Cao, Linmei Hu, Ling Feng, Lei Hou, Juanzi Li

Secondly, KB-Plugin utilizes abundant annotated data from a rich-resourced KB to train another pluggable module, namely PI plugin, which can help the LLM extract question-relevant schema information from the schema plugin of any KB and utilize this information to induce programs over this KB.

Program Induction · Self-Supervised Learning

Valley: Video Assistant with Large Language model Enhanced abilitY

1 code implementation • 12 Jun 2023 • Ruipu Luo, Ziwang Zhao, Min Yang, Junwei Dong, Da Li, Pengcheng Lu, Tao Wang, Linmei Hu, Minghui Qiu, Zhongyu Wei

Large language models (LLMs), with their remarkable conversational capabilities, have demonstrated impressive performance across various applications and have emerged as formidable AI assistants.

Action Recognition · Instruction Following +4

Enhancing Human Capabilities through Symbiotic Artificial Intelligence with Shared Sensory Experiences

no code implementations • 26 May 2023 • Rui Hao, Dianbo Liu, Linmei Hu

In this paper, we introduce a novel concept in Human-AI interaction called Symbiotic Artificial Intelligence with Shared Sensory Experiences (SAISSE), which aims to establish a mutually beneficial relationship between AI systems and human users through shared sensory experiences.

ChatLLM Network: More brains, More intelligence

no code implementations • 24 Apr 2023 • Rui Hao, Linmei Hu, Weijian Qi, Qingliu Wu, Yirui Zhang, Liqiang Nie

Dialogue-based language models mark a huge milestone in the field of artificial intelligence with their impressive ability to interact with users and to carry out a series of challenging tasks prompted by customized instructions.

Decision Making

Multimodal Matching-aware Co-attention Networks with Mutual Knowledge Distillation for Fake News Detection

no code implementations • 12 Dec 2022 • Linmei Hu, Ziwang Zhao, Weijian Qi, Xuemeng Song, Liqiang Nie

Additionally, based on the designed image-text matching-aware co-attention mechanism, we propose to build two co-attention networks respectively centered on text and image for mutual knowledge distillation to improve fake news detection.

Fake News Detection · Image-text Matching +2
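The mutual knowledge distillation between the text-centered and image-centered co-attention networks can be sketched as a symmetric loss: each network is trained on the ground truth (its own cross-entropy term) while also matching the other network's predicted distribution. This is a generic mutual-distillation sketch under that reading, not the paper's exact objective; the weighting `alpha` is an assumed hyperparameter.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_distillation_loss(text_probs, image_probs, ce_text, ce_image, alpha=0.5):
    """Supervised terms for both networks plus symmetric KL terms that let
    the text-centered and image-centered networks teach each other."""
    supervised = ce_text + ce_image
    mutual = kl(text_probs, image_probs) + kl(image_probs, text_probs)
    return supervised + alpha * mutual
```

When the two networks agree exactly, the mutual term vanishes and only the supervised losses remain, which is the intended fixed point of mutual distillation.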

A Survey of Knowledge Enhanced Pre-trained Language Models

no code implementations • 11 Nov 2022 • Linmei Hu, Zeyi Liu, Ziwang Zhao, Lei Hou, Liqiang Nie, Juanzi Li

We introduce appropriate taxonomies respectively for Natural Language Understanding (NLU) and Natural Language Generation (NLG) to highlight these two main tasks of NLP.

Natural Language Understanding · Retrieval +3

Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model

no code implementations • 16 Jul 2022 • Xiaolin Chen, Xuemeng Song, Liqiang Jing, Shuo Li, Linmei Hu, Liqiang Nie

To address these limitations, we propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD), consisting of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.

Decoder · Language Modelling +1

Graph Neural News Recommendation with Unsupervised Preference Disentanglement

1 code implementation • ACL 2020 • Linmei Hu, Siyong Xu, Chen Li, Cheng Yang, Chuan Shi, Nan Duan, Xing Xie, Ming Zhou

Furthermore, the learned representations are disentangled with latent preference factors by a neighborhood routing algorithm, which can enhance expressiveness and interpretability.

Disentanglement · News Recommendation
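A neighborhood routing algorithm of the kind mentioned above can be sketched as follows: the node keeps K factor-specific channels, each neighbor is softly assigned to the factor that best explains the edge, and channels are updated from their assigned neighbors over a few iterations. The shapes, iteration count, and the omitted normalization are simplifying assumptions, not the paper's exact procedure.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def neighborhood_routing(node_channels, neighbor_channels, iterations=3):
    """node_channels: K factor-specific vectors for the target node.
    neighbor_channels: for each neighbor, its K factor-specific vectors.
    Iteratively infers which latent preference factor explains each
    neighbor, then aggregates the neighbor into the matching channel."""
    z = [list(c) for c in node_channels]
    for _ in range(iterations):
        new_z = [list(c) for c in node_channels]
        for nb in neighbor_channels:
            # Soft assignment of this neighbor over the K latent factors.
            weights = softmax([dot(z[k], nb[k]) for k in range(len(z))])
            for k, w in enumerate(weights):
                new_z[k] = [a + w * b for a, b in zip(new_z[k], nb[k])]
        # (A full implementation would L2-normalize each channel here.)
        z = new_z
    return z
```

Because assignment and aggregation reinforce each other across iterations, each channel specializes toward one latent preference factor, which is what yields the disentangled, interpretable representations described in the excerpt.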

Graph Neural News Recommendation with Long-term and Short-term Interest Modeling

no code implementations • 30 Oct 2019 • Linmei Hu, Chen Li, Chuan Shi, Cheng Yang, Chao Shao

Existing methods for news recommendation mainly include collaborative filtering methods, which rely on direct user-item interactions, and content-based methods, which characterize the content of a user's reading history.

Collaborative Filtering · News Recommendation +1

Relation Structure-Aware Heterogeneous Information Network Embedding

no code implementations • 15 May 2019 • Yuanfu Lu, Chuan Shi, Linmei Hu, Zhiyuan Liu

In this paper, we take the structural characteristics of heterogeneous relations into consideration and propose a novel Relation structure-aware Heterogeneous Information Network Embedding model (RHINE).

Clustering · Link Prediction +4
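Taking the structural characteristics of heterogeneous relations into consideration can be illustrated by scoring the two broad relation categories differently: tight affiliation-style relations with a direct Euclidean distance, and looser interaction-style relations with a translation-based (TransE-style) distance. This is a simplified sketch of that premise; the function names and squared-distance forms are assumptions, not RHINE's exact objectives.

```python
def affiliation_score(u, v):
    """Euclidean-style distance for affiliation relations
    (e.g. paper-conference): connected nodes are pulled directly close."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def interaction_score(u, r, v):
    """Translation-based distance for interaction relations
    (e.g. author-writes-paper): the relation acts as a translation vector,
    so u + r should land near v."""
    return sum((a + t - b) ** 2 for a, t, b in zip(u, r, v))
```

Training would minimize the appropriate score for each observed edge depending on its relation category, so each category gets a distance model matching its structure.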
