Search Results for author: Lu Fan

Found 14 papers, 6 papers with code

A Closer Look at Few-Shot Out-of-Distribution Intent Detection

1 code implementation • COLING 2022 • Li-Ming Zhan, Haowen Liang, Lu Fan, Xiao-Ming Wu, Albert Y.S. Lam

Comprehensive experiments on three real-world intent detection benchmark datasets demonstrate the effectiveness of our proposed approach and its potential to improve state-of-the-art methods for few-shot OOD intent detection.

Intent Detection • Task-Oriented Dialogue Systems

STORE: Streamlining Semantic Tokenization and Generative Recommendation with A Single LLM

no code implementations • 11 Sep 2024 • Qijiong Liu, Jieming Zhu, Lu Fan, Zhou Zhao, Xiao-Ming Wu

In this paper, we propose to streamline the semantic tokenization and generative recommendation process with a unified framework, dubbed STORE, which leverages a single large language model (LLM) for both tasks.

Language Modelling • Large Language Model • +1

Do self-supervised speech and language models extract similar representations as human brain?

no code implementations • 7 Oct 2023 • Peili Chen, Linyang He, Li Fu, Lu Fan, Edward F. Chang, Yuanning Li

Speech and language models trained through self-supervised learning (SSL) demonstrate strong alignment with brain activity during speech and language perception.

Self-Supervised Learning

Leveraging Label Information for Multimodal Emotion Recognition

no code implementations • 5 Sep 2023 • Peiying Wang, Sunlu Zeng, Junqing Chen, Lu Fan, Meng Chen, Youzheng Wu, Xiaodong He

Finally, we devise a novel label-guided attentive fusion module to fuse the label-aware text and speech representations for emotion classification (a rough illustrative sketch follows this entry).

Emotion Classification • Multimodal Emotion Recognition
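The abstract above only names the label-guided attentive fusion module, so the sketch below is purely illustrative and is not the paper's architecture: it treats learned emotion-label embeddings as attention queries over the concatenated text and speech features. The dimensions, the four-head attention, and the mean-pooled readout are assumptions made for this example.

```python
# Illustrative sketch of a label-guided attentive fusion module.
# NOT the paper's architecture; dimensions, head count, and the mean-pooled
# readout are assumptions made for this example.
import torch
import torch.nn as nn

class LabelGuidedFusion(nn.Module):
    def __init__(self, dim=256, num_labels=7):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, dim)   # one embedding per emotion label
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, text_feats, speech_feats):
        # text_feats: (B, Lt, dim), speech_feats: (B, Ls, dim)
        fused_seq = torch.cat([text_feats, speech_feats], dim=1)            # (B, Lt+Ls, dim)
        queries = self.label_emb.weight.unsqueeze(0).expand(fused_seq.size(0), -1, -1)
        attended, _ = self.attn(queries, fused_seq, fused_seq)              # labels attend to both modalities
        return self.classifier(attended.mean(dim=1))                        # (B, num_labels) logits

# Toy usage
model = LabelGuidedFusion()
logits = model(torch.randn(2, 10, 256), torch.randn(2, 20, 256))
print(logits.shape)  # torch.Size([2, 7])
```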

Neighborhood-based Hard Negative Mining for Sequential Recommendation

1 code implementation • 12 Jun 2023 • Lu Fan, Jiashu Pu, Rongsheng Zhang, Xiao-Ming Wu

Motivated by this observation, we propose a Graph-based Negative sampling approach based on Neighborhood Overlap (GNNO) to exploit structural information hidden in user behaviors for negative mining (a toy sketch of overlap-biased sampling follows this entry).

Sequential Recommendation
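As a rough illustration of the idea named in the abstract, not the authors' GNNO implementation, the sketch below builds an item-item neighborhood graph from user interaction sequences and samples negatives biased toward items whose neighborhoods overlap with the positive item's. The common-neighbor overlap measure and the exponential weighting are assumptions made for this example.

```python
# Toy sketch of neighborhood-overlap-biased hard negative sampling.
# NOT the official GNNO code; the overlap measure and weighting are assumptions.
import math
import random
from collections import defaultdict
from itertools import combinations

def build_item_graph(user_sequences):
    """Item-item neighborhood graph: items co-occurring in a user's history become neighbors."""
    neighbors = defaultdict(set)
    for seq in user_sequences:
        for a, b in combinations(set(seq), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors

def sample_hard_negative(pos_item, user_items, all_items, neighbors, temperature=1.0):
    """Sample a negative item, biased toward candidates whose neighborhoods overlap the positive's."""
    candidates = [i for i in all_items if i not in user_items]
    overlaps = [len(neighbors[pos_item] & neighbors[c]) for c in candidates]
    weights = [math.exp(o / temperature) for o in overlaps]
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy usage: item 4 shares the most neighbors with item 1, so it is the likeliest hard negative.
seqs = [[1, 2, 3], [2, 3, 4], [1, 4, 5]]
graph = build_item_graph(seqs)
print(sample_hard_negative(pos_item=1, user_items={1, 5}, all_items=range(1, 6), neighbors=graph))
```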

Multi-modal Pre-training for Medical Vision-language Understanding and Generation: An Empirical Study with A New Benchmark

1 code implementation • 10 Jun 2023 • Li Xu, Bo Liu, Ameer Hamza Khan, Lu Fan, Xiao-Ming Wu

With the availability of large-scale, comprehensive, and general-purpose vision-language (VL) datasets such as MSCOCO, vision-language pre-training (VLP) has become an active area of research and has proven effective for various VL tasks such as visual question answering.

Image-text Retrieval • Medical Report Generation • +3

UFO2: A unified pre-training framework for online and offline speech recognition

no code implementations • 26 Oct 2022 • Li Fu, Siqi Li, Qingtao Li, Liping Deng, Fangzhu Li, Lu Fan, Meng Chen, Xiaodong He

In this paper, we propose a Unified pre-training Framework for Online and Offline (UFO2) Automatic Speech Recognition (ASR), which 1) simplifies the two separate training workflows for online and offline modes into one process, and 2) improves the Word Error Rate (WER) performance with limited utterance annotation.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +2

SCaLa: Supervised Contrastive Learning for End-to-End Speech Recognition

no code implementations • 8 Oct 2021 • Li Fu, Xiaoxiao Li, Runyu Wang, Lu Fan, Zhengchen Zhang, Meng Chen, Youzheng Wu, Xiaodong He

End-to-end Automatic Speech Recognition (ASR) models are usually trained to optimize the loss of the whole token sequence, while neglecting explicit phonemic-granularity supervision.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +3

Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training

no code implementations • ACL 2021 • Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, Xiao-Ming Wu, Albert Y. S. Lam

Since the distribution of outlier utterances is arbitrary and unknown at training time, existing methods commonly rely on strong assumptions about the data distribution, such as a mixture of Gaussians, to make inference, resulting in either complex multi-step training procedures or hand-crafted rules such as confidence-threshold selection for outlier detection.

Intent Detection • Outlier Detection • +1

Reconstructing Capsule Networks for Zero-shot Intent Classification

1 code implementation • IJCNLP 2019 • Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, Albert Y. S. Lam

With the rapid growth of conversational AI, existing systems cannot handle the numerous fast-emerging intents, which motivates zero-shot intent classification.

Classification • General Classification • +3

N2VSCDNNR: A Local Recommender System Based on Node2vec and Rich Information Network

no code implementations • 12 Apr 2019 • Jinyin Chen, Yangyang Wu, Lu Fan, Xiang Lin, Haibin Zheng, Shanqing Yu, Qi Xuan

In particular, we model the user-item interactions as a bipartite network and represent the relationships among users (or items) by the corresponding one-mode projection networks (a minimal sketch of such a projection follows this entry).

Clustering • Recommendation Systems
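The one-mode projection mentioned above is a standard bipartite-graph operation: two items are linked if at least one user has interacted with both (and symmetrically for users). Below is a minimal sketch; weighting each edge by the number of shared users is an assumption of this example, not necessarily the weighting used in N2VSCDNNR.

```python
# Minimal sketch of a one-mode projection of a bipartite user-item network.
# Edge weights = number of shared users; this weighting is an assumption of
# the example, not necessarily the scheme used in N2VSCDNNR.
from collections import defaultdict
from itertools import combinations

def one_mode_projection(bipartite_edges):
    """Project a bipartite edge list of (user, item) pairs onto the item side.

    Returns {(item_a, item_b): number of users who interacted with both}.
    """
    items_per_user = defaultdict(set)
    for user, item in bipartite_edges:
        items_per_user[user].add(item)

    item_edges = defaultdict(int)
    for items in items_per_user.values():
        for a, b in combinations(sorted(items), 2):
            item_edges[(a, b)] += 1
    return dict(item_edges)

# Toy usage: two users, three items
edges = [("u1", "i1"), ("u1", "i2"), ("u1", "i3"), ("u2", "i2"), ("u2", "i3")]
print(one_mode_projection(edges))  # {('i1', 'i2'): 1, ('i1', 'i3'): 1, ('i2', 'i3'): 2}
```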
