Search Results for author: Qichen Ye

Found 10 papers, 8 papers with code

ML-LMCL: Mutual Learning and Large-Margin Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding

no code implementations · 19 Nov 2023 · Xuxin Cheng, Bowen Cao, Qichen Ye, Zhihong Zhu, Hongxiang Li, Yuexian Zou

Specifically, in fine-tuning, we apply mutual learning and train two SLU models on the manual transcripts and the ASR transcripts, respectively, aiming to iteratively share knowledge between these two models.

Automatic Speech Recognition (ASR) +4
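The mutual-learning setup described above — two SLU models, one trained on manual transcripts and one on ASR transcripts, each pulled toward the other's predictions — can be sketched as below. This is a minimal NumPy illustration of generic mutual learning (cross-entropy on each model's own view plus a KL term toward the peer); the function name, the equal loss weighting, and the symmetric KL pairing are my assumptions, not details taken from the ML-LMCL paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mutual_learning_losses(logits_manual, logits_asr, labels):
    """Hypothetical sketch of one mutual-learning step.

    Each model gets cross-entropy on its own transcript view plus
    KL(peer || self), so the two models iteratively share knowledge.
    Loss weights are omitted for simplicity (assumed equal).
    """
    p_m, p_a = softmax(logits_manual), softmax(logits_asr)
    n = len(labels)
    ce_m = -np.log(p_m[np.arange(n), labels]).mean()
    ce_a = -np.log(p_a[np.arange(n), labels]).mean()
    # Each model is pulled toward the peer's (fixed) distribution.
    kl_m = (p_a * (np.log(p_a) - np.log(p_m))).sum(axis=-1).mean()
    kl_a = (p_m * (np.log(p_m) - np.log(p_a))).sum(axis=-1).mean()
    return ce_m + kl_m, ce_a + kl_a
```

In practice the two loss values would drive separate optimizer steps for the two models, with the peer's predictions detached from the gradient graph.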

Exploring Recommendation Capabilities of GPT-4V(ision): A Preliminary Case Study

no code implementations · 7 Nov 2023 · Peilin Zhou, Meng Cao, You-Liang Huang, Qichen Ye, Peiyan Zhang, Junling Liu, Yueqi Xie, Yining Hua, Jaeboum Kim

Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored.

General Knowledge · Reading Comprehension

Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare

1 code implementation · 27 Oct 2023 · Junling Liu, ZiMing Wang, Qichen Ye, Dading Chong, Peilin Zhou, Yining Hua

This method enhances the model's ability to generate medical captions and answer complex medical queries.

Language Modelling

LLMRec: Benchmarking Large Language Models on Recommendation Task

1 code implementation · 23 Aug 2023 · Junling Liu, Chao Liu, Peilin Zhou, Qichen Ye, Dading Chong, Kang Zhou, Yueqi Xie, Yuwei Cao, Shoujin Wang, Chenyu You, Philip S. Yu

The benchmark results indicate that LLMs displayed only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.

Benchmarking · Explanation Generation +1

Attention Calibration for Transformer-based Sequential Recommendation

1 code implementation · 18 Aug 2023 · Peilin Zhou, Qichen Ye, Yueqi Xie, Jingqi Gao, Shoujin Wang, Jae Boum Kim, Chenyu You, Sunghun Kim

Our empirical analysis of some representative Transformer-based SR models reveals that it is not uncommon for large attention weights to be assigned to less relevant items, which can result in inaccurate recommendations.

Sequential Recommendation

Rethinking Multi-Interest Learning for Candidate Matching in Recommender Systems

1 code implementation · 28 Feb 2023 · Yueqi Xie, Jingqi Gao, Peilin Zhou, Qichen Ye, Yining Hua, Jaeboum Kim, Fangzhao Wu, Sunghun Kim

To address these issues, we propose the REMI framework, consisting of an Interest-aware Hard Negative mining strategy (IHN) and a Routing Regularization (RR) method.

Recommendation Systems

FTM: A Frame-level Timeline Modeling Method for Temporal Graph Representation Learning

1 code implementation · 23 Feb 2023 · Bowen Cao, Qichen Ye, Weiyuan Xu, Yuexian Zou

Existing neighborhood aggregation strategies fail to capture either the short-term features or the long-term features of temporal graph attributes, leading to unsatisfactory model performance and even poor robustness and domain generality of the representation learning method.

Graph Representation Learning

FiTs: Fine-grained Two-stage Training for Knowledge-aware Question Answering

1 code implementation · 23 Feb 2023 · Qichen Ye, Bowen Cao, Nuo Chen, Weiyuan Xu, Yuexian Zou

Despite the promising result of recent KAQA systems which tend to integrate linguistic knowledge from pre-trained language models (PLM) and factual knowledge from knowledge graphs (KG) to answer complex questions, a bottleneck exists in effectively fusing the representations from PLMs and KGs because of (i) the semantic and distributional gaps between them, and (ii) the difficulties in joint reasoning over the provided knowledge from both modalities.

Knowledge Graphs · Question Answering +1

Equivariant Contrastive Learning for Sequential Recommendation

1 code implementation · 10 Nov 2022 · Peilin Zhou, Jingqi Gao, Yueqi Xie, Qichen Ye, Yining Hua, Jae Boum Kim, Shoujin Wang, Sunghun Kim

Therefore, we propose Equivariant Contrastive Learning for Sequential Recommendation (ECL-SR), which endows SR models with great discriminative power, making the learned user behavior representations sensitive to invasive augmentations (e.g., item substitution) and insensitive to mild augmentations (e.g., feature-level dropout masking).

Contrastive Learning · Data Augmentation +1
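The idea stated in the abstract — representations that are insensitive to mild augmentations but sensitive to invasive ones — can be illustrated with a toy pair of losses. The actual ECL-SR model achieves the sensitivity side with a learned discriminator; the hinge-style proxy below, along with the function names and the 0.5 margin, are simplifications of mine, not the paper's formulation.

```python
import numpy as np

def cosine(u, v):
    # Row-wise cosine similarity between two batches of embeddings.
    return (u * v).sum(axis=-1) / (
        np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1)
    )

def ecl_toy_losses(z_anchor, z_mild, z_invasive, margin=0.5):
    """Toy equivariant-contrastive objective (assumed simplification).

    invariance: pull the anchor toward its mildly augmented view
      (e.g., dropout masking), so mild changes don't move the embedding.
    equivariance proxy: penalize high similarity to the invasively
      augmented view (e.g., item substitution) beyond a margin, standing
      in for ECL-SR's discriminator-based sensitivity term.
    """
    invariance = (1.0 - cosine(z_anchor, z_mild)).mean()
    equivariance = np.maximum(0.0, cosine(z_anchor, z_invasive) - margin).mean()
    return invariance, equivariance
```

Minimizing both terms pushes the encoder to collapse mild views onto the anchor while keeping invasive views distinguishable, which is the qualitative behavior the abstract describes.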
