no code implementations • 19 Nov 2023 • Xuxin Cheng, Bowen Cao, Qichen Ye, Zhihong Zhu, Hongxiang Li, Yuexian Zou
Specifically, during fine-tuning, we apply mutual learning and train two SLU models, one on the manual transcripts and one on the ASR transcripts, aiming to iteratively share knowledge between the two models.
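A minimal sketch of such a mutual-learning step, assuming a PyTorch setup with two intent classifiers; the function name, the loss weighting `alpha`, and the toy shapes are illustrative assumptions, not the paper's exact implementation:

```python
# Hypothetical sketch of mutual learning between two SLU models (PyTorch).
# Names and the loss weighting are assumptions, not the paper's recipe.
import torch
import torch.nn.functional as F

def mutual_learning_step(model_manual, model_asr,
                         manual_batch, asr_batch, labels, alpha=1.0):
    """One update: each model fits its own transcripts (CE) and
    mimics the other's predictive distribution (KL)."""
    logits_manual = model_manual(manual_batch)   # (B, num_intents)
    logits_asr = model_asr(asr_batch)            # (B, num_intents)

    ce_manual = F.cross_entropy(logits_manual, labels)
    ce_asr = F.cross_entropy(logits_asr, labels)

    # Detach the "teacher" side so each model learns from, but does
    # not distort, the other's predictions.
    kl_manual = F.kl_div(F.log_softmax(logits_manual, dim=-1),
                         F.softmax(logits_asr, dim=-1).detach(),
                         reduction="batchmean")
    kl_asr = F.kl_div(F.log_softmax(logits_asr, dim=-1),
                      F.softmax(logits_manual, dim=-1).detach(),
                      reduction="batchmean")

    loss_manual = ce_manual + alpha * kl_manual
    loss_asr = ce_asr + alpha * kl_asr
    return loss_manual, loss_asr
```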
no code implementations • 7 Nov 2023 • Peilin Zhou, Meng Cao, You-Liang Huang, Qichen Ye, Peiyan Zhang, Junling Liu, Yueqi Xie, Yining Hua, Jaeboum Kim
Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored.
1 code implementation • 27 Oct 2023 • Junling Liu, ZiMing Wang, Qichen Ye, Dading Chong, Peilin Zhou, Yining Hua
This method enhances the model's ability to generate medical captions and answer complex medical queries.
1 code implementation • 13 Oct 2023 • Qichen Ye, Junling Liu, Dading Chong, Peilin Zhou, Yining Hua, Fenglin Liu, Meng Cao, ZiMing Wang, Xuxin Cheng, Zhu Lei, Zhenhua Guo
In the CPT and SFT phases, Qilin-Med achieved 38.4% and 40.0% accuracy on the CMExam test set, respectively.
1 code implementation • 23 Aug 2023 • Junling Liu, Chao Liu, Peilin Zhou, Qichen Ye, Dading Chong, Kang Zhou, Yueqi Xie, Yuwei Cao, Shoujin Wang, Chenyu You, Philip S. Yu
The benchmark results indicate that LLMs displayed only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.
1 code implementation • 18 Aug 2023 • Peilin Zhou, Qichen Ye, Yueqi Xie, Jingqi Gao, Shoujin Wang, Jae Boum Kim, Chenyu You, Sunghun Kim
Our empirical analysis of some representative Transformer-based SR models reveals that it is not uncommon for large attention weights to be assigned to less relevant items, which can result in inaccurate recommendations.
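As an illustrative diagnostic only (not the paper's calibration method), one can surface this pattern by reading the attention map of a self-attention layer and measuring how much weight the current step places on items known to be irrelevant:

```python
# Illustrative diagnostic: attention mass a Transformer SR layer puts
# on items marked as noise. Toy shapes and the noise positions are
# assumptions for the example.
import torch
import torch.nn as nn

seq_len, d_model = 10, 32
attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

item_emb = torch.randn(1, seq_len, d_model)      # a toy interaction sequence
_, weights = attn(item_emb, item_emb, item_emb,
                  need_weights=True, average_attn_weights=True)

# Suppose positions 2 and 5 are known noise (e.g., accidental clicks).
irrelevant = torch.tensor([2, 5])
mass_on_noise = weights[0, -1, irrelevant].sum()  # attention of the last step
print(f"attention mass on irrelevant items: {mass_on_noise.item():.3f}")
```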
1 code implementation • 28 Feb 2023 • Yueqi Xie, Jingqi Gao, Peilin Zhou, Qichen Ye, Yining Hua, Jaeboum Kim, Fangzhao Wu, Sunghun Kim
To address these issues, we propose the REMI framework, consisting of an Interest-aware Hard Negative mining strategy (IHN) and a Routing Regularization (RR) method.
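A hedged sketch of the interest-aware hard-negative idea (the sampling rule and all names here are illustrative assumptions, not REMI's exact formulation): negatives that score highly against the user's interest vector are sampled more often, with a temperature controlling hardness.

```python
# Generic interest-aware hard negative sampling (illustrative sketch).
import torch

def sample_hard_negatives(interest, item_emb, positives, k=5, beta=1.0):
    """interest: (d,); item_emb: (N, d); positives: LongTensor of item ids."""
    scores = item_emb @ interest                 # similarity to the interest
    scores[positives] = float("-inf")            # never sample positives
    probs = torch.softmax(beta * scores, dim=0)  # larger beta = harder negatives
    return torch.multinomial(probs, k, replacement=False)

item_emb = torch.randn(100, 16)
interest = torch.randn(16)
negs = sample_hard_negatives(interest, item_emb, torch.tensor([3, 7]))
```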
1 code implementation • 23 Feb 2023 • Bowen Cao, Qichen Ye, Weiyuan Xu, Yuexian Zou
Existing neighborhood aggregation strategies fail to capture either the short-term or the long-term features of temporal graph attributes, leading to unsatisfactory model performance and even poor robustness and domain generality of the representation learning method.
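One way to picture the short-term/long-term distinction is an aggregator that fuses a recency-weighted summary of the neighborhood with a uniform average over the full history; this is a hedged sketch under assumed inputs, not the paper's operator:

```python
# Illustrative temporal-neighborhood aggregation combining short-term
# (recency-weighted) and long-term (uniform) views. Shapes and the
# decay constant tau are assumptions for the example.
import torch

def aggregate(neighbor_feats, neighbor_times, query_time, tau=10.0):
    """neighbor_feats: (N, d); neighbor_times: (N,) event timestamps."""
    # Short-term: exponentially down-weight older interactions.
    recency = torch.softmax(-(query_time - neighbor_times) / tau, dim=0)
    short_term = recency @ neighbor_feats
    # Long-term: uniform mean over the full history.
    long_term = neighbor_feats.mean(dim=0)
    return torch.cat([short_term, long_term], dim=-1)  # (2d,)

feats = torch.randn(8, 16)
times = torch.sort(torch.rand(8) * 100).values
z = aggregate(feats, times, query_time=torch.tensor(100.0))
```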
1 code implementation • 23 Feb 2023 • Qichen Ye, Bowen Cao, Nuo Chen, Weiyuan Xu, Yuexian Zou
Recent KAQA systems tend to integrate linguistic knowledge from pre-trained language models (PLMs) and factual knowledge from knowledge graphs (KGs) to answer complex questions. Despite their promising results, a bottleneck remains in effectively fusing the representations from PLMs and KGs because of (i) the semantic and distributional gaps between them, and (ii) the difficulty of joint reasoning over the provided knowledge from both modalities.
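For intuition about the fusion bottleneck, here is a generic cross-modal fusion sketch (illustrative; not this paper's method): both modalities are projected into a shared space to narrow the distributional gap, then the text representation attends over the KG nodes.

```python
# Generic PLM/KG fusion sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_text=768, d_kg=200, d=256, heads=4):
        super().__init__()
        self.proj_text = nn.Linear(d_text, d)  # bridge the distributional gap
        self.proj_kg = nn.Linear(d_kg, d)
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, text_states, kg_states):
        q = self.proj_text(text_states)        # (B, T, d) token states
        kv = self.proj_kg(kg_states)           # (B, N, d) node states
        fused, _ = self.cross_attn(q, kv, kv)  # text attends over KG nodes
        return fused + q                       # residual keeps PLM semantics

fusion = CrossModalFusion()
out = fusion(torch.randn(2, 12, 768), torch.randn(2, 30, 200))
```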
1 code implementation • 10 Nov 2022 • Peilin Zhou, Jingqi Gao, Yueqi Xie, Qichen Ye, Yining Hua, Jae Boum Kim, Shoujin Wang, Sunghun Kim
Therefore, we propose Equivariant Contrastive Learning for Sequential Recommendation (ECL-SR), which endows SR models with great discriminative power, making the learned user behavior representations sensitive to invasive augmentations (e.g., item substitution) and insensitive to mild augmentations (e.g., feature-level dropout masking).
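A hedged sketch of the combined objective (the loss shapes are assumptions, not ECL-SR's exact formulation): an InfoNCE term enforces invariance to mild augmentations, while a discriminator term keeps the representation sensitive to invasive ones.

```python
# Illustrative invariance + equivariance losses for sequence
# representations; the discriminator maps d -> 2 logits predicting
# whether the invasive augmentation (item substitution) was applied.
import torch
import torch.nn.functional as F

def ecl_losses(z_anchor, z_mild, z_invasive, discriminator, temp=0.2):
    """z_*: (B, d) sequence representations from the SR encoder."""
    # Invariance: InfoNCE pulls mildly augmented views toward anchors.
    za = F.normalize(z_anchor, dim=-1)
    zm = F.normalize(z_mild, dim=-1)
    logits = za @ zm.t() / temp                  # (B, B) similarity matrix
    targets = torch.arange(za.size(0))
    inv_loss = F.cross_entropy(logits, targets)

    # Equivariance: the representation must expose the invasive edit.
    x = torch.cat([z_anchor, z_invasive], dim=0)
    y = torch.cat([torch.zeros(z_anchor.size(0)),     # 0 = not substituted
                   torch.ones(z_invasive.size(0))]).long()
    eqv_loss = F.cross_entropy(discriminator(x), y)
    return inv_loss, eqv_loss
```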