Search Results for author: Chenyi Lei

Found 8 papers, 1 paper with code

Learning Transferable Time Series Classifier with Cross-Domain Pre-training from Language Model

no code implementations • 19 Mar 2024 Mingyue Cheng, Xiaoyu Tao, Qi Liu, Hao Zhang, Yiheng Chen, Chenyi Lei

To address this challenge, we propose CrossTimeNet, a novel cross-domain self-supervised learning framework that learns transferable knowledge from various domains to benefit the target downstream task.

Language Modelling · Time Series · +1
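The snippet above does not spell out the pre-training objective, so the following is only a hedged sketch of cross-domain self-supervised pre-training on discretized time series: windows are mapped to integer codes by some discretizer and a Transformer encoder is trained with masked-token prediction. The class names, masking ratio, and model sizes are illustrative assumptions, not CrossTimeNet's actual design.

```python
# Hedged sketch: masked-token prediction over discretized time-series windows,
# shareable across source domains. Not CrossTimeNet's actual architecture.
import torch
import torch.nn as nn

class ToyTimeSeriesSSL(nn.Module):
    def __init__(self, vocab_size=256, dim=128, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size + 1, dim)  # +1 for the [MASK] token
        encoder_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        self.head = nn.Linear(dim, vocab_size)
        self.mask_id = vocab_size

    def forward(self, tokens, mask_prob=0.3):
        # tokens: (batch, seq_len) integer codes produced by any discretizer.
        masked = tokens.clone()
        mask = torch.rand_like(tokens, dtype=torch.float) < mask_prob
        masked[mask] = self.mask_id
        logits = self.head(self.encoder(self.embed(masked)))
        # Reconstruct only the masked positions (standard masked-prediction SSL).
        return nn.functional.cross_entropy(logits[mask], tokens[mask])

model = ToyTimeSeriesSSL()
loss = model(torch.randint(0, 256, (4, 64)))   # sequences from any source domain
```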

Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization

1 code implementation • 9 Sep 2023 Yang Jin, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Quzhe Huang, Bin Chen, Chenyi Lei, An Liu, Chengru Song, Xiaoqiang Lei, Di Zhang, Wenwu Ou, Kun Gai, Yadong Mu

Specifically, we introduce a well-designed visual tokenizer to translate the non-linguistic image into a sequence of discrete tokens, like a foreign language that the LLM can read.

Language Modelling · Large Language Model · +1
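As a hedged illustration of the idea above, quantizing image features into discrete tokens an LLM can read, the sketch below nearest-neighbor-matches patch features against a learned codebook and offsets the code indices past the text vocabulary. The module name, dimensions, and offset scheme are assumptions, not the paper's released tokenizer.

```python
# Hedged sketch: map patch features to codebook indices, then shift them into a
# reserved id range so they coexist with text token ids in the LLM vocabulary.
import torch
import torch.nn as nn

class ToyVisualTokenizer(nn.Module):
    def __init__(self, feat_dim=768, codebook_size=1024, text_vocab_size=32000):
        super().__init__()
        # Learned codebook: each row is a discrete "visual word".
        self.codebook = nn.Embedding(codebook_size, feat_dim)
        # Visual codes are offset so they do not collide with text token ids.
        self.visual_token_offset = text_vocab_size

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, feat_dim) from any vision backbone.
        diff = patch_features.unsqueeze(2) - self.codebook.weight   # (B, P, K, D)
        codes = diff.pow(2).sum(dim=-1).argmin(dim=-1)              # nearest codebook entry
        return codes + self.visual_token_offset                     # ids outside the text vocabulary

tokenizer = ToyVisualTokenizer()
visual_token_ids = tokenizer(torch.randn(2, 16, 768))   # (2, 16) discrete, LLM-readable ids
```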

Self-Supervised Interest Transfer Network via Prototypical Contrastive Learning for Recommendation

no code implementations • 28 Feb 2023 Guoqiang Sun, Yibin Shen, Sijin Zhou, Xiang Chen, Hongyan Liu, Chunming Wu, Chenyi Lei, Xianhui Wei, Fei Fang

In this paper, we propose a cross-domain recommendation method: Self-supervised Interest Transfer Network (SITN), which can effectively transfer invariant knowledge between domains via prototypical contrastive learning.

Contrastive Learning
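A minimal sketch of a prototypical contrastive objective in the spirit of the description above: each user embedding from one domain is pulled toward the prototype (cluster centroid) it is assigned to in the other domain via an InfoNCE-style loss. The clustering step, temperature, and pairing are assumptions for illustration, not SITN's exact formulation.

```python
# Hedged sketch: InfoNCE over prototypes, treating centroids as "classes".
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(user_emb, prototypes, assignments, temperature=0.1):
    """user_emb: (N, D) user embeddings from one domain.
    prototypes: (K, D) cluster centroids estimated in the other domain.
    assignments: (N,) index of the prototype each user is assigned to."""
    user_emb = F.normalize(user_emb, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    logits = user_emb @ prototypes.t() / temperature   # (N, K) cosine similarity to every prototype
    return F.cross_entropy(logits, assignments)        # pull toward own prototype, push from the rest

# Toy usage: 8 users, 4 prototypes, 16-dim embeddings.
loss = prototypical_contrastive_loss(torch.randn(8, 16), torch.randn(4, 16), torch.randint(0, 4, (8,)))
```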

Scenario-Adaptive and Self-Supervised Model for Multi-Scenario Personalized Recommendation

no code implementations • 24 Aug 2022 Yuanliang Zhang, XiaoFeng Wang, Jinxin Hu, Ke Gao, Chenyi Lei, Fei Fang

We summarize three practical challenges that are not well solved in multi-scenario modeling: (1) lack of fine-grained and decoupled information-transfer controls among multiple scenarios.

Contrastive Learning · Disentanglement · +1

Enhancing Sequential Recommendation with Graph Contrastive Learning

no code implementations • 30 May 2022 Yixin Zhang, Yong Liu, Yonghui Xu, Hao Xiong, Chenyi Lei, Wei He, Lizhen Cui, Chunyan Miao

Specifically, GCL4SR employs a Weighted Item Transition Graph (WITG), built from the interaction sequences of all users, to provide global context information for each interaction and to attenuate noise in the sequence data.

Auxiliary Learning · Contrastive Learning · +1
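The snippet below is a hedged sketch of building a weighted item transition graph from all users' interaction sequences: each observed adjacent transition adds weight to a shared undirected edge. GCL4SR's actual weighting (e.g., window size or distance-based decay) may differ; counting only adjacent transitions is an assumption for illustration.

```python
# Hedged sketch: accumulate transition counts across every user's sequence into
# one shared, undirected, weighted graph.
from collections import defaultdict

def build_witg(user_sequences):
    """user_sequences: iterable of item-id lists, one list per user."""
    edge_weight = defaultdict(float)
    for seq in user_sequences:
        for prev_item, next_item in zip(seq, seq[1:]):
            # Undirected edge: evidence from all users contributes to the same weight.
            edge = tuple(sorted((prev_item, next_item)))
            edge_weight[edge] += 1.0
    return dict(edge_weight)

# Toy usage: two users' sequences produce a small shared graph.
graph = build_witg([[1, 2, 3], [2, 3, 4]])
# {(1, 2): 1.0, (2, 3): 2.0, (3, 4): 1.0}
```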

Comparative Deep Learning of Hybrid Representations for Image Recommendations

no code implementations • CVPR 2016 Chenyi Lei, Dong Liu, Weiping Li, Zheng-Jun Zha, Houqiang Li

In many image-related tasks, learning expressive and discriminative image representations is essential, and deep learning has been studied to automate the learning of such representations.
