Search Results for author: Enneng Yang

Found 10 papers, 4 papers with code

Repeated Padding as Data Augmentation for Sequential Recommendation

no code implementations · 11 Mar 2024 · Yizhou Dang, YuTing Liu, Enneng Yang, Guibing Guo, Linying Jiang, Xingwei Wang, Jianzhe Zhao

Specifically, we use the original interaction sequences as the padding content and fill them into the padding positions during model training (a toy padding sketch follows this entry).

Common Sense Reasoning · Data Augmentation +1
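The padding idea above can be pictured with a short sketch that fills the pad slots of a short sequence by cyclically repeating the user's own interactions rather than a constant pad token; the function name, right-aligned layout, and repeat order are assumptions for illustration, not the paper's implementation.

```python
import torch

def repeated_padding(seq, max_len):
    """Pad a short interaction sequence to `max_len` by cyclically repeating
    its own items; the real interactions are kept at the right end."""
    seq = list(seq)
    if len(seq) >= max_len:
        return torch.tensor(seq[-max_len:])
    filler = [seq[i % len(seq)] for i in range(max_len - len(seq))]
    return torch.tensor(filler + seq)

# A length-3 history padded to length 8 with its own items instead of zeros.
print(repeated_padding([5, 9, 2], max_len=8))  # tensor([5, 9, 2, 5, 9, 5, 9, 2])
```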

Representation Surgery for Multi-Task Model Merging

1 code implementation · 5 Feb 2024 · Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, DaCheng Tao

That is, there is a significant discrepancy between the representation distributions of the merged model and the individual models, which results in poor performance of the merged MTL model (a minimal illustration of such a representation correction is sketched after this entry).

Computational Efficiency · Multi-Task Learning
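One way to picture the representation correction is a lightweight per-task module stacked on the merged backbone and trained, without labels, to pull the merged representation toward the corresponding individual model's representation. The low-rank adapter shape and the L1 objective below are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SurgeryAdapter(nn.Module):
    """Per-task add-on that adjusts the merged model's representation.
    The low-rank linear bottleneck is an illustrative choice."""
    def __init__(self, dim, rank=16):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, merged_feat):
        # subtract an estimated offset from the merged representation
        return merged_feat - self.up(self.down(merged_feat))

adapter = SurgeryAdapter(dim=64)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
merged_feat = torch.randn(32, 64)      # features from the merged backbone
individual_feat = torch.randn(32, 64)  # features from the task's own model
loss = (adapter(merged_feat) - individual_feat).abs().mean()  # representation gap
loss.backward()
opt.step()
```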

ID Embedding as Subtle Features of Content and Structure for Multimodal Recommendation

no code implementations · 10 Nov 2023 · YuTing Liu, Enneng Yang, Yizhou Dang, Guibing Guo, Qiang Liu, Yuliang Liang, Linying Jiang, Xingwei Wang

In this paper, we revisit the value of ID embeddings for multimodal recommendation and conduct a thorough study of their semantics, which we recognize as subtle features of content and structure.

Contrastive Learning · Multimodal Recommendation

AdaMerging: Adaptive Model Merging for Multi-Task Learning

1 code implementation · 4 Oct 2023 · Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, DaCheng Tao

This approach aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data (a toy merging example with learnable coefficients follows this entry).

Multi-Task Learning
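A toy version of data-free coefficient learning: the merged weights are the pretrained weights plus a weighted sum of task vectors, and the task-wise coefficients are optimized on unlabeled inputs. Entropy minimization is used here as one plausible surrogate objective, and the single-layer setup is purely illustrative.

```python
import torch
import torch.nn.functional as F

def merge(pretrained, task_models, lambdas):
    """Combine a pretrained model with task vectors using per-task coefficients
    (all arguments are dicts of parameter tensors sharing the same keys)."""
    return {name: p0 + sum(l * (tm[name] - p0)
                           for l, tm in zip(lambdas, task_models))
            for name, p0 in pretrained.items()}

# Toy setup: one linear layer and three fine-tuned variants of it.
pretrained = {"w": torch.randn(10, 5)}
task_models = [{"w": pretrained["w"] + 0.1 * torch.randn(10, 5)} for _ in range(3)]
lambdas = torch.nn.Parameter(torch.full((3,), 0.3))
opt = torch.optim.Adam([lambdas], lr=1e-2)

# One step: minimize prediction entropy on unlabeled inputs (no training data).
x = torch.randn(32, 5)
logits = x @ merge(pretrained, task_models, lambdas)["w"].t()
probs = F.softmax(logits, dim=-1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
entropy.backward()
opt.step()
```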

Continual Learning From a Stream of APIs

no code implementations · 31 Aug 2023 · Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing Guo, Xingwei Wang, DaCheng Tao

Next, we train the CL model by minimizing the gap between the responses of the CL model and those of the black-box API on synthetic data, thereby transferring the API's knowledge to the CL model (a small distillation sketch follows this entry).

Continual Learning
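The knowledge-transfer step can be sketched as plain distillation: query the black-box API on synthetic inputs and make the CL model match its responses. The `query_api` interface (returning class probabilities) and the KL objective are assumptions; how the synthetic data is generated is omitted here.

```python
import torch
import torch.nn.functional as F

def distill_from_api(cl_model, query_api, synthetic_x, optimizer):
    """One distillation step: align the CL model's predictions with the
    black-box API's responses on synthetic inputs."""
    with torch.no_grad():
        api_probs = query_api(synthetic_x)              # soft targets from the API
    log_probs = F.log_softmax(cl_model(synthetic_x), dim=-1)
    loss = F.kl_div(log_probs, api_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a stand-in "API" (any probability-producing callable works).
cl_model = torch.nn.Linear(8, 4)
opt = torch.optim.SGD(cl_model.parameters(), lr=0.1)
fake_api = lambda x: F.softmax(torch.randn(x.size(0), 4), dim=-1)
distill_from_api(cl_model, fake_api, torch.randn(16, 8), opt)
```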

A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

1 code implementation · 16 Jul 2023 · Zhenyi Wang, Enneng Yang, Li Shen, Heng Huang

Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting.

Continual Learning · Federated Learning +1

Data Augmented Flatness-aware Gradient Projection for Continual Learning

no code implementations · ICCV 2023 · Enneng Yang, Li Shen, Zhenyi Wang, Shiwei Liu, Guibing Guo, Xingwei Wang

In this paper, we first revisit the gradient projection method from the perspective of loss-surface flatness and find that an unflat loss surface leads to catastrophic forgetting of old tasks when the projection constraint is relaxed to improve performance on new tasks (a bare-bones projection step is sketched after this entry).

Continual Learning
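The core projection operation can be shown in a few lines: remove the component of a new-task gradient that lies in a stored old-task subspace, with a knob for how much of the constraint is relaxed. The `relax` parameter and the orthonormal-basis representation are illustrative assumptions, not the paper's exact formulation (which additionally accounts for flatness).

```python
import torch

def project_gradient(grad, old_basis, relax=0.0):
    """Remove (most of) the gradient component lying in the old-task subspace.
    `old_basis` has orthonormal columns; `relax` in [0, 1] keeps a fraction
    of that component to trade old-task stability for new-task plasticity."""
    component = old_basis @ (old_basis.t() @ grad)
    return grad - (1.0 - relax) * component

# Example: a 6-d gradient projected against a rank-2 old-task subspace.
basis, _ = torch.linalg.qr(torch.randn(6, 2))   # orthonormal columns
g = torch.randn(6)
g_proj = project_gradient(g, basis)
print((basis.t() @ g_proj).abs().max())         # ~0: orthogonal to the old subspace
```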

Uniform Sequence Better: Time Interval Aware Data Augmentation for Sequential Recommendation

1 code implementation · 16 Dec 2022 · Yizhou Dang, Enneng Yang, Guibing Guo, Linying Jiang, Xingwei Wang, Xiaoxiao Xu, Qinghui Sun, Hong Liu

However, we observe that the time intervals in a sequence may vary significantly, which makes user modeling ineffective due to the issue of preference drift (one simple interval-aware operator is sketched after this entry).

Data Augmentation · Sequential Recommendation
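As a rough illustration of preferring uniform time intervals, the snippet below keeps the contiguous sub-sequence whose intervals have the lowest variance. This is one simple interval-aware operator invented for illustration, not the paper's set of augmentation operators.

```python
import numpy as np

def crop_most_uniform(items, timestamps, window):
    """Return the contiguous sub-sequence of length `window` whose time
    intervals are the most uniform (lowest interval variance)."""
    ts = np.asarray(timestamps, dtype=float)
    best_start, best_var = 0, float("inf")
    for s in range(len(items) - window + 1):
        var = np.var(np.diff(ts[s:s + window]))
        if var < best_var:
            best_start, best_var = s, var
    return items[best_start:best_start + window]

# The middle of this history is evenly spaced, so it is the crop that is kept.
print(crop_most_uniform(list("ABCDEF"), [0, 50, 60, 70, 80, 200], window=4))
# ['B', 'C', 'D', 'E']  (intervals 10, 10, 10)
```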

AdaTask: A Task-aware Adaptive Learning Rate Approach to Multi-task Learning

no code implementations · 28 Nov 2022 · Enneng Yang, Junwei Pan, Ximei Wang, Haibin Yu, Li Shen, Xihua Chen, Lei Xiao, Jie Jiang, Guibing Guo

In this paper, we propose to measure the task dominance degree of a parameter by the total updates each task applies to that parameter (a toy dominance computation follows this entry).

Multi-Task Learning · Recommendation Systems
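The dominance measure can be sketched by accumulating how much of a shared parameter's total update comes from each task. Treating |lr * grad| as the per-step update assumes plain SGD; adaptive optimizers would accumulate their own effective updates, and the function below is only an illustration of the measurement.

```python
import torch

def task_dominance(per_task_grads, lr=0.01):
    """Share of a shared parameter's total (absolute) update contributed by
    each task, using |lr * grad| as the per-step update under plain SGD."""
    totals = torch.stack([(lr * g).abs().sum() for g in per_task_grads])
    return totals / totals.sum()

# Example: task 0 produces much larger gradients on the shared parameter,
# so it dominates that parameter's updates.
grads = [torch.randn(4, 4) * 5.0, torch.randn(4, 4) * 0.5]
print(task_dominance(grads))   # roughly tensor([0.9, 0.1])
```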

Generalized Embedding Machines for Recommender Systems

no code implementations · 16 Feb 2020 · Enneng Yang, Xin Xin, Li Shen, Guibing Guo

In this work, we propose an alternative approach to model high-order interaction signals at the embedding level, namely the Generalized Embedding Machine (GEM) (an illustrative embedding-level interaction module is sketched after this entry).

Recommendation Systems
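One way to capture higher-order interaction signals at the embedding level is to let every feature field's embedding attend to all other fields before the prediction layer. The self-attention encoder, vocabulary size, and output head below are illustrative assumptions rather than the GEM architecture itself.

```python
import torch
import torch.nn as nn

class EmbeddingInteraction(nn.Module):
    """Refine each field embedding with information from the other fields,
    so interaction signals are injected at the embedding level."""
    def __init__(self, num_fields, dim, heads=2, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(num_fields * dim, 1)

    def forward(self, field_ids):                  # (batch, num_fields)
        e = self.embed(field_ids)                  # (batch, num_fields, dim)
        h, _ = self.attn(e, e, e)                  # fields attend to each other
        return self.out(h.flatten(1))              # (batch, 1) prediction score

model = EmbeddingInteraction(num_fields=3, dim=16)
print(model(torch.randint(0, 1000, (8, 3))).shape)   # torch.Size([8, 1])
```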
