Search Results for author: Yongjun Chen

Found 12 papers, 6 papers with code

Towards More Robust and Accurate Sequential Recommendation with Cascade-guided Adversarial Training

no code implementations · 11 Apr 2023 · Juntao Tan, Shelby Heinecke, Zhiwei Liu, Yongjun Chen, Yongfeng Zhang, Huan Wang

Two properties unique to sequential recommendation models may impair their robustness: the cascade effects induced during training and the models' tendency to rely too heavily on temporal information.

Sequential Recommendation
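
A minimal sketch of adversarial training on item embeddings for a sequential recommender, in the spirit of the entry above. The per-position cascade_w weight standing in for the paper's cascade guidance, the GRU encoder, and all hyperparameters are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, dim, seq_len, batch = 1000, 64, 20, 8
item_emb = nn.Embedding(n_items, dim)
encoder = nn.GRU(dim, dim, batch_first=True)        # stand-in sequence encoder
seq = torch.randint(1, n_items, (batch, seq_len))
target = torch.randint(1, n_items, (batch,))

def logits_from_emb(emb):
    out, _ = encoder(emb)                            # (batch, seq_len, dim)
    return out[:, -1] @ item_emb.weight.T            # score every candidate item

# Probe pass: compute the gradient of the loss w.r.t. the input embeddings.
emb = item_emb(seq)
emb.retain_grad()
F.cross_entropy(logits_from_emb(emb), target).backward()

# FGSM-style perturbation, scaled by a hypothetical per-position weight that
# grows toward later positions to mimic stronger cascade influence.
cascade_w = torch.linspace(0.2, 1.0, seq_len).view(1, seq_len, 1)
delta = 0.01 * cascade_w * emb.grad.sign()

for p in list(item_emb.parameters()) + list(encoder.parameters()):
    p.grad = None                                    # drop gradients from the probe pass

# Train on the clean loss plus the adversarial loss.
emb = item_emb(seq)
loss = F.cross_entropy(logits_from_emb(emb), target) \
     + F.cross_entropy(logits_from_emb(emb + delta), target)
loss.backward()                                      # gradients for an optimizer step
```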

Generating Negative Samples for Sequential Recommendation

no code implementations · 7 Aug 2022 · Yongjun Chen, Jia Li, Zhiwei Liu, Nitish Shirish Keskar, Huan Wang, Julian McAuley, Caiming Xiong

Due to the dynamics of users' interests and model updates during training, considering randomly sampled items from a user's non-interacted item set as negatives can be uninformative.

Sequential Recommendation
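
A minimal sketch contrasting uniform sampling from the non-interacted item set with model-aware sampling that draws negatives from the recommender's current score distribution. The exact sampling rule in the paper may differ; scores, interacted, and the temperature here are placeholders.

```python
import torch

n_items = 1000
interacted = {3, 17, 42, 256}                 # items this user has interacted with
scores = torch.randn(n_items)                 # the model's current scores for this user

def uniform_negatives(k=5):
    # Random non-interacted items: often easy and therefore uninformative.
    candidates = torch.tensor([i for i in range(n_items) if i not in interacted])
    return candidates[torch.randint(len(candidates), (k,))]

def model_aware_negatives(k=5, temperature=1.0):
    # Sample negatives in proportion to the model's own confidence, so the
    # chosen items are "hard": ranked highly yet never interacted with.
    probs = torch.softmax(scores / temperature, dim=0)
    probs[list(interacted)] = 0.0             # never sample actual positives
    probs = probs / probs.sum()
    return torch.multinomial(probs, k, replacement=False)

print(uniform_negatives())
print(model_aware_negatives())
```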

ELECRec: Training Sequential Recommenders as Discriminators

1 code implementation · 5 Apr 2022 · Yongjun Chen, Jia Li, Caiming Xiong

A generator, as an auxiliary model, is trained jointly with the discriminator to sample plausible alternative next items and will be thrown out after training.

Sequential Recommendation
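
A minimal ELECTRA-style sketch of the generator/discriminator training described above: the generator proposes plausible replacement items, the discriminator learns to flag replaced positions, and only the discriminator is kept after training. The GRU modules, shared embedding table, and 25% replacement rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, dim, seq_len, batch = 1000, 32, 10, 4
emb = nn.Embedding(n_items, dim)              # embedding table shared by both models
gen = nn.GRU(dim, dim, batch_first=True)      # auxiliary generator
disc = nn.GRU(dim, dim, batch_first=True)     # the recommender we keep
gen_head = nn.Linear(dim, n_items)            # predicts an item per position
disc_head = nn.Linear(dim, 1)                 # original (0) vs replaced (1)

seq = torch.randint(1, n_items, (batch, seq_len))

# Generator samples plausible replacements for a random 25% of positions.
g_out, _ = gen(emb(seq))
g_logits = gen_head(g_out)                                    # (batch, seq_len, n_items)
sampled = torch.distributions.Categorical(logits=g_logits).sample()
replace = torch.rand(batch, seq_len) < 0.25
corrupted = torch.where(replace, sampled, seq)
is_replaced = (corrupted != seq).float()

# Discriminator learns to detect which positions were replaced.
d_out, _ = disc(emb(corrupted))
d_logits = disc_head(d_out).squeeze(-1)                       # (batch, seq_len)

gen_loss = F.cross_entropy(g_logits.reshape(-1, n_items), seq.reshape(-1))
disc_loss = F.binary_cross_entropy_with_logits(d_logits, is_replaced)
(gen_loss + disc_loss).backward()             # trained jointly; the generator is discarded later
```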

Improving Contrastive Learning with Model Augmentation

1 code implementation · 25 Mar 2022 · Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong

However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.

Contrastive Learning, Data Augmentation +2
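
A minimal sketch of model-level augmentation for contrastive learning: two stochastic forward passes of the same encoder (different dropout masks) yield two views of one sequence, with no data-level augmentation at all. Dropout is only one possible model augmentation; the paper's operators and the InfoNCE temperature here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, dim, seq_len, batch = 1000, 64, 20, 8
emb = nn.Embedding(n_items, dim)
drop = nn.Dropout(0.2)                        # the source of model-level stochasticity
gru = nn.GRU(dim, dim, batch_first=True)

def encode(seq):
    out, _ = gru(drop(emb(seq)))
    return out[:, -1]                         # last hidden state as the sequence view

def info_nce(z1, z2, tau=0.2):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau                  # (batch, batch) similarity matrix
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

seq = torch.randint(1, n_items, (batch, seq_len))
loss = info_nce(encode(seq), encode(seq))     # dropout differs per call, giving two views
loss.backward()
```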

Intent Contrastive Learning for Sequential Recommendation

1 code implementation · 5 Feb 2022 · Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, Caiming Xiong

Specifically, we introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.

Contrastive Learning, Model Optimization +3
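
A minimal sketch of the clustering idea described above: sequence representations are clustered to obtain intent prototypes, and each sequence is pulled toward its assigned prototype with a contrastive-style loss. The plain k-means loop, the cluster count, and the loss form are simplified assumptions.

```python
import torch
import torch.nn.functional as F

def kmeans(x, k=4, iters=10):
    centroids = x[torch.randperm(x.size(0))[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(x, centroids).argmin(dim=1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = x[assign == j].mean(dim=0)
    return centroids, assign

seq_repr = torch.randn(64, 32, requires_grad=True)       # stand-in encoder outputs
with torch.no_grad():
    prototypes, assign = kmeans(seq_repr.detach(), k=4)   # intent prototypes via clustering

# Contrast each sequence with its own intent prototype against the other prototypes.
logits = F.normalize(seq_repr, dim=-1) @ F.normalize(prototypes, dim=-1).T / 0.2
intent_loss = F.cross_entropy(logits, assign)
intent_loss.backward()
```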

Self-supervised Learning for Sequential Recommendation with Model Augmentation

no code implementations · 29 Sep 2021 · Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong

However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.

Contrastive Learning, Data Augmentation +2

Modeling Dynamic Attributes for Next Basket Recommendation

no code implementations · 23 Sep 2021 · Yongjun Chen, Jia Li, Chenghao Liu, Chenxi Li, Markus Anderle, Julian McAuley, Caiming Xiong

However, properly integrating them into user interest models is challenging, since attribute dynamics can be diverse, such as time-interval-aware and periodic patterns, among others.

Attribute, Next-basket recommendation

Contrastive Self-supervised Sequential Recommendation with Robust Augmentation

1 code implementation · 14 Aug 2021 · Zhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian McAuley, Caiming Xiong

In this paper, we investigate the application of contrastive Self-Supervised Learning (SSL) to the sequential recommendation, as a way to alleviate some of these issues.

Contrastive Learning, Self-Supervised Learning +1
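
A minimal sketch of common sequence-level augmentations (crop, mask, reorder) that produce two stochastic views of a user sequence for contrastive self-supervised learning; the views would then feed a contrastive loss such as the InfoNCE sketch given earlier. The operators and ratios are illustrative, not the paper's exact robust augmentations.

```python
import random

def crop(seq, ratio=0.8):
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.2, mask_id=0):
    return [mask_id if random.random() < ratio else item for item in seq]

def reorder(seq, ratio=0.3):
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    segment = seq[start:start + n]
    random.shuffle(segment)
    return seq[:start] + segment + seq[start + n:]

seq = [12, 7, 93, 5, 41, 88, 19, 3]                    # one user's item sequence
view_1, view_2 = crop(seq), reorder(mask(seq))         # two stochastic views of the sequence
print(view_1, view_2)
```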

Learning Graph Pooling and Hybrid Convolutional Operations for Text Representations

1 code implementation · 21 Jan 2019 · Hongyang Gao, Yongjun Chen, Shuiwang Ji

Another limitation of GCNs when used on graph-based text representation tasks is that they do not consider the order information of nodes in a graph.

Text Categorization
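
A minimal GCN layer illustrating the limitation noted above: because aggregation is driven purely by the (normalized) adjacency matrix, permuting the nodes only permutes the output, so node order carries no information. This is a generic GCN sketch, not the paper's hybrid operations.

```python
import torch

def gcn_layer(adj, x, w):
    adj = adj + torch.eye(adj.size(0))                 # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    a_hat = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
    return torch.relu(a_hat @ x @ w)                   # aggregation ignores node order

n, d_in, d_out = 5, 8, 4
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.T) > 0).float()                      # symmetric adjacency
adj.fill_diagonal_(0)
x, w = torch.randn(n, d_in), torch.randn(d_in, d_out)

perm = torch.randperm(n)
out = gcn_layer(adj, x, w)
out_perm = gcn_layer(adj[perm][:, perm], x[perm], w)
print(torch.allclose(out[perm], out_perm, atol=1e-5))  # True: only the row order changes
```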

Transferable Adversarial Perturbations

no code implementations · ECCV 2018 · Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, Yong Yang

We first show that maximizing the distance between natural images and their adversarial examples in the intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks.
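
A minimal sketch of the feature-space idea above: perturb the input, under an L-infinity budget, to maximize the distance between natural and adversarial intermediate feature maps. The toy CNN backbone, random start, and step sizes are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
x = torch.rand(1, 3, 32, 32)                           # a "natural" image
eps, alpha, steps = 8 / 255, 2 / 255, 10

with torch.no_grad():
    clean_feat = backbone(x)                           # features of the clean image

delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
for _ in range(steps):
    dist = (backbone(x + delta) - clean_feat).pow(2).mean()   # feature-space distance
    dist.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()             # gradient ascent on the distance
        delta.clamp_(-eps, eps)                        # stay inside the L-infinity ball
    delta.grad.zero_()

x_adv = (x + delta).clamp(0, 1).detach()               # adversarial example built from feature distance only
```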

Dense Transformer Networks

1 code implementation · 24 May 2017 · Jun Li, Yongjun Chen, Lei Cai, Ian Davidson, Shuiwang Ji

The proposed dense transformer modules are differentiable; thus, the entire network can be trained.

Image Segmentation, Semantic Segmentation
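
A minimal sketch of a differentiable spatial sampling module (in the spirit of spatial transformers), only to illustrate the point above that differentiable sampling lets gradients flow through so the whole network trains end to end; the paper's dense transformer modules are more elaborate than this.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSampler(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Linear(16 * 8 * 8, 6)            # predicts a 2x3 affine transform
        nn.init.zeros_(self.loc.weight)                # start at the identity transform
        self.loc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, feat):
        theta = self.loc(feat.flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, feat.size(), align_corners=False)
        return F.grid_sample(feat, grid, align_corners=False)   # differentiable resampling

feat = torch.randn(2, 16, 8, 8, requires_grad=True)
out = SpatialSampler()(feat)
out.mean().backward()                                  # gradients flow through the sampler
print(feat.grad.abs().sum() > 0)                       # tensor(True)
```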
