1 code implementation • 5 Apr 2022 • Yongjun Chen, Jia Li, Caiming Xiong
A generator, serving as an auxiliary model, is trained jointly with the discriminator to sample plausible alternative next items and is discarded after training.
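A minimal PyTorch sketch of this generator/discriminator setup is shown below. It is an illustrative assumption, not the authors' implementation: the encoder, module sizes, and mask ratio are made up, and only the high-level idea (a generator proposes replacement items, a discriminator detects them, the generator is dropped after training) follows the snippet above.

```python
# Toy sketch: a generator proposes plausible alternative next items, a
# discriminator learns to spot the replaced positions. All names/sizes here
# are illustrative assumptions.
import torch
import torch.nn as nn

class TinySeqEncoder(nn.Module):
    def __init__(self, n_items, dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, seq):                      # seq: (batch, length)
        out, _ = self.rnn(self.emb(seq))
        return out                               # (batch, length, dim)

n_items = 1000
gen_enc, disc_enc = TinySeqEncoder(n_items), TinySeqEncoder(n_items)
gen_head = nn.Linear(32, n_items)                # generator: item distribution
disc_head = nn.Linear(32, 1)                     # discriminator: real vs. replaced

seq = torch.randint(1, n_items, (4, 10))         # toy batch of item sequences

# Generator samples plausible replacements at randomly chosen positions.
logits = gen_head(gen_enc(seq))
sampled = torch.distributions.Categorical(logits=logits).sample()
mask = torch.rand_like(seq, dtype=torch.float) < 0.3
corrupted = torch.where(mask, sampled, seq)

# Discriminator predicts, per position, whether the item was replaced.
pred = disc_head(disc_enc(corrupted)).squeeze(-1)
target = (corrupted != seq).float()
disc_loss = nn.functional.binary_cross_entropy_with_logits(pred, target)

# Generator is trained jointly (here: simple next-item prediction) and is
# thrown away after training; only the discriminator is kept for serving.
gen_loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, n_items), seq[:, 1:].reshape(-1))
(disc_loss + gen_loss).backward()
```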
1 code implementation • 25 Mar 2022 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views by applying augmentation at the data level, whereas we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods can destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
1 code implementation • 5 Feb 2022 • Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, Caiming Xiong
Specifically, we introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
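The following is a rough sketch of the clustering step described above, under stated assumptions: k-means stands in for whatever clustering the paper uses, the encoder is abstracted away, and `intent_contrastive_loss` is a hypothetical helper name.

```python
# Hedged sketch: cluster sequence representations so that centroids act as
# latent "intents", then pull each sequence toward its assigned intent.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def intent_contrastive_loss(seq_repr, n_intents=8, temperature=0.1):
    """seq_repr: (batch, dim) sequence embeddings from any encoder."""
    # Learn the latent intent distribution via clustering (plain k-means here).
    km = KMeans(n_clusters=n_intents, n_init=10).fit(
        seq_repr.detach().cpu().numpy())
    centroids = torch.tensor(km.cluster_centers_, dtype=seq_repr.dtype)
    assigned = torch.tensor(km.labels_, dtype=torch.long)

    # Contrastive objective: each sequence should be closest to its own intent.
    logits = F.normalize(seq_repr, dim=-1) @ F.normalize(centroids, dim=-1).T
    return F.cross_entropy(logits / temperature, assigned)

# Toy usage with random "encoder outputs".
loss = intent_contrastive_loss(torch.randn(64, 32, requires_grad=True))
loss.backward()
```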
no code implementations • 29 Sep 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views by applying augmentation at the data level, whereas we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods can destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • 23 Sep 2021 • Yongjun Chen, Jia Li, Chenghao Liu, Chenxi Li, Markus Anderle, Julian McAuley, Caiming Xiong
However, properly integrating them into user interest models is challenging, since attribute dynamics can be diverse, such as time-interval-aware or periodic patterns, etc.
1 code implementation • 14 Aug 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian McAuley, Caiming Xiong
In this paper, we investigate the application of contrastive Self-Supervised Learning (SSL) to sequential recommendation as a way to alleviate some of these issues.
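A compact illustration of contrastive SSL on item sequences is given below. It is a sketch under assumptions, not the paper's exact method: random item masking stands in for the paper's augmentation operators, and a mean-pooled embedding stands in for the sequence encoder.

```python
# Illustrative contrastive SSL: two augmented views of the same user sequence
# form a positive pair, scored against the rest of the batch via InfoNCE.
import random
import torch
import torch.nn.functional as F

def augment(seq, mask_token=0, mask_ratio=0.3):
    """Randomly mask items: a simple stand-in for data-level augmentation."""
    out = seq.clone()
    for i in range(len(out)):
        if random.random() < mask_ratio:
            out[i] = mask_token
    return out

def info_nce(z1, z2, temperature=0.2):
    """Standard InfoNCE between two batches of view embeddings (batch, dim)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy pipeline: any sequence encoder would replace `emb(...).mean(1)` here.
emb = torch.nn.Embedding(1000, 32)
batch = torch.randint(1, 1000, (8, 20))
view1 = torch.stack([augment(s) for s in batch])
view2 = torch.stack([augment(s) for s in batch])
loss = info_nce(emb(view1).mean(1), emb(view2).mean(1))
loss.backward()
```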
no code implementations • Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) 2019 • Jun Li, Yongjun Chen, Lei Cai, Ian Davidson, Shuiwang Ji
The proposed dense transformer modules are differentiable, so the entire network can be trained end-to-end.
Ranked #1 on Electron Microscopy Image Segmentation on SNEMI3D
Tasks: Electron Microscopy Image Segmentation, Semantic Segmentation
1 code implementation • 21 Jan 2019 • Hongyang Gao, Yongjun Chen, Shuiwang Ji
Another limitation of GCNs when used for graph-based text representation tasks is that they do not consider the order information of nodes in a graph.
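This limitation can be made concrete with a small check (an illustration, not code from the paper): plain neighborhood aggregation is permutation-invariant, so reordering the word nodes of a sentence graph leaves the aggregated features unchanged.

```python
# Tiny demo that a plain GCN-style aggregation ignores node order.
import torch

x = torch.randn(4, 8)                       # 4 word nodes with 8-d features
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float)

perm = torch.tensor([2, 0, 3, 1])           # shuffle the node order
x_p = x[perm]
adj_p = adj[perm][:, perm]

agg = adj @ x                               # simple sum aggregation
agg_p = adj_p @ x_p

# The aggregated features are identical up to the same permutation:
print(torch.allclose(agg[perm], agg_p))     # True: order information is lost
```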
no code implementations • ECCV 2018 • Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, Yong Yang
We first show that maximizing the distance between natural images and their adversarial examples in intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks.
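The following is a hedged sketch of that idea, not the authors' exact attack: a simple iterative (PGD-style) update perturbs an image so its mid-layer feature map moves away from that of the natural image, under an L-infinity budget. The choice of ResNet-18, cut layer, step size, and budget are all assumptions.

```python
# Sketch: maximize feature-map distance between a natural image and its
# adversarial counterpart, with an L-infinity constraint.
import torch
import torchvision.models as models

model = models.resnet18().eval()                      # randomly initialized toy model
feature_extractor = torch.nn.Sequential(*list(model.children())[:6])  # mid layers

x = torch.rand(1, 3, 224, 224)                        # stand-in "natural image"
x_adv = x.clone().requires_grad_(True)
eps, step = 8 / 255, 2 / 255

with torch.no_grad():
    feat_clean = feature_extractor(x)

for _ in range(10):
    feat_adv = feature_extractor(x_adv)
    loss = (feat_adv - feat_clean).norm()             # maximize feature distance
    loss.backward()
    with torch.no_grad():
        x_adv += step * x_adv.grad.sign()             # gradient-ascent step
        x_adv.clamp_(0, 1)
        # project back into the epsilon ball around the natural image
        x_adv.copy_(torch.min(torch.max(x_adv, x - eps), x + eps))
    x_adv.grad.zero_()
```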
1 code implementation • 24 May 2017 • Jun Li, Yongjun Chen, Lei Cai, Ian Davidson, Shuiwang Ji
The proposed dense transformer modules are differentiable, so the entire network can be trained end-to-end.