no code implementations • ICML 2020 • Shuang Li, Lu Wang, Ruizhi Zhang, xiaofu Chang, Xuqin Liu, Yao Xie, Yuan Qi, Le Song
We propose a modeling framework for event data that excels in the small-data regime and can incorporate domain knowledge.
no code implementations • 26 Sep 2021 • Yunfei Chu, xiaofu Chang, Kunyang Jia, Jingzhen Zhou, Hongxia Yang
In this paper, we propose a novel method, named Dynamic Sequential Graph Learning (DSGL), to enhance user and item representations by exploiting collaborative information from the local sub-graphs associated with each user or item.
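The abstract does not spell out the aggregation step, but the core idea of refining a node's embedding with its local sub-graph can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the mean-pooling aggregator, and all tensor shapes are illustrative assumptions.

```python
# Hypothetical sketch of local sub-graph aggregation, assuming a one-hop
# neighborhood of recently interacted items; not the DSGL architecture itself.
import torch
import torch.nn as nn

class LocalSubgraphEncoder(nn.Module):
    """Refines a user's embedding with the items in its local sub-graph."""
    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, user_id: torch.Tensor, neighbor_ids: torch.Tensor):
        # user_id: (batch,); neighbor_ids: (batch, k) recently interacted items
        user = self.embed(user_id)                        # (batch, dim)
        neighbors = self.embed(neighbor_ids).mean(dim=1)  # pool the sub-graph
        return torch.tanh(self.update(torch.cat([user, neighbors], dim=-1)))

enc = LocalSubgraphEncoder(num_nodes=1000, dim=32)
out = enc(torch.tensor([3, 7]), torch.randint(0, 1000, (2, 5)))
print(out.shape)  # torch.Size([2, 32])
```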
no code implementations • 3 Jul 2021 • Hui Li, Xing Fu, Ruofan Wu, Jinyu Xu, Kai Xiao, xiaofu Chang, Weiqiang Wang, Shuai Chen, Leilei Shi, Tao Xiong, Yuan Qi
Deep learning provides a promising way to extract effective representations from raw data in an end-to-end fashion and has proven effective in domains such as computer vision and natural language processing.
3 code implementations • 17 May 2021 • Lu Wang, xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei zhang, Xiaofeng He, Le Song, Jingren Zhou, Hongxia Yang
Second, on top of the proposed graph transformer, we introduce a two-stream encoder that separately extracts representations from the temporal neighborhoods of the two interaction nodes, then uses a co-attentional transformer to model their inter-dependencies at a semantic level.
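To make the two-stream idea concrete, here is a hedged sketch: each stream self-attends over one node's temporal neighborhood, then cross-attention lets the streams attend to each other. The class name, layer counts, shared cross-attention weights, and mean-pooling readout are all assumptions, not the paper's exact architecture.

```python
# Illustrative two-stream encoder with co-attention between the streams.
import torch
import torch.nn as nn

class TwoStreamCoAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # One self-attention encoder per stream (temporal neighborhood).
        self.enc_src = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), num_layers=1)
        self.enc_dst = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), num_layers=1)
        # Co-attention: each stream queries the other's encoded neighborhood.
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, src_nbrs: torch.Tensor, dst_nbrs: torch.Tensor):
        # src_nbrs, dst_nbrs: (batch, seq, dim) neighborhood features
        h_src = self.enc_src(src_nbrs)
        h_dst = self.enc_dst(dst_nbrs)
        src_ctx, _ = self.cross(h_src, h_dst, h_dst)  # src attends to dst
        dst_ctx, _ = self.cross(h_dst, h_src, h_src)  # dst attends to src
        # Pool each stream into one interaction-level representation.
        return src_ctx.mean(dim=1), dst_ctx.mean(dim=1)

model = TwoStreamCoAttention(dim=32)
s, d = model(torch.randn(2, 5, 32), torch.randn(2, 5, 32))
print(s.shape, d.shape)  # torch.Size([2, 32]) torch.Size([2, 32])
```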
no code implementations • 25 Sep 2019 • xiaofu Chang, Jianfeng Wen, Xuqin Liu, Yanming Fang, Le Song, Yuan Qi
To model the dependencies among nodes' latent dynamic representations, we define a mixture of temporal cascades in which a node's neural representation depends not only on its own previous representations but also on the previous representations of related nodes that have interacted with it.
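The dependency structure described above can be sketched as a recurrent update that mixes a node's own previous state with the previous states of the nodes it just interacted with. The gating layers, the mean over neighbors, and the scalar mixture weight below are illustrative placeholders, not the paper's parameterization.

```python
# Minimal sketch of the cascade update: new state = mixture of the node's own
# previous state and its interacting neighbors' previous states.
import torch
import torch.nn as nn

class CascadeUpdate(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.self_gate = nn.Linear(dim, dim)
        self.nbr_gate = nn.Linear(dim, dim)
        self.mix = nn.Parameter(torch.tensor(0.5))  # learned mixture logit

    def forward(self, h_self_prev: torch.Tensor, h_nbrs_prev: torch.Tensor):
        # h_self_prev: (dim,); h_nbrs_prev: (num_nbrs, dim)
        own = torch.tanh(self.self_gate(h_self_prev))
        nbr = torch.tanh(self.nbr_gate(h_nbrs_prev.mean(dim=0)))
        a = torch.sigmoid(self.mix)  # keep the mixture weight in (0, 1)
        return a * own + (1 - a) * nbr

cell = CascadeUpdate(dim=16)
h_new = cell(torch.randn(16), torch.randn(3, 16))
print(h_new.shape)  # torch.Size([16])
```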