no code implementations • 2 Oct 2022 • Lu Lin, Jinghui Chen, Hongning Wang
Graph contrastive learning (GCL), as an emerging self-supervised learning technique on graphs, aims to learn representations via instance discrimination.
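Instance discrimination in GCL is typically realized with an InfoNCE-style objective that pulls together two augmented views of the same node and pushes apart views of different nodes. The following is a minimal, generic sketch of such a loss, not the specific objective proposed in this paper; the embedding sizes and temperature are illustrative assumptions.

```python
# Generic InfoNCE-style instance-discrimination loss, a common building
# block in graph contrastive learning. Illustrative sketch only.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: [num_nodes, dim] node embeddings from two augmented graph views."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Similarity of every node in view 1 to every node in view 2.
    logits = z1 @ z2.t() / temperature            # [N, N]
    # The positive pair for node i is the same node i in the other view.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Example usage with random embeddings standing in for GNN encoder outputs.
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
loss = info_nce(z1, z2)
```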
no code implementations • 30 Sep 2022 • Songtao Liu, Zhengkai Tu, Minkai Xu, Zuobai Zhang, Peilin Zhao, Jian Tang, Rex Ying, Lu Lin, Dinghao Wu
Comprehensive experiments show that by fusing in context information over routes, our model significantly improves the performance of retrosynthetic planning over baselines that are not context-aware, especially for long synthetic routes.
no code implementations • 29 Sep 2022 • Songtao Liu, Rex Ying, Hanze Dong, Lu Lin, Jinghui Chen, Dinghao Wu
However, the analysis of the implicit denoising effect in graph neural networks remains open.
no code implementations • 10 Jun 2022 • Lu Lin, Weiyu Li
A basic condition for efficient transfer learning is the similarity between a target model and source models.
no code implementations • 5 May 2022 • Yujia Wang, Lu Lin, Jinghui Chen
We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterparts.
no code implementations • 1 Nov 2021 • Yujia Wang, Lu Lin, Jinghui Chen
We prove that the proposed communication-efficient distributed adaptive gradient method converges to the first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting.
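One common ingredient of communication-efficient distributed training is gradient compression with error feedback, where the residual discarded by the compressor is carried over to the next round. The sketch below illustrates that generic idea only; it is not the paper's algorithm, and the top-k compressor and 1% ratio are illustrative assumptions.

```python
# Top-k gradient sparsification with error feedback: a generic sketch of
# one communication-reduction technique, not the paper's exact method.
import numpy as np

def topk_compress(grad: np.ndarray, ratio: float = 0.01) -> np.ndarray:
    """Keep only the largest-magnitude entries; zero out the rest."""
    flat = grad.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    compressed = np.zeros_like(flat)
    compressed[idx] = flat[idx]
    return compressed.reshape(grad.shape)

def step_with_error_feedback(grad: np.ndarray, error: np.ndarray, ratio: float = 0.01):
    """Add the residual from previous rounds back in before compressing."""
    corrected = grad + error
    compressed = topk_compress(corrected, ratio)
    new_error = corrected - compressed   # residual carried to the next round
    return compressed, new_error
```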
no code implementations • 1 Nov 2021 • Lu Lin, Ethan Blaser, Hongning Wang
Graph Convolutional Networks (GCNs) have fueled a surge of research interest due to their encouraging performance on graph learning tasks, but they have also been shown to be vulnerable to adversarial attacks.
no code implementations • 31 Oct 2021 • Lu Lin, Ethan Blaser, Hongning Wang
The exploitation of graph structures is the key to effectively learning representations of nodes that preserve useful information in graphs.
no code implementations • 26 Oct 2021 • Nan Wang, Lu Lin, Jundong Li, Hongning Wang
In this paper, we propose a principled new way for unbiased graph embedding by learning node embeddings from an underlying bias-free graph, which is not influenced by sensitive node attributes.
no code implementations • 21 Jan 2021 • Xiaoyu Ma, Lu Lin, Yujie Gai
The paper presents a general framework for online updating of variable selection and parameter estimation in generalized linear models with streaming datasets (a minimal sketch of the streaming-update setting follows below).
Variable Selection
Methodology
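To make the streaming setting concrete, the following is a minimal sketch of online parameter updating for a simple GLM (logistic regression), where each incoming batch is used once and never revisited. This illustrates only the data-access pattern; it is not the paper's framework, which additionally performs online variable selection, and all hyperparameters here are illustrative assumptions.

```python
# Online parameter updating for logistic regression on a data stream.
# Illustrative sketch of the streaming setting only.
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def online_glm_update(beta: np.ndarray, X_batch: np.ndarray, y_batch: np.ndarray,
                      lr: float = 0.05) -> np.ndarray:
    """One gradient step on the newest batch; earlier batches are not stored."""
    preds = sigmoid(X_batch @ beta)
    grad = X_batch.T @ (preds - y_batch) / len(y_batch)
    return beta - lr * grad

# Simulated stream of batches.
rng = np.random.default_rng(0)
true_beta = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
beta = np.zeros(5)
for _ in range(100):
    X = rng.normal(size=(32, 5))
    y = (rng.random(32) < sigmoid(X @ true_beta)).astype(float)
    beta = online_glm_update(beta, X, y)
```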
1 code implementation • 1 Dec 2019 • Lin Gong, Lu Lin, Weihao Song, Hongning Wang
Inspired by the concept of user schema in social psychology, we take a new perspective to perform user representation learning by constructing a shared latent space to capture the dependency among different modalities of user-generated data.