no code implementations • 23 Jan 2024 • Chuanbo Liu, Yu Fu, Lu Lin, Elliot L. Elson, Jin Wang
This approach, when combined with the analytical capabilities of a sophisticated deep neural network, enables the accurate estimation of rate constants from observational data in a broad range of biochemical reaction networks.
no code implementations • 2 Oct 2023 • Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu
A natural question is: "Could alignment really prevent those open-sourced large language models from being misused to generate undesired content?"
1 code implementation • 18 Sep 2023 • Bochuan Cao, Yuanpu Cao, Lu Lin, Jinghui Chen
In this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks.
no code implementations • 18 Jun 2023 • Yi Nian, Yurui Chang, Wei Jin, Lu Lin
Graph neural networks (GNNs) have emerged as a powerful model to capture critical graph patterns.
1 code implementation • 2 Oct 2022 • Lu Lin, Jinghui Chen, Hongning Wang
Graph contrastive learning (GCL), as an emerging self-supervised learning technique on graphs, aims to learn representations via instance discrimination.
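Instance discrimination treats the two augmented views of each node as a positive pair and all other nodes as negatives, typically optimized with an InfoNCE-style objective. The sketch below is a generic, minimal NumPy illustration of that objective, not the specific loss used in the paper; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Generic instance-discrimination (InfoNCE) loss over two views.

    z1, z2: (n, d) arrays of node embeddings from two graph augmentations;
    node i's two views form a positive pair, all other nodes are negatives.
    (Illustrative sketch only, not the paper's exact objective.)
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                   # (n, n) similarity matrix
    # Cross-entropy with the diagonal (matching views) as the targets
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Lowering the temperature sharpens the softmax, penalizing hard negatives more strongly; this is the standard knob in such objectives.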
1 code implementation • 30 Sep 2022 • Songtao Liu, Zhengkai Tu, Minkai Xu, Zuobai Zhang, Lu Lin, Rex Ying, Jian Tang, Peilin Zhao, Dinghao Wu
Current strategies use a decoupled approach that combines single-step retrosynthesis models with search algorithms, taking only the product as input to predict the reactants at each planning step and ignoring valuable context information along the synthetic route.
no code implementations • 29 Sep 2022 • Songtao Liu, Rex Ying, Hanze Dong, Lu Lin, Jinghui Chen, Dinghao Wu
However, the analysis of the implicit denoising effect in graph neural networks remains open.
no code implementations • 10 Jun 2022 • Lu Lin, Weiyu Li
A basic condition for efficient transfer learning is the similarity between a target model and source models.
1 code implementation • 5 May 2022 • Yujia Wang, Lu Lin, Jinghui Chen
We show that in the nonconvex stochastic optimization setting, our proposed FedCAMS achieves the same convergence rate of $O(\frac{1}{\sqrt{TKm}})$ as its non-compressed counterparts.
no code implementations • 1 Nov 2021 • Yujia Wang, Lu Lin, Jinghui Chen
We prove that the proposed communication-efficient distributed adaptive gradient method converges to the first-order stationary point with the same iteration complexity as uncompressed vanilla AMSGrad in the stochastic nonconvex optimization setting.
1 code implementation • 1 Nov 2021 • Lu Lin, Ethan Blaser, Hongning Wang
Graph Convolutional Networks (GCNs) have fueled a surge of research interest due to their encouraging performance on graph learning tasks, but they have also been shown to be vulnerable to adversarial attacks.
no code implementations • 31 Oct 2021 • Lu Lin, Ethan Blaser, Hongning Wang
The exploitation of graph structures is the key to effectively learning representations of nodes that preserve useful information in graphs.
no code implementations • 26 Oct 2021 • Nan Wang, Lu Lin, Jundong Li, Hongning Wang
In this paper, we propose a principled new way for unbiased graph embedding by learning node embeddings from an underlying bias-free graph, which is not influenced by sensitive node attributes.
no code implementations • 21 Jan 2021 • Xiaoyu Ma, Lu Lin, Yujie Gai
The paper presents a general framework for online updating of variable selection and parameter estimation in generalized linear models with streaming datasets.
1 code implementation • 1 Dec 2019 • Lin Gong, Lu Lin, Weihao Song, Hongning Wang
Inspired by the concept of user schema in social psychology, we take a new perspective to perform user representation learning by constructing a shared latent space to capture the dependency among different modalities of user-generated data.