Search Results for author: Lirong Wu

Found 54 papers, 30 papers with code

Advances of Deep Learning in Protein Science: A Comprehensive Survey

no code implementations8 Mar 2024 Bozhen Hu, Cheng Tan, Lirong Wu, Jiangbin Zheng, Jun Xia, Zhangyang Gao, Zicheng Liu, Fandi Wu, Guijun Zhang, Stan Z. Li

Protein representation learning plays a crucial role in understanding the structure and function of proteins, which are essential biomolecules involved in various biological processes.

Drug Discovery Protein Function Prediction +2

A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation

1 code implementation6 Mar 2024 Lirong Wu, Haitao Lin, Zhangyang Gao, Guojiang Zhao, Stan Z. Li

As a result, TGS enjoys the benefits of graph topology awareness in training but is free from data dependency in inference.

Knowledge Distillation

Decoupling Weighing and Selecting for Integrating Multiple Graph Pre-training Tasks

1 code implementation3 Mar 2024 Tianyu Fan, Lirong Wu, Yufei Huang, Haitao Lin, Cheng Tan, Zhangyang Gao, Stan Z. Li

In this paper, we identify two important collaborative processes for this topic: (1) select: how to select an optimal task combination from a given task pool based on their compatibility, and (2) weigh: how to weigh the selected tasks based on their importance.

Graph Representation Learning
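
The "weigh" step above amounts to combining the selected pre-training losses with importance weights. A minimal, hypothetical sketch (not the paper's learned weighting scheme; all names are illustrative):

```python
import torch

def combined_pretraining_loss(task_losses, task_weights):
    """Weighted combination of the selected pre-training task losses.

    task_losses:  dict mapping task name -> scalar loss tensor (the selected tasks)
    task_weights: dict mapping task name -> non-negative importance weight
    """
    total = torch.zeros(())
    for name, loss in task_losses.items():
        total = total + task_weights.get(name, 0.0) * loss
    return total

# Hypothetical usage with two selected tasks.
losses = {"attribute_masking": torch.tensor(0.8), "edge_prediction": torch.tensor(1.2)}
weights = {"attribute_masking": 0.6, "edge_prediction": 0.4}
print(combined_pretraining_loss(losses, weights))  # tensor(0.9600)
```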

MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding

1 code implementation22 Feb 2024 Lirong Wu, Yijun Tian, Yufei Huang, Siyuan Li, Haitao Lin, Nitesh V Chawla, Stan Z. Li

In addition, microenvironments defined in previous work are largely based on experimentally assayed physicochemical properties, for which the "vocabulary" is usually extremely small.

Computational Efficiency

Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion Bridge

no code implementations18 Feb 2024 Yufei Huang, Odin Zhang, Lirong Wu, Cheng Tan, Haitao Lin, Zhangyang Gao, Siyuan Li, Stan. Z. Li

Accurate prediction of protein-ligand binding structures, a task known as molecular docking, is crucial for drug design but remains challenging.

Molecular Docking

PSC-CPI: Multi-Scale Protein Sequence-Structure Contrasting for Efficient and Generalizable Compound-Protein Interaction Prediction

1 code implementation13 Feb 2024 Lirong Wu, Yufei Huang, Cheng Tan, Zhangyang Gao, Bozhen Hu, Haitao Lin, Zicheng Liu, Stan Z. Li

Compound-Protein Interaction (CPI) prediction aims to predict the pattern and strength of compound-protein interactions for rational drug discovery.

Drug Discovery

Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding

no code implementations12 Jan 2024 Bozhen Hu, Zelin Zang, Jun Xia, Lirong Wu, Cheng Tan, Stan Z. Li

The purpose of attributed graph embedding is to represent graph data in a low-dimensional space for subsequent tasks.

Graph Embedding

Masked Modeling for Self-supervised Representation Learning on Vision and Beyond

1 code implementation31 Dec 2023 Siyuan Li, Luyuan Zhang, Zedong Wang, Di wu, Lirong Wu, Zicheng Liu, Jun Xia, Cheng Tan, Yang Liu, Baigui Sun, Stan Z. Li

As the deep learning revolution marches on, self-supervised learning has garnered increasing attention in recent years thanks to its remarkable representation learning ability and the low dependence on labeled data.

Representation Learning Self-Supervised Learning

Protein 3D Graph Structure Learning for Robust Structure-based Protein Property Prediction

no code implementations14 Oct 2023 Yufei Huang, Siyuan Li, Jin Su, Lirong Wu, Odin Zhang, Haitao Lin, Jingqi Qi, Zihan Liu, Zhangyang Gao, Yuyang Liu, Jiangbin Zheng, Stan. ZQ. Li

To study this problem, we identify a Protein 3D Graph Structure Learning Problem for Robust Protein Property Prediction (PGSL-RP3), collect benchmark datasets, and present a protein Structure embedding Alignment Optimization framework (SAO) to mitigate the problem of structure embedding bias between the predicted and experimental protein structures.

Graph structure learning Property Prediction +2

Revisiting the Temporal Modeling in Spatio-Temporal Predictive Learning under A Unified View

no code implementations9 Oct 2023 Cheng Tan, Jue Wang, Zhangyang Gao, Siyuan Li, Lirong Wu, Jun Xia, Stan Z. Li

In this paper, we re-examine the two dominant temporal modeling approaches within the realm of spatio-temporal predictive learning, offering a unified perspective.

Self-Supervised Learning

OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning

2 code implementations NeurIPS 2023 Cheng Tan, Siyuan Li, Zhangyang Gao, Wenfei Guan, Zedong Wang, Zicheng Liu, Lirong Wu, Stan Z. Li

Spatio-temporal predictive learning is a learning paradigm that enables models to learn spatial and temporal patterns by predicting future frames from given past frames in an unsupervised manner.

Weather Forecasting

Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs

1 code implementation9 Jun 2023 Lirong Wu, Haitao Lin, Yufei Huang, Stan Z. Li

To bridge the gaps between topology-aware Graph Neural Networks (GNNs) and inference-efficient Multi-Layer Perceptrons (MLPs), GLNN proposes to distill knowledge from a well-trained teacher GNN into a student MLP.
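
The snippet above describes distilling a well-trained teacher GNN into a student MLP. A minimal sketch of a generic soft-label GNN-to-MLP distillation objective (not the paper's reliability-aware variant; names and hyperparameters are assumptions):

```python
import torch
import torch.nn.functional as F

def gnn_to_mlp_distillation_loss(student_logits, teacher_logits, labels,
                                 temperature=2.0, alpha=0.5):
    """Generic GNN-to-MLP distillation objective (GLNN-style sketch).

    student_logits: [N, C] outputs of the MLP student (node features only)
    teacher_logits: [N, C] outputs of the frozen, well-trained teacher GNN
    labels:         [N] ground-truth classes for the labeled nodes
    """
    # Soft-label matching: KL divergence between teacher and student distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```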

Extracting Low-/High- Frequency Knowledge from Graph Neural Networks and Injecting it into MLPs: An Effective GNN-to-MLP Distillation Framework

1 code implementation18 May 2023 Lirong Wu, Haitao Lin, Yufei Huang, Tianyu Fan, Stan Z. Li

Furthermore, we identify a potential information drowning problem in existing GNN-to-MLP distillation, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation; we describe in detail what it represents, how it arises, what impact it has, and how to deal with it.

Cross-Gate MLP with Protein Complex Invariant Embedding is A One-Shot Antibody Designer

1 code implementation21 Apr 2023 Cheng Tan, Zhangyang Gao, Lirong Wu, Jun Xia, Jiangbin Zheng, Xihong Yang, Yue Liu, Bozhen Hu, Stan Z. Li

In this paper, we propose a simple yet effective model that can co-design 1D sequences and 3D structures of CDRs in a one-shot manner.

Specificity

Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias

1 code implementation29 Mar 2023 Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, Stan Z. Li

It has become cognitive inertia to employ the cross-entropy loss function in classification-related tasks.

Data-Efficient Protein 3D Geometric Pretraining via Refinement of Diffused Protein Structure Decoy

no code implementations5 Feb 2023 Yufei Huang, Lirong Wu, Haitao Lin, Jiangbin Zheng, Ge Wang, Stan Z. Li

Learning meaningful protein representation is important for a variety of biological downstream tasks such as structure-based drug design.

A Survey on Protein Representation Learning: Retrospect and Prospect

1 code implementation31 Dec 2022 Lirong Wu, Yufei Huang, Haitao Lin, Stan Z. Li

To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications.

Representation Learning

Integration of Pre-trained Protein Language Models into Geometric Deep Learning Networks

1 code implementation7 Dec 2022 Fang Wu, Lirong Wu, Dragomir Radev, Jinbo Xu, Stan Z. Li

Geometric deep learning has recently achieved great success in non-Euclidean domains, and learning on 3D structures of large biomolecules is emerging as a distinct research area.

Protein Interface Prediction Representation Learning

Teaching Yourself: Graph Self-Distillation on Neighborhood for Node Classification

no code implementations5 Oct 2022 Lirong Wu, Jun Xia, Haitao Lin, Zhangyang Gao, Zicheng Liu, Guojiang Zhao, Stan Z. Li

Despite the great academic success of Graph Neural Networks (GNNs), Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical industrial applications.

Classification Node Classification

Are Gradients on Graph Structure Reliable in Gray-box Attacks?

1 code implementation7 Aug 2022 Zihan Liu, Yun Luo, Lirong Wu, Siyuan Li, Zicheng Liu, Stan Z. Li

These errors arise from rough gradient usage due to the discreteness of the graph structure and from the unreliability in the meta-gradient on the graph structure.

Computational Efficiency

Exploring Generative Neural Temporal Point Process

1 code implementation3 Aug 2022 Haitao Lin, Lirong Wu, Guojiang Zhao, Pai Liu, Stan Z. Li

While many previous works have focused on the goodness-of-fit of TPP models by maximizing the likelihood, their predictive performance is unsatisfactory, meaning that the timestamps generated by the models are far from the true observations.

Denoising

CoSP: Co-supervised pretraining of pocket and ligand

no code implementations23 Jun 2022 Zhangyang Gao, Cheng Tan, Lirong Wu, Stan Z. Li

Can we inject the pocket-ligand interaction knowledge into the pre-trained model and jointly learn their chemical space?

Contrastive Learning Specificity

SimVP: Simpler yet Better Video Prediction

3 code implementations CVPR 2022 Zhangyang Gao, Cheng Tan, Lirong Wu, Stan Z. Li

From CNNs and RNNs to ViTs, we have witnessed remarkable advancements in video prediction, incorporating auxiliary inputs, elaborate neural architectures, and sophisticated training strategies.

Video Prediction

Hyperspherical Consistency Regularization

1 code implementation CVPR 2022 Cheng Tan, Zhangyang Gao, Lirong Wu, Siyuan Li, Stan Z. Li

Though this scheme benefits from both the feature-dependent information of self-supervised learning and the label-dependent information of supervised learning, it still suffers from classifier bias.

Contrastive Learning Self-Supervised Learning +1

Discovering and Explaining the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions

1 code implementation15 May 2022 Fang Wu, Siyuan Li, Lirong Wu, Dragomir Radev, Stan Z. Li

Graph neural networks (GNNs) mainly rely on the message-passing paradigm to propagate node features and build interactions, and different graph learning tasks require different ranges of node interactions.

graph construction Graph Learning +2

STONet: A Neural-Operator-Driven Spatio-temporal Network

no code implementations18 Apr 2022 Haitao Lin, Guojiang Zhao, Lirong Wu, Stan Z. Li

Graph-based spatio-temporal neural networks are effective to model the spatial dependency among discrete points sampled irregularly from unstructured grids, thanks to the great expressiveness of graph neural networks.

Time Series Time Series Analysis

Harnessing Hard Mixed Samples with Decoupled Regularizer

1 code implementation NeurIPS 2023 Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li

However, we found that the extra optimization step may be redundant, because label-mismatched mixed samples are informative hard mixed samples that help deep models localize discriminative features.

Data Augmentation
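
For context on the mixed samples discussed above, here is a minimal sketch of vanilla mixup (not the paper's decoupled regularizer); all names are illustrative:

```python
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Vanilla mixup: convexly combine pairs of inputs and their one-hot labels.

    x:        [B, ...] input batch
    y_onehot: [B, C] one-hot labels
    Returns mixed inputs and mixed (soft) labels; pairs whose two source labels
    differ are the "hard mixed samples" discussed above.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```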

SemiRetro: Semi-template framework boosts deep retrosynthesis prediction

no code implementations12 Feb 2022 Zhangyang Gao, Cheng Tan, Lirong Wu, Stan Z. Li

Experimental results show that SemiRetro significantly outperforms both existing TB and TF methods.

Graph Learning Retrosynthesis

SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation

1 code implementation7 Feb 2022 Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu, Stan Z. Li

Furthermore, we devise an adversarial training scheme, dubbed AT-SimGRACE, to enhance the robustness of graph contrastive learning and theoretically explain the reasons.

Contrastive Learning Data Augmentation +1

An Empirical Study: Extensive Deep Temporal Point Process

1 code implementation19 Oct 2021 Haitao Lin, Cheng Tan, Lirong Wu, Zhangyang Gao, Stan. Z. Li

In this paper, we first review recent research emphases and difficulties in modeling asynchronous event sequences with deep temporal point processes, which can be summarized into four fields: encoding of the history sequence, formulation of the conditional intensity function, relational discovery of events, and learning approaches for optimization.

Graph structure learning Variational Inference

ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning

1 code implementation5 Oct 2021 Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li

Contrastive Learning (CL) has emerged as a dominant technique for unsupervised representation learning which embeds augmented versions of the anchor close to each other (positive samples) and pushes the embeddings of other samples (negatives) apart.

Contrastive Learning Representation Learning
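
The anchor/positive/negative mechanism described above can be illustrated with a minimal InfoNCE-style loss (a generic sketch, not ProGCL's hard-negative mining scheme):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.2):
    """Minimal InfoNCE-style contrastive loss.

    anchor, positive: [N, D] embeddings of two augmented views of the same samples.
    For each anchor, its own augmented view is the positive; the other rows in the
    positive batch act as negatives.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature      # [N, N] scaled cosine similarities
    targets = torch.arange(a.size(0))     # diagonal entries are the positives
    return F.cross_entropy(logits, targets)
```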

Git: Clustering Based on Graph of Intensity Topology

2 code implementations4 Oct 2021 Zhangyang Gao, Haitao Lin, Cheng Tan, Lirong Wu, Stan. Z Li

Accuracy, Robustness to noise and scale, Interpretability, Speed, and Ease of use (ARISE) are crucial requirements of a good clustering algorithm.

Clustering Clustering Algorithms Evaluation

Beyond Message Passing Paradigm: Training Graph Data with Consistency Constraints

no code implementations29 Sep 2021 Lirong Wu, Stan Z. Li

Specifically, the GCL framework is optimized with three well-designed consistency constraints: neighborhood consistency, label consistency, and class-center consistency.

Co-learning: Learning from Noisy Labels with Self-supervision

1 code implementation5 Aug 2021 Cheng Tan, Jun Xia, Lirong Wu, Stan Z. Li

Noisy labels, resulting from mistakes in manual labeling or web-based data collection for supervised learning, can cause neural networks to overfit misleading information and degrade generalization performance.

Learning with noisy labels Self-Supervised Learning

Self-supervised Learning on Graphs: Contrastive, Generative, or Predictive

1 code implementation16 May 2021 Lirong Wu, Haitao Lin, Zhangyang Gao, Cheng Tan, Stan. Z. Li

In this survey, we extend the concept of SSL, which first emerged in the fields of computer vision and natural language processing, to present a timely and comprehensive review of existing SSL techniques for graph data.

Self-Supervised Learning

AutoMix: Unveiling the Power of Mixup for Stronger Classifiers

2 code implementations24 Mar 2021 Zicheng Liu, Siyuan Li, Di wu, Zihan Liu, ZhiYuan Chen, Lirong Wu, Stan Z. Li

Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework.

Classification Data Augmentation +3

Conditional Local Convolution for Spatio-temporal Meteorological Forecasting

1 code implementation4 Jan 2021 Haitao Lin, Zhangyang Gao, Yongjie Xu, Lirong Wu, Ling Li, Stan. Z. Li

We further propose the distance and orientation scaling terms to reduce the impacts of irregular spatial distribution.

Spatio-Temporal Forecasting Weather Forecasting

Deep Manifold Computing and Visualization Using Elastic Locally Isometric Smoothness

no code implementations1 Jan 2021 Stan Z. Li, Zelin Zang, Lirong Wu

The ability to preserve local geometry of highly nonlinear manifolds in high dimensional spaces and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold computing, nonlinear dimensionality reduction (NLDR) and visualization.

Dimensionality Reduction

Towards Robust Graph Neural Networks against Label Noise

no code implementations1 Jan 2021 Jun Xia, Haitao Lin, Yongjie Xu, Lirong Wu, Zhangyang Gao, Siyuan Li, Stan Z. Li

A pseudo label is computed for each node in the training set from its neighboring labels using label propagation (LP); meta learning is then utilized to learn a proper aggregation of the original and pseudo labels as the final label.

Attribute Learning with noisy labels +3
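
A minimal sketch of the label-propagation step described above (the meta-learned aggregation of original and pseudo labels is omitted; the dense-adjacency formulation is an assumption for brevity):

```python
import torch

def propagate_labels(adj, y_onehot, train_mask, num_iters=10, alpha=0.9):
    """Simple label propagation: diffuse training labels over the graph.

    adj:        [N, N] dense (symmetric) adjacency matrix
    y_onehot:   [N, C] one-hot labels (rows outside train_mask may be zeros)
    train_mask: [N] boolean mask of labeled training nodes
    Returns soft pseudo labels for every node.
    """
    deg = adj.sum(dim=1).clamp(min=1.0)
    norm_adj = adj / deg.unsqueeze(1)                     # row-normalized adjacency
    y0 = y_onehot * train_mask.unsqueeze(1).float()       # keep only training labels
    y = y0.clone()
    for _ in range(num_iters):
        y = alpha * norm_adj @ y + (1.0 - alpha) * y0     # propagate, re-inject seeds
    return y
```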

Consistent Representation Learning for High Dimensional Data Analysis

no code implementations1 Dec 2020 Stan Z. Li, Lirong Wu, Zelin Zang

In this paper, we propose a novel neural network-based method, called Consistent Representation Learning (CRL), to accomplish the three associated tasks end-to-end and improve the consistencies.

Clustering Dimensionality Reduction +2

Deep Manifold Transformation for Nonlinear Dimensionality Reduction

no code implementations28 Oct 2020 Stan Z. Li, Zelin Zang, Lirong Wu

The LGP constraints constitute the loss for deep manifold learning and serve as geometric regularizers for NLDR network training.

Dimensionality Reduction

Invertible Manifold Learning for Dimension Reduction

1 code implementation7 Oct 2020 Siyuan Li, Haitao Lin, Zelin Zang, Lirong Wu, Jun Xia, Stan Z. Li

Dimension reduction (DR) aims to learn low-dimensional representations of high-dimensional data with the preservation of essential information.

Dimensionality Reduction

Deep Clustering and Representation Learning that Preserves Geometric Structures

no code implementations28 Sep 2020 Lirong Wu, Zicheng Liu, Zelin Zang, Jun Xia, Siyuan Li, Stan Z. Li

To overcome the problem that clustering-oriented losses may deteriorate the geometric structure of embeddings in the latent space, an isometric loss is proposed for preserving intra-manifold structure locally and a ranking loss for preserving inter-manifold structure globally.

Clustering Deep Clustering +1
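
A minimal sketch of an isometric (distance-preserving) loss in the spirit described above, computed here over all pairs rather than only local neighborhoods; names are illustrative:

```python
import torch

def isometric_loss(x, z):
    """Penalize distortion of pairwise distances between input and latent space.

    x: [N, D_in]  input points
    z: [N, D_lat] latent embeddings
    A value of zero means pairwise distances are preserved exactly.
    """
    dist_x = torch.cdist(x, x)   # [N, N] pairwise Euclidean distances in input space
    dist_z = torch.cdist(z, z)   # [N, N] pairwise distances in latent space
    return ((dist_x - dist_z) ** 2).mean()
```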

Generalized Clustering and Multi-Manifold Learning with Geometric Structure Preservation

1 code implementation21 Sep 2020 Lirong Wu, Zicheng Liu, Zelin Zang, Jun Xia, Siyuan Li, Stan Z. Li

Though manifold-based clustering has become a popular research topic, we observe that one important factor has been omitted by these works, namely that the defined clustering loss may corrupt the local and global structure of the latent space.

Clustering Deep Clustering +1

Markov-Lipschitz Deep Learning

2 code implementations15 Jun 2020 Stan Z. Li, Zelin Zang, Lirong Wu

We propose a novel framework, called Markov-Lipschitz deep learning (MLDL), to tackle geometric deterioration caused by collapse, twisting, or crossing in vector-based neural network transformations for manifold-based representation learning and manifold data generation.

Dimensionality Reduction Representation Learning +1

A GAN-based Tunable Image Compression System

no code implementations18 Jan 2020 Lirong Wu, Kejie Huang, Haibin Shen

The importance-map method has been widely adopted in DNN-based lossy image compression to allocate bits according to the importance of image contents.

Generative Adversarial Network Image Compression +2
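
The importance-map idea above can be sketched as masking latent channels per spatial location so that more important regions keep more channels, and therefore receive more bits; this is a generic illustration, not the paper's exact formulation:

```python
import torch

def apply_importance_map(latent, importance):
    """Mask latent channels according to a per-location importance map.

    latent:     [B, C, H, W] encoder output before quantization
    importance: [B, 1, H, W] values in [0, 1]; higher importance keeps more channels
    """
    b, c, h, w = latent.shape
    # Number of channels to keep at each spatial location.
    keep = (importance * c).ceil().clamp(1, c)          # [B, 1, H, W]
    channel_idx = torch.arange(c).view(1, c, 1, 1)
    mask = (channel_idx < keep).float()                 # [B, C, H, W]
    return latent * mask
```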

A Foreground-background Parallel Compression with Residual Encoding for Surveillance Video

no code implementations18 Jan 2020 Lirong Wu, Kejie Huang, Haibin Shen, Lianli Gao

In this paper, we propose a video compression method that extracts and compresses the foreground and background of the video separately.

Video Compression
