no code implementations • 10 Dec 2024 • Hao Chen, Kai Yi, Lin Liu, Yu Guang Wang
To enhance the scalability of score matching, we have developed a new parent-finding subroutine for leaf nodes in DAGs, significantly accelerating the most time-consuming part of the process: the pruning step.
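As a rough, generic illustration of a parent-finding step during pruning in score-based DAG discovery (not the subroutine proposed in this entry), the sketch below selects candidate parents of a leaf node by regressing the leaf on the remaining variables and keeping large coefficients; the least-squares rule and threshold are simplifying assumptions.

```python
# Minimal sketch: pick parents of a leaf node by regression + thresholding.
# This is a stand-in for illustration, not the paper's accelerated subroutine.
import numpy as np

def find_parents_of_leaf(X, leaf, candidates, threshold=0.1):
    """X: (n_samples, n_vars) data; leaf: column index of the current leaf;
    candidates: indices of variables still in the graph (excluding `leaf`)."""
    A = X[:, candidates]                       # candidate parent features
    y = X[:, leaf]                             # leaf values
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return [c for c, w in zip(candidates, coef) if abs(w) > threshold]

# toy example: x2 = 2*x0 + noise, x1 is irrelevant
rng = np.random.default_rng(0)
x0, x1 = rng.normal(size=(2, 500))
x2 = 2.0 * x0 + 0.1 * rng.normal(size=500)
X = np.column_stack([x0, x1, x2])
print(find_parents_of_leaf(X, leaf=2, candidates=[0, 1]))   # expected: [0]
```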
no code implementations • 9 Nov 2024 • Zan Chen, Yungeng Liu, Yu Guang Wang, Yiqing Shen
This study presents a comprehensive validation of TourSynbio-Agent through five diverse case studies spanning both computational (dry lab) and experimental (wet lab) protein engineering.
no code implementations • 9 Nov 2024 • Yungeng Liu, Zan Chen, Yu Guang Wang, Yiqing Shen
The exponential growth in protein-related databases and scientific literature, combined with increasing demands for efficient biological information retrieval, has created an urgent need for unified and accessible search methods in protein engineering research.
1 code implementation • 7 Nov 2024 • Yungeng Liu, Zan Chen, Yu Guang Wang, Yiqing Shen
By bridging the gap between DL and biologists' domain expertise, AutoPE empowers researchers to leverage DL without extensive programming knowledge.
1 code implementation • 4 Nov 2024 • Taoyu Wu, Yu Guang Wang, Yiqing Shen
Protein inverse folding aims to identify viable amino acid sequences that can fold into given protein structures, enabling the design of novel proteins with desired functions for applications in drug discovery, enzyme engineering, and biomaterial development.
1 code implementation • 4 Nov 2024 • Xiaozhu Yu, Kai Yi, Yu Guang Wang, Yiqing Shen
kcatDiffuser is a graph diffusion model guided by a regressor, enabling the prediction of amino acid mutations at multiple random positions simultaneously.
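A minimal sketch of the general regressor-guidance idea mentioned here: at each reverse-diffusion step, the sample is nudged along the gradient of a property regressor. The linear regressor, Gaussian noise, and placeholder denoiser below are toy assumptions; kcatDiffuser's actual graph diffusion model is not reproduced.

```python
# Toy regressor-guided reverse diffusion on a continuous vector.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)                    # toy property regressor: f(x) = w @ x

def regressor_grad(x):
    return w                              # gradient of the linear regressor

def guided_reverse_step(x_t, t, guidance_scale=0.5, noise_scale=0.1):
    """One toy reverse-diffusion update with regressor guidance."""
    x_denoised = x_t * (1.0 - 1.0 / (t + 1))            # placeholder denoiser
    x_guided = x_denoised + guidance_scale * regressor_grad(x_t)
    return x_guided + noise_scale * rng.normal(size=x_t.shape)

x = rng.normal(size=8)                    # start from noise
for t in reversed(range(10)):
    x = guided_reverse_step(x, t)
print("predicted property after guidance:", float(w @ x))
```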
no code implementations • 29 Aug 2024 • Chong Wang, Mengyao Li, Junjun He, Zhongruo Wang, Erfan Darzi, Zan Chen, Jin Ye, Tianbin Li, Yanzhou Su, Jing Ke, Kaili Qu, Shuxin Li, Yi Yu, Pietro Liò, Tianyun Wang, Yu Guang Wang, Yiqing Shen
To address these challenges, we also identify future research directions of LLM in biomedicine including federated learning methods to preserve data privacy and integrating explainable AI methodologies to enhance the transparency of LLMs.
1 code implementation • 27 Aug 2024 • Yiqing Shen, Zan Chen, Michail Mamalakis, Yungeng Liu, Tianbin Li, Yanzhou Su, Junjun He, Pietro Liò, Yu Guang Wang
While large language models (LLMs) have achieved much progress in the domain of natural language processing, their potential in protein engineering remains largely unexplored.
no code implementations • 9 Jun 2024 • Paulina Kulytė, Francisco Vargas, Simon Valentin Mathis, Yu Guang Wang, José Miguel Hernández-Lobato, Pietro Liò
Our model, DiffForce, employs forces to guide the diffusion sampling process, effectively blending the two distributions.
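As an illustration of blending two distributions inside a sampler (not DiffForce itself), the sketch below mixes a "learned" score with a force term in Langevin-style updates; both terms are toy Gaussian stand-ins and the mixing weight is an assumption.

```python
# Toy Langevin sampler whose drift blends a learned score with a force field.
import numpy as np

def learned_score(x):
    return -(x - 1.0)          # score of N(1, I), standing in for the model

def force_field(x):
    return -(x + 1.0)          # force from an energy centred at -1

def blended_sampling(steps=2000, step_size=0.01, lam=0.5, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    for _ in range(steps):
        drift = (1.0 - lam) * learned_score(x) + lam * force_field(x)
        x = x + step_size * drift + np.sqrt(2 * step_size) * rng.normal(size=dim)
    return x

print(blended_sampling())      # samples concentrate between the two modes
```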
1 code implementation • 8 Jun 2024 • Yiqing Shen, Zan Chen, Michail Mamalakis, Luhan He, Haiyang Xia, Tianbin Li, Yanzhou Su, Junjun He, Yu Guang Wang
The parallels between protein sequences and natural language in their sequential structures have inspired the application of large language models (LLMs) to protein understanding.
1 code implementation • 21 May 2024 • Keke Huang, Yu Guang Wang, Ming Li, Pietro Liò
Our extensive experiments, conducted on a diverse range of real-world and synthetic datasets with varying degrees of heterophily, support the superiority of UniFilter.
no code implementations • 21 Apr 2024 • Yiqing Shen, Outongyi Lv, Houying Zhu, Yu Guang Wang
Large language models (LLMs) have garnered considerable attention for their proficiency in tackling intricate tasks, particularly leveraging their capacities for zero-shot and in-context learning.
no code implementations • 18 Mar 2024 • Wei Duan, Jie Lu, Yu Guang Wang, Junyu Xuan
Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance.
no code implementations • 9 Nov 2023 • Jialin Chen, Yuelin Wang, Cristian Bodnar, Rex Ying, Pietro Lio, Yu Guang Wang
However, recursively aggregating neighboring information with graph convolutions leads to indistinguishable node features in deep layers, which is known as the over-smoothing issue.
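The over-smoothing effect described here can be seen numerically: repeatedly multiplying node features by a normalized adjacency matrix pulls all features toward a common value. The 4-node cycle graph below is an arbitrary toy choice.

```python
# Small demonstration of over-smoothing under repeated neighborhood aggregation.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
P = D_inv_sqrt @ A_hat @ D_inv_sqrt                # normalized propagation

X = np.random.default_rng(0).normal(size=(4, 3))   # random node features
for layer in range(1, 21):
    X = P @ X                                      # aggregation only, no weights
    if layer % 5 == 0:
        spread = np.linalg.norm(X - X.mean(axis=0), axis=1).max()
        print(f"layer {layer:2d}: max distance to mean = {spread:.4f}")
# the spread shrinks with depth, i.e. node features become indistinguishable
```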
1 code implementation • NeurIPS 2023 • Kai Yi, Bingxin Zhou, Yiqing Shen, Pietro Liò, Yu Guang Wang
In contrast, diffusion probabilistic models, as an emerging genre of generative approaches, offer the potential to generate a diverse set of sequence candidates for determined protein backbones.
no code implementations • 8 Jun 2023 • Yang Tan, Bingxin Zhou, Yuanhong Jiang, Yu Guang Wang, Liang Hong
Directed evolution plays an indispensable role in protein engineering, revising existing protein sequences to attain new or enhanced functions.
no code implementations • 13 Apr 2023 • Bingxin Zhou, Outongyi Lv, Kai Yi, Xinye Xiong, Pan Tan, Liang Hong, Yu Guang Wang
Directed evolution as a widely used engineering strategy faces obstacles in finding desired mutants within the massive space of candidate modifications.
no code implementations • 5 Apr 2023 • Xinye Xiong, Bingxin Zhou, Yu Guang Wang
Advances in deep learning models have revolutionized the study of biomolecule systems and their mechanisms.
2 code implementations • CVPR 2023 • Chenxin Xu, Robby T. Tan, Yuhong Tan, Siheng Chen, Yu Guang Wang, Xinchao Wang, Yanfeng Wang
In motion prediction tasks, maintaining motion equivariance under Euclidean geometric transformations and invariance of agent interaction is a critical and fundamental principle.
Ranked #1 on Human Pose Forecasting on HARPER
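As a quick check of the equivariance principle stated in this entry: a layer built only from relative (centred) coordinates commutes with rotations, i.e. f(XR) = f(X)R. The shrink-toward-centre "layer" below is purely illustrative and is not the paper's architecture.

```python
# Verify rotation equivariance of a toy layer that only uses relative coordinates.
import numpy as np

def equivariant_layer(X):
    centre = X.mean(axis=0)
    return centre + 0.5 * (X - centre)       # shrink agents toward their centre

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                  # 5 agents in 3-D

theta = 0.7                                  # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

lhs = equivariant_layer(X @ R.T)             # rotate, then apply the layer
rhs = equivariant_layer(X) @ R.T             # apply the layer, then rotate
print("equivariant:", np.allclose(lhs, rhs)) # True
```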
no code implementations • 28 Feb 2023 • Xinliang Liu, Bingxin Zhou, Chutian Zhang, Yu Guang Wang
Graph neural networks (GNNs) have achieved remarkable success in a wide range of applications.
no code implementations • 29 Dec 2022 • Mingchen Li, Liqi Kang, Yi Xiong, Yu Guang Wang, Guisheng Fan, Pan Tan, Liang Hong
Here, we develop SESNet, a supervised deep-learning model to predict the fitness for protein mutants by leveraging both sequence and structure information, and exploiting attention mechanism.
1 code implementation • 6 Aug 2022 • Pradeep Kr. Banerjee, Kedar Karhadkar, Yu Guang Wang, Uri Alon, Guido Montúfar
We compare the spectral expansion properties of our algorithm with those of an existing curvature-based non-local rewiring strategy.
no code implementations • 17 Jun 2022 • Kai Yi, Jialin Chen, Yu Guang Wang, Bingxin Zhou, Pietro Liò, Yanan Fan, Jan Hamann
This paper develops a rotation-invariant needlet convolution for rotation group SO(3) to distill multiscale information of spherical signals.
1 code implementation • 15 Jun 2022 • Yiqing Shen, Bingxin Zhou, Xinye Xiong, Ruitian Gao, Yu Guang Wang
Existing solutions heavily rely on convolutional neural networks (CNNs) for global pixel-level analysis, leaving the underlying local geometric structure such as the interaction between cells in the tumor microenvironment unexplored.
1 code implementation • 11 Jun 2022 • Yuelin Wang, Kai Yi, Xinliang Liu, Yu Guang Wang, Shi Jin
Neural message passing is a basic feature extraction unit for graph-structured data that incorporates neighboring node features when propagating information from one layer to the next.
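A bare-bones message passing layer in the sense described above: each node aggregates its neighbours' features and combines them with its own through a learnable transform. Sum aggregation and a single linear update are the simplest choices here; real GNN layers vary in both.

```python
# Minimal message passing layer with sum aggregation and a ReLU update.
import numpy as np

def message_passing_layer(A, X, W_self, W_neigh):
    """A: (n, n) adjacency, X: (n, d) features, W_*: (d, d_out) weights."""
    messages = A @ X                                   # sum over neighbours
    return np.maximum(0.0, X @ W_self + messages @ W_neigh)   # ReLU update

rng = np.random.default_rng(0)
n, d, d_out = 5, 4, 8
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                                 # undirected graph
X = rng.normal(size=(n, d))
H = message_passing_layer(A, X, rng.normal(size=(d, d_out)),
                          rng.normal(size=(d, d_out)))
print(H.shape)                                         # (5, 8)
```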
no code implementations • 1 Jun 2022 • Hao Chen, Yu Guang Wang, Huan Xiong
In particular, we obtain an optimal upper bound for the maximum number of linear regions for one-layer GCNs, and the upper and lower bounds for multi-layer GCNs.
1 code implementation • 30 May 2022 • Bingxin Zhou, Xuebin Zheng, Yu Guang Wang, Ming Li, Junbin Gao
Learning efficient graph representation is the key to favorably addressing downstream tasks on graphs, such as node or graph property prediction.
1 code implementation • 10 Feb 2022 • Bingxin Zhou, Yuanhong Jiang, Yu Guang Wang, Jingwei Liang, Junbin Gao, Shirui Pan, Xiaoqun Zhang
The performance of graph representation learning is affected by the quality of graph input.
1 code implementation • 5 Nov 2021 • Bingxin Zhou, Ruikun Li, Xuebin Zheng, Yu Guang Wang, Junbin Gao
As graph data collected from the real world is rarely noise-free, a practical representation of graphs should be robust to noise.
2 code implementations • NeurIPS 2021 • Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yu Guang Wang, Pietro Liò, Guido Montúfar, Michael Bronstein
Nevertheless, these models can be severely constrained by the rigid combinatorial structure of Simplicial Complexes (SCs).
Ranked #1 on Graph Regression on ZINC 100k
1 code implementation • 18 Jun 2021 • Yixin Liu, Shirui Pan, Yu Guang Wang, Fei Xiong, Liang Wang, Qingfeng Chen, Vincent CS Lee
Detecting anomalies in dynamic graphs has drawn increasing attention due to their wide applications in social networks, e-commerce, and cybersecurity.
no code implementations • ICLR Workshop GTRL 2021 • Bingxin Zhou, Xuebin Zheng, Yu Guang Wang, Ming Li, Junbin Gao
Geometric deep learning that employs the geometric and topological features of data has attracted increasing attention in deep neural networks.
2 code implementations • ICLR Workshop GTRL 2021 • Cristian Bodnar, Fabrizio Frasca, Yu Guang Wang, Nina Otter, Guido Montúfar, Pietro Liò, Michael Bronstein
The pairwise interaction paradigm of graph machine learning has predominantly governed the modelling of relational systems.
1 code implementation • 13 Feb 2021 • Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yu Guang Wang, Pietro Lio, Ming Li, Guido Montufar
The graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance in many node and graph prediction tasks.
1 code implementation • 12 Dec 2020 • Xuebin Zheng, Bingxin Zhou, Yu Guang Wang, Xiaosheng Zhuang
Graph representation learning has many real-world applications, ranging from super-resolution imaging and 3D computer vision to drug repurposing, protein classification, and social network analysis.
no code implementations • 19 Aug 2020 • Ming Li, Sho Sonoda, Feilong Cao, Yu Guang Wang, Jiye Liang
Despite the well-known fact that a neural network is a universal approximator, in this study, we mathematically show that when hidden parameters are distributed in a bounded domain, the network may not achieve zero approximation error.
no code implementations • 22 Jul 2020 • Xuebin Zheng, Bingxin Zhou, Ming Li, Yu Guang Wang, Junbin Gao
In this paper, we propose a framework for graph neural networks with multiresolution Haar-like wavelets, or MathNet, with interrelated convolution and pooling strategies.
no code implementations • 18 Jul 2020 • Guido Montúfar, Yu Guang Wang
Learning mappings of data on manifolds is an important topic in contemporary machine learning, with applications in astrophysics, geophysics, statistical physics, medical diagnosis, biochemistry, and 3D object analysis.
1 code implementation • NeurIPS 2020 • Zheng Ma, Junyu Xuan, Yu Guang Wang, Ming Li, Pietro Lio
Borrowing ideas from physics, we propose a path-integral-based graph neural network (PAN) for classification and regression tasks on graphs.
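A rough sketch of path-integral-style aggregation: node features are propagated along paths of every length up to L, with longer paths down-weighted, rather than over one-hop neighbours only. The exponential weights and row normalisation below are illustrative choices, not necessarily PAN's exact construction.

```python
# Aggregate node features over weighted powers of the adjacency matrix.
import numpy as np

def path_integral_propagation(A, X, L=3, temperature=1.0):
    n = A.shape[0]
    M = np.zeros_like(A)
    A_power = np.eye(n)
    for length in range(L + 1):
        M += np.exp(-length / temperature) * A_power   # weight paths by length
        A_power = A_power @ A
    M /= M.sum(axis=1, keepdims=True)                  # row-normalise
    return M @ X

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)
X = rng.normal(size=(6, 2))
print(path_integral_propagation(A, X))
```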
1 code implementation • 31 Jan 2020 • Kai Yi, Yi Guo, Yanan Fan, Jan Hamann, Yu Guang Wang
The noise of the CMB map has a significant impact on the estimation precision for cosmological parameters.
2 code implementations • 31 Jan 2020 • Nicole Hallett, Kai Yi, Josef Dick, Christopher Hodge, Gerard Sutton, Yu Guang Wang, Jingjing You
Currently, there is no cure for keratoconus (KC) other than corneal transplantation for advanced-stage disease or corneal cross-linking, which can only halt KC progression.
no code implementations • 6 Oct 2019 • Shao-Bo Lin, Yu Guang Wang, Ding-Xuan Zhou
This paper develops distributed filtered hyperinterpolation for noisy data on the sphere, which assigns the data fitting task to multiple servers to find a good approximation of the mapping between input and output data.
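A toy illustration of the distribute-then-average pattern described here: noisy samples of a target function are split across several "servers", each server fits its own least-squares approximation, and the global estimate averages the local fits. The spherical setting, filters, and quadrature rules of the paper are deliberately omitted; the 1-D polynomial fit is only an assumption for illustration.

```python
# Split noisy samples across servers, fit locally, and average the local models.
import numpy as np

def local_fit(x, y, degree=5):
    return np.polynomial.polynomial.polyfit(x, y, degree)

def distributed_fit(x, y, n_servers=4, degree=5):
    coef_sets = [local_fit(xs, ys, degree)
                 for xs, ys in zip(np.array_split(x, n_servers),
                                   np.array_split(y, n_servers))]
    return np.mean(coef_sets, axis=0)                 # average local models

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)   # noisy samples
coef = distributed_fit(x, y)
x_test = np.linspace(0.0, 1.0, 5)
print(np.polynomial.polynomial.polyval(x_test, coef))  # roughly tracks sin(2*pi*x_test)
```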
1 code implementation • ICML 2020 • Yu Guang Wang, Ming Li, Zheng Ma, Guido Montufar, Xiaosheng Zhuang, Yanan Fan
Deep Graph Neural Networks (GNNs) are useful models for graph classification and graph-based regression tasks.
no code implementations • 25 Sep 2019 • Yu Guang Wang, Ming Li, Zheng Ma, Guido Montufar, Xiaosheng Zhuang, Yanan Fan
The input of each pooling layer is transformed by the compressive Haar basis of the corresponding clustering.
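A simplified sketch of Haar-style pooling driven by a node clustering: the coarse (scaling) coefficients are within-cluster averages of node features, scaled by the square root of the cluster size, and become the features of the pooled graph. The full compressive Haar basis involves more structure; only the averaging idea is kept here.

```python
# Pool node features cluster-by-cluster via Haar-like scaling coefficients.
import numpy as np

def haar_like_pooling(X, clusters):
    """X: (n, d) node features; clusters: length-n array of cluster ids."""
    ids = np.unique(clusters)
    # scaling coefficient of a cluster = mean * sqrt(cluster size)
    return np.stack([X[clusters == c].mean(axis=0) * np.sqrt((clusters == c).sum())
                     for c in ids])

X = np.arange(12, dtype=float).reshape(6, 2)     # 6 nodes, 2 features
clusters = np.array([0, 0, 0, 1, 1, 2])          # 3 clusters from a coarsening
print(haar_like_pooling(X, clusters))            # 3 pooled node features
```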
2 code implementations • 31 Jul 2019 • Quoc T. Le Gia, Ming Li, Yu Guang Wang
The forward FaVeST evaluates the Fourier coefficients and has a computational cost proportional to $N\log \sqrt{N}$ for $N$ evaluation points.
no code implementations • 10 Jul 2019 • Ming Li, Zheng Ma, Yu Guang Wang, Xiaosheng Zhuang
Graph Neural Networks (GNNs) have become a topic of intense research recently due to their powerful capability in high-dimensional classification and regression tasks for graph-structured data.