Search Results for author: Yusu Wang

Found 36 papers, 15 papers with code

Comparing Graph Transformers via Positional Encodings

no code implementations 22 Feb 2024 Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang

The distinguishing power of graph transformers is closely tied to the choice of positional encoding: features used to augment the base transformer with information about the graph.


NN-Steiner: A Mixed Neural-algorithmic Approach for the Rectilinear Steiner Minimum Tree Problem

no code implementations 17 Dec 2023 Andrew B. Kahng, Robert R. Nerem, Yusu Wang, Chien-Yi Yang

On the methodology front, we propose NN-Steiner, a novel mixed neural-algorithmic framework for computing RSMTs that leverages Arora's celebrated PTAS framework to solve this problem (and other geometric optimization problems).

Combinatorial Optimization · Layout Design

Cycle Invariant Positional Encoding for Graph Representation Learning

1 code implementation 24 Nov 2023 Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Chao Chen, Yusu Wang

To efficiently encode the space of all cycles, we start with a cycle basis (i.e., a minimal set of cycles generating the cycle space), which we compute via the kernel of the 1-dimensional Hodge Laplacian of the input graph.

Graph Learning · Graph Representation Learning
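The cycle-basis construction in the abstract can be sketched with standard linear algebra: for a graph with no 2-cells, the 1-dimensional Hodge Laplacian reduces to L1 = B1^T B1, where B1 is the node-edge incidence (boundary) matrix, and its kernel is exactly the cycle space. A minimal illustration of that construction (not the paper's implementation):

```python
import numpy as np
from scipy.linalg import null_space

def cycle_space_basis(edges, n_nodes):
    """Basis of the cycle space via the kernel of the 1-D Hodge Laplacian.

    With no 2-cells, L1 = B1^T @ B1 for the node-edge incidence matrix B1,
    and ker(L1) = ker(B1) is the cycle space of the graph.
    """
    m = len(edges)
    B1 = np.zeros((n_nodes, m))
    for j, (u, v) in enumerate(edges):   # orient each edge u -> v
        B1[u, j] = -1.0
        B1[v, j] = 1.0
    L1 = B1.T @ B1                       # 1-dimensional Hodge Laplacian
    return null_space(L1)                # columns span the cycle space

# Triangle plus a pendant edge: one independent cycle (m - n + c = 4 - 4 + 1).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
basis = cycle_space_basis(edges, n_nodes=4)
print(basis.shape)  # (4, 1): one basis cycle over 4 edges
```

Note that the pendant (bridge) edge gets coefficient zero in every kernel vector, as expected for an edge lying on no cycle.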

Universal Representation of Permutation-Invariant Functions on Vectors and Tensors

no code implementations 20 Oct 2023 Puoya Tabaghi, Yusu Wang

Restricting the domain of the functions to finite multisets of $D$-dimensional vectors, Deep Sets also provides a \emph{universal approximation} that requires a latent space dimension of $O(N^D)$ -- where $N$ is an upper bound on the size of input multisets.
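The sum-decomposition behind such results is easy to illustrate: any function of the form rho(sum_i phi(x_i)) is permutation-invariant by construction, and Deep Sets shows functions of this form are universal. A toy sketch with hand-picked phi and rho (purely illustrative, not the paper's construction):

```python
import numpy as np

def deep_sets(X, phi, rho):
    """Permutation-invariant function on a multiset of vectors:
    f(X) = rho(sum_i phi(x_i)), the Deep Sets decomposition.
    Summation makes the output independent of input order."""
    return rho(sum(phi(x) for x in X))

phi = lambda x: np.concatenate([x, x ** 2])  # toy latent embedding
rho = lambda z: float(z.sum())               # toy readout

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
perm = X[[2, 0, 1]]  # same multiset, different order
print(deep_sets(X, phi, rho) == deep_sets(perm, phi, rho))  # True
```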

Distances for Markov Chains, and Their Differentiation

1 code implementation 16 Feb 2023 Tristan Brugère, Zhengchao Wan, Yusu Wang

Recently, in the graph learning and optimization communities, a range of new approaches have been developed for comparing graphs with node attributes, leveraging ideas such as the Optimal Transport (OT) and the Weisfeiler-Lehman (WL) graph isomorphism test.

Graph Learning

Understanding Oversquashing in GNNs through the Lens of Effective Resistance

1 code implementation 14 Feb 2023 Mitchell Black, Zhengchao Wan, Amir Nayyeri, Yusu Wang

We propose to use total effective resistance as a bound of the total amount of oversquashing in a graph and provide theoretical justification for its use.
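Effective resistance is computable from the Moore-Penrose pseudoinverse of the graph Laplacian, and the total effective resistance satisfies the standard identity R_tot = n * tr(L^+). A minimal numerical check of that identity (a generic sketch, not the paper's code):

```python
import numpy as np

def total_effective_resistance(A):
    """Total effective resistance of a connected graph with adjacency A.

    Pairwise, R_uv = (e_u - e_v)^T L^+ (e_u - e_v) for the Laplacian
    pseudoinverse L^+; summed over all pairs this equals n * trace(L^+).
    """
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian
    return n * np.trace(np.linalg.pinv(L))

# Path graph on 3 nodes: R(0,1) = R(1,2) = 1 and R(0,2) = 2, so R_tot = 4.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(total_effective_resistance(A))  # 4.0 (up to floating point)
```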

The Weisfeiler-Lehman Distance: Reinterpretation and Connection with GNNs

no code implementations 1 Feb 2023 Samantha Chen, Sunhyuk Lim, Facundo Mémoli, Zhengchao Wan, Yusu Wang

This new interpretation connects the WL distance to the literature on distances for stochastic processes, which also makes the interpretation of the distance more accessible and intuitive.

On the Connection Between MPNN and Graph Transformer

1 code implementation 27 Jan 2023 Chen Cai, Truong Son Hy, Rose Yu, Yusu Wang

Graph Transformer (GT) recently has emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Network (MPNN) on multiple benchmarks.

Graph Classification · Graph Learning · +2

Principal Component Analysis in Space Forms

1 code implementation 6 Jan 2023 Puoya Tabaghi, Michael Khanzadeh, Yusu Wang, Siavash Mirarab

Finding a low-dimensional Riemannian affine subspace for a set of points in a space form amounts to dimensionality reduction because, as we show, any such affine subspace is isometric to a space form of the same dimension and curvature.

Dimensionality Reduction

Implicit Graphon Neural Representation

1 code implementation 7 Nov 2022 Xinyue Xia, Gal Mishne, Yusu Wang

We also show that our model is suitable for graph representation learning and graph generation.

Graph Generation · Graph Representation Learning

Learning Ultrametric Trees for Optimal Transport Regression

1 code implementation 21 Oct 2022 Samantha Chen, Puoya Tabaghi, Yusu Wang

For measures supported in discrete metric spaces, finding the optimal transport distance has cubic time complexity in the size of the space.
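The cubic cost is what tree-based methods sidestep: once the supports are embedded in a tree (in particular an ultrametric tree), the Wasserstein-1 distance has a closed form computable in one linear pass over the edges. A sketch of that standard closed form, with a fixed tree rather than the learned ultrametric of the paper:

```python
def tree_wasserstein(parent, weight, mu, nu):
    """Wasserstein-1 distance between histograms mu, nu on a tree metric.

    Closed form: W_T(mu, nu) = sum over edges e of w_e * |mu(T_e) - nu(T_e)|,
    where T_e is the subtree below edge e. Nodes are assumed ordered so that
    parent[i] < i (node 0 is the root); weight[i] is the weight of the edge
    (parent[i], i). Runs in linear time in the number of nodes.
    """
    n = len(parent)
    mass = [mu[i] - nu[i] for i in range(n)]
    dist = 0.0
    for i in range(n - 1, 0, -1):   # sweep from leaves up to the root
        dist += weight[i] * abs(mass[i])
        mass[parent[i]] += mass[i]  # push net mass up the edge
    return dist

# Star with root 0 and leaves 1, 2 at unit edge weights: moving all mass
# from leaf 1 to leaf 2 travels through the root, total cost 2.
parent = [0, 0, 0]        # parent of node i (root's entry unused)
weight = [0.0, 1.0, 1.0]
mu = [0.0, 1.0, 0.0]
nu = [0.0, 0.0, 1.0]
print(tree_wasserstein(parent, weight, mu, nu))  # 2.0
```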


On the Convergence of Optimizing Persistent-Homology-Based Losses

no code implementations 6 Jun 2022 Yikai Zhang, Jiachen Yao, Yusu Wang, Chao Chen

Topological loss based on persistent homology has shown promise in various applications.

Weisfeiler-Lehman meets Gromov-Wasserstein

no code implementations 5 Feb 2022 Samantha Chen, Sunhyuk Lim, Facundo Mémoli, Zhengchao Wan, Yusu Wang

The WL distance is polynomial time computable and is also compatible with the WL test in the sense that the former is positive if and only if the WL test can distinguish the two involved graphs.

Isomorphism Testing · Test
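For reference, the 1-WL test the abstract refers to is plain color refinement: repeatedly relabel each node by its current color together with the multiset of its neighbors' colors, and declare the graphs distinguishable if the color histograms ever differ. A minimal sketch (illustrative, not the paper's code):

```python
def wl_test(adj1, adj2, iters=3):
    """1-WL color refinement over two graphs (adjacency lists).
    Returns False if the color histograms differ (graphs are certainly
    non-isomorphic); True means the test cannot distinguish them,
    which does not guarantee isomorphism."""
    c1, c2 = [0] * len(adj1), [0] * len(adj2)
    for _ in range(iters):
        s1 = [(c1[v], tuple(sorted(c1[u] for u in adj1[v]))) for v in adj1]
        s2 = [(c2[v], tuple(sorted(c2[u] for u in adj2[v]))) for v in adj2]
        # shared color palette so labels are comparable across graphs
        relabel = {s: i for i, s in enumerate(sorted(set(s1 + s2)))}
        c1 = [relabel[s] for s in s1]
        c2 = [relabel[s] for s in s2]
        if sorted(c1) != sorted(c2):
            return False
    return True

# Triangle vs. path: different degree sequences, separated in one round.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_test(triangle, path))  # False

# Classic failure case: a 6-cycle vs. two disjoint triangles are both
# 2-regular, so 1-WL cannot tell them apart.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_tri = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_test(c6, two_tri))  # True
```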

Neural Approximation of Graph Topological Features

1 code implementation 28 Jan 2022 Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Yusu Wang, Chao Chen

Topological features based on persistent homology capture high-order structural information so as to augment graph neural network methods.

Graph Learning · Graph Representation Learning · +1
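As background, the simplest persistent-homology computation that such topological features build on, 0-dimensional persistence of a sublevel-set filtration on a graph, is a union-find pass with the elder rule. A generic sketch of that computation (not the paper's neural approximation):

```python
def zeroth_persistence(values, edges):
    """0-dimensional persistence of a sublevel-set filtration on a graph.

    Each vertex is born at its function value; when an edge merges two
    components, the younger component dies (elder rule). Returns sorted
    (birth, death) pairs; the oldest component never dies (death = inf).
    """
    parent = list(range(len(values)))

    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    # an edge appears at the larger of its endpoint values
    for u, v in sorted(edges, key=lambda e: max(values[e[0]], values[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if values[ru] > values[rv]:      # elder rule: later birth dies
            ru, rv = rv, ru
        pairs.append((values[rv], max(values[u], values[v])))
        parent[rv] = ru
    pairs.append((min(values), float("inf")))
    return sorted(pairs)

# Path with vertex values 0 - 2 - 1: the component born at value 1 dies
# when edge (1, 2) appears at value 2; the one born at 0 lives forever.
print(zeroth_persistence([0.0, 2.0, 1.0], [(0, 1), (1, 2)]))
# [(0.0, inf), (1.0, 2.0), (2.0, 2.0)]
```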

Generative Coarse-Graining of Molecular Conformations

1 code implementation 28 Jan 2022 Wujie Wang, Minkai Xu, Chen Cai, Benjamin Kurt Miller, Tess Smidt, Yusu Wang, Jian Tang, Rafael Gómez-Bombarelli

Coarse-graining (CG) of molecular simulations simplifies the particle representation by grouping selected atoms into pseudo-beads and drastically accelerates simulation.

Convergence of Invariant Graph Networks

no code implementations 25 Jan 2022 Chen Cai, Yusu Wang

Building upon this result, we prove the convergence of $k$-IGN under the model of \citet{ruiz2020graphon}, where we access the edge weight but the convergence error is measured for graphon inputs.

NN-Baker: A Neural-network Infused Algorithmic Framework for Optimization Problems on Geometric Intersection Graphs

no code implementations NeurIPS 2021 Evan McCarty, Qi Zhao, Anastasios Sidiropoulos, Yusu Wang

This leads to a mixed algorithmic-ML framework, which we call NN-Baker, that has the capacity to approximately solve a family of graph optimization problems (e.g., maximum independent set and minimum vertex cover) in time linear in the input graph size and only polynomial in the approximation parameter.

Combinatorial Optimization

Equivariant geometric learning for digital rock physics: estimating formation factor and effective permeability tensors from Morse graph

no code implementations 12 Apr 2021 Chen Cai, Nikolaos Vlassis, Lucas Magee, Ran Ma, Zeyu Xiong, Bahador Bahmani, Teng-Fong Wong, Yusu Wang, WaiChing Sun

Comparisons between predictions from the trained CNN and those from graph convolutional neural networks (GNNs) with and without the equivariant constraint indicate that the equivariant graph neural network performs better than both the CNN and the GNN trained without enforcing equivariant constraints.

Topology-Aware Segmentation Using Discrete Morse Theory

no code implementations ICLR 2021 Xiaoling Hu, Yusu Wang, Li Fuxin, Dimitris Samaras, Chao Chen

In the segmentation of fine-scale structures from natural and biomedical images, per-pixel accuracy is not the only metric of concern.

Image Segmentation · Segmentation · +1

Graph Coarsening with Neural Networks

1 code implementation ICLR 2021 Chen Cai, Dingkang Wang, Yusu Wang

As large-scale graphs become increasingly prevalent, processing, extracting, and analyzing such graph data poses significant computational challenges.

A Note on Over-Smoothing for Graph Neural Networks

1 code implementation 23 Jun 2020 Chen Cai, Yusu Wang

In this paper, we build upon previous results \cite{oono2019graph} to further analyze the over-smoothing effect in the general graph neural network architecture.
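The over-smoothing effect itself is easy to reproduce numerically: under repeated GCN-style propagation, node features collapse onto the dominant eigenspace of the normalized adjacency, which is the phenomenon analyzed in \cite{oono2019graph}. A minimal demonstration (illustrative, not this paper's code):

```python
import numpy as np

def oversmoothing_demo(A, X, layers=20):
    """Illustrate over-smoothing: under repeated GCN-style propagation
    A_hat = D^{-1/2} (A + I) D^{-1/2}, node features converge to the
    1-dimensional dominant eigenspace spanned by D^{1/2} 1, so their
    distance to that subspace decays with network depth."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                  # add self-loops
    d = A_hat.sum(axis=1)
    A_hat = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization
    u = np.sqrt(d) / np.linalg.norm(np.sqrt(d))  # dominant eigenvector
    dists = []
    for _ in range(layers):
        X = A_hat @ X                      # one linear propagation step
        dists.append(np.linalg.norm(X - np.outer(u, u @ X)))
    return dists

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
dists = oversmoothing_demo(A, rng.standard_normal((4, 3)))
print(dists[0] > dists[-1])  # True: features collapse toward the subspace
```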

Understanding the Power of Persistence Pairing via Permutation Test

no code implementations 16 Jan 2020 Chen Cai, Yusu Wang

For shape segmentation and classification, however, we note that persistence pairing shows significant power on most of the benchmark datasets, improving over both summaries based merely on critical values and those based on permutation tests.

General Classification · Graph Classification · +2

Road Network Reconstruction from Satellite Images with Machine Learning Supported by Topological Methods

no code implementations 15 Sep 2019 Tamal K. Dey, Jiayuan Wang, Yusu Wang

Next, in a fully automatic framework, we leverage the power of the discrete-Morse based graph reconstruction algorithm to train a CNN from a collection of images without labelled data and use the same algorithm to produce the final output from the segmented images created by the trained CNN.

BIG-bench Machine Learning · Graph Reconstruction

Learning metrics for persistence-based summaries and applications for graph classification

1 code implementation NeurIPS 2019 Qi Zhao, Yusu Wang

In practice, however, the choice of the weight function should depend on the nature of the specific type of data one considers, and it is thus highly desirable to learn a best weight function (and thus a metric for persistence diagrams) from labelled data.

Graph Classification · Computational Geometry

A Topological Regularizer for Classifiers via Persistent Homology

no code implementations 27 Jun 2018 Chao Chen, Xiuyan Ni, Qinxun Bai, Yusu Wang

In particular, our measurement of topological complexity incorporates the importance of topological features (e.g., connected components, handles, and so on) in a meaningful manner, and provides direct control over spurious topological structures.

Graph Reconstruction by Discrete Morse Theory

1 code implementation 14 Mar 2018 Tamal K. Dey, Jiayuan Wang, Yusu Wang

Specifically, first, leveraging existing theoretical understanding of persistence-guided discrete Morse cancellation, we provide a simplified version of the existing discrete Morse-based graph reconstruction algorithm.

Computational Geometry

Unperturbed: spectral analysis beyond Davis-Kahan

no code implementations 20 Jun 2017 Justin Eldridge, Mikhail Belkin, Yusu Wang

Classical matrix perturbation results, such as Weyl's theorem for eigenvalues and the Davis-Kahan theorem for eigenvectors, are general purpose.


Beyond Hartigan Consistency: Merge Distortion Metric for Hierarchical Clustering

no code implementations 21 Jun 2015 Justin Eldridge, Mikhail Belkin, Yusu Wang

In this paper we identify two limit properties, separation and minimality, which address both over-segmentation and improper nesting and together imply (but are not implied by) Hartigan consistency.


Learning with Fredholm Kernels

no code implementations NeurIPS 2014 Qichao Que, Mikhail Belkin, Yusu Wang

In this paper we propose a framework for supervised and semi-supervised learning based on reformulating the learning problem as a regularized Fredholm integral equation.

Data Skeletonization via Reeb Graphs

no code implementations NeurIPS 2011 Xiaoyin Ge, Issam I. Safa, Mikhail Belkin, Yusu Wang

While such data is often high-dimensional, it is of interest to approximate it with a low-dimensional or even one-dimensional space, since many important aspects of data are often intrinsically low-dimensional.
