
no code implementations • 22 Feb 2024 • Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang

The distinguishing power of graph transformers is closely tied to the choice of positional encoding: features used to augment the base transformer with information about the graph.
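One common choice of positional encoding — given here as generic background, not necessarily one of the encodings analyzed in this paper — is the low-frequency eigenvectors of the graph Laplacian. A minimal numpy sketch:

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    """Return the k eigenvectors of the (unnormalized) graph Laplacian
    with the smallest nonzero eigenvalues, a common transformer PE."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigvals, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]               # skip the constant eigenvector

# 4-cycle graph
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_positional_encoding(adj, 2)
print(pe.shape)  # (4, 2): one 2-dimensional positional feature per node
```

Note the well-known sign ambiguity: each eigenvector is only defined up to a flip, which is one of the subtleties driving work on principled encodings.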

no code implementations • 14 Feb 2024 • Theodore Papamarkou, Tolga Birdal, Michael Bronstein, Gunnar Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Liò, Paolo Di Lorenzo, Vasileios Maroulas, Nina Miolane, Farzana Nasrin, Karthikeyan Natesan Ramamurthy, Bastian Rieck, Simone Scardapane, Michael T. Schaub, Petar Veličković, Bei Wang, Yusu Wang, Guo-Wei Wei, Ghada Zamzmi

Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models.

no code implementations • 17 Dec 2023 • Andrew B. Kahng, Robert R. Nerem, Yusu Wang, Chien-Yi Yang

On the methodology front, we propose NN-Steiner, a novel mixed neural-algorithmic framework for computing RSMTs that leverages the celebrated PTAS framework of Arora to solve this problem (and other geometric optimization problems).

1 code implementation • 24 Nov 2023 • Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Chao Chen, Yusu Wang

To efficiently encode the space of all cycles, we start with a cycle basis (i.e., a minimal set of cycles generating the cycle space) which we compute via the kernel of the 1-dimensional Hodge Laplacian of the input graph.
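As a small illustration of the underlying linear algebra (the paper works with a minimal cycle basis; the sketch below only recovers an orthonormal basis of the cycle space), for a graph with no 2-cells the kernel of the 1-dimensional Hodge Laplacian L1 = B1^T B1 is exactly the cycle space:

```python
import numpy as np

def cycle_space_basis(n, edges):
    """Kernel of the 1-dim Hodge Laplacian L1 = B1^T B1 (no 2-cells),
    which for a graph equals its cycle space."""
    B1 = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):     # oriented edge u -> v
        B1[u, j] = -1.0
        B1[v, j] = 1.0
    L1 = B1.T @ B1
    w, V = np.linalg.eigh(L1)
    return V[:, w < 1e-10]                 # eigenvectors with eigenvalue ~0

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # triangle plus a pendant edge
basis = cycle_space_basis(4, edges)
print(basis.shape[1])  # 1 independent cycle: m - n + 1 = 4 - 4 + 1
```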

no code implementations • 20 Oct 2023 • Puoya Tabaghi, Yusu Wang

Restricting the domain of the functions to finite multisets of $D$-dimensional vectors, Deep Sets also provides a \emph{universal approximation} that requires a latent space dimension of $O(N^D)$ -- where $N$ is an upper bound on the size of input multisets.
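The Deep Sets architecture referenced here has the sum-decomposable form f(X) = rho(sum_x phi(x)). A toy numpy sketch with random weights (purely illustrative, not the paper's construction), showing the permutation invariance that sum pooling provides:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 8))   # phi: per-element encoder weights (toy)
W2 = rng.standard_normal((8, 1))   # rho: post-pooling decoder weights (toy)

def deep_sets(X):
    """Deep Sets form f(X) = rho(sum_x phi(x)): sum pooling makes the
    output invariant to the order of the multiset elements."""
    phi = np.tanh(X @ W1)          # encode each element independently
    pooled = phi.sum(axis=0)       # permutation-invariant aggregation
    return float(pooled @ W2)      # decode the pooled representation

X = rng.standard_normal((5, 3))    # a multiset of five 3-dimensional vectors
perm = rng.permutation(5)
print(np.isclose(deep_sets(X), deep_sets(X[perm])))  # True
```

The question the paper addresses is how large the latent (pooled) dimension must be for such a decomposition to be exact or approximate.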

1 code implementation • 16 Feb 2023 • Tristan Brugère, Zhengchao Wan, Yusu Wang

Recently, in the graph learning and optimization communities, a range of new approaches have been developed for comparing graphs with node attributes, leveraging ideas such as the Optimal Transport (OT) and the Weisfeiler-Lehman (WL) graph isomorphism test.

1 code implementation • 14 Feb 2023 • Mitchell Black, Zhengchao Wan, Amir Nayyeri, Yusu Wang

We propose to use total effective resistance as a bound of the total amount of oversquashing in a graph and provide theoretical justification for its use.
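Total effective resistance is cheap to compute from the pseudoinverse of the graph Laplacian: the sum of effective resistances over all node pairs equals n · trace(L^+). A small numpy sketch (illustrative only, not the paper's rewiring method):

```python
import numpy as np

def total_effective_resistance(adj):
    """Sum of effective resistances over all node pairs, computed as
    n * trace(L^+) with L^+ the pseudoinverse of the graph Laplacian."""
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj
    lap_pinv = np.linalg.pinv(lap)
    return n * np.trace(lap_pinv)

# path graph 0 - 1 - 2: pairwise resistances 1, 1, 2 sum to 4
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(round(total_effective_resistance(adj), 6))  # 4.0
```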

no code implementations • 1 Feb 2023 • Samantha Chen, Sunhyuk Lim, Facundo Mémoli, Zhengchao Wan, Yusu Wang

This new interpretation connects the WL distance to the literature on distances for stochastic processes, which also makes the interpretation of the distance more accessible and intuitive.

1 code implementation • 27 Jan 2023 • Chen Cai, Truong Son Hy, Rose Yu, Yusu Wang

Graph Transformers (GTs) have recently emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Networks (MPNNs) on multiple benchmarks.

Ranked #8 on Node Classification on PascalVOC-SP

1 code implementation • 6 Jan 2023 • Puoya Tabaghi, Michael Khanzadeh, Yusu Wang, Siavash Mirarab

Finding a low-dimensional Riemannian affine subspace for a set of points in a space form amounts to dimensionality reduction because, as we show, any such affine subspace is isometric to a space form of the same dimension and curvature.

1 code implementation • 7 Nov 2022 • Xinyue Xia, Gal Mishne, Yusu Wang

We also show that our model is suitable for graph representation learning and graph generation.

1 code implementation • 31 Oct 2022 • Gal Mishne, Zhengchao Wan, Yusu Wang, Sheng Yang

Given the exponential growth of the volume of the ball w.r.t.

1 code implementation • 21 Oct 2022 • Samantha Chen, Puoya Tabaghi, Yusu Wang

For measures supported in discrete metric spaces, finding the optimal transport distance has cubic time complexity in the size of the space.
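The cubic cost refers to solving the exact optimal transport linear program. A standard fast approximation — mentioned here only as background, not as this paper's method — is entropy-regularized Sinkhorn iteration:

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.1, iters=500):
    """Entropy-regularized OT (Sinkhorn): alternately rescale rows and
    columns of K = exp(-cost/reg) to match the marginals a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]     # approximate transport plan
    return float((plan * cost).sum())      # approximate transport cost

a = np.array([0.5, 0.5])                   # source distribution
b = np.array([0.5, 0.5])                   # target distribution
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(round(sinkhorn(cost, a, b), 3))  # 0.0: mass stays in place
```

Each Sinkhorn step is a matrix-vector product, which is what makes it so much cheaper than the exact cubic-time solver.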

no code implementations • 6 Jun 2022 • Yikai Zhang, Jiachen Yao, Yusu Wang, Chao Chen

Topological loss based on persistent homology has shown promise in various applications.

no code implementations • 5 Feb 2022 • Samantha Chen, Sunhyuk Lim, Facundo Mémoli, Zhengchao Wan, Yusu Wang

The WL distance is polynomial time computable and is also compatible with the WL test in the sense that the former is positive if and only if the WL test can distinguish the two involved graphs.
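For reference, the 1-WL test itself is simple color refinement: iteratively re-color each node by hashing its color together with the multiset of its neighbors' colors, and compare the resulting color histograms. A toy sketch (of the classical test only, not the WL distance introduced in the paper):

```python
from collections import Counter

def wl_colors(adj_list, rounds=3):
    """1-WL color refinement: hash each node's color together with the
    sorted multiset of its neighbors' colors, for a few rounds."""
    colors = {v: 0 for v in adj_list}
    for _ in range(rounds):
        colors = {v: hash((colors[v],
                           tuple(sorted(colors[u] for u in adj_list[v]))))
                  for v in adj_list}
    return Counter(colors.values())        # color histogram of the graph

def wl_distinguishes(g1, g2, rounds=3):
    return wl_colors(g1, rounds) != wl_colors(g2, rounds)

# triangle vs. path on 3 nodes: different degree multisets, so 1-WL separates them
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_distinguishes(triangle, path))  # True
```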

1 code implementation • 28 Jan 2022 • Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Yusu Wang, Chao Chen

Topological features based on persistent homology capture high-order structural information so as to augment graph neural network methods.

1 code implementation • 28 Jan 2022 • Wujie Wang, Minkai Xu, Chen Cai, Benjamin Kurt Miller, Tess Smidt, Yusu Wang, Jian Tang, Rafael Gómez-Bombarelli

Coarse-graining (CG) of molecular simulations simplifies the particle representation by grouping selected atoms into pseudo-beads and drastically accelerates simulation.
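The grouping of atoms into pseudo-beads is usually expressed as a mapping matrix whose rows are normalized mass weights; bead coordinates are then mass-weighted averages. A generic numpy illustration of this standard CG operation (not this paper's learned coarse-graining):

```python
import numpy as np

# coarse-graining sketch: a mapping matrix M assigns each atom to a
# pseudo-bead; bead positions are mass-weighted averages of their atoms
positions = np.array([[0.0, 0.0], [1.0, 0.0],   # atoms of bead 0
                      [4.0, 0.0], [5.0, 0.0]])  # atoms of bead 1
masses = np.array([1.0, 1.0, 1.0, 3.0])
assign = np.array([0, 0, 1, 1])                 # atom -> bead index

M = np.zeros((2, 4))
M[assign, np.arange(4)] = masses
M = M / M.sum(axis=1, keepdims=True)            # normalize rows to sum to 1
beads = M @ positions                           # center of mass per bead
print(beads[1][0])  # 4.75, i.e. (4*1 + 5*3) / 4
```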

no code implementations • 25 Jan 2022 • Chen Cai, Yusu Wang

Building upon this result, we prove the convergence of $k$-IGN under the model of \citet{ruiz2020graphon}, where we access the edge weight but the convergence error is measured for graphon inputs.

no code implementations • NeurIPS 2021 • Evan McCarty, Qi Zhao, Anastasios Sidiropoulos, Yusu Wang

This leads to a mixed algorithmic-ML framework, which we call NN-Baker, that can approximately solve a family of graph optimization problems (e.g., maximum independent set and minimum vertex cover) in time linear in the input graph size and only polynomial in the approximation parameter.

no code implementations • 12 Apr 2021 • Chen Cai, Nikolaos Vlassis, Lucas Magee, Ran Ma, Zeyu Xiong, Bahador Bahmani, Teng-Fong Wong, Yusu Wang, WaiChing Sun

Comparisons among predictions from the trained CNN and from graph neural networks (GNNs) with and without the equivariance constraint indicate that the equivariant graph neural network performs better than both the CNN and the GNN trained without the constraint.

no code implementations • ICLR 2021 • Xiaoling Hu, Yusu Wang, Li Fuxin, Dimitris Samaras, Chao Chen

In the segmentation of fine-scale structures from natural and biomedical images, per-pixel accuracy is not the only metric of concern.

1 code implementation • ICLR 2021 • Chen Cai, Dingkang Wang, Yusu Wang

As large-scale graphs become increasingly prevalent, they pose significant computational challenges for processing, extracting, and analyzing graph data.

1 code implementation • 23 Jun 2020 • Chen Cai, Yusu Wang

In this paper, we build upon previous results \cite{oono2019graph} to further analyze the over-smoothing effect in the general graph neural network architecture.
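A minimal linear sketch of the over-smoothing effect (illustrative only, with no learned weights or nonlinearities): repeated propagation with the symmetric-normalized adjacency drives all node features onto the dominant eigenvector, erasing the distinctions between nodes.

```python
import numpy as np

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
A = adj + np.eye(4)                        # add self-loops
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))        # D^{-1/2} (A + I) D^{-1/2}

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2))            # random initial node features
for _ in range(50):
    X = A_hat @ X                          # linear propagation, no weights

v = np.sqrt(d) / np.linalg.norm(np.sqrt(d))   # dominant eigenvector of A_hat
residual = X - np.outer(v, v @ X)             # feature mass off that direction
print(np.linalg.norm(residual) < 1e-6)  # True: features have collapsed
```

All non-dominant components shrink geometrically at rate |lambda_2| < 1, which is the basic mechanism the over-smoothing analyses quantify.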

no code implementations • 20 Mar 2020 • Dingkang Wang, Lucas Magee, Bing-Xing Huo, Samik Banerjee, Xu Li, Jaikishan Jayakumar, Meng Kuan Lin, Keerthi Ram, Suyi Wang, Yusu Wang, Partha P. Mitra

Neuroscientific data analysis has traditionally relied on linear algebra and stochastic process theory.

no code implementations • 16 Jan 2020 • Chen Cai, Yusu Wang

For shape segmentation and classification, however, we note that persistence pairing shows significant power on most of the benchmark datasets, and improves over both summaries based on merely critical values, and those based on permutation tests.

no code implementations • 15 Sep 2019 • Tamal K. Dey, Jiayuan Wang, Yusu Wang

Next, in a fully automatic framework, we leverage the discrete-Morse-based graph reconstruction algorithm to train a CNN from a collection of images without labelled data, and then use the same algorithm to produce the final output from the segmented images created by the trained CNN.

1 code implementation • NeurIPS 2019 • Qi Zhao, Yusu Wang

However often in practice, the choice of the weight function should depend on the nature of the specific type of data one considers, and it is thus highly desirable to learn a best weight function (and thus metric for persistence diagrams) from labelled data.

Ranked #1 on Graph Classification on NCI109

Graph Classification • Computational Geometry

2 code implementations • 8 Nov 2018 • Chen Cai, Yusu Wang

We test our baseline representation for the graph classification task on a range of graph datasets.

Ranked #22 on Graph Classification on MUTAG

no code implementations • 27 Jun 2018 • Chao Chen, Xiuyan Ni, Qinxun Bai, Yusu Wang

In particular, our measurement of topological complexity incorporates the importance of topological features (e.g., connected components, handles, and so on) in a meaningful manner, and provides direct control over spurious topological structures.

1 code implementation • 14 Mar 2018 • Tamal K. Dey, Jiayuan Wang, Yusu Wang

Specifically, first, leveraging existing theoretical understanding of persistence-guided discrete Morse cancellation, we provide a simplified version of the existing discrete Morse-based graph reconstruction algorithm.

Computational Geometry

no code implementations • ICML 2017 • Xiuyan Ni, Novi Quadrianto, Yusu Wang, Chao Chen

Clustering data with both continuous and discrete attributes is a challenging task.

no code implementations • 20 Jun 2017 • Justin Eldridge, Mikhail Belkin, Yusu Wang

Classical matrix perturbation results, such as Weyl's theorem for eigenvalues and the Davis-Kahan theorem for eigenvectors, are general purpose.
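Weyl's theorem states that for symmetric A and symmetric perturbation E, the i-th eigenvalue moves by at most the spectral norm of E: |lambda_i(A + E) - lambda_i(A)| <= ||E||_2. A quick numerical check of the inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                      # symmetric matrix
N = 0.01 * rng.standard_normal((6, 6))
E = (N + N.T) / 2                      # small symmetric perturbation

# eigvalsh returns eigenvalues in ascending order for both matrices,
# so entrywise differences pair the i-th eigenvalues correctly
shift = np.abs(np.linalg.eigvalsh(A + E) - np.linalg.eigvalsh(A)).max()
print(shift <= np.linalg.norm(E, 2))   # True, as Weyl's theorem guarantees
```

The point of the paper's refinement is that such general-purpose bounds can be loose for structured perturbations, e.g. those arising from sampled graphs.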

no code implementations • NeurIPS 2016 • Justin Eldridge, Mikhail Belkin, Yusu Wang

In this work we develop a theory of hierarchical clustering for graphs.

no code implementations • 21 Jun 2015 • Justin Eldridge, Mikhail Belkin, Yusu Wang

In this paper we identify two limit properties, separation and minimality, which address both over-segmentation and improper nesting and together imply (but are not implied by) Hartigan consistency.

no code implementations • NeurIPS 2014 • Qichao Que, Mikhail Belkin, Yusu Wang

In this paper we propose a framework for supervised and semi-supervised learning based on reformulating the learning problem as a regularized Fredholm integral equation.
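After discretization on the training sample, a regularized Fredholm integral equation of the second kind reduces to a linear solve closely resembling kernel ridge regression. A sketch under that analogy (an illustration, not necessarily the paper's exact formulation):

```python
import numpy as np

def kernel_ridge(X, y, lam=0.1, gamma=1.0):
    """Solve (K + lam*n*I) alpha = y with a Gaussian kernel matrix K:
    the discrete analogue of a regularized Fredholm equation."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # Gaussian kernel matrix
    n = len(y)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return K, alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0])
K, alpha = kernel_ridge(X, y, lam=1e-3)
pred = K @ alpha
print(np.mean((pred - y) ** 2) < np.mean(y ** 2))  # fit beats the zero predictor
```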

no code implementations • NeurIPS 2011 • Xiaoyin Ge, Issam I. Safa, Mikhail Belkin, Yusu Wang

While such data is often high-dimensional, it is of interest to approximate it with a low-dimensional or even one-dimensional space, since many important aspects of data are often intrinsically low-dimensional.

Papers With Code is a free resource with all data licensed under CC-BY-SA.