no code implementations • 27 Nov 2023 • Sam Leone, Xingzhi Sun, Michael Perlmutter, Smita Krishnaswamy
In particular, we present algorithms for the cases where the signal is perturbed by Gaussian noise, dropout, and uniformly distributed noise.
no code implementations • 26 Oct 2023 • Charles Xu, Laney Goldman, Valentina Guo, Benjamin Hollander-Bodie, Maedee Trank-Greene, Ian Adelstein, Edward De Brouwer, Rex Ying, Smita Krishnaswamy, Michael Perlmutter
We make several crucial changes to the original geometric scattering architecture, which we prove increase the network's ability to capture information about the input signal, and we show that BLIS-Net achieves superior performance on both synthetic and real-world data sets based on traffic flow and fMRI data.
no code implementations • 18 Sep 2023 • Dhananjay Bhaskar, Yanlei Zhang, Charles Xu, Xingzhi Sun, Oluwadamilola Fasina, Guy Wolf, Maximilian Nickel, Michael Perlmutter, Smita Krishnaswamy
In this paper, we propose Graph Differential Equation Network (GDeNet), an approach that harnesses the expressive power of solutions to PDEs on a graph to obtain continuous node- and graph-level representations for various downstream tasks.
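The excerpt does not specify which PDEs GDeNet uses, so as a minimal illustration only: the sketch below solves the graph heat equation dx/dt = -Lx, whose closed-form solution x(t) = exp(-tL) x(0) gives a continuous, t-indexed family of node representations. The heat equation and the function name are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def heat_solution(A, x0, times):
    # Graph Laplacian L = D - A; the heat equation dx/dt = -L x has
    # the closed-form solution x(t) = exp(-t L) x0, computed here via
    # the eigendecomposition of the symmetric Laplacian.
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)                 # valid for undirected A
    coeffs = V.T @ x0                        # expand x0 in the eigenbasis
    return np.stack([V @ (np.exp(-t * w) * coeffs) for t in times])
```

Evaluating `heat_solution` on a grid of times yields node features that interpolate smoothly between the raw signal (t = 0) and its graph-wide average (t large), since heat flow conserves total mass.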
no code implementations • 14 Sep 2023 • Aarthi Venkat, Joyce Chew, Ferran Cardoso Rodriguez, Christopher J. Tape, Michael Perlmutter, Smita Krishnaswamy
We show this method outperforms numerous others on tasks such as embedding directed graphs and learning cellular signaling networks.
no code implementations • 31 Jul 2023 • Kincaid MacDonald, Dhananjay Bhaskar, Guy Thampakkul, Nhi Nguyen, Joia Zhang, Michael Perlmutter, Ian Adelstein, Smita Krishnaswamy
Existing embedding techniques either do not utilize velocity information or embed the coordinates and velocities independently, i.e., they either impose velocities on top of an existing point embedding or embed points within a prescribed vector field.
1 code implementation • 8 Jul 2023 • Joyce Chew, Edward De Brouwer, Smita Krishnaswamy, Deanna Needell, Michael Perlmutter
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs); this framework aims to further our understanding of MNNs, analogous to the way the aggregate-combine framework has furthered the understanding of graph neural networks (GNNs).
no code implementations • 23 Dec 2022 • Joyce Chew, Deanna Needell, Michael Perlmutter
Moreover, in this work, we provide a numerical scheme for implementing such neural networks when the manifold is unknown and one only has access to finitely many sample points.
no code implementations • 17 Aug 2022 • Joyce Chew, Matthew Hirn, Smita Krishnaswamy, Deanna Needell, Michael Perlmutter, Holly Steach, Siddharth Viswanath, Hau-Tieng Wu
Our proposed framework includes previous work on geometric scattering as special cases but also applies to more general settings such as directed graphs, signed graphs, and manifolds with boundary.
no code implementations • 15 Aug 2022 • Alexander Tong, Frederik Wenkel, Dhananjay Bhaskar, Kincaid MacDonald, Jackson Grady, Michael Perlmutter, Smita Krishnaswamy, Guy Wolf
We propose a new graph neural network (GNN) module, based on relaxations of recently proposed geometric scattering transforms, which consist of a cascade of graph wavelet filters.
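As a sketch of the wavelet cascade underlying geometric scattering (not the proposed relaxed GNN module itself), the snippet below builds diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j) from a lazy random-walk operator P and computes first- and second-order scattering features; the function names and the use of plain sums as the final aggregation are illustrative assumptions.

```python
import numpy as np

def diffusion_wavelets(A, J):
    # Lazy random-walk operator P = (I + A D^{-1}) / 2 (column-stochastic).
    d = A.sum(axis=0)
    P = 0.5 * (np.eye(len(A)) + A / d)       # A / d scales column j by 1/d_j
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    # Psi_0 = I - P captures the finest scale; Psi_j = P^{2^{j-1}} - P^{2^j}.
    filters = [np.eye(len(A)) - P]
    filters += [powers[j - 1] - powers[j] for j in range(1, J + 1)]
    return filters

def scattering(A, x, J):
    # Cascade of wavelet filterings and pointwise absolute values,
    # aggregated here by simple summation over nodes.
    psis = diffusion_wavelets(A, J)
    first = [np.abs(psi @ x) for psi in psis]
    feats = [u.sum() for u in first]                               # order 1
    feats += [np.abs(psi2 @ u).sum() for u in first for psi2 in psis]  # order 2
    return np.array(feats)
```

With J scales this produces (J + 1) first-order and (J + 1)^2 second-order features per signal, each invariant to node reordering.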
1 code implementation • 21 Jun 2022 • Joyce Chew, Holly R. Steach, Siddharth Viswanath, Hau-Tieng Wu, Matthew Hirn, Deanna Needell, Smita Krishnaswamy, Michael Perlmutter
The manifold scattering transform is a deep feature extractor for data defined on a Riemannian manifold.
1 code implementation • 15 Jun 2022 • Renming Liu, Semih Cantürk, Frederik Wenkel, Sarah McGuire, Xinyi Wang, Anna Little, Leslie O'Bray, Michael Perlmutter, Bastian Rieck, Matthew Hirn, Guy Wolf, Ladislav Rampášek
Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry.
1 code implementation • 3 Jun 2022 • Yimeng Min, Frederik Wenkel, Michael Perlmutter, Guy Wolf
We propose a geometric scattering-based graph neural network (GNN) for approximating solutions of the NP-hard maximum clique (MC) problem.
no code implementations • 22 Jan 2022 • Frederik Wenkel, Yimeng Min, Matthew Hirn, Michael Perlmutter, Guy Wolf
We further introduce an attention framework that allows the model to locally attend over combined information from different filters at the node level.
no code implementations • 27 Oct 2021 • Renming Liu, Semih Cantürk, Frederik Wenkel, Dylan Sandfelder, Devin Kreuzer, Anna Little, Sarah McGuire, Leslie O'Bray, Michael Perlmutter, Bastian Rieck, Matthew Hirn, Guy Wolf, Ladislav Rampášek
Graph neural networks (GNNs) have attracted much attention due to their ability to leverage the intrinsic geometries of the underlying data.
no code implementations • 10 Oct 2021 • Michael Perlmutter, Jieqian He, Mark Iwen, Matthew Hirn
We also show that the Gabor measurements used in the second layer can be used to synthesize sparse signals such as those produced by the first layer.
1 code implementation • 7 Oct 2021 • Andrew Sack, Wenzhao Jiang, Michael Perlmutter, Palina Salanevich, Deanna Needell
We propose a method for noise reduction, the task of producing a clean audio signal from a recording corrupted by additive noise.
1 code implementation • NeurIPS 2021 • Xitong Zhang, Yixuan He, Nathan Brugnone, Michael Perlmutter, Matthew Hirn
In this paper, we propose MagNet, a spectral GNN for directed graphs based on a complex Hermitian matrix known as the magnetic Laplacian.
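The magnetic Laplacian itself is a standard construction: symmetrize the adjacency matrix, then encode edge direction as a complex phase controlled by a charge parameter q. A minimal NumPy sketch (the function name is hypothetical; MagNet's spectral convolutions built on top of this matrix are not shown):

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    # Symmetrized adjacency, with edge direction stored as a phase.
    A_s = 0.5 * (A + A.T)
    theta = 2 * np.pi * q * (A - A.T)
    H = A_s * np.exp(1j * theta)     # complex Hermitian "magnetic" adjacency
    D_s = np.diag(A_s.sum(axis=1))
    return D_s - H                   # Hermitian and positive semi-definite
```

Because L is Hermitian, its eigenvalues are real and non-negative, so the usual machinery of spectral graph convolution applies while the eigenvectors' phases retain directional information.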
1 code implementation • 14 Nov 2019 • Michael Perlmutter, Alexander Tong, Feng Gao, Guy Wolf, Matthew Hirn
As a result, the proposed construction unifies and extends known theoretical results for many of the existing graph scattering architectures.
no code implementations • 24 May 2019 • Michael Perlmutter, Feng Gao, Guy Wolf, Matthew Hirn
The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of convolutional neural networks.
no code implementations • 10 Feb 2019 • Michael Perlmutter, Jieqian He, Matthew Hirn
We present a machine learning model for the analysis of randomly generated discrete signals, modeled as the points of an inhomogeneous, compound Poisson point process.
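As an illustrative sketch of the data model only (not the proposed machine learning model), the snippet below samples an inhomogeneous compound Poisson point process by Lewis-Shedler thinning: simulate a homogeneous process at a dominating rate, keep each point with probability proportional to the local intensity, and attach i.i.d. marks. The intensity, horizon, and mark distribution are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson_signal(lam, lam_max, T, mark_sampler):
    # Thinning: draw a homogeneous Poisson process on [0, T] at rate
    # lam_max >= lam(t), then accept each candidate t with prob lam(t)/lam_max.
    n = rng.poisson(lam_max * T)
    candidates = rng.uniform(0.0, T, size=n)
    keep = rng.uniform(size=n) < lam(candidates) / lam_max
    times = np.sort(candidates[keep])
    marks = mark_sampler(len(times))     # i.i.d. marks make it "compound"
    return times, marks
```

A discrete signal is then obtained by evaluating the marked points on a grid, e.g. binning the marks into histogram cells over [0, T].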
no code implementations • 15 Dec 2018 • Michael Perlmutter, Guy Wolf, Matthew Hirn
The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of the success of convolutional neural networks (ConvNets) in image data analysis and other tasks.