Search Results for author: David van Dijk

Found 21 papers, 12 papers with code

Operator Learning Meets Numerical Analysis: Improving Neural Networks through Iterative Methods

no code implementations · 2 Oct 2023 · Emanuele Zappala, Daniel Levine, Sizhuang He, Syed Rizvi, Sacha Levy, David van Dijk

Deep neural networks, despite their success in numerous applications, often function without established theoretical foundations.

Operator learning

Continuous Spatiotemporal Transformers

1 code implementation · 31 Jan 2023 · Antonio H. de O. Fonseca, Emanuele Zappala, Josue Ortega Caro, David van Dijk

Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning.

Neural Integral Equations

1 code implementation · 30 Sep 2022 · Emanuele Zappala, Antonio Henrique de Oliveira Fonseca, Josue Ortega Caro, David van Dijk

In this paper, we introduce Neural Integral Equations (NIE), a method that learns an unknown integral operator from data through an integral equation (IE) solver.
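
A minimal sketch of the core idea (not the authors' implementation; assuming PyTorch, with an illustrative kernel_net parameterizing the integral operator): solve y(t) = f(t) + ∫ K(t, s) y(s) ds by fixed-point iteration on a discretized grid.

import torch
import torch.nn as nn

class NeuralIE(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # kernel_net maps a (t, s) pair to the kernel value K(t, s)
        self.kernel_net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, f, t, n_iter=20):
        # f, t: (n,) tensors on a uniform grid; returns an approximate
        # fixed point y of y = f + ∫ K(t, s) y(s) ds
        grid = torch.stack(torch.meshgrid(t, t, indexing="ij"), dim=-1)
        K = self.kernel_net(grid).squeeze(-1)   # (n, n) kernel matrix
        dt = t[1] - t[0]
        y = f.clone()
        for _ in range(n_iter):                 # Picard iteration
            y = f + (K @ y) * dt                # quadrature of the integral
        return y

t = torch.linspace(0.0, 1.0, 50)
y = NeuralIE()(torch.sin(2 * torch.pi * t), t)  # (50,)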

Neural Integro-Differential Equations

1 code implementation · 28 Jun 2022 · Emanuele Zappala, Antonio Henrique de Oliveira Fonseca, Andrew Henry Moberly, Michael James Higley, Chadi Abdallah, Jessica Cardin, David van Dijk

Further, we show that NIDE can decompose dynamics into their Markovian and non-Markovian constituents via the learned integral operator, which we test on fMRI brain activity recordings of people on ketamine.
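
A minimal sketch of the setup (assuming PyTorch; not the paper's solver): integrate dy/dt = F(y) + ∫_0^t K(t, s) y(s) ds with forward Euler, so the learned local term F (the Markovian part) and the memory kernel K (the non-Markovian part) are explicit and separable.

import torch
import torch.nn as nn

class NIDE(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))   # Markovian term
        self.K = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))     # memory kernel K(t, s)

    def forward(self, y0, t):
        ys, dt = [y0], t[1] - t[0]
        for i in range(1, len(t)):
            hist = torch.stack(ys)                             # (i, dim) past states
            ts = torch.stack([t[i].expand(i), t[:i]], dim=-1)  # pairs (t_i, s)
            memory = (self.K(ts) * hist).sum(0) * dt           # ∫_0^t K y ds
            ys.append(ys[-1] + dt * (self.F(ys[-1]) + memory))
        return torch.stack(ys)

y = NIDE(dim=3)(torch.zeros(3), torch.linspace(0, 1, 100))     # (100, 3)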

Permutation invariant networks to learn Wasserstein metrics

1 code implementation · NeurIPS Workshop TDA_and_Beyond 2020 · Arijit Sehanobish, Neal Ravindra, David van Dijk

In this work, we use a permutation invariant network to map samples from probability measures into a low-dimensional space such that the Euclidean distance between the encoded samples reflects the Wasserstein distance between probability measures.
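
A minimal sketch of the approach (assuming PyTorch; W_target stands in for a precomputed Wasserstein distance, e.g. from the POT library): a DeepSets-style encoder pools over points, making it permutation invariant, and is trained so embedding distances match the target metric.

import torch
import torch.nn as nn

class SetEncoder(nn.Module):   # permutation invariant via mean pooling
    def __init__(self, dim, hidden=64, out=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out))

    def forward(self, X):      # X: (n_points, dim) sample from one measure
        return self.rho(self.phi(X).mean(0))

enc = SetEncoder(dim=2)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
X, Y = torch.randn(100, 2), torch.randn(100, 2) + 1.0
W_target = torch.tensor(1.9)   # hypothetical precomputed W(X, Y)
loss = (torch.norm(enc(X) - enc(Y)) - W_target).pow(2)
loss.backward(); opt.step()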

Self-supervised edge features for improved Graph Neural Network training

1 code implementation · 23 Jun 2020 · Arijit Sehanobish, Neal G. Ravindra, David van Dijk

In recent years, there has been substantial work on incorporating edge features alongside node features for prediction tasks.

General Classification, Graph Attention +2

Gaining Insight into SARS-CoV-2 Infection and COVID-19 Severity Using Self-supervised Edge Features and Graph Neural Networks

1 code implementation · 23 Jun 2020 · Arijit Sehanobish, Neal G. Ravindra, David van Dijk

A molecular and cellular understanding of how SARS-CoV-2 variably infects and causes severe COVID-19 remains a bottleneck in developing interventions to end the pandemic.

Explainable Artificial Intelligence (XAI), General Classification +3

Learning Potentials of Quantum Systems using Deep Neural Networks

1 code implementation · 23 Jun 2020 · Arijit Sehanobish, Hector H. Corzo, Onur Kara, David van Dijk

Attempts to apply neural networks (NNs) to a wide range of research problems have been ubiquitous in the recent literature.

Disease State Prediction From Single-Cell Data Using Graph Attention Networks

1 code implementation · 14 Feb 2020 · Neal G. Ravindra, Arijit Sehanobish, Jenna L. Pappalardo, David A. Hafler, David van Dijk

To the best of our knowledge, this is the first effort to use graph attention, and deep learning in general, to predict disease state from single-cell data.

Disease Prediction, Graph Attention +1

TrajectoryNet: A Dynamic Optimal Transport Network for Modeling Cellular Dynamics

2 code implementations · ICML 2020 · Alexander Tong, Jessie Huang, Guy Wolf, David van Dijk, Smita Krishnaswamy

To address this issue, we establish a link between continuous normalizing flows and dynamic optimal transport, which allows us to model the expected paths of points over time.
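
A minimal sketch of that link (assuming PyTorch; not the TrajectoryNet code): a learned velocity field v(x, t) transports source points, and penalizing the kinetic energy ∫ ||v||^2 dt encourages straight, optimal-transport-like paths, per the dynamic (Benamou-Brenier) formulation of OT.

import torch
import torch.nn as nn

v = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))  # v(x, t)

def flow(x, n_steps=10):
    energy, dt = 0.0, 1.0 / n_steps
    for i in range(n_steps):                    # Euler integration of the flow
        t = torch.full((x.shape[0], 1), i * dt)
        vel = v(torch.cat([x, t], dim=-1))
        energy = energy + vel.pow(2).sum(-1).mean() * dt  # kinetic energy
        x = x + vel * dt
    return x, energy

x0 = torch.randn(256, 2)                        # source snapshot
x1, energy = flow(x0)
# total loss would be: fit of x1 to the target snapshot + lam * energy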

Beyond GANs: Transforming without a Target Distribution

no code implementations · 25 Sep 2019 · Matthew Amodio, David van Dijk, Ruth Montgomery, Guy Wolf, Smita Krishnaswamy

While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require exactly such a paired set of input and output datasets.

Generative Adversarial Network

Graph Spectral Regularization For Neural Network Interpretability

no code implementations · ICLR 2019 · Alexander Tong, David van Dijk, Jay Stanley, Guy Wolf, Smita Krishnaswamy

First, we show on a synthetic example that the graph-structured layer can reveal topological features of the data.

Compressed Diffusion

no code implementations · 31 Jan 2019 · Scott Gigante, Jay S. Stanley III, Ngan Vu, David van Dijk, Kevin Moon, Guy Wolf, Smita Krishnaswamy

Diffusion maps are a commonly used kernel-based method for manifold learning, which can reveal intrinsic structures in data and embed them in low dimensions.
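
For reference, a minimal NumPy sketch of the standard diffusion map that such methods build on: a Gaussian affinity kernel is normalized into a Markov matrix whose leading nontrivial eigenvectors give the embedding.

import numpy as np

def diffusion_map(X, sigma=1.0, n_components=2, t=1):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-D2 / (2 * sigma ** 2))                   # affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_components + 1]   # skip the trivial eigenvector
    return vecs.real[:, order] * (vals.real[order] ** t) # diffusion coordinates

emb = diffusion_map(np.random.randn(200, 3))             # (200, 2)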

Finding Archetypal Spaces Using Neural Networks

1 code implementation · 25 Jan 2019 · David van Dijk, Daniel Burkhardt, Matthew Amodio, Alex Tong, Guy Wolf, Smita Krishnaswamy

Here, we propose a reformulation of the problem such that the goal is to learn a non-linear transformation of the data into a latent archetypal space.
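
A minimal sketch of one way to realize this (assuming PyTorch; illustrative, not the paper's code): an autoencoder whose latent codes are pushed onto a simplex via softmax, so each point becomes a convex mixture of k archetypes, and decoding the simplex vertices recovers the archetypes in data space.

import torch
import torch.nn as nn

k, d = 4, 10                                     # number of archetypes, data dim
encoder = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
decoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, d))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

X = torch.randn(512, d)
mix = torch.softmax(encoder(X), dim=-1)          # convex weights on the simplex
loss = nn.functional.mse_loss(decoder(mix), X)   # reconstruction
loss.backward(); opt.step()
archetypes = decoder(torch.eye(k))               # decode the simplex vertices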

Interpretable Neuron Structuring with Graph Spectral Regularization

1 code implementation · ICLR 2019 · Alexander Tong, David van Dijk, Jay S. Stanley III, Matthew Amodio, Kristina Yim, Rebecca Muhle, James Noonan, Guy Wolf, Smita Krishnaswamy

Taking inspiration from spatial organization and localization of neuron activations in biological networks, we use a graph Laplacian penalty to structure the activations within a layer.
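
A minimal sketch of the penalty (assuming PyTorch; the ring graph is an illustrative choice): add tr(H L H^T) on a hidden layer's activations H, which encourages neurons adjacent in the chosen graph to co-activate.

import torch
import torch.nn as nn

def ring_laplacian(n):
    A = torch.zeros(n, n)
    idx = torch.arange(n)
    A[idx, (idx + 1) % n] = 1.0
    A = A + A.t()                      # ring adjacency
    return torch.diag(A.sum(1)) - A    # L = D - A

n_hidden = 16
L = ring_laplacian(n_hidden)
net = nn.Sequential(nn.Linear(8, n_hidden), nn.ReLU())
H = net(torch.randn(64, 8))            # (batch, n_hidden) activations
penalty = torch.einsum("bi,ij,bj->b", H, L, H).mean()
# total loss would be: task loss + lam * penalty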

Modeling Dynamics of Biological Systems with Deep Generative Neural Networks

no code implementations · 27 Sep 2018 · Scott Gigante, David van Dijk, Kevin R. Moon, Alexander Strzalkowski, Katie Ferguson, Guy Wolf, Smita Krishnaswamy

DyMoN is well-suited to the idiosyncrasies of biological data, including noise, sparsity, and the lack of longitudinal measurements in many types of systems.

Dimensionality Reduction

Modeling Global Dynamics from Local Snapshots with Deep Generative Neural Networks

no code implementations · 10 Feb 2018 · Scott Gigante, David van Dijk, Kevin Moon, Alexander Strzalkowski, Guy Wolf, Smita Krishnaswamy

To model the dynamics of such systems given snapshot data, or local transitions, we present a deep neural network framework that we call the Dynamics Modeling Network (DyMoN).

Dimensionality Reduction
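
A minimal sketch of the DyMoN framing above (assuming PyTorch; illustrative, not the released code): a network maps a current state x plus a noise draw z to a plausible next state, trained on observed local transitions; chaining its own outputs then simulates long-range dynamics from snapshot data.

import torch
import torch.nn as nn

d, z_dim = 5, 8
step = nn.Sequential(nn.Linear(d + z_dim, 64), nn.ReLU(), nn.Linear(64, d))
opt = torch.optim.Adam(step.parameters(), lr=1e-3)

x_t = torch.randn(256, d)                  # snapshot of current states
x_next = x_t + 0.1 * torch.randn(256, d)   # toy observed local transitions
z = torch.randn(256, z_dim)                # noise makes the transition stochastic
loss = nn.functional.mse_loss(step(torch.cat([x_t, z], -1)), x_next)
loss.backward(); opt.step()                # in practice a distribution-matching
                                           # loss (e.g. MMD) would replace MSE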
