Search Results for author: Joshua Hanson

Found 6 papers, 0 papers with code

Rademacher Complexity of Neural ODEs via Chen-Fliess Series

no code implementations • 30 Jan 2024 • Joshua Hanson, Maxim Raginsky

In this network, the output "weights" are taken from the signature of the control input -- a tool for representing infinite-dimensional paths as a sequence of tensors -- which comprises the iterated integrals of the control input over a simplex.
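As an illustration of the signature mentioned above, here is a minimal numeric sketch (the function name and the left-point discretization are assumptions for illustration, not taken from the paper) computing the first two signature levels -- the iterated integrals -- of a sampled path:

```python
import numpy as np

def signature_levels_1_2(path):
    """Approximate the first two signature levels of a d-dimensional path.

    `path` is an (n, d) array of samples.  Level 1 is the total increment
    S^i = int dX^i; level 2 collects the iterated integrals
    S^{ij} = int (X^i(s) - X^i(0)) dX^j(s) over the simplex 0 <= s <= t <= T,
    approximated here with a left-point Riemann sum over the increments.
    """
    dX = np.diff(path, axis=0)                      # increments, shape (n-1, d)
    level1 = dX.sum(axis=0)                         # total increment
    # Path increment accumulated *before* each step (left endpoint):
    X_running = np.vstack([np.zeros(path.shape[1]),
                           np.cumsum(dX, axis=0)[:-1]])
    level2 = X_running.T @ dX                       # (d, d) matrix of S^{ij}
    return level1, level2
```

For a straight-line path X(t) = t*v the second level converges to 0.5 * v v^T, which gives a quick sanity check on the discretization.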

Fitting an immersed submanifold to data via Sussmann's orbit theorem

no code implementations • 3 Apr 2022 • Joshua Hanson, Maxim Raginsky

This paper describes an approach for fitting an immersed submanifold of a finite-dimensional Euclidean space to random samples.

Learning Recurrent Neural Net Models of Nonlinear Systems

no code implementations • 18 Nov 2020 • Joshua Hanson, Maxim Raginsky, Eduardo Sontag

We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence.
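A minimal sketch of simulating such a continuous-time recurrent net, assuming a standard parametrization x' = tanh(W x + B u(t) + b) with a linear readout y = C x (an illustrative assumption -- the paper's exact model class may differ), integrated with forward Euler:

```python
import numpy as np

def simulate_ct_rnn(u, W, B, b, C, dt=0.01, x0=None):
    """Forward-Euler simulation of a continuous-time recurrent net
    x' = tanh(W @ x + B @ u(t) + b),  y = C @ x.

    u  : (T, m) array of input-signal samples at spacing dt
    W  : (n, n) recurrent weights;  B : (n, m) input weights
    b  : (n,) bias;  C : (p, n) linear readout
    Returns the (T, p) array of output samples.
    """
    n = W.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    ys = []
    for u_t in u:
        x = x + dt * np.tanh(W @ x + B @ u_t + b)   # Euler step of the ODE
        ys.append(C @ x)
    return np.array(ys)
```

The tanh nonlinearity keeps the state derivative bounded, so the Euler iteration stays finite for any bounded input signal.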

Learning Compact Physics-Aware Delayed Photocurrent Models Using Dynamic Mode Decomposition

no code implementations • 27 Aug 2020 • Joshua Hanson, Pavel Bochev, Biliana Paskaleva

Our results show that the significantly reduced order delayed photocurrent models obtained via this method accurately approximate the dynamics of the internal excess carrier density -- which can be used to calculate the induced current at the device boundaries -- while remaining compact enough to incorporate into larger circuit simulations.
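Dynamic mode decomposition itself can be sketched as follows. This is the generic exact-DMD algorithm (fit a low-rank linear operator to snapshot pairs), not the authors' specific photocurrent pipeline:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot matrices X = [x_0 .. x_{m-1}] and
    Y = [x_1 .. x_m] (states as columns), fit a rank-r operator A with
    Y ~= A X and return its eigenvalues and the corresponding DMD modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]              # truncate to rank r
    # Project A onto the leading POD subspace: A_tilde = U* Y V S^{-1}
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    return eigvals, modes
```

On data generated by a known linear system, the returned eigenvalues recover the system's spectrum, which is the property the reduced-order models above rely on.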

Tasks: Time Series · Time Series Analysis

Universal Simulation of Stable Dynamical Systems by Recurrent Neural Nets

no code implementations • L4DC 2020 • Joshua Hanson, Maxim Raginsky

It is well-known that continuous-time recurrent neural nets are universal approximators for continuous-time dynamical systems.

Universal Approximation of Input-Output Maps by Temporal Convolutional Nets

no code implementations • NeurIPS 2019 • Joshua Hanson, Maxim Raginsky

There has been a recent shift in sequence-to-sequence modeling from recurrent to convolutional network architectures, driven by computational advantages in training and operation while still achieving competitive performance.
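The building block of the temporal convolutional nets studied here is a causal convolution, in which the output at time t depends only on present and past inputs. A minimal sketch (a generic dilated causal convolution; the function name and padding convention are illustrative assumptions):

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Dilated causal 1-D convolution: output[t] depends only on
    x[t], x[t - d], x[t - 2d], ... for dilation d.  Left-pads with
    zeros so the output has the same length as the input.
    """
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])          # pad the past
    return np.array([
        sum(kernel[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

Causality can be checked directly: changing a future input sample leaves all earlier outputs unchanged, which is exactly the property that lets these nets model causal input-output maps.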
