no code implementations • 30 Jan 2024 • Joshua Hanson, Maxim Raginsky
In this net, the output "weights" are taken from the signature of the control input (a tool for representing infinite-dimensional paths as a sequence of tensors), which comprises the iterated integrals of the control input over a simplex.
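For reference, the depth-$k$ term of the signature of a path $u : [0,T] \to \mathbb{R}^d$ is the iterated integral over the simplex $0 \le t_1 \le \cdots \le t_k \le T$; below is the standard definition (the notation is generic, not quoted from the paper):

```latex
% Depth-k signature term of a path u : [0,T] -> R^d (standard definition;
% notation is mine, not the paper's).
\[
  S(u)^{(i_1,\dots,i_k)}
    = \int_{0 \le t_1 \le \cdots \le t_k \le T}
      \mathrm{d}u^{i_1}(t_1)\,\cdots\,\mathrm{d}u^{i_k}(t_k),
  \qquad i_1,\dots,i_k \in \{1,\dots,d\}.
\]
% The full signature is the sequence of tensors (S(u)^{(i_1,...,i_k)}), k >= 0.
```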
no code implementations • 3 Apr 2022 • Joshua Hanson, Maxim Raginsky
This paper describes an approach for fitting an immersed submanifold of a finite-dimensional Euclidean space to random samples.
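One generic way to realize this kind of fitting problem (a minimal sketch under my own assumptions, not the paper's algorithm) is an autoencoder whose decoder plays the role of the immersion: a low-dimensional latent space is mapped into the ambient Euclidean space, and the image is pulled toward the samples.

```python
# Hypothetical sketch, not the paper's method: fit a 1-D submanifold of R^2
# with an autoencoder; the decoder acts as the candidate immersion.
import torch
import torch.nn as nn

# Noisy samples from the curve (t, sin t), a simple 1-D submanifold of R^2.
t = torch.linspace(-3.0, 3.0, 512)
X = torch.stack([t, torch.sin(t)], dim=1) + 0.02 * torch.randn(512, 2)

latent_dim, ambient_dim = 1, 2
encoder = nn.Sequential(nn.Linear(ambient_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, ambient_dim))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    # Reconstruction loss pulls the decoder's image toward the random samples.
    loss = ((decoder(encoder(X)) - X) ** 2).mean()
    loss.backward()
    opt.step()
```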
no code implementations • 18 Nov 2020 • Joshua Hanson, Maxim Raginsky, Eduardo Sontag
We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with a hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence.
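For concreteness, a common form of this model class is the continuous-time RNN $\dot{x} = -x + W\tanh(x) + Bu(t)$, $y = Cx$; the sketch below simulates it with forward-Euler steps (the parameter names and the discretization are my choices, not taken from the paper):

```python
# Minimal simulation of a continuous-time RNN with tanh activation:
#   x'(t) = -x(t) + W tanh(x(t)) + B u(t),   y(t) = C x(t).
import numpy as np

def ctrnn_response(u, W, B, C, dt=0.01):
    """Forward-Euler simulation on an input signal u of shape (T, m)."""
    x = np.zeros(W.shape[0])
    ys = []
    for u_t in u:
        x = x + dt * (-x + W @ np.tanh(x) + B @ u_t)
        ys.append(C @ x)
    return np.array(ys)

rng = np.random.default_rng(0)
n, m, p = 8, 1, 1                              # state, input, output dims
W = rng.normal(size=(n, n)) / n
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))
time = np.arange(0.0, 10.0, 0.01)
u = np.sin(time)[:, None]                      # an example input signal
y = ctrnn_response(u, W, B, C)                 # compare against sampled i/o pairs
```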
no code implementations • 27 Aug 2020 • Joshua Hanson, Pavel Bochev, Biliana Paskaleva
Our results show that the significantly reduced-order delayed photocurrent models obtained via this method accurately approximate the dynamics of the internal excess carrier density (which can be used to calculate the induced current at the device boundaries) while remaining compact enough to incorporate into larger circuit simulations.
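As a purely illustrative sketch (this is not the authors' model-reduction procedure; the functional form and every parameter value below are assumptions of mine), a compact delayed-photocurrent surrogate might take the form of a sum of delayed, exponentially decaying modes whose coefficients are fit to device-level data:

```python
# Illustrative surrogate only, not the paper's method:
#   i(t) = sum_k a_k * exp(-(t - tau_k) / T_k) * 1[t >= tau_k]
import numpy as np

def delayed_photocurrent(t, amps, delays, decay_times):
    """Evaluate a sum-of-delayed-exponentials current at times t."""
    i = np.zeros_like(t)
    for a, d, T in zip(amps, delays, decay_times):
        active = t >= d
        i[active] += a * np.exp(-(t[active] - d) / T)
    return i

t = np.linspace(0.0, 5e-6, 1000)  # seconds (values chosen for illustration)
i = delayed_photocurrent(t, amps=[1e-3, 5e-4], delays=[0.0, 1e-6],
                         decay_times=[5e-7, 2e-6])
```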
no code implementations • L4DC 2020 • Joshua Hanson, Maxim Raginsky
It is well-known that continuous-time recurrent neural nets are universal approximators for continuous-time dynamical systems.
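Schematically, the claim can be written as follows (the CT-RNN form and notation here are standard conventions of mine, not quoted from the paper): parameters can be chosen so that the net's output tracks a target trajectory to any accuracy on a compact time horizon.

```latex
% Schematic universal-approximation statement for CT-RNNs (my notation):
\[
  \dot{x}(t) = -x(t) + W\,\sigma\bigl(x(t)\bigr) + B\,u(t), \qquad
  \hat{y}(t) = C\,x(t),
\]
\[
  \forall\, \varepsilon > 0 \;\; \exists\, (W, B, C):\quad
  \sup_{t \in [0,T]} \bigl\| y(t) - \hat{y}(t) \bigr\| < \varepsilon .
\]
```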
no code implementations • NeurIPS 2019 • Joshua Hanson, Maxim Raginsky
There has been a recent shift in sequence-to-sequence modeling from recurrent network architectures to convolutional network architectures, owing to computational advantages in training and operation, while still achieving competitive performance.
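As a generic illustration of the convolutional alternative (not an architecture from the paper), a causal 1-D convolution makes each output depend only on current and past inputs, and, unlike a recurrent step, every time step can be computed in parallel:

```python
# Generic causal 1-D convolution, the basic block of convolutional
# sequence models:  y[t] = sum_{i=0}^{k-1} w[i] * x[t - i].
import numpy as np

def causal_conv1d(x, w):
    """x: (T,) input sequence; w: (k,) kernel; returns (T,) output."""
    k = len(w)
    x_padded = np.concatenate([np.zeros(k - 1), x])  # left-pad for causality
    return np.array([x_padded[t:t + k] @ w[::-1] for t in range(len(x))])

x = np.sin(np.linspace(0.0, 6.0, 50))
y = causal_conv1d(x, w=np.array([0.5, 0.3, 0.2]))
```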