1 code implementation • 19 Nov 2019 • Jason A. Platt, Anna Miller, Lawson Fuller, Henry D. I. Abarbanel
We present a novel machine learning architecture for classification, suggested by experiments on olfactory systems.
no code implementations • 4 Feb 2021 • Jason A. Platt, Adrian Wong, Randall Clark, Stephen G. Penny, Henry D. I. Abarbanel
Reservoir computers (RC) are a form of recurrent neural network (RNN) used for forecasting time series data.
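As a minimal sketch of the idea in the excerpt above (not the paper's implementation), a standard echo state network pairs a fixed random reservoir with a trained linear readout; the reservoir size, spectral radius, input scaling, and ridge parameter below are illustrative assumptions:

```python
import numpy as np

# Minimal echo state network, one common form of reservoir computer.
# All sizes and scalings here are illustrative, not values from the paper.
rng = np.random.default_rng(0)
N, D = 500, 3                                    # reservoir size, input dimension
A = rng.uniform(-1.0, 1.0, (N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # set spectral radius to 0.9
W_in = rng.uniform(-0.1, 0.1, (N, D))

def drive(u_seq):
    """Run the fixed reservoir over an input sequence; return all states."""
    r, states = np.zeros(N), []
    for u in u_seq:
        r = np.tanh(A @ r + W_in @ u)
        states.append(r)
    return np.array(states)

def train_readout(u_seq, beta=1e-6):
    """Ridge regression from reservoir states to next-step targets."""
    R, Y = drive(u_seq[:-1]), u_seq[1:]
    return np.linalg.solve(R.T @ R + beta * np.eye(N), R.T @ Y).T

def forecast(W_out, r, steps):
    """Autonomous forecast: feed each prediction back in as the next input."""
    preds = []
    for _ in range(steps):
        y = W_out @ r
        preds.append(y)
        r = np.tanh(A @ r + W_in @ y)
    return np.array(preds)
```

In practice one would discard an initial transient and warm the state `r` up on observed data before switching to the autonomous forecast loop.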
no code implementations • 25 Sep 2021 • Stephen G. Penny, Timothy A. Smith, Tse-Chun Chen, Jason A. Platt, Hsin-Yi Lin, Michael Goodliff, Henry D. I. Abarbanel
The results indicate that these techniques can be applied to estimate the state of a system for the repeated initialization of short-term forecasts, even in the absence of a traditional numerical forecast model.
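The excerpt states the conclusion only; as loose context rather than the paper's algorithm, the simplest such estimator nudges a trained surrogate model toward incoming observations so that the surrogate state tracks the true system and can initialize forecasts. All names and the gain value below are hypothetical:

```python
import numpy as np

# Hypothetical sketch: nudging a data-driven surrogate toward observations.
# `surrogate_step(x)` stands in for any learned one-step forecast model
# (e.g., the echo state network above); H selects the observed components.
def nudge_estimate(surrogate_step, x0, obs_seq, H, gain=0.3):
    """Relax the surrogate state toward each observation as it arrives."""
    x = x0.copy()
    for y in obs_seq:
        x = surrogate_step(x)              # free-run forecast step
        x = x + gain * H.T @ (y - H @ x)   # pull observed components toward y
    return x   # estimated state, ready to initialize a short-term forecast
```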
no code implementations • 13 Jan 2022 • Tse-Chun Chen, Stephen G. Penny, Timothy A. Smith, Jason A. Platt
Next-generation reservoir computing based on nonlinear vector autoregression (NVAR) is applied to emulate simple dynamical system models and compared to numerical integration schemes such as the Euler and $2^\text{nd}$-order Runge-Kutta methods.
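For reference, the two integration baselines named above are one-liners, and the feature map below is one common NVAR construction (constant, stacked delayed states, and their quadratic products), given as an illustrative assumption rather than the paper's exact setup:

```python
import numpy as np

def euler_step(f, x, h):
    """Forward Euler: first-order accurate in the step size h."""
    return x + h * f(x)

def rk2_step(f, x, h):
    """Midpoint rule, one common 2nd-order Runge-Kutta scheme."""
    return x + h * f(x + 0.5 * h * f(x))

def nvar_features(x_delays):
    """One common NVAR feature map: constant term, stacked delayed states,
    and their upper-triangular quadratic products."""
    lin = np.concatenate(x_delays)
    quad = np.outer(lin, lin)[np.triu_indices(lin.size)]
    return np.concatenate(([1.0], lin, quad))
```

In the next-generation RC literature this deterministic feature map replaces the random recurrent reservoir, and a linear readout on the features is trained by ridge regression.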
1 code implementation • 21 Jan 2022 • Jason A. Platt, Stephen G. Penny, Timothy A. Smith, Tse-Chun Chen, Henry D. I. Abarbanel
While we are not aware of a generally accepted best reported mean forecast time for different models in the literature, we report more than a factor-of-2 increase in mean forecast time over the best-performing RC model of Vlachas et al. (2020) for the 40-dimensional spatiotemporally chaotic Lorenz 1996 dynamics, and we accomplish this with a smaller reservoir.
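For concreteness, the Lorenz 1996 vector field and one common definition of forecast time (the time until normalized error first crosses a threshold, averaged over many initial conditions) can be sketched as follows; the threshold value is a free choice here, not the paper's:

```python
import numpy as np

def lorenz96(x, F=8.0):
    """Lorenz 1996 tendencies dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F;
    the 40-dimensional case above uses x of length 40."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def valid_forecast_time(pred, truth, dt, eps=0.5):
    """Time until the RMS-normalized forecast error first exceeds eps.
    Mean forecast time averages this over many initial conditions."""
    err = np.linalg.norm(pred - truth, axis=1) / np.sqrt(np.mean(truth ** 2))
    crossings = np.nonzero(err > eps)[0]
    return (crossings[0] if crossings.size else len(err)) * dt
```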
1 code implementation • 24 Apr 2023 • Jason A. Platt, Stephen G. Penny, Timothy A. Smith, Tse-Chun Chen, Henry D. I. Abarbanel
Drawing on ergodic theory, we introduce a novel training method for machine-learning-based forecasting of chaotic dynamical systems.
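The excerpt names ergodic theory but not the mechanism. As background only, one standard ergodic invariant such a training scheme could target is the largest Lyapunov exponent, which can be estimated for any one-step model by tracking a perturbed trajectory and renormalizing it (a Benettin-style method); this is not the paper's procedure:

```python
import numpy as np

# Background sketch (Benettin-style): estimate the largest Lyapunov exponent
# of a one-step map `step` by repeatedly renormalizing a small separation.
def largest_lyapunov(step, x0, dt, n_steps=10000, d0=1e-8):
    rng = np.random.default_rng(1)
    x = x0.copy()
    v = rng.normal(size=x.size)
    y = x + d0 * v / np.linalg.norm(v)   # nearby perturbed trajectory
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + d0 * (y - x) / d         # renormalize the separation
    return log_sum / (n_steps * dt)      # average exponential growth rate
```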
no code implementations • 28 Apr 2023 • Timothy A. Smith, Stephen G. Penny, Jason A. Platt, Tse-Chun Chen
Future work is warranted to understand how the temporal resolution of training data affects other ML architectures.