no code implementations • ICML 2020 • Miles Lopes, N. Benjamin Erichson, Michael Mahoney
In order to compute fast approximations to the singular value decompositions (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.
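Since the entry only names randomized sketching for the SVD, here is a minimal numpy sketch of the generic sketch-and-solve randomized SVD (in the spirit of Halko-Martinsson-Tropp); it illustrates the idea, not this paper's specific contribution:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Approximate SVD of A via a Gaussian sketch of its range."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Draw a Gaussian test matrix and sketch the column space of A.
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sketch
    # Project A onto the smaller subspace and take its exact SVD.
    B = Q.T @ A                           # (rank+oversample) x n, much smaller than A
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :rank], s[:rank], Vt[:rank]

# On an exactly low-rank matrix the approximation is accurate to machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```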
no code implementations • 4 Oct 2023 • Ilan Naiman, N. Benjamin Erichson, Pu Ren, Michael W. Mahoney, Omri Azencot
In this work, we introduce Koopman VAE (KVAE), a new generative framework that is based on a novel design for the model prior, and that can be optimized for either regular or irregular training data.
no code implementations • 2 Oct 2023 • Annan Yu, Arnur Nigmetov, Dmitriy Morozov, Michael W. Mahoney, N. Benjamin Erichson
An example is the structured state-space sequence (S4) layer, which uses the diagonal-plus-low-rank structure of the HiPPO initialization framework.
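A toy illustration of why diagonal-plus-low-rank (DPLR) structure matters: a matrix A = diag(lam) + P Qᵀ of rank-r correction supports O(n·r) matrix-vector products without ever forming the dense n × n matrix. The names below are illustrative, not the S4 layer's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 512, 2
lam = rng.standard_normal(n)        # diagonal part
P = rng.standard_normal((n, r))     # low-rank correction factors
Q = rng.standard_normal((n, r))
x = rng.standard_normal(n)

# Structured matvec: O(n*r) work, no dense matrix materialized.
y_structured = lam * x + P @ (Q.T @ x)
# Dense reference: O(n^2) work, only for checking correctness.
y_dense = (np.diag(lam) + P @ Q.T) @ x
```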
1 code implementation • 24 Jun 2023 • Pu Ren, N. Benjamin Erichson, Shashank Subramanian, Omer San, Zarija Lukic, Michael W. Mahoney
Super-Resolution (SR) techniques aim to enhance data resolution, enabling the retrieval of finer details and improving the overall quality and fidelity of the data representation.
1 code implementation • 22 Feb 2023 • Junwen Yao, N. Benjamin Erichson, Miles E. Lopes
Three key advantages of this approach are: (1) The error estimates are specific to the problem at hand, avoiding the pessimism of worst-case bounds.
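To make "error estimates specific to the problem at hand" concrete, here is a hypothetical sketch of the general bootstrap idea behind such data-driven estimates: resample the rows of the sketch and use the spread of the recomputed singular values as an error proxy. This is a simplified illustration, not the paper's exact algorithm:

```python
import numpy as np

def bootstrap_sketch_error(A, t, n_boot=30, q=0.9, seed=0):
    """Row-sampling sketch of A, plus a bootstrap estimate of the
    fluctuation of its singular values (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S_idx = rng.integers(0, n, t)
    SA = A[S_idx] * np.sqrt(n / t)          # rescaled row-sampling sketch
    sv = np.linalg.svd(SA, compute_uv=False)
    devs = []
    for _ in range(n_boot):
        b = rng.integers(0, t, t)           # resample sketch rows with replacement
        sv_b = np.linalg.svd(SA[b], compute_uv=False)
        devs.append(np.max(np.abs(sv_b - sv)))
    # A q-quantile of the bootstrap deviations serves as the error estimate.
    return sv, np.quantile(devs, q)

sv, err_est = bootstrap_sketch_error(
    np.random.default_rng(1).standard_normal((500, 20)), t=100)
```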
no code implementations • 1 Dec 2022 • N. Benjamin Erichson, Soon Hoe Lim, Michael W. Mahoney
We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies.
no code implementations • 17 Feb 2022 • Aditi S. Krishnapriyan, Alejandro F. Queiruga, N. Benjamin Erichson, Michael W. Mahoney
Dynamical systems that evolve continuously over time are ubiquitous throughout science and engineering.
no code implementations • 2 Feb 2022 • N. Benjamin Erichson, Soon Hoe Lim, Winnie Xu, Francisco Utrera, Ziang Cao, Michael W. Mahoney
For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, and thus robustness of neural networks is an increasingly important topic.
no code implementations • 26 Oct 2021 • Reese Pathak, Rajat Sen, Nikhil Rao, N. Benjamin Erichson, Michael I. Jordan, Inderjit S. Dhillon
Our framework -- which we refer to as "cluster-and-conquer" -- is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
1 code implementation • ICLR 2022 • T. Konstantin Rusch, Siddhartha Mishra, N. Benjamin Erichson, Michael W. Mahoney
We propose a novel method called Long Expressive Memory (LEM) for learning long-term sequential dependencies.
Ranked #1 on Time Series Classification on EigenWorms
2 code implementations • ICLR 2022 • Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data augmentation that combines the best of interpolation based training and noise injection schemes.
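The two ingredients named in the abstract, interpolation-based training (mixup) and noise injection, can be sketched on a single pair of examples as follows; hyperparameter names here are illustrative:

```python
import numpy as np

def noisy_feature_mixup(x1, x2, y1, y2, alpha=1.0,
                        sigma_add=0.1, sigma_mult=0.1, seed=0):
    """Minimal NFM-style augmentation: mixup interpolation followed by
    multiplicative and additive noise injection."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)                  # mixup coefficient in [0, 1]
    x_mix = lam * x1 + (1 - lam) * x2             # interpolation-based part
    y_mix = lam * y1 + (1 - lam) * y2             # labels are mixed the same way
    mult = 1 + sigma_mult * rng.standard_normal(x_mix.shape)
    add = sigma_add * rng.standard_normal(x_mix.shape)
    return mult * x_mix + add, y_mix              # noise-injection part

x_new, y_new = noisy_feature_mixup(np.zeros(8), np.ones(8), 0.0, 1.0)
```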
3 code implementations • NeurIPS 2021 • Alejandro Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
The recently-introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems.
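The connection the abstract refers to is that a residual update h ← h + f(h) is exactly a forward-Euler step of the ODE dh/dt = f(h); a toy example with a known solution makes this concrete:

```python
import numpy as np

def f(h):
    return -0.5 * h                      # toy vector field; exact solution is e^{-t/2}

def ode_net_forward(h0, depth, dt):
    """A stack of residual blocks read as forward-Euler integration."""
    h = h0
    for _ in range(depth):
        h = h + dt * f(h)                # residual block == one Euler step
    return h

h0 = np.array([1.0])
coarse = ode_net_forward(h0, depth=10, dt=0.1)   # integrates the ODE to t = 1
exact = h0 * np.exp(-0.5)
```

Shrinking dt (and increasing depth) drives the network output toward the exact flow, which is the sense in which such networks can be "meaningful dynamical integrators".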
no code implementations • 18 Feb 2021 • Omri Azencot, N. Benjamin Erichson, Mirela Ben-Chen, Michael W. Mahoney
In this work, we employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.
1 code implementation • NeurIPS 2021 • Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states.
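A minimal sketch of the setting described, an RNN whose hidden-state update is perturbed by injected noise; the update rule below is a generic tanh RNN, not the paper's specific parameterization:

```python
import numpy as np

def noisy_rnn(xs, Wh, Wx, sigma, seed=0):
    """Run a tanh RNN over inputs xs with Gaussian noise injected
    into the hidden state at every step:
        h_{t+1} = tanh(Wh h_t + Wx x_t) + sigma * eps_t."""
    rng = np.random.default_rng(seed)
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wh @ h + Wx @ x) + sigma * rng.standard_normal(h.shape)
    return h

h_final = noisy_rnn([np.ones(3)] * 5, np.eye(4) * 0.5, np.ones((4, 3)), sigma=0.1)
```

Setting sigma to zero recovers the ordinary deterministic RNN, which is the baseline such noise-injection schemes are compared against.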
4 code implementations • 5 Aug 2020 • Alejandro F. Queiruga, N. Benjamin Erichson, Dane Taylor, Michael W. Mahoney
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
no code implementations • 31 Jul 2020 • N. Benjamin Erichson, Dane Taylor, Qixuan Wu, Michael W. Mahoney
The ubiquity of deep neural networks (DNNs), cloud-based training, and transfer learning is giving rise to a new cybersecurity frontier in which insecure DNNs have 'structural malware' (i.e., compromised weights and activation pathways).
1 code implementation • ICLR 2021 • Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rajiv Khanna, Michael W. Mahoney
Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
1 code implementation • ICLR 2021 • N. Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, Michael W. Mahoney
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity.
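The two-part structure in the abstract, a linear component plus a Lipschitz nonlinearity, can be written as the continuous-time update dh/dt = A h + tanh(W h + U x + b); one forward-Euler step of it looks like this (a sketch of the structure, not the paper's trained parameterization):

```python
import numpy as np

def lipschitz_rnn_step(h, x, A, W, U, b, dt=0.1):
    """One Euler step of dh/dt = A h + tanh(W h + U x + b):
    a well-understood linear part (A h) plus a 1-Lipschitz
    nonlinearity (tanh)."""
    return h + dt * (A @ h + np.tanh(W @ h + U @ x + b))

h_next = lipschitz_rnn_step(np.zeros(4), np.ones(3),
                            -np.eye(4), np.eye(4),
                            np.ones((4, 3)), np.zeros(4))
```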
Ranked #10 on Sequential Image Classification on Sequential CIFAR-10
no code implementations • 10 Mar 2020 • Miles E. Lopes, N. Benjamin Erichson, Michael W. Mahoney
In order to compute fast approximations to the singular value decompositions (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.
1 code implementation • ICML 2020 • Omri Azencot, N. Benjamin Erichson, Vanessa Lin, Michael W. Mahoney
Recurrent neural networks are widely used on time series data, yet such models often ignore the underlying physical structure of the sequences.
no code implementations • 26 May 2019 • N. Benjamin Erichson, Michael Muehlebach, Michael W. Mahoney
In addition to providing high-profile successes in computer vision and natural language processing, neural networks also provide an emerging set of techniques for scientific problems.
1 code implementation • 7 Apr 2019 • N. Benjamin Erichson, Zhewei Yao, Michael W. Mahoney
To complement these approaches, we propose a very simple and inexpensive strategy which can be used to "retrofit" a previously-trained network to improve its resilience to adversarial attacks.
1 code implementation • 20 Feb 2019 • N. Benjamin Erichson, Lionel Mathelin, Zhewei Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz
In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and limited data.
no code implementations • 28 Nov 2018 • Chen Gong, N. Benjamin Erichson, John P. Kelly, Laura Trutoiu, Brian T. Schowengerdt, Steven L. Brunton, Eric J. Seibel
To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas.
no code implementations • 1 Apr 2018 • N. Benjamin Erichson, Peng Zheng, Krithika Manohar, Steven L. Brunton, J. Nathan Kutz, Aleksandr Y. Aravkin
Sparse principal component analysis (SPCA) has emerged as a powerful technique for modern data analysis, providing improved interpretation of low-rank structures by identifying localized spatial structures in the data and disambiguating between distinct time scales.
no code implementations • 23 Feb 2018 • N. Benjamin Erichson, Lionel Mathelin, Steven L. Brunton, J. Nathan Kutz
Diffusion maps are an emerging data-driven technique for non-linear dimensionality reduction, which is especially useful for the analysis of coherent structures and nonlinear embeddings of dynamical systems.
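A minimal numpy sketch of the standard diffusion-map construction (Gaussian affinity kernel, row-normalization to a Markov matrix, embedding with the leading non-trivial eigenvectors); this illustrates the baseline technique the paper builds on, not its randomized variant:

```python
import numpy as np

def diffusion_map(X, n_coords=2, eps=1.0):
    """Embed the rows of X with a basic diffusion map."""
    # Pairwise squared distances and Gaussian affinity kernel.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Row-normalize to a Markov transition matrix (row-stochastic).
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    idx = order[1:1 + n_coords]
    return vecs[:, idx].real * vals[idx].real

emb = diffusion_map(np.random.default_rng(0).standard_normal((20, 3)))
```

Because P is similar to a symmetric matrix, its eigenvalues are real, so taking the real parts above only discards numerical noise.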
2 code implementations • 6 Nov 2017 • N. Benjamin Erichson, Ariana Mendible, Sophie Wihlborn, J. Nathan Kutz
Nonnegative matrix factorization (NMF) is a powerful tool for data mining.
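For context, here is the classic Lee-Seung multiplicative-update algorithm for NMF, the deterministic baseline that randomized NMF methods accelerate; this is the textbook algorithm, not the paper's randomized variant:

```python
import numpy as np

def nmf(A, rank, iters=500, seed=0):
    """Factor a nonnegative matrix A ≈ W @ H with multiplicative updates,
    which keep W and H elementwise nonnegative throughout."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-12                                   # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-3 nonnegative data should be recovered accurately.
rng = np.random.default_rng(1)
A = rng.random((30, 3)) @ rng.random((3, 20))
W, H = nmf(A, rank=3)
rel_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```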
6 code implementations • 6 Aug 2016 • N. Benjamin Erichson, Sergey Voronin, Steven L. Brunton, J. Nathan Kutz
The essential idea of probabilistic algorithms is to employ some amount of randomness in order to derive a smaller matrix from a high-dimensional data matrix.
Computation, Mathematical Software, Methodology
no code implementations • 14 Dec 2015 • N. Benjamin Erichson, Steven L. Brunton, J. Nathan Kutz
We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling.
no code implementations • 11 Dec 2015 • N. Benjamin Erichson, Carl Donovan
This paper introduces a fast algorithm for randomized computation of a low-rank Dynamic Mode Decomposition (DMD) of a matrix.
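To illustrate the idea of randomized DMD, here is a simplified sketch: compress the snapshot matrices into a small random subspace, run exact DMD there, and lift the modes back to the full space. This is an illustrative variant under those assumptions, not the paper's exact algorithm:

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, seed=0):
    """DMD on snapshot pairs (X, Y ≈ A X), computed in a sketched subspace."""
    rng = np.random.default_rng(seed)
    # Random sketch of the snapshot range, then project the data down.
    Omega = rng.standard_normal((X.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(X @ Omega)
    Xs, Ys = Q.T @ X, Q.T @ Y
    # Exact DMD on the small compressed snapshots.
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ Ys @ Vt.T / s          # small projected linear operator
    evals, evecs = np.linalg.eig(Atilde)
    modes = Q @ (Ys @ Vt.T / s @ evecs)   # lift DMD modes to full space
    return evals, modes

# Rank-2 linear dynamics with known eigenvalues 0.9 and 0.5.
rng = np.random.default_rng(2)
V = rng.standard_normal((50, 2))
coeffs = np.array([[0.9 ** k, 0.5 ** k] for k in range(11)]).T
Z = V @ coeffs                            # snapshots x_0, ..., x_10
X, Y = Z[:, :-1], Z[:, 1:]
evals, modes = randomized_dmd(X, Y, rank=2)
```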