no code implementations • 13 May 2025 • Annan Yu, N. Benjamin Erichson
We prove that these changes equip the model with a better-suited inductive bias and improve its expressiveness and stability.
1 code implementation • 13 May 2025 • Krti Tallam, John Kevin Cava, Caleb Geniesse, N. Benjamin Erichson, Michael W. Mahoney
As AI-generated imagery becomes ubiquitous, invisible watermarks have emerged as a primary line of defense for copyright and provenance.
1 code implementation • 10 Feb 2025 • Kareem Hegazy, Michael W. Mahoney, N. Benjamin Erichson
Transformers have recently shown strong performance in time-series forecasting, but their all-to-all attention mechanism overlooks the temporally causal and often temporally local nature of the data.
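A hedged sketch of one way to encode that causality and locality as an attention mask for time series: each position attends only to a short window of the recent past. The hard window cutoff and its size are illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def causal_local_mask(seq_len, window):
    """Boolean mask: position i may attend to position j only if j <= i and i - j < window."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

# Example: an 8-step sequence where each step sees at most the last 3 steps.
mask = causal_local_mask(seq_len=8, window=3)
scores = np.random.standard_normal((8, 8))
scores = np.where(mask, scores, -np.inf)   # masked positions get zero attention weight
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
```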
1 code implementation • 24 Jan 2025 • Yihan Wang, Lujun Zhang, Annan Yu, N. Benjamin Erichson, Tiantian Yang
The classical way of studying rainfall-runoff processes in the water cycle relies on conceptual or physically based hydrologic models.
1 code implementation • 1 Nov 2024 • Zhipeng Wei, Yuqi Liu, N. Benjamin Erichson
To exploit this bias in Judge LLMs, we introduce the Emoji Attack -- a method that places emojis within tokens to increase the embedding differences between sub-tokens and their originals.
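As a toy illustration of the token-splitting idea, the snippet below inserts an emoji in the middle of each word so that a subword tokenizer segments it differently from the original; this is a hypothetical sketch, not the paper's attack code.

```python
def insert_emoji(text, emoji="\U0001F600"):
    """Place an emoji inside every word, splitting each token into unfamiliar sub-tokens."""
    out = []
    for word in text.split():
        mid = len(word) // 2
        out.append(word[:mid] + emoji + word[mid:])
    return " ".join(out)

print(insert_emoji("this response contains harmful content"))
```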
no code implementations • 4 Oct 2024 • Soon Hoe Lim, Yijin Wang, Annan Yu, Emma Hart, Michael W. Mahoney, Xiaoye S. Li, N. Benjamin Erichson
Flow matching has recently emerged as a powerful paradigm for generative modeling and has been extended to probabilistic time series forecasting in latent spaces.
no code implementations • 2 Oct 2024 • Annan Yu, Dongwei Lyu, Soon Hoe Lim, Michael W. Mahoney, N. Benjamin Erichson
State space models (SSMs) leverage linear, time-invariant (LTI) systems to effectively learn sequences with long-range dependencies.
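For reference, a minimal NumPy sketch of the discrete LTI recurrence that SSM layers build on, x_{k+1} = A x_k + B u_k with read-out y_k = C x_k; the state dimension and the crude stable-ish initialization below are illustrative assumptions.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a linear, time-invariant state-space recurrence over an input sequence u."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B @ u_k      # time-invariant state update
        ys.append(C @ x)         # linear read-out
    return np.stack(ys)

n, d = 16, 1
rng = np.random.default_rng(0)
A = 0.95 * np.eye(n) + 0.01 * rng.normal(size=(n, n))   # crude stable-ish state matrix
B, C = rng.normal(size=(n, d)), rng.normal(size=(d, n))
y = ssm_scan(A, B, C, rng.normal(size=(200, d)))
```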
1 code implementation • 21 Jul 2024 • Pu Ren, Rie Nakata, Maxime Lacour, Ilan Naiman, Nori Nakata, Jialin Song, Zhengfa Bi, Osman Asif Malik, Dmitriy Morozov, Omri Azencot, N. Benjamin Erichson, Michael W. Mahoney
Predicting high-fidelity ground motions for future earthquakes is crucial for seismic hazard assessment and infrastructure resilience.
no code implementations • 30 May 2024 • Dongwei Lyu, Rie Nakata, Pu Ren, Michael W. Mahoney, Arben Pitarka, Nori Nakata, N. Benjamin Erichson
To improve early warning, we propose a novel AI-enabled framework, WaveCastNet, for forecasting ground motions from large earthquakes.
1 code implementation • 22 May 2024 • Annan Yu, Michael W. Mahoney, N. Benjamin Erichson
To achieve state-of-the-art performance, an SSM often needs a specifically designed initialization, and its state matrices have to be trained on a logarithmic scale with a very small learning rate.
1 code implementation • 4 Oct 2023 • Ilan Naiman, N. Benjamin Erichson, Pu Ren, Michael W. Mahoney, Omri Azencot
In this work, we introduce Koopman VAE (KoVAE), a new generative framework that is based on a novel design for the model prior, and that can be optimized for either regular or irregular training data.
no code implementations • 2 Oct 2023 • Annan Yu, Arnur Nigmetov, Dmitriy Morozov, Michael W. Mahoney, N. Benjamin Erichson
An example is the structured state-space sequence (S4) layer, which uses the diagonal-plus-low-rank structure of the HiPPO initialization framework.
1 code implementation • 24 Jun 2023 • Pu Ren, N. Benjamin Erichson, Junyi Guo, Shashank Subramanian, Omer San, Zarija Lukic, Michael W. Mahoney
Super-resolution (SR) techniques aim to enhance data resolution, enabling the retrieval of finer details and improving the overall quality and fidelity of the data representation.
1 code implementation • 22 Feb 2023 • Junwen Yao, N. Benjamin Erichson, Miles E. Lopes
Three key advantages of this approach are: (1) The error estimates are specific to the problem at hand, avoiding the pessimism of worst-case bounds.
no code implementations • 1 Dec 2022 • N. Benjamin Erichson, Soon Hoe Lim, Michael W. Mahoney
In this paper, we present a novel approach to modeling long-term dependencies in sequential data by introducing a gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
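As a loose illustration of the mechanism named above, here is a minimal NumPy sketch of a GRU-style update augmented with a weighted feedback term from the hidden state tau steps in the past; the placement and weighting of the delay term, and all shapes, are assumptions for illustration rather than the paper's actual cell.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def delayed_gru_step(x, h_hist, params, tau=4, alpha=0.5):
    """One GRU-style update whose candidate state also sees the hidden state tau steps back."""
    Wz, Wr, Wh, Wd = params
    h_prev = h_hist[-1]
    h_delay = h_hist[-tau] if len(h_hist) >= tau else h_hist[0]
    z = sigmoid(Wz @ np.concatenate([x, h_prev]))            # update gate
    r = sigmoid(Wr @ np.concatenate([x, h_prev]))            # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_prev]) + alpha * (Wd @ h_delay))
    return (1 - z) * h_prev + z * h_tilde

d_x, d_h = 4, 8
rng = np.random.default_rng(0)
params = [0.1 * rng.normal(size=(d_h, d_x + d_h)) for _ in range(3)] + [0.1 * rng.normal(size=(d_h, d_h))]
h_hist = [np.zeros(d_h)]
for x in rng.normal(size=(50, d_x)):
    h_hist.append(delayed_gru_step(x, h_hist, params))
```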
no code implementations • 17 Feb 2022 • Aditi S. Krishnapriyan, Alejandro F. Queiruga, N. Benjamin Erichson, Michael W. Mahoney
Dynamical systems that evolve continuously over time are ubiquitous throughout science and engineering.
no code implementations • 2 Feb 2022 • N. Benjamin Erichson, Soon Hoe Lim, Winnie Xu, Francisco Utrera, Ziang Cao, Michael W. Mahoney
For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy; the robustness of neural networks is therefore an increasingly important topic.
no code implementations • 26 Oct 2021 • Reese Pathak, Rajat Sen, Nikhil Rao, N. Benjamin Erichson, Michael I. Jordan, Inderjit S. Dhillon
Our framework -- which we refer to as "cluster-and-conquer" -- is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
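A heavily hedged sketch of the two-step recipe the name suggests: cluster the series, then forecast within each cluster. The concrete choices below (k-means on raw values, a naive per-cluster level forecaster) are placeholders meant only to show how any clustering and forecasting method can be slotted into the two steps; they are not the paper's components.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_conquer(series, n_clusters=3, horizon=5):
    """Cluster the series, then produce a forecast for every series from its cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(series)
    forecasts = np.zeros((len(series), horizon))
    for c in range(n_clusters):
        members = series[labels == c]
        level = members[:, -3:].mean()         # placeholder per-cluster forecaster
        forecasts[labels == c] = level
    return forecasts

series = np.cumsum(np.random.standard_normal((50, 100)), axis=1)   # 50 toy random walks
preds = cluster_and_conquer(series)
```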
1 code implementation • ICLR 2022 • T. Konstantin Rusch, Siddhartha Mishra, N. Benjamin Erichson, Michael W. Mahoney
We propose a novel method called Long Expressive Memory (LEM) for learning long-term sequential dependencies.
Ranked #1 on Time Series Classification on EigenWorms
2 code implementations • ICLR 2022 • Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective data augmentation method that combines the best of interpolation-based training and noise injection schemes.
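A rough sketch of that combination: mix pairs of examples and labels as in mixup, then inject additive and multiplicative noise into the mixed features. The noise scales and the Beta parameter below are illustrative placeholders, not the paper's tuned values.

```python
import numpy as np

def noisy_feature_mixup(x, y, alpha=1.0, add_std=0.1, mult_std=0.1, rng=None):
    """Mixup a random pairing of a batch, then perturb the mixed features with noise."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]                 # interpolation-based augmentation
    y_mix = lam * y + (1 - lam) * y[perm]
    add = add_std * rng.standard_normal(x.shape)          # additive noise
    mult = 1.0 + mult_std * rng.standard_normal(x.shape)  # multiplicative noise
    return mult * x_mix + add, y_mix

x = np.random.standard_normal((32, 10))                   # toy feature batch
y = np.eye(3)[np.random.randint(0, 3, size=32)]           # one-hot labels
x_aug, y_aug = noisy_feature_mixup(x, y)
```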
3 code implementations • NeurIPS 2021 • Alejandro Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
The recently-introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems.
no code implementations • 18 Feb 2021 • Omri Azencot, N. Benjamin Erichson, Mirela Ben-Chen, Michael W. Mahoney
In this work, we employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.
1 code implementation • NeurIPS 2021 • Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney
We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states.
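As a small illustration of the setting studied here, the snippet below adds Gaussian noise to the hidden state at every step of a vanilla tanh RNN during the forward pass; the cell type and noise scale are illustrative assumptions.

```python
import numpy as np

def noisy_rnn_forward(x_seq, W_h, W_x, b, noise_std=0.1, rng=None):
    """Run a tanh RNN while injecting Gaussian noise into the hidden state at each step."""
    rng = rng or np.random.default_rng()
    h = np.zeros(W_h.shape[0])
    hs = []
    for x in x_seq:
        h = np.tanh(W_h @ h + W_x @ x + b)
        h = h + noise_std * rng.standard_normal(h.shape)   # noise injection
        hs.append(h)
    return np.stack(hs)

d_h, d_x = 32, 4
rng = np.random.default_rng(0)
hs = noisy_rnn_forward(rng.normal(size=(100, d_x)),
                       0.1 * rng.normal(size=(d_h, d_h)),
                       rng.normal(size=(d_h, d_x)),
                       np.zeros(d_h))
```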
4 code implementations • 5 Aug 2020 • Alejandro F. Queiruga, N. Benjamin Erichson, Dane Taylor, Michael W. Mahoney
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
no code implementations • 31 Jul 2020 • N. Benjamin Erichson, Dane Taylor, Qixuan Wu, Michael W. Mahoney
The ubiquity of deep neural networks (DNNs), cloud-based training, and transfer learning is giving rise to a new cybersecurity frontier in which insecure DNNs have "structural malware" (i.e., compromised weights and activation pathways).
1 code implementation • ICLR 2021 • Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rajiv Khanna, Michael W. Mahoney
Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
1 code implementation • ICLR 2021 • N. Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, Michael W. Mahoney
Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity.
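A minimal NumPy sketch of that two-part hidden-state evolution, a linear term A h plus a Lipschitz nonlinearity (tanh) of an affine map, discretized with a forward-Euler step; the parameter scales and step size are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def lipschitz_rnn_step(h, x, A, W, U, b, dt=0.1):
    """One forward-Euler step of dh/dt = A h + tanh(W h + U x + b)."""
    dh = A @ h + np.tanh(W @ h + U @ x + b)
    return h + dt * dh

d_h, d_x = 64, 8
rng = np.random.default_rng(0)
A, W = rng.normal(size=(d_h, d_h)) / d_h, rng.normal(size=(d_h, d_h)) / d_h
U, b = rng.normal(size=(d_h, d_x)), np.zeros(d_h)
h = np.zeros(d_h)
for x in rng.normal(size=(100, d_x)):      # unroll over a length-100 input sequence
    h = lipschitz_rnn_step(h, x, A, W, U, b)
```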
no code implementations • 10 Mar 2020 • Miles E. Lopes, N. Benjamin Erichson, Michael W. Mahoney
In order to compute fast approximations to the singular value decompositions (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.
1 code implementation • ICML 2020 • Omri Azencot, N. Benjamin Erichson, Vanessa Lin, Michael W. Mahoney
Recurrent neural networks are widely used on time series data, yet these models often ignore the underlying physical structure of such sequences.
no code implementations • 26 May 2019 • N. Benjamin Erichson, Michael Muehlebach, Michael W. Mahoney
In addition to providing high-profile successes in computer vision and natural language processing, neural networks also provide an emerging set of techniques for scientific problems.
1 code implementation • 7 Apr 2019 • N. Benjamin Erichson, Zhewei Yao, Michael W. Mahoney
To complement these approaches, we propose a very simple and inexpensive strategy that can be used to "retrofit" a previously trained network to improve its resilience to adversarial attacks.
1 code implementation • 20 Feb 2019 • N. Benjamin Erichson, Lionel Mathelin, Zhewei Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz
In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and limited data.
no code implementations • 28 Nov 2018 • Chen Gong, N. Benjamin Erichson, John P. Kelly, Laura Trutoiu, Brian T. Schowengerdt, Steven L. Brunton, Eric J. Seibel
To the best of our knowledge, this is the first template-matching algorithm for retinal images that works with small template images from unconstrained retinal areas.
no code implementations • 1 Apr 2018 • N. Benjamin Erichson, Peng Zheng, Krithika Manohar, Steven L. Brunton, J. Nathan Kutz, Aleksandr Y. Aravkin
Sparse principal component analysis (SPCA) has emerged as a powerful technique for modern data analysis, providing improved interpretation of low-rank structures by identifying localized spatial structures in the data and disambiguating between distinct time scales.
no code implementations • 23 Feb 2018 • N. Benjamin Erichson, Lionel Mathelin, Steven L. Brunton, J. Nathan Kutz
Diffusion maps are an emerging data-driven technique for nonlinear dimensionality reduction that is especially useful for the analysis of coherent structures and nonlinear embeddings of dynamical systems.
2 code implementations • 6 Nov 2017 • N. Benjamin Erichson, Ariana Mendible, Sophie Wihlborn, J. Nathan Kutz
Nonnegative matrix factorization (NMF) is a powerful tool for data mining.
6 code implementations • 6 Aug 2016 • N. Benjamin Erichson, Sergey Voronin, Steven L. Brunton, J. Nathan Kutz
The essential idea of probabilistic algorithms is to employ some amount of randomness in order to derive a smaller matrix from a high-dimensional data matrix.
Subjects: Computation, Mathematical Software, Methodology
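As a rough illustration of that idea, here is a minimal NumPy sketch of a sketch-based truncated SVD: a random test matrix compresses the data into a much smaller matrix whose decomposition is then lifted back. The Gaussian test matrix and oversampling amount are standard illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def randomized_svd(A, rank, n_oversamples=10):
    """Approximate rank-truncated SVD of A via a random sketch of its range."""
    Omega = np.random.standard_normal((A.shape[1], rank + n_oversamples))
    Y = A @ Omega                      # small sketch capturing the dominant range of A
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for that range
    B = Q.T @ A                        # project A onto the low-dimensional basis
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

A = np.random.standard_normal((2000, 500))
U, s, Vt = randomized_svd(A, rank=20)
```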
no code implementations • 14 Dec 2015 • N. Benjamin Erichson, Steven L. Brunton, J. Nathan Kutz
We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling.
no code implementations • 11 Dec 2015 • N. Benjamin Erichson, Carl Donovan
This paper introduces a fast algorithm for randomized computation of a low-rank Dynamic Mode Decomposition (DMD) of a matrix.
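A hedged sketch of how a randomized low-rank DMD can be organized: replace the expensive SVD of the snapshot matrix with the randomized range finder from the entry above, then form the reduced propagator and its eigendecomposition. The structure below is an illustrative assumption about the algorithm, not the paper's exact procedure.

```python
import numpy as np

def randomized_dmd(X, Y, rank, n_oversamples=10):
    """Low-rank DMD of snapshot pairs (X, Y ~ A X) using a randomized sketch of X."""
    Omega = np.random.standard_normal((X.shape[1], rank + n_oversamples))
    Q, _ = np.linalg.qr(X @ Omega)                       # randomized range finder
    U, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    U, s, Vt = (Q @ U)[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.T @ Y @ Vt.T / s                         # reduced linear propagator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.T / s @ W                             # DMD modes in the full space
    return eigvals, modes

Z = np.cumsum(np.random.standard_normal((256, 101)), axis=1)   # toy snapshot sequence
lam, phi = randomized_dmd(Z[:, :-1], Z[:, 1:], rank=10)
```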