Search Results for author: N. Benjamin Erichson

Found 30 papers, 14 papers with code

Error Estimation for Sketched SVD

no code implementations • ICML 2020 • Miles Lopes, N. Benjamin Erichson, Michael Mahoney

In order to compute fast approximations to the singular value decomposition (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.
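The setting is easy to illustrate: sketch the matrix down, take the singular values of the sketch, and bootstrap over the sketch rows to estimate how far those values are from the truth. Below is a minimal NumPy sketch of that idea; the Gaussian sketch and function names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def sketched_singular_values(A, sketch_size, rng):
    """Singular values of a Gaussian row-sketch S @ A (much smaller than A)."""
    m, _ = A.shape
    S = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
    SA = S @ A
    return np.linalg.svd(SA, compute_uv=False), SA

def bootstrap_sv_error(SA, sv_hat, k=5, n_boot=200, q=0.95, rng=None):
    """Resample the sketch rows to estimate the error of the top-k
    sketched singular values (the spirit of the bootstrap approach)."""
    rng = rng or np.random.default_rng(0)
    t = SA.shape[0]
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, t, size=t)            # rows, with replacement
        sv_star = np.linalg.svd(SA[idx], compute_uv=False)
        errs.append(np.abs(sv_star[:k] - sv_hat[:k]).max())
    return np.quantile(errs, q)                     # high-probability error bound

rng = np.random.default_rng(0)
A = rng.standard_normal((10000, 50)) @ rng.standard_normal((50, 200))
sv_hat, SA = sketched_singular_values(A, sketch_size=500, rng=rng)
print("estimated error bound:", bootstrap_sv_error(SA, sv_hat))
```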

Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs

no code implementations • 4 Oct 2023 • Ilan Naiman, N. Benjamin Erichson, Pu Ren, Michael W. Mahoney, Omri Azencot

In this work, we introduce Koopman VAE (KVAE), a new generative framework based on a novel design for the model prior that can be optimized for either regular or irregular training data.

Irregular Time Series • Time Series • +1

Robustifying State-space Models for Long Sequences via Approximate Diagonalization

no code implementations • 2 Oct 2023 • Annan Yu, Arnur Nigmetov, Dmitriy Morozov, Michael W. Mahoney, N. Benjamin Erichson

An example is the structured state-space sequence (S4) layer, which uses the diagonal-plus-low-rank structure of the HiPPO initialization framework.

Computational Efficiency
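The diagonal-plus-low-rank (DPLR) structure mentioned in this snippet is what makes S4-style layers cheap. As a self-contained illustration (not the S4 layer or this paper's method), a DPLR matrix A = diag(d) - p qᵀ admits O(N) matrix-vector products without ever forming A:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
d = rng.standard_normal(N)          # diagonal part
p = rng.standard_normal((N, 1))     # low-rank factors (rank 1 here)
q = rng.standard_normal((N, 1))
x = rng.standard_normal(N)

# Dense matvec: O(N^2), forms A explicitly.
A = np.diag(d) - p @ q.T
y_dense = A @ x

# DPLR matvec: O(N), uses only d, p, q.
y_dplr = d * x - p[:, 0] * (q[:, 0] @ x)

assert np.allclose(y_dense, y_dplr)
```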

SuperBench: A Super-Resolution Benchmark Dataset for Scientific Machine Learning

1 code implementation • 24 Jun 2023 • Pu Ren, N. Benjamin Erichson, Shashank Subramanian, Omer San, Zarija Lukic, Michael W. Mahoney

Super-Resolution (SR) techniques aim to enhance data resolution, enabling the retrieval of finer details and improving the overall quality and fidelity of the data representation.

Retrieval • Super-Resolution

Error Estimation for Random Fourier Features

1 code implementation • 22 Feb 2023 • Junwen Yao, N. Benjamin Erichson, Miles E. Lopes

Three key advantages of this approach are: (1) The error estimates are specific to the problem at hand, avoiding the pessimism of worst-case bounds.
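The random Fourier feature (RFF) construction itself is standard and easy to sketch; the error estimator is bootstrap-based, as in the sketched-SVD entry above. In the toy NumPy version below, the bootstrap resamples the random features themselves; this is a simplified stand-in for the paper's method, not its exact estimator.

```python
import numpy as np

def rff_features(X, n_features, gamma, rng):
    """Random Fourier features approximating the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) * np.sqrt(2.0 * gamma)
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Z = rff_features(X, n_features=500, gamma=0.5, rng=rng)
K_hat = Z @ Z.T                     # approximate kernel matrix

# Bootstrap over the features: resample columns of Z and track how much
# the kernel approximation fluctuates; a high quantile of that fluctuation
# serves as an a-posteriori, problem-specific error estimate.
errs = []
for _ in range(100):
    idx = rng.integers(0, Z.shape[1], Z.shape[1])
    K_star = Z[:, idx] @ Z[:, idx].T
    errs.append(np.abs(K_star - K_hat).max())
print("estimated 95% error bound:", np.quantile(errs, 0.95))
```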

Gated Recurrent Neural Networks with Weighted Time-Delay Feedback

no code implementations • 1 Dec 2022 • N. Benjamin Erichson, Soon Hoe Lim, Michael W. Mahoney

We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies.

Human Activity Recognition • Speech Recognition • +4
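The abstract does not pin down the update equations, so the following is only a hypothetical illustration of the general idea, a gated recurrent step augmented with a weighted feedback term from a hidden state tau steps in the past. It is not the paper's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delay_gru_step(x, h_prev, h_delayed, P, alpha=0.1):
    """One gated update plus a weighted time-delay feedback term.
    h_delayed is the hidden state from tau steps back; alpha weights it."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h_prev + P["bz"])   # update gate
    c = np.tanh(P["Wc"] @ x + P["Uc"] @ h_prev + P["bc"])   # candidate state
    return (1 - z) * h_prev + z * c + alpha * h_delayed     # delay feedback

rng = np.random.default_rng(0)
dx, dh, tau = 3, 8, 5
P = {k: rng.standard_normal(s) * 0.1 for k, s in
     [("Wz", (dh, dx)), ("Uz", (dh, dh)), ("bz", (dh,)),
      ("Wc", (dh, dx)), ("Uc", (dh, dh)), ("bc", (dh,))]}
xs = rng.standard_normal((20, dx))
hs = [np.zeros(dh)]
for t in range(len(xs)):
    h_delayed = hs[t - tau] if t >= tau else np.zeros(dh)
    hs.append(delay_gru_step(xs[t], hs[-1], h_delayed, P))
```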

Learning continuous models for continuous physics

no code implementations • 17 Feb 2022 • Aditi S. Krishnapriyan, Alejandro F. Queiruga, N. Benjamin Erichson, Michael W. Mahoney

Dynamical systems that evolve continuously over time are ubiquitous throughout science and engineering.

NoisyMix: Boosting Model Robustness to Common Corruptions

no code implementations • 2 Feb 2022 • N. Benjamin Erichson, Soon Hoe Lim, Winnie Xu, Francisco Utrera, Ziang Cao, Michael W. Mahoney

For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy; the robustness of neural networks is therefore an increasingly important topic.

Data Augmentation

Cluster-and-Conquer: A Framework For Time-Series Forecasting

no code implementations • 26 Oct 2021 • Reese Pathak, Rajat Sen, Nikhil Rao, N. Benjamin Erichson, Michael I. Jordan, Inderjit S. Dhillon

Our framework, which we refer to as "cluster-and-conquer", is highly general, allowing any time-series forecasting and clustering method to be used in each step.

Time Series • Time Series Forecasting
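Because the framework is method-agnostic, a toy instantiation is easy to write down. In the sketch below, k-means and a pooled AR(1) forecaster are arbitrary stand-ins for whichever clustering and forecasting methods one prefers.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_ar1(y):
    """Least-squares AR(1) coefficient for one series."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1] + 1e-12)

rng = np.random.default_rng(0)
series = rng.standard_normal((50, 100)).cumsum(axis=1)   # 50 toy random walks

# Step 1 (cluster): group normalized series by shape.
norm = (series - series.mean(1, keepdims=True)) / (series.std(1, keepdims=True) + 1e-12)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(norm)

# Step 2 (conquer): fit one simple forecaster per cluster, pooling its members.
forecasts = np.empty(len(series))
for c in np.unique(labels):
    members = series[labels == c]
    phi = np.mean([fit_ar1(y) for y in members])   # shared AR(1) coefficient
    forecasts[labels == c] = phi * members[:, -1]  # one-step-ahead forecasts
```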

Noisy Feature Mixup

2 code implementations • ICLR 2022 • Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney

We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data augmentation that combines the best of interpolation-based training and noise injection schemes.

Data Augmentation
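The abstract states the recipe: interpolate, then inject noise. A minimal NumPy sketch under that reading follows; the Beta-distributed mixing weight and the noise scales are illustrative defaults, not the paper's settings. In practice the operation is applied to the features of a randomly chosen layer during training.

```python
import numpy as np

def noisy_feature_mixup(h1, y1, h2, y2, alpha=1.0, add_std=0.1,
                        mult_std=0.1, rng=None):
    """Mix two batches of (features, labels), then inject multiplicative
    and additive noise into the mixed features."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)                   # mixup coefficient
    h = lam * h1 + (1 - lam) * h2                  # interpolate features
    y = lam * y1 + (1 - lam) * y2                  # interpolate (soft) labels
    h = h * (1 + mult_std * rng.standard_normal(h.shape))  # multiplicative noise
    h = h + add_std * rng.standard_normal(h.shape)          # additive noise
    return h, y
```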

Stateful ODE-Nets using Basis Function Expansions

3 code implementations • NeurIPS 2021 • Alejandro Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

The recently introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems.

Image Classification • Sentence

A Differential Geometry Perspective on Orthogonal Recurrent Models

no code implementations • 18 Feb 2021 • Omri Azencot, N. Benjamin Erichson, Mirela Ben-Chen, Michael W. Mahoney

In this work, we employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.

Noisy Recurrent Neural Networks

1 code implementation • NeurIPS 2021 • Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states.

General Classification
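A minimal version of the scheme being studied, injecting Gaussian noise into a vanilla tanh update during training, might read as follows; the framework in the paper covers far more general noise models.

```python
import numpy as np

def noisy_rnn_step(x, h, Wx, Wh, b, sigma=0.05, rng=None):
    """Vanilla tanh RNN update with Gaussian noise injected into the
    hidden state (training-time only; sigma = 0 recovers the usual RNN)."""
    rng = rng or np.random.default_rng(0)
    h_new = np.tanh(Wx @ x + Wh @ h + b)
    return h_new + sigma * rng.standard_normal(h_new.shape)
```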

Continuous-in-Depth Neural Networks

4 code implementations • 5 Aug 2020 • Alejandro F. Queiruga, N. Benjamin Erichson, Dane Taylor, Michael W. Mahoney

We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.

Numerical Integration

Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware

no code implementations • 31 Jul 2020 • N. Benjamin Erichson, Dane Taylor, Qixuan Wu, Michael W. Mahoney

The ubiquity of deep neural networks (DNNs), cloud-based training, and transfer learning is giving rise to a new cybersecurity frontier in which insecure DNNs have 'structural malware' (i.e., compromised weights and activation pathways).

Transfer Learning

Lipschitz Recurrent Neural Networks

1 code implementation • ICLR 2021 • N. Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, Michael W. Mahoney

Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity.

Language Modelling • Sequential Image Classification
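The abstract describes the model closely enough to sketch: the hidden state evolves by a linear part plus a Lipschitz nonlinearity. A minimal Euler-discretized step is shown below; the paper's careful spectral parameterization of the matrices, which is what guarantees stability, is omitted here.

```python
import numpy as np

def lipschitz_rnn_step(x, h, A, W, U, b, dt=0.1):
    """One Euler step of  dh/dt = A h + tanh(W h + U x + b):
    a well-understood linear component plus a 1-Lipschitz nonlinearity."""
    return h + dt * (A @ h + np.tanh(W @ h + U @ x + b))
```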

Error Estimation for Sketched SVD via the Bootstrap

no code implementations • 10 Mar 2020 • Miles E. Lopes, N. Benjamin Erichson, Michael W. Mahoney

In order to compute fast approximations to the singular value decomposition (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.

Forecasting Sequential Data using Consistent Koopman Autoencoders

1 code implementation • ICML 2020 • Omri Azencot, N. Benjamin Erichson, Vanessa Lin, Michael W. Mahoney

Recurrent neural networks are widely used on time series data, yet these models often ignore the underlying physical structure of such sequences.

Time Series • Time Series Analysis
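A sketch of this architecture family: an autoencoder whose latent state is advanced by a linear (Koopman) operator, with the backward operator and consistency penalty suggested by the paper's title. Layer sizes and loss weights below are illustrative, not the authors' code; training would minimize the loss over snapshot pairs (x_t, x_{t+1}).

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    """Encode, advance linearly in latent space, decode. The 'consistent'
    variant also learns a backward operator and penalizes the forward and
    backward operators for disagreeing."""
    def __init__(self, n, k):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, k))
        self.dec = nn.Sequential(nn.Linear(k, 64), nn.Tanh(), nn.Linear(64, n))
        self.fwd = nn.Linear(k, k, bias=False)   # forward Koopman operator
        self.bwd = nn.Linear(k, k, bias=False)   # backward Koopman operator

    def loss(self, x_t, x_next):
        z = self.enc(x_t)
        recon = ((self.dec(z) - x_t) ** 2).mean()             # autoencoding
        pred = ((self.dec(self.fwd(z)) - x_next) ** 2).mean() # one-step prediction
        eye = torch.eye(self.fwd.weight.shape[0])
        consist = ((self.bwd.weight @ self.fwd.weight - eye) ** 2).mean()
        return recon + pred + consist
```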

Physics-informed Autoencoders for Lyapunov-stable Fluid Flow Prediction

no code implementations • 26 May 2019 • N. Benjamin Erichson, Michael Muehlebach, Michael W. Mahoney

In addition to providing high-profile successes in computer vision and natural language processing, neural networks also provide an emerging set of techniques for scientific problems.

JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks

1 code implementation • 7 Apr 2019 • N. Benjamin Erichson, Zhewei Yao, Michael W. Mahoney

To complement these approaches, we propose a very simple and inexpensive strategy that can be used to "retrofit" a previously trained network to improve its resilience to adversarial attacks.
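The retrofit itself is tiny: swap ReLU units for a thresholded variant at inference time, with the threshold kappa as the knob that trades clean accuracy for robustness. A minimal sketch:

```python
import numpy as np

def jump_relu(x, kappa=1.0):
    """JumpReLU: like ReLU, but activations must clear a threshold kappa
    before passing through; anything at or below kappa is zeroed out."""
    return np.where(x > kappa, x, 0.0)
```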

Shallow Neural Networks for Fluid Flow Reconstruction with Limited Sensors

1 code implementation • 20 Feb 2019 • N. Benjamin Erichson, Lionel Mathelin, Zhewei Yao, Steven L. Brunton, Michael W. Mahoney, J. Nathan Kutz

In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and limited data.
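The premise maps directly to a small network: a shallow multilayer perceptron decoding a few point-sensor readings into the full high-dimensional field. The sizes below are made up for illustration; the paper's architectures and training setup differ.

```python
import torch
import torch.nn as nn

n_sensors, n_field = 16, 4096   # illustrative sizes

# A shallow decoder in the spirit of the paper: sensors in, full field out.
model = nn.Sequential(
    nn.Linear(n_sensors, 40), nn.ReLU(),
    nn.Linear(40, 40), nn.ReLU(),
    nn.Linear(40, n_field),      # reconstructed flow field
)

s = torch.randn(8, n_sensors)    # a batch of sensor measurements
field_hat = model(s)             # reconstructed fields, shape (8, 4096)
```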

RetinaMatch: Efficient Template Matching of Retina Images for Teleophthalmology

no code implementations • 28 Nov 2018 • Chen Gong, N. Benjamin Erichson, John P. Kelly, Laura Trutoiu, Brian T. Schowengerdt, Steven L. Brunton, Eric J. Seibel

To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas.

Dimensionality Reduction • Mixed Reality • +1

Sparse Principal Component Analysis via Variable Projection

no code implementations • 1 Apr 2018 • N. Benjamin Erichson, Peng Zheng, Krithika Manohar, Steven L. Brunton, J. Nathan Kutz, Aleksandr Y. Aravkin

Sparse principal component analysis (SPCA) has emerged as a powerful technique for modern data analysis, providing improved interpretation of low-rank structures by identifying localized spatial structures in the data and disambiguating between distinct time scales.

Computational Efficiency

Diffusion Maps meet Nyström

no code implementations • 23 Feb 2018 • N. Benjamin Erichson, Lionel Mathelin, Steven L. Brunton, J. Nathan Kutz

Diffusion maps are an emerging data-driven technique for nonlinear dimensionality reduction, especially useful for the analysis of coherent structures and nonlinear embeddings of dynamical systems.

Dimensionality Reduction • Time Series • +1
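For context, a bare-bones diffusion map (dense, no acceleration) looks like the sketch below. The paper's contribution is to accelerate the expensive eigendecomposition step with Nyström-style approximation, which this sketch deliberately omits.

```python
import numpy as np

def diffusion_map(X, eps, k=2, t=1):
    """Basic diffusion map embedding of the rows of X."""
    # Gaussian affinities, row-normalized into a Markov transition matrix.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)
    # Embed with the top non-trivial eigenvectors, scaled by eigenvalues^t.
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    w, V = w.real[order], V.real[:, order]
    return V[:, 1:k + 1] * (w[1:k + 1] ** t)   # skip the trivial eigenpair
```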

Randomized Nonnegative Matrix Factorization

2 code implementations • 6 Nov 2017 • N. Benjamin Erichson, Ariana Mendible, Sophie Wihlborn, J. Nathan Kutz

Nonnegative matrix factorization (NMF) is a powerful tool for data mining.

Randomized Matrix Decompositions using R

6 code implementations • 6 Aug 2016 • N. Benjamin Erichson, Sergey Voronin, Steven L. Brunton, J. Nathan Kutz

The essential idea of probabilistic algorithms is to employ some amount of randomness in order to derive a smaller matrix from a high-dimensional data matrix.

Computation • Mathematical Software • Methodology
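The "essential idea" in the snippet has a compact expression: sample the range of the matrix, project onto it to get a small derived matrix, decompose that, and lift the result back. The paper's software is written in R; purely for illustration here is the same probabilistic SVD scheme in NumPy.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, rng=None):
    """Probabilistic SVD: derive a smaller matrix from A via a random
    range sample, decompose it, and lift back to the original space."""
    rng = rng or np.random.default_rng(0)
    _, n = A.shape
    Y = A @ rng.standard_normal((n, rank + oversample))  # sample the range
    for _ in range(n_iter):                              # power iterations
        Y = A @ (A.T @ Y)                                # sharpen the basis
    Q, _ = np.linalg.qr(Y)                               # orthonormal range basis
    B = Q.T @ A                                          # the smaller derived matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```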

Compressed Dynamic Mode Decomposition for Background Modeling

no code implementations • 14 Dec 2015 • N. Benjamin Erichson, Steven L. Brunton, J. Nathan Kutz

We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling.

Computational Efficiency
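For reference, standard exact DMD is sketched below; the compressed variant in this paper first sketches the snapshot matrices for speed, a step omitted here. For background modeling, the background is spanned by modes whose eigenvalues sit near 1 (near-zero temporal frequency), while the remaining modes capture the moving foreground.

```python
import numpy as np

def dmd(X, Xp, rank):
    """Exact DMD of snapshot pairs: find eigenpairs of the best-fit
    linear map satisfying Xp ~ A X, projected to the given rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vt[:rank].conj().T
    Atilde = (U.conj().T @ Xp @ V) / s      # A projected onto the POD basis
    evals, W = np.linalg.eig(Atilde)
    modes = ((Xp @ V) / s) @ W              # exact DMD modes
    return evals, modes

# Background/foreground split: modes with |eigenvalue - 1| small are
# (nearly) static and model the background.
```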

Randomized Low-Rank Dynamic Mode Decomposition for Motion Detection

no code implementations • 11 Dec 2015 • N. Benjamin Erichson, Carl Donovan

This paper introduces a fast algorithm for randomized computation of a low-rank Dynamic Mode Decomposition (DMD) of a matrix.

Motion Detection
