Search Results for author: Hayden Schaeffer

Found 17 papers, 8 papers with code

Towards a Foundation Model for Partial Differential Equations: Multi-Operator Learning and Extrapolation

1 code implementation · 18 Apr 2024 · Jingmin Sun, Yuxuan Liu, Zecheng Zhang, Hayden Schaeffer

More importantly, we provide three extrapolation studies demonstrating that PROSE-PDE can generalize physical features through the robust training of multiple operators, and that the model can extrapolate to predict solutions of PDEs whose governing models or data were unseen during training.

Operator learning

D2NO: Efficient Handling of Heterogeneous Input Function Spaces with Distributed Deep Neural Operators

no code implementations · 29 Oct 2023 · Zecheng Zhang, Christian Moya, Lu Lu, Guang Lin, Hayden Schaeffer

Neural operators have been applied in various scientific fields, such as solving parametric partial differential equations, dynamical systems with control, and inverse problems.

PROSE: Predicting Operators and Symbolic Expressions using Multimodal Transformers

1 code implementation · 28 Sep 2023 · Yuxuan Liu, Zecheng Zhang, Hayden Schaeffer

Approximating nonlinear differential equations using a neural network provides a robust and efficient tool for various scientific computing tasks, including real-time predictions, inverse problems, optimal controls, and surrogate modeling.

Random Feature Models for Learning Interacting Dynamical Systems

1 code implementation · 11 Dec 2022 · Yuxuan Liu, Scott G. McCalla, Hayden Schaeffer

Particle dynamics and multi-agent systems provide accurate dynamical models for studying and forecasting the behavior of complex interacting systems.

Concentration of Random Feature Matrices in High-Dimensions

no code implementations · 14 Apr 2022 · Zhijun Chen, Hayden Schaeffer, Rachel Ward

The spectra of random feature matrices provide essential information on the conditioning of the linear system used in random feature regression problems and are thus connected to the consistency and generalization of random feature models.

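As a minimal sketch of the object under study, assuming a cosine feature map and Gaussian data and weights (the paper's precise feature map, scaling, and regime conditions may differ), one can build a random feature matrix and inspect its spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

m, N, d = 2000, 100, 10      # data samples, features (neurons), input dimension
X = rng.normal(size=(m, d))  # data points x_1, ..., x_m
W = rng.normal(size=(d, N))  # random weights omega_1, ..., omega_N

# Random feature matrix A[j, k] = phi(<x_j, omega_k>) with phi = cos,
# normalized by sqrt(m); its spectrum controls the conditioning of the
# least-squares system solved in random feature regression.
A = np.cos(X @ W) / np.sqrt(m)

s = np.linalg.svd(A, compute_uv=False)
print(f"singular values in [{s[-1]:.3f}, {s[0]:.3f}], cond(A) = {s[0] / s[-1]:.2f}")
```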

SRMD: Sparse Random Mode Decomposition

1 code implementation · 12 Apr 2022 · Nicholas Richardson, Hayden Schaeffer, Giang Tran

Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis.

Time Series · Time Series Analysis
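
SRMD expands the signal over a large dictionary of random time-frequency atoms and keeps only a sparse subset. Below is a hedged sketch of that step, with the atom parameter ranges and Lasso penalty chosen arbitrarily for illustration; the paper additionally clusters the active atoms in the time-frequency plane to assemble the modes.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Signal: a low- and a high-frequency component.
t = np.linspace(0, 1, 512)
y = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Random Gabor-like dictionary: windowed sinusoids with random center
# times, frequencies, widths, and phases (ranges are assumptions).
K = 2000
centers = rng.uniform(0, 1, K)
freqs = rng.uniform(0, 60, K)
widths = rng.uniform(0.05, 0.3, K)
phases = rng.uniform(0, 2 * np.pi, K)
A = np.exp(-((t[:, None] - centers) / widths) ** 2) \
    * np.cos(2 * np.pi * freqs * t[:, None] + phases)

# Sparse regression selects a few active atoms out of the dictionary.
coef = Lasso(alpha=1e-3, max_iter=5000).fit(A, y).coef_
active = np.flatnonzero(np.abs(coef) > 1e-4)
print(f"{active.size} active atoms out of {K}")
```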

HARFE: Hard-Ridge Random Feature Expansion

1 code implementation · 6 Feb 2022 · Esha Saha, Hayden Schaeffer, Giang Tran

We prove that the HARFE method is guaranteed to converge with a given error bound depending on the noise and the parameters of the sparse ridge regression model.

regression
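
In spirit, HARFE fits a sparse ridge model over random features with a hard-thresholding iteration. The sketch below is a generic hard-thresholded gradient step on the ridge objective, not the paper's exact pseudocode; the feature map, step size, sparsity level, and ridge weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: a function of only a few of the d coordinates.
m, d, N = 400, 10, 1000
X = rng.uniform(-1, 1, (m, d))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2

# Random sine features with random weights and biases.
W = rng.normal(size=(d, N))
b = rng.uniform(0, 2 * np.pi, N)
A = np.sin(X @ W + b) / np.sqrt(N)

# Hard-thresholded gradient iteration on ||A c - y||^2 + lam ||c||^2,
# keeping only the s largest coefficients after each step.
s, lam, eta = 50, 1e-3, 0.5
c = np.zeros(N)
for _ in range(300):
    c = c - eta * (A.T @ (A @ c - y) + lam * c)
    c[np.argsort(np.abs(c))[:-s]] = 0.0   # zero all but the s largest

print(f"train RMSE: {np.sqrt(np.mean((A @ c - y) ** 2)):.3f}")
```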

SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning

1 code implementation · 7 Dec 2021 · Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel Ward

Inspired by the success of the iterative magnitude pruning technique in finding lottery tickets of neural networks, we propose a new method, Sparser Random Feature Models via IMP (SHRIMP), to efficiently fit high-dimensional data with inherent low-dimensional structure in the form of sparse variable dependencies.

Additive models · Computational Efficiency +1
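
The pruning loop of SHRIMP is easy to sketch: fit a ridge solution on the surviving random features, drop the half with the smallest coefficient magnitudes, and refit. The feature map, ridge weight, and pruning schedule below are illustrative assumptions rather than the paper's tuned choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Data with sparse variable dependence: y uses 2 of the 10 inputs.
m, d, N = 300, 10, 512
X = rng.uniform(-1, 1, (m, d))
y = np.cos(np.pi * X[:, 0]) * X[:, 1]

W = rng.normal(size=(d, N))
b = rng.uniform(0, 2 * np.pi, N)
features = np.cos(X @ W + b)

# Iterative magnitude pruning over random features.
keep = np.arange(N)
lam = 1e-3
while keep.size > 16:
    A = features[:, keep]
    c = np.linalg.solve(A.T @ A + lam * np.eye(keep.size), A.T @ y)
    order = np.argsort(np.abs(c))
    keep = keep[order[keep.size // 2:]]   # prune the smaller half

A = features[:, keep]
c = np.linalg.solve(A.T @ A + lam * np.eye(keep.size), A.T @ y)
print(f"{keep.size} features kept, train RMSE "
      f"{np.sqrt(np.mean((A @ c - y) ** 2)):.3f}")
```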

Conditioning of Random Feature Matrices: Double Descent and Generalization Error

no code implementations · 21 Oct 2021 · Zhijun Chen, Hayden Schaeffer

In particular, we show that if the complexity ratio $\frac{N}{m}$, where $N$ is the number of neurons and $m$ is the number of data samples, scales like $\log^{-1}(N)$ or $\log(m)$, then the random feature matrix is well-conditioned.

regression
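
A quick numerical illustration of the statement, again under assumed cosine features and Gaussian data: sweeping $N$ through the interpolation threshold $N = m$, the condition number typically peaks near $N = m$ and improves in both the under- and over-parameterized regimes.

```python
import numpy as np

rng = np.random.default_rng(4)

m, d = 256, 5
X = rng.normal(size=(m, d))

# Condition number of the m x N random feature matrix as N sweeps
# through the interpolation threshold N = m.
for N in [32, 128, 256, 512, 2048]:
    W = rng.normal(size=(d, N))
    A = np.cos(X @ W)
    s = np.linalg.svd(A, compute_uv=False)
    print(f"N/m = {N / m:5.2f}   cond(A) = {s[0] / s[-1]:10.2f}")
```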

Generalization Bounds for Sparse Random Feature Expansions

2 code implementations · 4 Mar 2021 · Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward

In particular, we provide generalization bounds for functions in a certain class (that is dense in a reproducing kernel Hilbert space) depending on the number of samples and the distribution of features.

BIG-bench Machine Learning · Compressive Sensing +1

Reduced Order Modeling using Shallow ReLU Networks with Grassmann Layers

no code implementations · 17 Dec 2020 · Kayla Bollinger, Hayden Schaeffer

The neural network takes the form of a "three-layer" network: the first layer is constrained to lie on the Grassmann manifold with its activation function set to the identity, while the remaining network is a standard two-layer ReLU neural network.
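
A forward-pass sketch of that architecture, assuming the Grassmann constraint is realized as a matrix with orthonormal columns (training would keep it on the manifold, e.g. by Riemannian optimization, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(5)

d, r, h = 100, 4, 32   # full dimension, reduced dimension, hidden width

# First layer: a point on the Grassmann manifold, represented by a
# matrix with orthonormal columns (a random one via QR); the activation
# is the identity, so the layer is a linear projection onto r coordinates.
P, _ = np.linalg.qr(rng.normal(size=(d, r)))

# Remaining network: a standard two-layer ReLU network on the reduced
# coordinates.
W1 = rng.normal(size=(r, h)) * np.sqrt(2 / r)
b1 = np.zeros(h)
W2 = rng.normal(size=(h, 1)) * np.sqrt(2 / h)
b2 = np.zeros(1)

def model(x):
    z = x @ P                       # Grassmann layer, identity activation
    z = np.maximum(z @ W1 + b1, 0)  # ReLU hidden layer
    return z @ W2 + b2

x = rng.normal(size=(8, d))
print(model(x).shape)   # (8, 1)
```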

Extending the step-size restriction for gradient descent to avoid strict saddle points

no code implementations · 5 Aug 2019 · Hayden Schaeffer, Scott G. McCalla

We provide larger step-size restrictions for which gradient descent based algorithms (almost surely) avoid strict saddle points.
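
A toy illustration of the setting (not the paper's sharpened condition): for $f(x, y) = (x^2 - y^2)/2$ the origin is a strict saddle and the gradient-Lipschitz constant is $L = 1$. Classical avoidance results ask for a step size below $1/L$; the sketch simply shows gradient descent from a small random start escaping the saddle at both a small and a larger step size.

```python
import numpy as np

rng = np.random.default_rng(6)

# f(x, y) = (x**2 - y**2) / 2: Hessian eigenvalues +1 and -1, so the
# origin is a strict saddle and L = 1.
grad = lambda z: np.array([z[0], -z[1]])

for eta in [0.5, 1.5]:
    z = rng.normal(size=2) * 1e-3   # small random initialization
    for _ in range(50):
        z = z - eta * grad(z)
    print(f"eta = {eta}: distance from saddle = {np.linalg.norm(z):.2e}")
```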

Recovery guarantees for polynomial approximation from dependent data with outliers

no code implementations · 25 Nov 2018 · Lam Si Tung Ho, Hayden Schaeffer, Giang Tran, Rachel Ward

In this work, we study the problem of learning nonlinear functions from corrupted and dependent data.

Forward Stability of ResNet and Its Variants

no code implementations · 24 Nov 2018 · Linan Zhang, Hayden Schaeffer

In this work, we show that the post-activation ResNet is related to an optimal control problem with differential inclusions, and provide continuous-time stability results for the differential inclusion associated with ResNet.
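
For reference, a minimal sketch of a post-activation residual block, where the nonlinearity is applied after the skip connection; reading each block as a forward-Euler step is what motivates the differential-inclusion viewpoint, since the ReLU makes the continuous-time limit nonsmooth. The width, depth, step size, and single linear map per block are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

d, n_blocks, h = 16, 10, 0.1
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_blocks)]

def post_activation_resnet(x):
    # x_{k+1} = relu(x_k + h * x_k W_k): the activation sits AFTER the
    # residual addition, unlike the pre-activation variant.
    for W in Ws:
        x = np.maximum(x + h * (x @ W), 0.0)
    return x

x0 = rng.normal(size=(4, d))
print(np.linalg.norm(post_activation_resnet(x0), axis=1))
```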

On the Convergence of the SINDy Algorithm

1 code implementation · 16 May 2018 · Linan Zhang, Hayden Schaeffer

In this work, we provide theoretical results on the behavior and convergence of the SINDy algorithm proposed in [6].

Optimization and Control · Information Theory
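
The algorithm in question is the sequentially thresholded least-squares iteration of SINDy: solve a least-squares problem over a candidate library, zero out small coefficients, and re-solve on the surviving support. A self-contained sketch on the toy system $\dot{x} = -2x$ (the library, threshold, and grid are illustrative choices):

```python
import numpy as np

# Data from x' = -2x, with derivatives from finite differences.
t = np.linspace(0, 2, 400)
x = np.exp(-2 * t)
dx = np.gradient(x, t)

# Candidate library Theta = [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

# Sequentially thresholded least squares.
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
thresh = 0.1
for _ in range(10):
    small = np.abs(xi) < thresh
    xi[small] = 0.0
    if (~small).any():
        xi[~small] = np.linalg.lstsq(Theta[:, ~small], dx, rcond=None)[0]

print("coefficients on [1, x, x^2, x^3]:", np.round(xi, 3))  # ~ [0, -2, 0, 0]
```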

Sparse model selection via integral terms

no code implementations · Physical Review E 96, 2017 · Hayden Schaeffer, Scott G. McCalla

Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations.

Model Selection · regression
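
The core idea: instead of regressing a noisy numerical derivative on a candidate library $\Theta(x)$, regress $x(t) - x(0)$ on the running integrals of the library columns, which behaves far better under noise. The paper couples this with sparse model selection; the sketch below shows only the integral least-squares step, with an assumed toy system and noise level.

```python
import numpy as np

rng = np.random.default_rng(9)

# Noisy samples of x' = -2x; in integral form, x(t) - x(0) equals the
# running integral of Theta(x) applied to the coefficient vector xi.
t = np.linspace(0, 2, 400)
x = np.exp(-2 * t) + 0.01 * rng.normal(size=t.size)

Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

# Cumulative trapezoidal integral of each library column.
dt = t[1] - t[0]
int_Theta = np.vstack([
    np.zeros((1, Theta.shape[1])),
    np.cumsum((Theta[1:] + Theta[:-1]) / 2 * dt, axis=0),
])

xi = np.linalg.lstsq(int_Theta, x - x[0], rcond=None)[0]
print("coefficients on [1, x, x^2, x^3]:", np.round(xi, 3))  # ~ [0, -2, 0, 0]
```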
