Search Results for author: Houman Owhadi

Found 34 papers, 15 papers with code

Kolmogorov n-Widths for Multitask Physics-Informed Machine Learning (PIML) Methods: Towards Robust Metrics

no code implementations • 16 Feb 2024 • Michael Penwarden, Houman Owhadi, Robert M. Kirby

This topic encompasses a broad array of methods and models aimed at solving a single PDE problem or a collection of PDE problems, the latter setting being known as multitask learning.

Physics-informed machine learning

Diffeomorphic Measure Matching with Kernels for Generative Modeling

1 code implementation • 12 Feb 2024 • Biraj Pandey, Bamdad Hosseini, Pau Batlle, Houman Owhadi

This article presents a general framework for the transport of probability measures towards minimum divergence generative modeling and sampling using ordinary differential equations (ODEs) and Reproducing Kernel Hilbert Spaces (RKHSs), inspired by ideas from diffeomorphic matching and image registration.

Image Registration

Bridging Algorithmic Information Theory and Machine Learning: A New Approach to Kernel Learning

no code implementations • 21 Nov 2023 • Boumediene Hamzi, Marcus Hutter, Houman Owhadi

Machine Learning (ML) and Algorithmic Information Theory (AIT) look at Complexity from different points of view.

Error Analysis of Kernel/GP Methods for Nonlinear and Parametric PDEs

no code implementations • 8 May 2023 • Pau Batlle, Yifan Chen, Bamdad Hosseini, Houman Owhadi, Andrew M. Stuart

We introduce a priori Sobolev-space error estimates for the solution of nonlinear, and possibly parametric, PDEs using Gaussian process and kernel based methods.

Kernel Methods are Competitive for Operator Learning

1 code implementation • 26 Apr 2023 • Pau Batlle, Matthieu Darcy, Bamdad Hosseini, Houman Owhadi

We present a general kernel-based framework for learning operators between Banach spaces along with a priori error analysis and comprehensive numerical comparisons with popular neural net (NN) approaches such as Deep Operator Net (DeepONet) [Lu et al.] and Fourier Neural Operator (FNO) [Li et al.].

Operator learning Uncertainty Quantification
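
The simplest form of the kernel recipe above can be sketched as follows: treat input functions sampled on a grid as vectors and do vector-valued kernel ridge regression to the output function's samples. This is a minimal illustration, not the paper's implementation; the target operator (a discrete antiderivative), grid size, and median-heuristic lengthscale are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of kernel operator learning: input functions sampled on a grid
# become vectors; Gaussian kernel ridge regression maps them to the output
# function's samples. The operator (discrete antiderivative), grid, and
# lengthscale are illustrative choices, not the paper's setup.
rng = np.random.default_rng(0)
m, n_train = 32, 200
x = np.linspace(0.0, 1.0, m)

def random_input():
    # smooth random input: a few random Fourier modes
    c = rng.normal(size=4)
    return sum(ck * np.sin((k + 1) * np.pi * x) for k, ck in enumerate(c))

U = np.stack([random_input() for _ in range(n_train)])  # (n_train, m) inputs
V = np.cumsum(U, axis=1) / m                            # target operator: antiderivative

sq = np.sum((U[:, None, :] - U[None, :, :]) ** 2, axis=-1)
sigma2 = np.median(sq)                                  # median-heuristic lengthscale
alpha = np.linalg.solve(np.exp(-sq / (2 * sigma2)) + 1e-8 * np.eye(n_train), V)

def apply_operator(u):
    """Predict the output function's samples for a new sampled input u."""
    kx = np.exp(-np.sum((u - U) ** 2, axis=-1) / (2 * sigma2))
    return kx @ alpha

u_test = np.sin(np.pi * x)        # in-distribution test input
v_pred = apply_operator(u_test)
```

Despite its simplicity, this plain kernel regressor is the kind of baseline the paper compares against DeepONet and FNO.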

Sparse Cholesky Factorization for Solving Nonlinear PDEs via Gaussian Processes

1 code implementation • 3 Apr 2023 • Yifan Chen, Houman Owhadi, Florian Schäfer

The primary goal of this paper is to provide a near-linear complexity algorithm for working with such kernel matrices.

Gaussian Processes

Learning Dynamical Systems from Data: A Simple Cross-Validation Perspective, Part V: Sparse Kernel Flows for 132 Chaotic Dynamical Systems

1 code implementation • 24 Jan 2023 • Lu Yang, Xiuwen Sun, Boumediene Hamzi, Houman Owhadi, Naiming Xie

In this paper, we introduce the method of Sparse Kernel Flows in order to learn the "best" kernel by starting from a large dictionary of kernels.
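
The core Kernel Flows criterion that such methods optimize can be sketched in a few lines (this is the basic rho loss, not the paper's sparse variant; the toy data, kernels, and lengthscales below are illustrative assumptions):

```python
import numpy as np

# Hedged sketch of the Kernel Flows loss rho: rho(K) in [0, 1] measures the
# relative RKHS-norm error incurred when interpolating the data from a random
# half of the sample instead of the full sample. A good kernel makes rho small.
def rho(kernel, X, y, idx):
    K = kernel(X[:, None], X[None, :]) + 1e-8 * np.eye(len(X))  # jitter for stability
    Kc = K[np.ix_(idx, idx)]
    num = y[idx] @ np.linalg.solve(Kc, y[idx])   # squared norm of sub-sample interpolant
    den = y @ np.linalg.solve(K, y)              # squared norm of full-sample interpolant
    return 1.0 - num / den

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * X)
idx = rng.choice(len(X), size=20, replace=False)  # random half of the sample

gauss = lambda s: (lambda a, b: np.exp(-(a - b) ** 2 / (2 * s ** 2)))
losses = {s: rho(gauss(s), X, y, idx) for s in (0.02, 0.1)}
# a well-adapted lengthscale yields a much smaller rho than a too-small one
```

Learning the kernel then amounts to descending rho with respect to the kernel's parameters (or, in the sparse variant, the dictionary weights).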

Multiclass classification utilising an estimated algorithmic probability prior

no code implementations • 14 Dec 2022 • Kamaludin Dingle, Pau Batlle, Houman Owhadi

Here we explore how algorithmic information theory, especially algorithmic probability, may aid in a machine learning task.

Classification

One-Shot Learning of Stochastic Differential Equations with Data Adapted Kernels

no code implementations • 24 Sep 2022 • Matthieu Darcy, Boumediene Hamzi, Giulia Livieri, Houman Owhadi, Peyman Tavallali

(2) Complete the graph (approximate unknown functions and random variables) via Maximum a Posteriori Estimation (given the data) with Gaussian Process (GP) priors on the unknown functions.

One-Shot Learning

Gaussian Process Hydrodynamics

no code implementations • 21 Sep 2022 • Houman Owhadi

As in Smoothed Particle Hydrodynamics (SPH), GPH is a Lagrangian particle-based approach involving the tracking of a finite number of particles transported by the flow.

Learning "best" kernels from data in Gaussian process regression. With application to aerodynamics

no code implementations • 3 Jun 2022 • Jean-Luc Akian, Luc Bonnet, Houman Owhadi, Éric Savin

This paper introduces algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques.

regression

Aggregation of Pareto optimal models

no code implementations • 8 Dec 2021 • Hamed Hamze Bajgiran, Houman Owhadi

Under these four steps, we show that all rational/consistent aggregation rules are as follows: give each individual Pareto optimal model a weight; introduce a weak order/ranking over the set of Pareto optimal models; and aggregate a finite set of models S as the model associated with the prior obtained as the weighted average of the priors of the highest-ranked models in S. This result shows that all rational/consistent aggregation rules must follow a generalization of hierarchical Bayesian modeling.

Computational Graph Completion

no code implementations • 20 Oct 2021 • Houman Owhadi

The underlying problem could therefore also be interpreted as a generalization of that of solving linear systems of equations to that of approximating unknown variables and functions with noisy, incomplete, and nonlinear dependencies.

Dimensionality Reduction Gaussian Processes +1

Uncertainty Quantification of the 4th kind; optimal posterior accuracy-uncertainty tradeoff with the minimum enclosing ball

1 code implementation • 24 Aug 2021 • Hamed Hamze Bajgiran, Pau Batlle Franch, Houman Owhadi, Mostafa Samir, Clint Scovel, Mahdy Shirdel, Michael Stanley, Peyman Tavallali

Although (C) leads to the identification of an optimal prior, its approximation suffers from the curse of dimensionality and the notion of risk is one that is averaged with respect to the distribution of the data.

Bayesian Inference Uncertainty Quantification

Solving and Learning Nonlinear PDEs with Gaussian Processes

2 code implementations • 24 Mar 2021 • Yifan Chen, Bamdad Hosseini, Houman Owhadi, Andrew M. Stuart

The main idea of our method is to approximate the solution of a given PDE as the maximum a posteriori (MAP) estimator of a Gaussian process conditioned on solving the PDE at a finite number of collocation points.

Gaussian Processes
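
For a linear problem, the MAP construction described above reduces to classical symmetric kernel collocation, which can be sketched end to end. This is my own toy setup, not the paper's code: -u'' = f on (0,1) with u(0) = u(1) = 0, a Gaussian kernel, and a manufactured solution u = sin(pi x); the lengthscale, point counts, and jitter are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: GP conditioned on -u''(x_i) = f(x_i) at interior collocation
# points and u = 0 at the boundary. Lk and LLk are the Gaussian kernel with the
# operator -d^2/dx^2 applied to one and both arguments, respectively.
l = 0.2
k   = lambda x, y: np.exp(-(x - y) ** 2 / (2 * l ** 2))
Lk  = lambda x, y: (1 / l**2 - (x - y)**2 / l**4) * k(x, y)
LLk = lambda x, y: (3 / l**4 - 6 * (x - y)**2 / l**6 + (x - y)**4 / l**8) * k(x, y)

xi = np.linspace(0.1, 0.9, 9)              # interior collocation points
xb = np.array([0.0, 1.0])                  # boundary points
f = lambda x: np.pi**2 * np.sin(np.pi * x) # source for u = sin(pi x)

P = np.r_[xi, xb]
X, Y = P[:, None], P[None, :]
n = len(xi)
K = k(X, Y)                                # Gram matrix of the measurements:
K[:n, :n] = LLk(X, Y)[:n, :n]              #   (Lu, Lu) block
K[:n, n:] = Lk(X, Y)[:n, n:]               #   (Lu, u) block
K[n:, :n] = Lk(X, Y)[n:, :n]               #   (u, Lu) block
rhs = np.r_[f(xi), 0.0, 0.0]
alpha = np.linalg.solve(K + 1e-8 * np.eye(n + 2), rhs)

xs = np.linspace(0.0, 1.0, 101)            # the MAP/collocation solution
u_hat = np.c_[Lk(xs[:, None], xi[None, :]), k(xs[:, None], xb[None, :])] @ alpha
err = np.max(np.abs(u_hat - np.sin(np.pi * xs)))
```

The nonlinear case in the paper replaces this single linear solve with an iteration (each step solving a linearized problem of this form).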

Decision Theoretic Bootstrapping

no code implementations • 18 Mar 2021 • Peyman Tavallali, Hamed Hamze Bajgiran, Danial J. Esaid, Houman Owhadi

The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution and (2) the testing data distribution.

Uncertainty Quantification

Data-driven geophysical forecasting: Simple, low-cost, and accurate baselines with kernel methods

no code implementations • 13 Feb 2021 • Boumediene Hamzi, Romit Maulik, Houman Owhadi

Modeling geophysical processes as low-dimensional dynamical systems and regressing their vector field from data is a promising approach for learning emulators of such systems.

Do ideas have shape? Idea registration as the continuous limit of artificial neural networks

1 code implementation • 10 Aug 2020 • Houman Owhadi

We show that ResNets (and their GP generalization) converge, in the infinite depth limit, to a generalization of image registration variational algorithms.

Anatomy Gaussian Processes +1

Learning dynamical systems from data: a simple cross-validation perspective

no code implementations • 9 Jul 2020 • Boumediene Hamzi, Houman Owhadi

Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems.
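
The regression step described above can be sketched with plain kernel ridge regression (a minimal illustration, not the paper's cross-validation machinery; the toy system, kernel, and lengthscale are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: learn a surrogate for the vector field f in x' = f(x) by
# kernel ridge regression from observed states X and velocities dX.
def fit_vector_field(X, dX, sigma=0.3, lam=1e-8):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma**2))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), dX)   # (n, d) coefficients
    def f(x):
        kx = np.exp(-np.sum((x - X) ** 2, axis=-1) / (2 * sigma**2))
        return kx @ alpha
    return f

# toy system: harmonic oscillator x' = (x2, -x1), observed along one orbit
t = np.linspace(0.0, 6.0, 80)
X = np.c_[np.cos(t), -np.sin(t)]       # observed states
dX = np.c_[-np.sin(t), -np.cos(t)]     # observed velocities
f_hat = fit_vector_field(X, dX)
# the surrogate recovers the field on the sampled orbit, e.g. f_hat(1, 0) ~ (0, -1)
```

In the papers of this series, the kernel itself (here the fixed sigma) is then learned from the same data via the cross-validation/Kernel Flows criterion.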

Competitive Mirror Descent

3 code implementations • 17 Jun 2020 • Florian Schäfer, Anima Anandkumar, Houman Owhadi

Finally, we obtain the next iterate by following this direction according to the dual geometry induced by the Bregman potential.

Consistency of Empirical Bayes And Kernel Flow For Hierarchical Parameter Estimation

no code implementations • 22 May 2020 • Yifan Chen, Houman Owhadi, Andrew M. Stuart

The purpose of this paper is to study two paradigms of learning hierarchical parameters: one is from the probabilistic Bayesian perspective, in particular, the empirical Bayes approach that has been largely used in Bayesian statistics; the other is from the deterministic and approximation theoretic view, and in particular the kernel flow algorithm that was proposed recently in the machine learning literature.

BIG-bench Machine Learning

Sparse Cholesky factorization by Kullback-Leibler minimization

1 code implementation • 29 Apr 2020 • Florian Schäfer, Matthias Katzfuss, Houman Owhadi

We propose to compute a sparse approximate inverse Cholesky factor $L$ of a dense covariance matrix $\Theta$ by minimizing the Kullback-Leibler divergence between the Gaussian distributions $\mathcal{N}(0, \Theta)$ and $\mathcal{N}(0, L^{-\top} L^{-1})$, subject to a sparsity constraint.

Numerical Analysis Optimization and Control Statistics Theory Computation
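
For a fixed lower-triangular sparsity pattern, this KL-minimization admits a per-column closed form involving only small dense sub-blocks of Theta. The sketch below is my reading of that formula (verify against the paper before relying on it); the test data is an arbitrary SPD matrix:

```python
import numpy as np

# Hedged sketch: KL-optimal sparse inverse Cholesky factor L (so L @ L.T
# approximates inv(Theta)), computed column by column from sub-blocks of Theta.
def kl_inverse_cholesky(theta, sparsity):
    """sparsity[i]: sorted indices >= i (starting with i) allowed in column i."""
    n = theta.shape[0]
    L = np.zeros_like(theta)
    for i in range(n):
        s = sparsity[i]
        e1 = np.zeros(len(s)); e1[0] = 1.0
        col = np.linalg.solve(theta[np.ix_(s, s)], e1)   # Theta_s^{-1} e1
        L[s, i] = col / np.sqrt(col[0])                  # col[0] > 0 for SPD Theta
    return L

# sanity check: with a *full* lower-triangular pattern the formula is exact,
# i.e. L @ L.T equals inv(Theta)
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
theta = A @ A.T + 6 * np.eye(6)                          # a dense SPD "covariance"
L = kl_inverse_cholesky(theta, [list(range(i, 6)) for i in range(6)])
```

The point of the paper is that with a sparse pattern each column costs only a small dense solve, giving near-linear overall complexity for suitably ordered kernel matrices.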

Kernel Mode Decomposition and programmable/interpretable regression networks

1 code implementation • 19 Jul 2019 • Houman Owhadi, Clint Scovel, Gene Ryan Yoo

Mode decomposition is a prototypical pattern recognition problem that can be addressed from the (a priori distinct) perspectives of numerical approximation, statistical inference and deep learning.

GPR regression

Kernel Flows: from learning kernels from data into the abyss

1 code implementation • 13 Aug 2018 • Houman Owhadi, Gene Ryan Yoo

Kriging offers a solution to this problem based on the prior specification of a kernel.

Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity

1 code implementation • 7 Jun 2017 • Florian Schäfer, T. J. Sullivan, Houman Owhadi

This block-factorisation can provably be obtained in complexity $\mathcal{O} ( N \log( N ) \log^{d}( N /\epsilon) )$ in space and $\mathcal{O} ( N \log^{2}( N ) \log^{2d}( N /\epsilon) )$ in time.

Numerical Analysis Computational Complexity Data Structures and Algorithms Probability 65F30, 42C40, 65F50, 65N55, 65N75, 60G42, 68Q25, 68W40

Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis

no code implementations • 31 Mar 2017 • Houman Owhadi, Clint Scovel

When the solution space is a Banach space $B$ endowed with a quadratic norm $\|\cdot\|$, the optimal measure (mixed strategy) for such games (e.g. the adversarial recovery of $u\in B$, given partial measurements $[\phi_i, u]$ with $\phi_i\in B^*$, using relative error in $\|\cdot\|$-norm as a loss) is a centered Gaussian field $\xi$ solely determined by the norm $\|\cdot\|$, whose conditioning (on measurements) produces optimal bets.

Gamblets for opening the complexity-bottleneck of implicit schemes for hyperbolic and parabolic ODEs/PDEs with rough coefficients

no code implementations • 24 Jun 2016 • Houman Owhadi, Lei Zhang

Implicit schemes are popular methods for the integration of time dependent PDEs such as hyperbolic and parabolic PDEs.

Towards Machine Wald

no code implementations • 10 Aug 2015 • Houman Owhadi, Clint Scovel

The past century has seen a steady increase in the need of estimating and predicting complex systems and making (possibly critical) decisions with limited information.

Bayesian Inference Stochastic Optimization +1

Multigrid with rough coefficients and Multiresolution operator decomposition from Hierarchical Information Games

no code implementations • 11 Mar 2015 • Houman Owhadi

The resulting elementary gambles form a hierarchy of (deterministic) basis functions of $H^1_0(\Omega)$ (gamblets) that (1) are orthogonal across subscales/subbands with respect to the scalar product induced by the energy norm of the PDE (2) enable sparse compression of the solution space in $H^1_0(\Omega)$ (3) induce an orthogonal multiresolution operator decomposition.

Stochastic Variational Integrators

no code implementations • 16 Aug 2007 • Nawaf Bou-Rabee, Houman Owhadi

This paper presents a continuous and discrete Lagrangian theory for stochastic Hamiltonian systems on manifolds.

Probability 65Cxx; 37Jxx
