no code implementations • 16 Feb 2024 • Michael Penwarden, Houman Owhadi, Robert M. Kirby
This topic encompasses a broad array of methods and models aimed at solving either a single PDE problem or a collection of PDE problems, the latter setting being known as multitask learning.
1 code implementation • 12 Feb 2024 • Biraj Pandey, Bamdad Hosseini, Pau Batlle, Houman Owhadi
This article presents a general framework for the transport of probability measures towards minimum divergence generative modeling and sampling using ordinary differential equations (ODEs) and Reproducing Kernel Hilbert Spaces (RKHSs), inspired by ideas from diffeomorphic matching and image registration.
1 code implementation • 28 Nov 2023 • Théo Bourdais, Pau Batlle, Xianjin Yang, Ricardo Baptista, Nicolas Rouquette, Houman Owhadi
Type 1: Approximate an unknown function given input/output data.
no code implementations • 21 Nov 2023 • Boumediene Hamzi, Marcus Hutter, Houman Owhadi
Machine Learning (ML) and Algorithmic Information Theory (AIT) look at Complexity from different points of view.
no code implementations • 8 May 2023 • Pau Batlle, Yifan Chen, Bamdad Hosseini, Houman Owhadi, Andrew M Stuart
We introduce a priori Sobolev-space error estimates for the solution of nonlinear, and possibly parametric, PDEs using Gaussian process and kernel-based methods.
1 code implementation • 26 Apr 2023 • Pau Batlle, Matthieu Darcy, Bamdad Hosseini, Houman Owhadi
We present a general kernel-based framework for learning operators between Banach spaces along with a priori error analysis and comprehensive numerical comparisons with popular neural net (NN) approaches such as Deep Operator Net (DeepONet) [Lu et al.] and Fourier Neural Operator (FNO) [Li et al.].
1 code implementation • 3 Apr 2023 • Yifan Chen, Houman Owhadi, Florian Schäfer
The primary goal of this paper is to provide a near-linear complexity algorithm for working with such kernel matrices.
1 code implementation • 24 Jan 2023 • Lu Yang, Xiuwen Sun, Boumediene Hamzi, Houman Owhadi, Naiming Xie
In this paper, we introduce the method of \emph{Sparse Kernel Flows} in order to learn the ``best'' kernel by starting from a large dictionary of kernels.
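As a rough sketch of the ingredients (toy data and an illustrative two-kernel dictionary, not the paper's implementation): the kernel is parameterized as a weighted sum over the dictionary, and a candidate weighting is scored with the Kernel Flows loss, i.e. the relative RKHS energy lost when interpolating from a random half of the sample.

```python
import numpy as np

def rbf(x, y, ell):
    # Squared-exponential base kernel with lengthscale ell.
    return np.exp(-(x[:, None] - y[None, :])**2 / (2 * ell**2))

def dict_kernel(x, y, w, ells):
    # Kernel parameterized as a weighted sum over a dictionary of base kernels.
    return sum(wi * rbf(x, y, ell) for wi, ell in zip(w, ells))

def kf_rho(x, f, w, ells, rng):
    # Kernel Flows loss: relative RKHS energy lost when interpolating
    # from a random half of the data (smaller is better).
    n = len(x)
    idx = rng.choice(n, n // 2, replace=False)
    K = dict_kernel(x, x, w, ells) + 1e-8 * np.eye(n)
    Ks = dict_kernel(x[idx], x[idx], w, ells) + 1e-8 * np.eye(n // 2)
    return 1.0 - (f[idx] @ np.linalg.solve(Ks, f[idx])) / (f @ np.linalg.solve(K, f))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
f = np.sin(2 * np.pi * x)
rho = kf_rho(x, f, w=[0.7, 0.3], ells=[0.1, 0.5], rng=rng)
```

Sparsity enters by penalizing the dictionary weights so that most base kernels drop out; that selection step is omitted here.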
no code implementations • 14 Dec 2022 • Kamaludin Dingle, Pau Batlle, Houman Owhadi
Here we explore how algorithmic information theory, especially algorithmic probability, may aid in a machine learning task.
no code implementations • 24 Sep 2022 • Matthieu Darcy, Boumediene Hamzi, Giulia Livieri, Houman Owhadi, Peyman Tavallali
(2) Complete the graph (approximate unknown functions and random variables) via Maximum a Posteriori Estimation (given the data) with Gaussian Process (GP) priors on the unknown functions.
no code implementations • 21 Sep 2022 • Houman Owhadi
As in Smoothed Particle Hydrodynamics (SPH), GPH is a Lagrangian particle-based approach involving the tracking of a finite number of particles transported by the flow.
no code implementations • 3 Jun 2022 • Jean-Luc Akian, Luc Bonnet, Houman Owhadi, Éric Savin
This paper introduces algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques.
no code implementations • 8 Dec 2021 • Hamed Hamze Bajgiran, Houman Owhadi
Under these four steps, we show that all rational/consistent aggregation rules operate as follows: give each individual Pareto-optimal model a weight; introduce a weak order/ranking over the set of Pareto-optimal models; and aggregate a finite set of models $S$ as the model associated with the prior obtained as the weighted average of the priors of the highest-ranked models in $S$. This result shows that all rational/consistent aggregation rules must follow a generalization of hierarchical Bayesian modeling.
1 code implementation • 25 Nov 2021 • Jonghyeon Lee, Edward De Brouwer, Boumediene Hamzi, Houman Owhadi
A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel.
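A minimal sketch of that recipe on the toy system dx/dt = -x (finite-difference velocities and an RBF kernel; all parameter values are illustrative):

```python
import numpy as np

def rbf(X, Y, ell=0.5):
    # Squared-exponential kernel on column-vector states.
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

# Observed trajectory of dx/dt = -x (toy data: the exact solution exp(-t)).
dt = 0.01
t = np.arange(0.0, 2.0, dt)
traj = np.exp(-t)
X = traj[:-1, None]                  # observed states
V = (traj[1:] - traj[:-1]) / dt      # finite-difference estimate of the vector field

# Kernel interpolation of the vector field: v(x) = k(x, X) @ alpha.
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, V)
pred = float(rbf(np.array([[0.5]]), X) @ alpha)  # true vector field value: -0.5
```

The interpolated field can then be fed to any ODE integrator to forecast the system.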
no code implementations • 23 Nov 2021 • Hamed Hamze Bajgiran, Houman Owhadi
In this paper, we show that all rational aggregation rules are of the form (3).
no code implementations • 20 Oct 2021 • Houman Owhadi
The underlying problem could therefore also be interpreted as a generalization of that of solving linear systems of equations to that of approximating unknown variables and functions with noisy, incomplete, and nonlinear dependencies.
1 code implementation • 24 Aug 2021 • Hamed Hamze Bajgiran, Pau Batlle Franch, Houman Owhadi, Mostafa Samir, Clint Scovel, Mahdy Shirdel, Michael Stanley, Peyman Tavallali
Although (C) leads to the identification of an optimal prior, its approximation suffers from the curse of dimensionality and the notion of risk is one that is averaged with respect to the distribution of the data.
2 code implementations • 24 Mar 2021 • Yifan Chen, Bamdad Hosseini, Houman Owhadi, Andrew M Stuart
The main idea of our method is to approximate the solution of a given PDE as the maximum a posteriori (MAP) estimator of a Gaussian process conditioned on solving the PDE at a finite number of collocation points.
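For a linear PDE this MAP estimator reduces to symmetric kernel collocation. A hedged sketch for the toy problem -u'' = π² sin(πx) on (0,1) with zero boundary conditions, using a Gaussian kernel (lengthscale and point counts are illustrative, not from the paper):

```python
import numpy as np

ell = 0.1
def k0(r): return np.exp(-r**2 / (2 * ell**2))                        # kernel
def k2(r): return (r**2 / ell**4 - 1 / ell**2) * k0(r)                # d2/dr2
def k4(r): return (3 / ell**4 - 6 * r**2 / ell**6 + r**4 / ell**8) * k0(r)

xi = np.linspace(0.05, 0.95, 19)     # interior collocation points (PDE residual)
xb = np.array([0.0, 1.0])            # boundary points (Dirichlet data)
rhs = np.concatenate([np.pi**2 * np.sin(np.pi * xi), np.zeros(2)])

R = lambda a, b: a[:, None] - b[None, :]
# Gram matrix of the functionals {-d2/dx2 at xi} and {evaluation at xb}.
A = np.block([[k4(R(xi, xi)), -k2(R(xi, xb))],
              [-k2(R(xb, xi)), k0(R(xb, xb))]])
coef = np.linalg.solve(A + 1e-8 * np.eye(21), rhs)

def u(x):
    # MAP / collocation solution evaluated at new points x.
    x = np.atleast_1d(x)
    return np.hstack([-k2(R(x, xi)), k0(R(x, xb))]) @ coef

err = float(abs(u(0.5)[0] - 1.0))    # exact solution is sin(pi*x)
```

The paper's contribution is handling nonlinear PDEs, where the conditioning no longer has this closed form and the MAP problem is solved iteratively.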
no code implementations • 18 Mar 2021 • Peyman Tavallali, Hamed Hamze Bajgiran, Danial J. Esaid, Houman Owhadi
The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution and (2) the testing data distribution.
no code implementations • 13 Feb 2021 • Boumediene Hamzi, Romit Maulik, Houman Owhadi
Modeling geophysical processes as low-dimensional dynamical systems and regressing their vector field from data is a promising approach for learning emulators of such systems.
1 code implementation • 10 Aug 2020 • Houman Owhadi
We show that ResNets (and their GP generalization) converge, in the infinite-depth limit, to a generalization of image registration variational algorithms.
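The depth limit can be illustrated with the simplest residual block, x ← x + (1/depth)·f(x), which is the forward-Euler discretization of the flow dx/dt = f(x) (a toy scalar example with f(x) = -x, not the registration setting of the paper):

```python
import math

def resnet_forward(x, depth):
    # Residual blocks x <- x + (1/depth) * f(x) are the forward-Euler
    # discretization of the flow dx/dt = f(x); here f(x) = -x, whose
    # time-one flow map is multiplication by exp(-1).
    for _ in range(depth):
        x = x + (1.0 / depth) * (-x)
    return x

shallow_err = abs(resnet_forward(1.0, 10) - math.exp(-1.0))
deep_err = abs(resnet_forward(1.0, 10000) - math.exp(-1.0))
```

As depth grows the network output approaches the continuous flow, which is the mechanism behind the infinite-depth limit.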
no code implementations • 9 Jul 2020 • Boumediene Hamzi, Houman Owhadi
Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems.
3 code implementations • 17 Jun 2020 • Florian Schäfer, Anima Anandkumar, Houman Owhadi
Finally, we obtain the next iterate by following this direction according to the dual geometry induced by the Bregman potential.
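A minimal single-player sketch of such a dual-geometry step, assuming the classic negative-entropy Bregman potential on the probability simplex (the paper's competitive two-player setting is omitted):

```python
import numpy as np

def mirror_descent_step(x, grad, eta):
    # One mirror-descent step with the negative-entropy potential:
    # the update is multiplicative in the dual, then renormalized to the simplex.
    y = x * np.exp(-eta * grad)
    return y / y.sum()

# Minimize the linear loss <c, x> over the simplex; the minimizer
# concentrates all mass on the smallest entry of c.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(500):
    x = mirror_descent_step(x, c, eta=0.2)
```

Following the gradient in the dual geometry keeps the iterates feasible (positive and summing to one) without any projection step.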
no code implementations • 22 May 2020 • Yifan Chen, Houman Owhadi, Andrew M. Stuart
The purpose of this paper is to study two paradigms for learning hierarchical parameters: one is the probabilistic Bayesian perspective, in particular the empirical Bayes approach widely used in Bayesian statistics; the other is the deterministic, approximation-theoretic view, in particular the Kernel Flow algorithm recently proposed in the machine learning literature.
1 code implementation • 29 Apr 2020 • Florian Schäfer, Matthias Katzfuss, Houman Owhadi
We propose to compute a sparse approximate inverse Cholesky factor $L$ of a dense covariance matrix $\Theta$ by minimizing the Kullback-Leibler divergence between the Gaussian distributions $\mathcal{N}(0, \Theta)$ and $\mathcal{N}(0, L^{-\top} L^{-1})$, subject to a sparsity constraint.
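The objective can be written down directly from the KL divergence between two centered Gaussians; the sketch below checks it on toy matrices (the paper's sparsity-pattern selection and per-column closed-form minimizer are omitted):

```python
import numpy as np

def kl_gaussians(Theta, L):
    # KL( N(0, Theta) || N(0, L^{-T} L^{-1}) ) for lower-triangular L with
    # positive diagonal; log det(L^{-T} L^{-1}) = -2 * sum(log diag L).
    n = Theta.shape[0]
    _, logdet_Theta = np.linalg.slogdet(Theta)
    return 0.5 * (np.trace(L @ L.T @ Theta) - n
                  - 2.0 * np.log(np.diag(L)).sum() - logdet_Theta)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Theta = A @ A.T + 6.0 * np.eye(6)                   # a dense SPD covariance
L_exact = np.linalg.cholesky(np.linalg.inv(Theta))  # exact inverse-Cholesky factor
kl_exact = kl_gaussians(Theta, L_exact)             # exact factor: divergence ~ 0

band = np.abs(np.arange(6)[:, None] - np.arange(6)[None, :]) <= 1
kl_sparse = kl_gaussians(Theta, L_exact * band)     # truncation costs divergence
```

Minimizing this divergence over a fixed sparsity pattern is what yields the near-linear-complexity factorization studied in the paper.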
Numerical Analysis • Optimization and Control • Statistics Theory • Computation
1 code implementation • arXiv 2020 • Gene Ryan Yoo, Houman Owhadi
We introduce a new regularization method for Artificial Neural Networks (ANNs) based on Kernel Flows (KFs).
Ranked #1 on Image Classification on QMNIST
1 code implementation • 19 Jul 2019 • Houman Owhadi, Clint Scovel, Gene Ryan Yoo
Mode decomposition is a prototypical pattern recognition problem that can be addressed from the (a priori distinct) perspectives of numerical approximation, statistical inference and deep learning.
1 code implementation • 13 Aug 2018 • Houman Owhadi, Gene Ryan Yoo
Kriging offers a solution to this problem based on the prior specification of a kernel.
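A minimal kriging sketch, assuming a squared-exponential prior kernel and noise-free data (all parameter values illustrative):

```python
import numpy as np

def rbf(X, Y, ell=0.3):
    # Squared-exponential ("RBF") prior kernel with lengthscale ell.
    return np.exp(-(X[:, None] - Y[None, :])**2 / (2 * ell**2))

# Condition the prior on noise-free observations of sin(2*pi*x).
X = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * X)
K = rbf(X, X) + 1e-8 * np.eye(len(X))   # small nugget for numerical stability
alpha = np.linalg.solve(K, y)

def predict(x):
    # Kriging / GP posterior mean at new points x.
    return rbf(np.atleast_1d(x), X) @ alpha

p = float(predict(0.25)[0])             # true value: sin(pi/2) = 1
```

The quality of the interpolant hinges on the prior kernel, which is exactly the specification the paper's Kernel Flows machinery learns from data.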
1 code implementation • 7 Jun 2017 • Florian Schäfer, T. J. Sullivan, Houman Owhadi
This block-factorisation can provably be obtained in complexity $\mathcal{O} ( N \log( N ) \log^{d}( N /\epsilon) )$ in space and $\mathcal{O} ( N \log^{2}( N ) \log^{2d}( N /\epsilon) )$ in time.
Numerical Analysis • Computational Complexity • Data Structures and Algorithms • Probability • MSC: 65F30, 42C40, 65F50, 65N55, 65N75, 60G42, 68Q25, 68W40
no code implementations • 31 Mar 2017 • Houman Owhadi, Clint Scovel
When the solution space is a Banach space $B$ endowed with a quadratic norm $\|\cdot\|$, the optimal measure (mixed strategy) for such games (e.g., the adversarial recovery of $u\in B$, given partial measurements $[\phi_i, u]$ with $\phi_i\in B^*$, using relative error in $\|\cdot\|$-norm as a loss) is a centered Gaussian field $\xi$ solely determined by the norm $\|\cdot\|$, whose conditioning (on measurements) produces optimal bets.
no code implementations • 24 Jun 2016 • Houman Owhadi, Lei Zhang
Implicit schemes are popular methods for the integration of time-dependent PDEs such as hyperbolic and parabolic PDEs.
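For instance, a backward-Euler step for the 1-D heat equation u_t = u_xx solves one linear system per step and remains stable for any step size (a generic textbook sketch, not the paper's construction):

```python
import numpy as np

# Backward Euler for u_t = u_xx on (0, 1) with u(0) = u(1) = 0:
# each step solves (I - dt * A) u_new = u_old, A the second-difference matrix.
n, dt, steps = 49, 0.01, 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
M = np.eye(n) - dt * A
u = np.sin(np.pi * x)            # eigenmode; exact solution decays like exp(-pi^2 t)
for _ in range(steps):
    u = np.linalg.solve(M, u)    # implicit step: stable for any dt

decay = float(np.max(np.abs(u))) # after t = 1, roughly exp(-pi^2) ~ 1e-4
```

The price of this unconditional stability is the linear solve in each step, which is the cost the paper's analysis targets.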
no code implementations • 10 Aug 2015 • Houman Owhadi, Clint Scovel
The past century has seen a steady increase in the need of estimating and predicting complex systems and making (possibly critical) decisions with limited information.
no code implementations • 11 Mar 2015 • Houman Owhadi
The resulting elementary gambles form a hierarchy of (deterministic) basis functions of $H^1_0(\Omega)$ (gamblets) that (1) are orthogonal across subscales/subbands with respect to the scalar product induced by the energy norm of the PDE, (2) enable sparse compression of the solution space in $H^1_0(\Omega)$, and (3) induce an orthogonal multiresolution operator decomposition.
no code implementations • 16 Aug 2007 • Nawaf Bou-Rabee, Houman Owhadi
This paper presents a continuous and discrete Lagrangian theory for stochastic Hamiltonian systems on manifolds.
Probability • MSC: 65Cxx; 37Jxx