Search Results for author: Sam Davanloo Tajbakhsh

Found 3 papers, 1 paper with code

Riemannian Stochastic Gradient Method for Nested Composition Optimization

no code implementations · 19 Jul 2022 · Dewei Zhang, Sam Davanloo Tajbakhsh

For two-level composition optimization, we present a Riemannian Stochastic Composition Gradient Descent (R-SCGD) method that finds an approximate stationary point, with expected squared Riemannian gradient smaller than $\epsilon$, in $O(\epsilon^{-2})$ calls to the stochastic gradient oracle of the outer function and stochastic function and gradient oracles of the inner function.

Meta-Learning · reinforcement-learning · +1
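The abstract describes tracking the inner function's value while descending on the composed objective. A minimal sketch of that idea is below, with two loud caveats: it uses a Euclidean gradient step instead of the paper's Riemannian retraction, and the problem data (`A`, `b`, the noise model) are hypothetical toy choices, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level problem (hypothetical data, not from the paper):
#   inner map  g_w(x) = A x + w   (w is stochastic noise)
#   outer loss f(y)   = 0.5 * ||y - b||^2
# so the composed objective is F(x) = f(E[g_w(x)]).
A = np.eye(3) + 0.3 * rng.standard_normal((3, 3))
b = rng.standard_normal(3)

x = np.zeros(3)
y = np.zeros(3)            # running estimate of the inner value g(x)
alpha, beta = 0.1, 0.5     # outer step size, inner averaging weight

for t in range(3000):
    w = 0.1 * rng.standard_normal(3)      # inner stochastic noise
    g_val = A @ x + w                     # stochastic inner function value
    y = (1 - beta) * y + beta * g_val     # moving average tracks g(x)
    grad_f = y - b                        # stochastic outer gradient at y
    grad = A.T @ grad_f                   # chain rule: J_g(x)^T grad_f
    x = x - alpha * grad                  # Euclidean step; R-SCGD would
                                          # retract this step onto the manifold

print(np.linalg.norm(A @ x - b))          # residual should be near zero
```

The moving average on `y` is the key device: the outer gradient is evaluated at an estimate of `g(x)` rather than at a single noisy inner sample, which is what makes the two-level composition tractable with stochastic oracles.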

Generalized Sparse Precision Matrix Selection for Fitting Multivariate Gaussian Random Fields to Large Data Sets

no code implementations · 11 May 2016 · Sam Davanloo Tajbakhsh, Necdet Serhat Aybat, Enrique del Castillo

We present a new method for estimating multivariate, second-order stationary Gaussian Random Field (GRF) models based on the Sparse Precision matrix Selection (SPS) algorithm, proposed by Davanloo et al. (2015) for estimating scalar GRF models.
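Why estimate the precision matrix rather than the covariance? For Gaussian models, conditional independence shows up as exact zeros in the precision matrix, while the covariance stays dense. The toy below illustrates this with a hypothetical tridiagonal precision (a 1-D Markov structure); it is an illustration of the sparsity motivation, not the SPS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: sparse tridiagonal precision matrix Q
# (Markov chain in 1-D); its inverse, the covariance, is dense.
d = 5
Q = 2.0 * np.eye(d) - np.eye(d, k=1) - np.eye(d, k=-1)
Sigma = np.linalg.inv(Q)

# Draw n samples and invert the sample covariance.
n = 200
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
S = X.T @ X / n
Q_hat = np.linalg.inv(S)

# The plain inverse of the sample covariance is noisy and never exactly
# sparse; a crude hard threshold recovers a sparse pattern in this toy.
# SPS instead selects the sparsity pattern by solving a regularized
# convex program, which is what gives it its statistical guarantees.
Q_thresh = np.where(np.abs(Q_hat) > 0.3, Q_hat, 0.0)
print(np.count_nonzero(Q_thresh))
```

The threshold value `0.3` here is an arbitrary illustrative choice; the point is only that structure lives in the precision matrix, which is why the method regularizes there.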

On the Theoretical Guarantees for Parameter Estimation of Gaussian Random Field Models: A Sparse Precision Matrix Approach

1 code implementation · 21 May 2014 · Sam Davanloo Tajbakhsh, Necdet Serhat Aybat, Enrique del Castillo

Iterative methods for fitting a Gaussian Random Field (GRF) model via maximum likelihood (ML) estimation require solving a nonconvex optimization problem.
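To see the objective being referred to, here is a minimal GRF negative log-likelihood profiled over the range parameter of an exponential covariance. The setup (1-D sites, exponential kernel, 50 i.i.d. replicates) is a hypothetical toy, not the paper's estimator; it only shows what the ML problem in covariance parameters looks like.

```python
import numpy as np

rng = np.random.default_rng(2)

def grf_nll(theta, X, D):
    """Gaussian negative log-likelihood for an exponential covariance
    K_ij = exp(-D_ij / theta), where theta is the range parameter.
    Minimal hypothetical setup, not the paper's method."""
    K = np.exp(-D / theta)
    _sign, logdet = np.linalg.slogdet(K)
    n = X.shape[0]
    # 0.5 * ( n * log det K + sum_k x_k^T K^{-1} x_k )
    Kinv_Xt = np.linalg.solve(K, X.T)
    return 0.5 * (n * logdet + np.sum(X.T * Kinv_Xt))

# Six sites on a line and their pairwise distances.
s = np.linspace(0.0, 1.0, 6)
D = np.abs(s[:, None] - s[None, :])

# Simulate 50 replicates at the true range theta0 = 0.3.
theta0 = 0.3
L = np.linalg.cholesky(np.exp(-D / theta0))
X = rng.standard_normal((50, 6)) @ L.T

# Profile the likelihood over theta. The profile is smooth but, in
# general, nonconvex in covariance parameters -- the difficulty the
# sparse-precision reformulation is designed to sidestep.
thetas = np.linspace(0.05, 2.0, 40)
nlls = [grf_nll(t, X, D) for t in thetas]
best = thetas[int(np.argmin(nlls))]
print(best)
```

The grid minimizer lands near the true range used to simulate the data; the nonconvexity in `theta` (and in covariance parameters generally) is what motivates reformulating estimation in terms of a sparse precision matrix.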
