Search Results for author: Sohail Bahmani

Found 9 papers, 0 papers with code

Instance-dependent uniform tail bounds for empirical processes

no code implementations · 21 Sep 2022 · Sohail Bahmani

We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation in the considered class.

Max-Linear Regression by Convex Programming

no code implementations · 12 Mar 2021 · Seonho Kim, Sohail Bahmani, Kiryung Lee

When the $k$ linear components are equally likely to achieve the maximum, our result shows that the number of noise-free observations sufficient for exact recovery scales as $k^{4}p$ up to a logarithmic factor.


Low-Rank Matrix Estimation From Rank-One Projections by Unlifted Convex Optimization

no code implementations · 6 Apr 2020 · Sohail Bahmani, Kiryung Lee

We study an estimator with a convex formulation for recovery of low-rank matrices from rank-one projections.

Convex Programming for Estimation in Nonlinear Recurrent Models

no code implementations · 26 Aug 2019 · Sohail Bahmani, Justin Romberg

We propose a formulation for nonlinear recurrent models that includes simple parametric models of recurrent neural networks as a special case.

Estimation from Non-Linear Observations via Convex Programming with Application to Bilinear Regression

no code implementations · 19 Jun 2018 · Sohail Bahmani

We propose a computationally efficient estimator, formulated as a convex program, for a broad class of non-linear regression problems that involve difference-of-convex (DC) non-linearities.


Solving Equations of Random Convex Functions via Anchored Regression

no code implementations · 17 Feb 2017 · Sohail Bahmani, Justin Romberg

We consider the question of estimating a solution to a system of equations that involve convex nonlinearities, a problem that is common in machine learning and signal processing.


Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation

no code implementations · 13 Oct 2016 · Sohail Bahmani, Justin Romberg

We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal.
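A minimal real-valued sketch in the spirit of such an anchored convex relaxation: given sign-less measurements $y_i = |\langle a_i, x\rangle|$ and an anchor vector correlated with the true signal, recovery can be posed as a linear program that maximizes correlation with the anchor subject to the magnitude constraints. The anchor construction, problem sizes, and tolerances below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: real-valued phase retrieval via an anchored
# linear program. Sizes and the anchor (a noisy copy of the true
# signal) are assumptions made for this example only.
rng = np.random.default_rng(1)
n, m = 20, 200
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)                            # phaseless measurements
anchor = x_true + 0.5 * rng.standard_normal(n)    # assumed rough initial guess

# maximize <anchor, x>  subject to  -y_i <= <a_i, x> <= y_i
res = linprog(
    c=-anchor,                    # linprog minimizes, so negate the objective
    A_ub=np.vstack([A, -A]),      # encodes |<a_i, x>| <= y_i
    b_ub=np.concatenate([y, y]),
    bounds=(None, None),          # variables are unconstrained in sign
)
x_hat = res.x
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

With enough measurements and a sufficiently correlated anchor, the LP optimum coincides with the true signal; a poorly correlated anchor breaks this guarantee.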


Learning Model-Based Sparsity via Projected Gradient Descent

no code implementations · 7 Sep 2012 · Sohail Bahmani, Petros T. Boufounos, Bhiksha Raj

As an example, we elaborate on the application of the main results to estimation in Generalized Linear Models.
