no code implementations • 21 Sep 2022 • Sohail Bahmani
We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation in the considered class.
no code implementations • 28 Oct 2021 • Brighton Ancelin, Sohail Bahmani, Justin Romberg
We consider the "all-for-one" decentralized learning problem for generalized linear models.
no code implementations • 12 Mar 2021 • Seonho Kim, Sohail Bahmani, Kiryung Lee
When the $k$ linear components are equally likely to achieve the maximum, our result shows that the number of observations sufficient for estimation scales as $k^{2}p$, up to a logarithmic factor.
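For context, a minimal sketch of the kind of observation model behind this result; the generative details (Gaussian covariates, noise level, the constant in front of $k^{2}p$) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative max-linear observation model (assumed form):
#   y_i = max_{j=1..k} <a_i, beta_j> + noise,  with covariates a_i in R^p
rng = np.random.default_rng(0)
p, k = 20, 3                     # dimension and number of linear components
n = 10 * k**2 * p                # sample size on the order of k^2 * p (constant is arbitrary)

beta = rng.normal(size=(k, p))   # the k unknown linear components
A = rng.normal(size=(n, p))      # Gaussian covariates
y = (A @ beta.T).max(axis=1) + 0.01 * rng.normal(size=n)

# Fraction of samples on which each component attains the maximum; with symmetric
# random components these are roughly balanced, matching the "equally likely" regime.
argmax = (A @ beta.T).argmax(axis=1)
print(np.bincount(argmax, minlength=k) / n)
```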
no code implementations • 6 Apr 2020 • Sohail Bahmani, Kiryung Lee
We study an estimator with a convex formulation for recovery of low-rank matrices from rank-one projections.
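A hedged sketch of a convex formulation of this general type, written in cvxpy; the loss, the nuclear-norm penalty weight, and the noiseless measurement model are assumptions for illustration and need not match the estimator analyzed in the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d, r, n = 15, 2, 300                       # matrix size, rank, number of projections

# Ground-truth low-rank matrix and rank-one projection measurements y_i = a_i' X a_i
U = rng.normal(size=(d, r))
X_true = U @ U.T
A = rng.normal(size=(n, d))
y = np.einsum("ij,jk,ik->i", A, X_true, A)

# Generic convex surrogate: least-absolute-deviation fit plus a nuclear-norm penalty
# (illustrative choice of loss and penalty weight).
X = cp.Variable((d, d), symmetric=True)
residuals = cp.sum(cp.multiply(A @ X, A), axis=1) - y   # entries a_i' X a_i - y_i
problem = cp.Problem(cp.Minimize(cp.norm1(residuals) + 0.1 * cp.normNuc(X)))
problem.solve()
print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```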
no code implementations • 26 Aug 2019 • Sohail Bahmani, Justin Romberg
We propose a formulation for nonlinear recurrent models that includes simple parametric models of recurrent neural networks as a special case.
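For concreteness, the "simple parametric models of recurrent neural networks" mentioned here can be thought of as a vanilla state recursion like the one below; the dimensions, tanh nonlinearity, and linear readout are illustrative assumptions, not the paper's general formulation.

```python
import numpy as np

# A vanilla parametric RNN: h_{t+1} = tanh(W h_t + U x_t),  y_t = <c, h_t>.
# Shown only to illustrate the kind of special case referred to above.
rng = np.random.default_rng(2)
d_state, d_input, T = 4, 3, 50

W = 0.5 * rng.normal(size=(d_state, d_state))   # state transition weights
U = rng.normal(size=(d_state, d_input))         # input weights
c = rng.normal(size=d_state)                    # linear readout

h = np.zeros(d_state)
outputs = []
for t in range(T):
    x_t = rng.normal(size=d_input)              # exogenous input at time t
    h = np.tanh(W @ h + U @ x_t)                # nonlinear state update
    outputs.append(c @ h)
print(outputs[:5])
```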
no code implementations • 19 Jun 2018 • Sohail Bahmani
We propose a computationally efficient estimator, formulated as a convex program, for a broad class of nonlinear regression problems that involve difference-of-convex (DC) nonlinearities.
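One generic way to obtain convex structure from a DC nonlinearity, shown purely as an illustration and not as the paper's estimator: write $f = g - h$ with $g, h$ convex and replace $h$ by its linearization at an anchor point $x_0$, which yields a convex majorant of $f$ that is tight at $x_0$.

```python
import numpy as np

# Generic DC convexification (illustration only): f = g - h with g, h convex;
# linearizing h at an anchor x0 gives a convex function f_hat with f_hat >= f
# everywhere and f_hat(x0) = f(x0).
rng = np.random.default_rng(3)
p = 5
a, b = rng.normal(size=p), rng.normal(size=p)

g = lambda x: (a @ x) ** 2          # convex part
h = lambda x: (b @ x) ** 2          # convex part being subtracted
f = lambda x: g(x) - h(x)           # the DC nonlinearity

x0 = rng.normal(size=p)             # anchor point
grad_h_x0 = 2 * (b @ x0) * b        # gradient of h at the anchor

def f_hat(x):
    # convex majorant of f obtained by linearizing h at x0
    return g(x) - (h(x0) + grad_h_x0 @ (x - x0))

for _ in range(5):
    x = rng.normal(size=p)
    assert f_hat(x) >= f(x) - 1e-9  # global upper bound
print(abs(f_hat(x0) - f(x0)))       # ~0: tight at the anchor
```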
no code implementations • 17 Feb 2017 • Sohail Bahmani, Justin Romberg
We consider the question of estimating a solution to a system of equations that involve convex nonlinearities, a problem that is common in machine learning and signal processing.
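A hedged sketch of an anchored convex relaxation for such systems; the specific convex nonlinearity, the anchor construction, and the exact program below are illustrative assumptions rather than the paper's formulation. The idea: observe $y_i = f_i(x_\star)$ with convex $f_i$, relax the equalities to the convex constraints $f_i(x) \le y_i$, and maximize correlation with an anchor vector roughly aligned with $x_\star$.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
p, n = 10, 200
x_true = rng.normal(size=p)
A, B = rng.normal(size=(n, p)), rng.normal(size=(n, p))
y = np.maximum(A @ x_true, B @ x_true)         # convex nonlinearity: f_i(x) = max(<a_i,x>, <b_i,x>)

a0 = x_true + 0.3 * rng.normal(size=p)         # anchor: a rough guess correlated with x_true (assumed)
x = cp.Variable(p)
constraints = [cp.maximum(A @ x, B @ x) <= y]  # relax the equalities f_i(x) = y_i
problem = cp.Problem(cp.Maximize(a0 @ x), constraints)
problem.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```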
no code implementations • 13 Oct 2016 • Sohail Bahmani, Justin Romberg
We propose a flexible convex relaxation for the phase retrieval problem that operates in the natural domain of the signal.
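A minimal PhaseMax-style sketch of such a relaxation operating directly on the signal (its "natural domain"), with phaseless intensity measurements; the anchor below is built from the true signal plus noise as a simplifying assumption, whereas in practice it would come from some initializer.

```python
import numpy as np
import cvxpy as cp

# Maximize Re<a0, x> subject to |<a_i, x>| <= sqrt(b_i), where b_i = |<a_i, x_true>|^2.
rng = np.random.default_rng(5)
d, m = 16, 160
x_true = rng.normal(size=d) + 1j * rng.normal(size=d)
A = rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))
b = np.abs(A @ x_true) ** 2                     # phaseless (intensity) measurements

a0 = x_true + 0.3 * (rng.normal(size=d) + 1j * rng.normal(size=d))  # assumed anchor
x = cp.Variable(d, complex=True)
constraints = [cp.abs(A @ x) <= np.sqrt(b)]     # convex relaxation of |<a_i, x>| = sqrt(b_i)
problem = cp.Problem(cp.Maximize(cp.real(np.conj(a0) @ x)), constraints)
problem.solve()
err = np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true)
print("relative error (phase retrieval is only identifiable up to a global phase):", err)
```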
no code implementations • 7 Sep 2012 • Sohail Bahmani, Petros T. Boufounos, Bhiksha Raj
As an example, we elaborate on the application of the main results to estimation in generalized linear models.
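To make the generalized linear model setting concrete, here is a minimal estimation sketch (a generic logistic-regression maximum-likelihood baseline fit by gradient descent; it is an illustrative assumption, not the estimator studied in the paper).

```python
import numpy as np

# Bernoulli GLM with logit link; plain gradient descent on the negative log-likelihood.
rng = np.random.default_rng(6)
n, p = 500, 10
beta_true = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(p)
step = 0.1
for _ in range(500):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted mean response
    grad = X.T @ (mu - y) / n                   # gradient of the negative log-likelihood
    beta -= step * grad
print("relative error:", np.linalg.norm(beta - beta_true) / np.linalg.norm(beta_true))
```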