2 code implementations • 13 Feb 2018 • Mehmet Eren Ahsen, Robert Vogel, Gustavo Stolovitzky
Learning algorithms that aggregate predictions from an ensemble of diverse base classifiers consistently outperform individual methods.
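A minimal sketch of the kind of aggregation described above, using majority voting over binary base-classifier predictions; the classifiers and their outputs here are illustrative, not the paper's method:

```python
# Hypothetical sketch: aggregating binary predictions from diverse base
# classifiers by majority vote. The prediction lists are illustrative.
from collections import Counter

def majority_vote(predictions):
    """Aggregate one prediction per base classifier via majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(per_classifier_preds):
    """per_classifier_preds: list of prediction lists, one per classifier."""
    n_samples = len(per_classifier_preds[0])
    return [majority_vote([p[i] for p in per_classifier_preds])
            for i in range(n_samples)]

# Three base classifiers, each wrong on a different sample:
preds = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
]
print(ensemble_predict(preds))  # → [1, 0, 1, 1]
```

Even when each base classifier errs on some samples, the vote can be correct on all of them, which is the intuition behind ensembles outperforming individual methods.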
no code implementations • 22 Oct 2017 • Mehmet Eren Ahsen, Mathukumalli Vidyasagar
By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a $k$-sparse vector with $O(k \lg (n/k))$ measurements, given only the signs of the measurement vector.
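The measurement model above can be sketched as follows: only the signs of the linear measurements $Az$ are observed. The dimensions, signal, and random Gaussian matrix below are illustrative; the recovery algorithm and PAC-learning analysis from the paper are not reproduced:

```python
# Hypothetical sketch of one-bit compressed-sensing measurements:
# the observer sees only sign(<a_i, z>) for each row a_i of A.
import random

def sign(x):
    return 1 if x >= 0 else -1

def one_bit_measure(A, z):
    """Return the sign of each linear measurement <a_i, z>."""
    return [sign(sum(a_ij * z_j for a_ij, z_j in zip(row, z)))
            for row in A]

random.seed(0)
n, m = 8, 20                    # ambient dimension, number of measurements
z = [0.0] * n
z[1], z[5] = 1.0, -0.5          # a 2-sparse signal (k = 2)
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = one_bit_measure(A, z)       # all that a recovery algorithm would see
print(y[:5])
```

Each entry of `y` is a single bit, so recovery must rely on the pattern of sign agreements across many measurements rather than on magnitudes.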
no code implementations • 30 Oct 2014 • Mehmet Eren Ahsen, Niharika Challapalli, Mathukumalli Vidyasagar
It is shown here that SGL achieves robust sparse recovery, and also achieves a version of the grouping effect: coefficients of highly correlated columns of the measurement (or design) matrix that belong to the same group are assigned roughly comparable values.
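The SGL penalty combines an $\ell_1$ term with a sum of groupwise $\ell_2$ norms, commonly written as $\lambda\,(\alpha \Vert x \Vert_1 + (1-\alpha) \sum_g \Vert x_g \Vert_2)$. A minimal sketch of evaluating that penalty, with an illustrative group layout and weights (not the paper's experimental setup):

```python
# Hypothetical sketch of the sparse group LASSO (SGL) penalty:
#   lam * (alpha * ||x||_1 + (1 - alpha) * sum_g ||x_g||_2)
# The groups and coefficient values below are illustrative.
import math

def sgl_penalty(x, groups, lam=1.0, alpha=0.5):
    """x: coefficient list; groups: list of index lists partitioning x."""
    l1 = sum(abs(v) for v in x)
    group_l2 = sum(math.sqrt(sum(x[i] ** 2 for i in g)) for g in groups)
    return lam * (alpha * l1 + (1 - alpha) * group_l2)

# Two correlated columns in group [0, 1] carry comparable coefficients,
# as in the grouping effect; group [2, 3] is zeroed out entirely.
x = [0.9, 1.1, 0.0, 0.0]
groups = [[0, 1], [2, 3]]
print(sgl_penalty(x, groups))
```

The groupwise $\ell_2$ term is what encourages entire groups to be zero or nonzero together, while the $\ell_1$ term keeps individual coefficients sparse within active groups.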
no code implementations • 26 Jan 2014 • Mehmet Eren Ahsen, Mathukumalli Vidyasagar
In a recent paper, it is shown that the LASSO algorithm exhibits "near-ideal behavior," in the following sense: Suppose $y = Az + \eta$ where $A$ satisfies the restricted isometry property (RIP) with a sufficiently small constant, and $\Vert \eta \Vert_2 \leq \epsilon$.
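The LASSO problem referenced above, $\min_x \tfrac{1}{2}\Vert y - Ax \Vert_2^2 + \lambda \Vert x \Vert_1$, can be sketched with iterative soft-thresholding (ISTA). The tiny identity-like matrix and step size are illustrative; this does not verify the near-ideal RIP-based bound, only shows the estimator itself:

```python
# Hypothetical sketch: solving the LASSO problem
#   min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1
# by iterative soft-thresholding (ISTA). Data and step size are illustrative.
def soft_threshold(v, t):
    """Shrink v toward zero by t, clipping at zero."""
    return max(abs(v) - t, 0.0) * (1 if v > 0 else -1)

def ista(A, y, lam, step, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Tiny well-conditioned example: with A = I, the LASSO solution is
# soft_threshold applied to y, so x -> [0.9, 0.0] for lam = 0.1.
A = [[1.0, 0.0], [0.0, 1.0]]
y = [1.0, 0.0]
print(ista(A, y, lam=0.1, step=0.5))  # → approximately [0.9, 0.0]
```

With $A = I$ the solution is known in closed form (soft-thresholding of $y$), which makes this a convenient sanity check before applying the solver to matrices satisfying the RIP.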