no code implementations • NeurIPS 2016 • Murat A. Erdogdu, Lee H. Dicker, Mohsen Bayati
We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations $n$ is much larger than the number of predictors $p$, i.e., $n \gg p \gg 1$.
no code implementations • 21 Nov 2016 • Murat A. Erdogdu, Mohsen Bayati, Lee H. Dicker
Using this relation, we design an algorithm that achieves the same accuracy as the empirical risk minimizer through iterations that attain up to a cubic convergence rate and are cheaper than any batch optimization algorithm by at least a factor of $\mathcal{O}(p)$.
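The idea of estimating GLM coefficients cheaply via a scaled least-squares direction can be sketched as follows. This is an illustrative approximation, not the paper's exact algorithm: it fits OLS on the binary responses once, then runs a one-dimensional grid search (a stand-in for the paper's faster scalar root-finding) for a scale $c$ minimizing the logistic negative log-likelihood along the OLS direction. All variable names and the grid-search details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logistic-regression data in the n >> p regime (sizes are illustrative).
n, p = 20000, 10
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

# Step 1: one ordinary least-squares fit on the 0/1 responses -- a single
# O(n p^2) pass, instead of many full GLM iterations.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: a one-dimensional problem -- find the scale c that minimizes the
# logistic negative log-likelihood along the fixed OLS direction.
def nll(c):
    z = c * (X @ beta_ols)
    # NLL of logistic regression: log(1 + e^{-z}) + (1 - y) * z
    return np.mean(np.log1p(np.exp(-z)) + (1.0 - y) * z)

cs = np.linspace(0.1, 20.0, 400)          # crude grid search for illustration
c_hat = cs[np.argmin([nll(c) for c in cs])]
beta_hat = c_hat * beta_ols

# The scaled OLS estimate should align closely with the true direction.
cos = beta_hat @ beta_true / (np.linalg.norm(beta_hat) * np.linalg.norm(beta_true))
print(round(cos, 3))
```

The expensive $p$-dimensional optimization is replaced by one least-squares solve plus a scalar search, which is where the per-iteration savings of order $\mathcal{O}(p)$ over batch GLM solvers come from.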
no code implementations • NeurIPS 2013 • Lee H. Dicker, Dean P. Foster
One of the salient features of our analysis is that the problems studied here are easier when the dimension of $x_i$ is large; in other words, prediction becomes easier when more context is provided.