no code implementations • NeurIPS 2014 • Zhaoran Wang, Huanran Lu, Han Liu
In this paper, we propose a two-stage sparse PCA procedure that yields an optimal principal subspace estimator in polynomial time.
no code implementations • 22 Aug 2014 • Zhaoran Wang, Huanran Lu, Han Liu
To optimally estimate sparse principal subspaces, we propose a two-stage computational framework named "tighten after relax": in the "relax" stage, we approximately solve a convex relaxation of sparse PCA with early stopping to obtain a desired initial estimator; in the "tighten" stage, we propose a novel algorithm, sparse orthogonal iteration pursuit (SOAP), which iteratively refines the initial estimator by directly solving the underlying nonconvex problem.
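A minimal sketch of the "tighten" stage, under one common reading of sparse orthogonal iteration pursuit: block power iteration on the sample covariance, with a hard row-truncation to the s largest rows followed by QR re-orthonormalization at each step. The function names, the sparsity parameter s, and the row-norm truncation rule are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def truncate_rows(V, s):
    """Keep the s rows of V with largest Euclidean norm; zero the rest.
    (Assumed row-wise truncation rule for subspace sparsity.)"""
    norms = np.linalg.norm(V, axis=1)
    keep = np.argsort(norms)[-s:]
    out = np.zeros_like(V)
    out[keep] = V[keep]
    return out

def sparse_orthogonal_iteration_pursuit(Sigma, V0, s, n_iter=100):
    """Hedged sketch of the 'tighten' stage: refine an initial estimator
    V0 (d x k) by truncated orthogonal iteration on the d x d sample
    covariance Sigma."""
    V = V0
    for _ in range(n_iter):
        W = Sigma @ V              # power step toward the leading subspace
        W = truncate_rows(W, s)    # enforce row sparsity (nonconvex projection)
        V, _ = np.linalg.qr(W)     # re-orthonormalize the basis
    return V

# Toy usage: a spiked covariance whose leading 2-dim subspace is
# supported on the first 5 of 50 coordinates.
rng = np.random.default_rng(0)
d, k, s = 50, 2, 5
U = np.zeros((d, k))
U[:s, :] = np.linalg.qr(rng.standard_normal((s, k)))[0]
Sigma = 4.0 * U @ U.T + np.eye(d)
V0, _ = np.linalg.qr(rng.standard_normal((d, k)))  # stand-in initializer
V_hat = sparse_orthogonal_iteration_pursuit(Sigma, V0, s)
```

In the paper, the initializer comes from the early-stopped convex relaxation of the "relax" stage; the random V0 above is only to keep the snippet self-contained.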
no code implementations • 1 Jul 2013 • Fang Han, Huanran Lu, Han Liu
In addition, we provide thorough experiments on both synthetic and real-world equity data to demonstrate the empirical advantages of our method over lasso-type estimators in both parameter estimation and forecasting.