A Novel Framework for Online Supervised Learning with Feature Selection

30 Mar 2018 · Lizhe Sun, Mingyuan Wang, Adrian Barbu

Current online learning methods suffer from slower convergence rates and a more limited ability to recover the support of the true features than their offline counterparts. In this paper, we present a novel framework for online learning based on running averages and introduce online versions of popular offline methods such as the Elastic Net, the Minimax Concave Penalty, and Feature Selection with Annealing. The framework can handle an arbitrarily large number of observations, provided the data dimension is not too large, e.g. p < 50,000. We prove the equivalence between our online methods and their offline counterparts and give theoretical true-feature recovery and convergence guarantees for some of them. In contrast to existing online methods, the proposed methods can extract models with any desired sparsity level at any time. Numerical experiments indicate that our new methods enjoy high true-feature recovery accuracy and a fast convergence rate compared with standard online and offline algorithms. We also show how the running averages framework can be used for model adaptation in the presence of model drift. Finally, we present applications to large datasets where, again, the proposed framework shows competitive results compared to popular online and offline algorithms.
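To make the running-averages idea concrete, here is a minimal Python sketch under the assumption of squared-error loss: sufficient statistics of the data stream are accumulated one observation at a time in O(p^2) memory, and a model of any desired sparsity can be extracted from them at any point. The class name, the ridge solve, and the top-k hard-thresholding step are illustrative stand-ins, not the paper's exact Elastic Net, MCP, or FSA procedures.

```python
import numpy as np

class RunningAverages:
    """Accumulates the running averages (sufficient statistics) of the data."""
    def __init__(self, p):
        self.n = 0                   # observations seen so far
        self.mu_x = np.zeros(p)      # running mean of x
        self.mu_y = 0.0              # running mean of y
        self.Sxx = np.zeros((p, p))  # running average of x x^T
        self.Sxy = np.zeros(p)       # running average of x * y

    def update(self, x, y):
        """Incorporate one observation (x, y); O(p^2) time, independent of n."""
        self.n += 1
        w = 1.0 / self.n
        self.mu_x += w * (x - self.mu_x)
        self.mu_y += w * (y - self.mu_y)
        self.Sxx += w * (np.outer(x, x) - self.Sxx)
        self.Sxy += w * (x * y - self.Sxy)

    def fit_sparse(self, k, lam=1e-3):
        """Extract a k-sparse model at any time from the running averages:
        ridge-solve on the centered statistics, keep the k largest
        coefficients, then refit on the selected support."""
        cov_xx = self.Sxx - np.outer(self.mu_x, self.mu_x)
        cov_xy = self.Sxy - self.mu_x * self.mu_y
        p = cov_xx.shape[0]
        beta = np.linalg.solve(cov_xx + lam * np.eye(p), cov_xy)
        support = np.argsort(-np.abs(beta))[:k]   # top-k selection
        beta_k = np.zeros(p)
        sub = cov_xx[np.ix_(support, support)] + lam * np.eye(k)
        beta_k[support] = np.linalg.solve(sub, cov_xy[support])
        intercept = self.mu_y - beta_k @ self.mu_x
        return beta_k, intercept

# Usage on a synthetic stream: 5 true features out of p = 100.
rng = np.random.default_rng(0)
p, k_true = 100, 5
beta_true = np.zeros(p)
beta_true[:k_true] = 1.0
ra = RunningAverages(p)
for _ in range(10000):                 # stream of observations
    x = rng.standard_normal(p)
    y = x @ beta_true + 0.1 * rng.standard_normal()
    ra.update(x, y)
beta, b0 = ra.fit_sparse(k=k_true)
print(sorted(np.flatnonzero(beta)))    # expected to recover [0, 1, 2, 3, 4]
```

Replacing the uniform weight 1/n in update() with a fixed forgetting factor would down-weight old observations, which is one plausible way to realize the model adaptation under drift that the abstract mentions.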
