Prediction bounds for higher order total variation regularized least squares

24 Apr 2019  ·  Francesco Ortelli, Sara van de Geer

We establish adaptive results for trend filtering: least squares estimation with a penalty on the total variation of $(k-1)^{\text{th}}$ order differences. Our approach is based on combining a general oracle inequality for the $\ell_1$-penalized least squares estimator with "interpolating vectors" to upper-bound the "effective sparsity". This allows one to show that the $\ell_1$-penalty on the $k^{\text{th}}$ order differences leads to an estimator that can adapt to the number of jumps in the $(k-1)^{\text{th}}$ order differences of the underlying signal or an approximation thereof. We show the result for $k \in \{1,2,3,4\}$ and indicate how it could be derived for general $k \in \mathbb{N}$.
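Since no code implementation is listed for this paper, the following is a minimal sketch of the trend filtering estimator the abstract describes: least squares with an $\ell_1$ penalty on the $k^{\text{th}}$ order differences of the fit. It is not the authors' implementation; cvxpy is used as a generic convex solver, and the function name `trend_filter` and the penalty level `lam` are illustrative assumptions (the paper's theory concerns a theoretically tuned tuning parameter, not this ad hoc choice).

```python
# Sketch of k-th order trend filtering (not the authors' code):
#   minimize_f  (1/2) * ||y - f||_2^2 + lam * ||D^(k) f||_1,
# where D^(k) is the k-th order discrete difference operator, so the
# penalty is the total variation of the (k-1)-th order differences of f.
import numpy as np
import cvxpy as cp

def trend_filter(y: np.ndarray, k: int, lam: float) -> np.ndarray:
    """Solve the k-th order trend filtering problem for a 1-D signal y."""
    n = y.size
    # (n - k) x n matrix whose rows apply k-th order differences.
    D = np.diff(np.eye(n), n=k, axis=0)
    f = cp.Variable(n)
    objective = 0.5 * cp.sum_squares(y - f) + lam * cp.norm1(D @ f)
    cp.Problem(cp.Minimize(objective)).solve()
    return f.value

# Example: a piecewise linear signal with one kink, recovered with k = 2.
# Penalizing second differences lets the estimator adapt to the number of
# jumps in the first differences, as in the abstract's adaptivity claim.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
signal = np.where(x < 0.5, 2 * x, 2 - 2 * x)  # one kink at x = 0.5
y = signal + 0.05 * rng.standard_normal(x.size)
f_hat = trend_filter(y, k=2, lam=1.0)  # lam = 1.0 is an arbitrary demo value
```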
