We present a non-asymptotic, high-probability bound on the generalization error of ridge regression under a form of model misspecification in which the design contains fake features, i.e., features that enter the fitted model but are absent from the underlying data-generating system.
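To fix ideas, one possible formalization of this setting is the following sketch (the symbols $X$, $F$, $\theta^\ast$, $\varepsilon$, and $\lambda$ below are illustrative notation rather than the exact setup of our analysis):
\[
y = X\theta^\ast + \varepsilon, \qquad
\hat{\theta}_\lambda = \arg\min_{\theta} \bigl\| y - [\,X \;\; F\,]\,\theta \bigr\|_2^2 + \lambda \|\theta\|_2^2 ,
\]
where $X$ carries the true features that generate the responses, $F$ collects the fake features that play no role in the data generation, and $\hat{\theta}_\lambda$ is the ridge estimator fit on the augmented design $[\,X \;\; F\,]$.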
We focus on the continual learning setting, in which tasks arrive sequentially and the aim is to perform well on the newly arrived task without degrading performance on previously seen tasks.
Our results show that fake features can significantly improve estimation performance even though they are uncorrelated with the features of the underlying system.
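The flavor of this effect can be checked with a small simulation. The sketch below is a standalone illustration (the dimensions, noise level, and ridge penalty are assumptions of the sketch, not the quantities studied in our analysis); it compares the test error of ridge regression with and without appended fake features, where the fake features are independent noise at both training and test time. Whether the fake features help depends on the regime, e.g., sample size, dimensions, and penalty.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes: n samples, d true features, d_fake fake features.
    n, d, d_fake, lam = 50, 40, 60, 1e-3
    theta_star = rng.normal(size=d) / np.sqrt(d)

    def ridge(X, y, lam):
        """Standard ridge estimator (X^T X + lam I)^{-1} X^T y."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    def test_error(theta_hat, n_test=10_000):
        """Mean squared prediction error on fresh isotropic Gaussian data.
        The test design has the same width as theta_hat, so any fake
        coordinates are fed fresh noise; the response depends only on
        the first d (true) features."""
        p = theta_hat.shape[0]
        X_test = rng.normal(size=(n_test, p))
        y_test = X_test[:, :d] @ theta_star
        return np.mean((X_test @ theta_hat - y_test) ** 2)

    # Training data: responses depend only on the true features.
    X = rng.normal(size=(n, d))
    y = X @ theta_star + 0.1 * rng.normal(size=n)

    # Fake features: independent noise, uncorrelated with the true features.
    F = rng.normal(size=(n, d_fake))
    X_aug = np.hstack([X, F])

    err_plain = test_error(ridge(X, y, lam))
    err_fake = test_error(ridge(X_aug, y, lam))
    print(f"test error without fake features: {err_plain:.4f}")
    print(f"test error with fake features:    {err_fake:.4f}")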
We provide high-probability bounds on the generalization error for isotropic Gaussian, correlated Gaussian, and sub-Gaussian data.