Statistical Optimality of Interpolated Nearest Neighbor Algorithms

5 Oct 2018 · Yue Xing, Qifan Song, Guang Cheng

In the era of deep learning, understanding the over-fitting phenomenon becomes increasingly important. It has been observed that carefully designed deep neural networks achieve small test error even when the training error is close to zero. One possible explanation is that, for many modern machine learning algorithms, over-fitting can greatly reduce the estimation bias while not increasing the estimation variance by too much. To illustrate this idea, we prove that the proposed interpolated nearest neighbor algorithm achieves the minimax optimal rate in both the regression and classification regimes, and observe that it empirically outperforms the traditional $k$ nearest neighbor method in some cases.
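The paper's exact weighting scheme is not reproduced here; the following is a minimal sketch, in Python with NumPy, of the general interpolation idea the abstract describes: neighbor weights that diverge as the query approaches a training point force the fitted function to pass through every training label (zero training error) while remaining a local weighted average elsewhere. The function name interpolated_nn_predict, the power-law weights $\|x - x_i\|^{-\gamma}$, and the defaults k=5, gamma=1.0 are illustrative assumptions, not the authors' construction; for classification the same weights would enter a weighted majority vote rather than a weighted mean.

import numpy as np

def interpolated_nn_predict(X_train, y_train, x, k=5, gamma=1.0):
    # Hypothetical sketch of an interpolating k-NN regressor: weights
    # w_i ~ ||x - x_i||^{-gamma} blow up as x approaches a training
    # point, so the estimator interpolates the training data.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Exact hit on a training point: return its label directly,
    # which is the interpolation (zero training error) property.
    hit = np.where(dists == 0.0)[0]
    if hit.size > 0:
        return y_train[hit[0]]
    # Otherwise restrict to the k nearest neighbors, as in standard k-NN.
    nn = np.argsort(dists)[:k]
    w = dists[nn] ** (-gamma)  # singular weights near training points
    return np.dot(w, y_train[nn]) / w.sum()

# Toy usage: at a training point the prediction equals the training label,
# so the training error is exactly zero; elsewhere it is a local average.
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 2))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=100)
print(interpolated_nn_predict(X, y, X[0]))               # equals y[0]
print(interpolated_nn_predict(X, y, np.array([0.5, 0.5])))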
