On Fundamental Limits of Robust Learning

30 Mar 2017 · Jiashi Feng

We consider the problems of robust PAC learning from distributed and streaming data that may contain malicious errors and outliers, and we analyze their fundamental complexity. In particular, we establish lower bounds on the communication complexity of distributed robust learning performed across multiple machines, and on the space complexity of robust learning from streaming data on a single machine. These results show that making a learning algorithm robust usually comes at the cost of increased communication or space complexity. To the best of our knowledge, this work gives the first complexity results for distributed and online robust PAC learning.
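To make the distributed setting more concrete, below is a minimal, hypothetical toy sketch: each of k machines sends a one-number summary of its local data to a coordinator, an alpha fraction of the machines are malicious and may send arbitrary values, and a median-based aggregation stays accurate while the naive average does not. This is purely an illustration of the kind of coordinator model and corruption studied in this line of work; it is not the paper's protocol, and it says nothing about the lower bounds proved there. All variable names and the corruption model are assumptions chosen for the example.

```python
# Hypothetical toy illustration (not the paper's protocol or bounds):
# k machines each hold local samples; an alpha fraction of machines are
# malicious and may send arbitrary summaries. The coordinator aggregates
# the k one-number messages with a median, which tolerates fewer than
# half corrupted machines, and we count messages as a crude
# communication measure.
import numpy as np

rng = np.random.default_rng(0)

k, n = 20, 1000          # number of machines, samples per machine
alpha = 0.2              # fraction of malicious machines
true_mean = 3.0

# Honest machines compute and send their local sample mean.
local_means = [rng.normal(true_mean, 1.0, n).mean() for _ in range(k)]

# Malicious machines replace their message with an arbitrary value.
n_bad = int(alpha * k)
for i in range(n_bad):
    local_means[i] = 1e6     # adversarial message

# The naive average is destroyed by a single bad message;
# the median of the k messages remains close to the true mean.
naive_estimate = np.mean(local_means)
robust_estimate = np.median(local_means)

messages_sent = k            # one real number per machine
print(f"naive:  {naive_estimate:.2f}")
print(f"median: {robust_estimate:.2f}  (true mean {true_mean})")
print(f"communication: {messages_sent} messages")
```

Running the sketch, the naive average is pulled far from the true mean by the corrupted messages while the median stays close to it, at the cost of the coordinator needing one message from every machine.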
