Meta Dropout: Learning to Perturb Features for Generalization

30 May 2019 · Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang

A machine learning model that generalizes well should obtain low errors on unseen test examples. Thus, if we know how to optimally perturb training examples to account for test examples, we may achieve better generalization performance...
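
The abstract describes learning an input-dependent perturbation of training examples inside a meta-learning loop. Below is a minimal, hypothetical sketch of that idea in PyTorch: a small noise network (here called `NoiseNet`) multiplies the latent features of the training split by learned stochastic noise, a classifier head is adapted on those perturbed features with a differentiable inner gradient step, and the feature extractor and noise network are then updated so the adapted head performs well on a clean held-out split. All module names, shapes, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of meta-learned feature perturbation; names and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseNet(nn.Module):
    """Maps a latent feature to multiplicative, input-dependent noise."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, z):
        eps = torch.randn_like(z)                    # stochastic component
        return z * F.softplus(self.fc(z) + eps)      # positive perturbation scale

feature_net = nn.Linear(784, 64)   # shared feature extractor
classifier  = nn.Linear(64, 10)    # head adapted in the inner loop
noise_net   = NoiseNet(64)         # meta-learned perturbation

# Only the feature extractor and noise generator are meta-updated here.
meta_opt = torch.optim.Adam(
    list(feature_net.parameters()) + list(noise_net.parameters()), lr=1e-3)

def meta_step(x_train, y_train, x_val, y_val, inner_lr=0.1, inner_steps=1):
    """One meta-update: adapt the head on perturbed training features,
    then update feature_net and noise_net on clean validation features."""
    w, b = classifier.weight.clone(), classifier.bias.clone()
    for _ in range(inner_steps):
        z = noise_net(feature_net(x_train))          # perturbed latent features
        inner_loss = F.cross_entropy(F.linear(z, w, b), y_train)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w, b = w - inner_lr * gw, b - inner_lr * gb  # differentiable inner update
    # Outer loss: the adapted head is evaluated on unperturbed held-out features.
    val_logits = F.linear(feature_net(x_val), w, b)
    outer_loss = F.cross_entropy(val_logits, y_val)
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
    return outer_loss.item()

# Toy usage with random data standing in for one task's train/validation split.
x_tr, y_tr = torch.randn(20, 784), torch.randint(0, 10, (20,))
x_va, y_va = torch.randn(20, 784), torch.randint(0, 10, (20,))
print(meta_step(x_tr, y_tr, x_va, y_va))
```

The design choice mirrored here is that noise is applied only during inner-loop adaptation, while the outer objective is evaluated on unperturbed held-out features, so the perturbation is trained explicitly to improve generalization rather than to fit the training data.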
