Learning Sparse Networks Using Targeted Dropout

31 May 2019 · Aidan N. Gomez, Ivan Zhang, Siddhartha Rao Kamalakara, Divyam Madaan, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton

Neural networks are easier to optimise when they have many more weights than are required for modelling the mapping from inputs to outputs. This suggests a two-stage learning procedure that first learns a large net and then prunes away connections or hidden units…
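The procedure the abstract alludes to, targeted dropout, can be sketched roughly as follows: the lowest-magnitude fraction of weights are treated as pruning candidates, and only those candidates are subject to dropout during training. This is a minimal NumPy illustration based on the abstract's description, not the authors' implementation; the parameter names `gamma` (candidate fraction) and `alpha` (drop rate) are assumptions.

```python
import numpy as np

def targeted_weight_dropout(w, gamma=0.5, alpha=0.5, rng=None):
    """Sketch of targeted dropout on a weight matrix (illustrative only).

    The lowest-magnitude fraction `gamma` of weights are marked as
    pruning candidates; each candidate is independently zeroed with
    probability `alpha` for this training step. High-magnitude weights
    are never dropped, so the network learns to rely on them.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = np.abs(w).ravel()
    k = int(gamma * flat.size)              # number of pruning candidates
    if k == 0:
        return w.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    candidates = np.abs(w) <= threshold     # lowest-magnitude weights
    drop = candidates & (rng.random(w.shape) < alpha)
    return np.where(drop, 0.0, w)

# Example: apply one step of targeted dropout to a random 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_dropped = targeted_weight_dropout(w, gamma=0.5, alpha=0.5, rng=rng)
```

After training this way, deterministically pruning the candidate set should disturb the learned function far less than pruning an ordinarily trained network.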

