1 code implementation • ICLR 2020 • Daniel Gissin, Shai Shalev-Shwartz, Amit Daniely
A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions by searching the solution space in an incremental order of complexity.
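A minimal sketch of that incremental dynamic, with a toy setup of my own (not taken from the paper): a depth-2 parameterization w_i = u_i**2 trained by gradient descent on a squared loss from a small initialization. The large target component is fitted long before the small one, so the search visits simple (sparser) solutions first.

```python
import numpy as np

# Toy illustration (names and setup are assumptions, not the paper's):
# model w_i = u_i**2, loss 0.5 * sum((w_i - y_i)**2), small init.
targets = np.array([1.0, 0.01])   # one "strong" and one "weak" component
u = np.full(2, 1e-3)              # small initialization drives incrementality
lr = 0.1
early = None
for step in range(5000):
    w = u ** 2
    u = u - lr * (w - targets) * 2 * u   # chain rule through w = u**2
    if step == 299:
        early = u.copy() ** 2            # snapshot partway through training
final = u ** 2
print(early, final)
```

By step 300 the strong component is essentially fitted while the weak one is still near zero; only much later does the weak component converge, so model complexity grows incrementally over training.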
2 code implementations • ICLR 2019 • Daniel Gissin, Shai Shalev-Shwartz
We propose a new batch-mode active learning algorithm designed for neural networks and large query batch sizes.
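The listing does not detail the algorithm, so the following is only a generic batch-mode active learning loop, sketched with numpy; the entropy-based scoring rule and logistic model here are stand-ins, not the paper's method. It shows the shape of the problem: repeatedly fit on the labeled pool, score the unlabeled pool, and move a whole batch of queries into the labeled set.

```python
import numpy as np

# Generic batch-mode active learning sketch (all choices below are my own
# assumptions, not the paper's algorithm): logistic model, entropy scoring.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)   # synthetic linear labels

labeled = list(range(10))            # small labeled seed set
unlabeled = list(range(10, 1000))    # pool to query from
batch_size = 50

def fit_logistic(X, y, steps=200, lr=0.5):
    """Plain gradient descent on logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

for _ in range(3):                   # a few acquisition rounds
    w = fit_logistic(X[labeled], y[labeled])
    p = 1 / (1 + np.exp(-X[unlabeled] @ w))
    # Predictive entropy as an uncertainty score; query the batch_size
    # most uncertain pool points in one go (batch-mode selection).
    ent = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    picked = np.argsort(ent)[-batch_size:]
    for i in sorted(picked, reverse=True):
        labeled.append(unlabeled.pop(i))
```

Pure uncertainty scoring like this tends to pick redundant, near-duplicate points when the batch is large, which is precisely the regime the listed paper targets with a batch-aware criterion.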