Visualizing the Loss Landscape of Neural Nets

Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that are easier to train, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better...
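The paper's core technique is to plot the loss along random directions in parameter space around a trained minimizer, with each filter of the random direction rescaled to match the norm of the corresponding filter in the network ("filter normalization"). Below is a minimal PyTorch sketch of that idea; the function names are hypothetical, and the normalization shown is a simplified version of the paper's scheme rather than its reference implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def filter_normalized_direction(model):
    """Draw a random direction delta with each filter rescaled to the norm
    of the corresponding filter in the model (simplified filter normalization)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv/linear weights: rescale per output filter
            for dfilt, pfilt in zip(d, p):
                dfilt.mul_(pfilt.norm() / (dfilt.norm() + 1e-10))
        else:            # biases / norm params: rescale the whole vector
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction

@torch.no_grad()
def loss_along_direction(model, loss_fn, batch, direction, alphas):
    """Evaluate the loss at theta + alpha * delta for each alpha,
    then restore the original weights."""
    x, y = batch
    base = [p.detach().clone() for p in model.parameters()]
    losses = []
    for a in alphas:
        for p, p0, d in zip(model.parameters(), base, direction):
            p.copy_(p0 + a * d)
        losses.append(loss_fn(model(x), y).item())
    for p, p0 in zip(model.parameters(), base):  # restore theta
        p.copy_(p0)
    return losses

# Usage sketch on a toy classifier (hypothetical setup):
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
batch = (torch.randn(64, 10), torch.randint(0, 2, (64,)))
delta = filter_normalized_direction(model)
alphas = torch.linspace(-1.0, 1.0, 21).tolist()
curve = loss_along_direction(model, nn.CrossEntropyLoss(), batch, delta, alphas)
```

A 2D surface plot follows the same pattern with two independent directions and a grid of (alpha, beta) coefficients.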
