Based on the understanding that flat local minima of the empirical risk lead to better generalization, Adversarial Model Perturbation (AMP) improves generalization by minimizing the AMP loss, which is obtained from the empirical risk by applying a worst-case norm-bounded perturbation to each point in the parameter space.
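Below is a minimal PyTorch-style sketch of one AMP-like training step, assuming the inner maximization is approximated with a single gradient-ascent step inside an L2 ball of radius `epsilon`. The names `model`, `loss_fn`, `batch`, `optimizer`, and `epsilon` are illustrative, not the authors' reference implementation.

```python
import torch

def amp_step(model, loss_fn, batch, optimizer, epsilon=0.5):
    inputs, targets = batch

    # 1) Gradient of the empirical risk at the current parameters theta.
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()

    # 2) Approximate the inner max: one ascent step along the gradient,
    #    normalized so the overall perturbation has L2 norm epsilon.
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            delta = epsilon * p.grad / (grad_norm + 1e-12)
            p.add_(delta)                 # move to theta + delta
            perturbations.append((p, delta))

    # 3) Gradient of the loss at the perturbed parameters theta + delta.
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # 4) Restore theta, then update it using the perturbed-point gradient,
    #    i.e. an approximation of the AMP loss gradient.
    with torch.no_grad():
        for p, delta in perturbations:
            p.sub_(delta)
    optimizer.step()
    return loss.item()
```

Because the gradient used for the update is taken at the perturbed point, the step pushes the parameters toward regions where the loss stays low even under the worst norm-bounded perturbation, i.e. toward flatter minima.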
Source: Regularizing Neural Networks via Adversarial Model Perturbation
Task | Papers | Share
---|---|---
Denoising | 6 | 8.96%
Regression | 4 | 5.97%
Action Detection | 4 | 5.97%
Activity Detection | 4 | 5.97%
Graph Neural Network | 2 | 2.99%
Benchmarking | 2 | 2.99%
Uncertainty Quantification | 2 | 2.99%
Drug Discovery | 2 | 2.99%
Protein Language Model | 2 | 2.99%