Gradient Checkpointing is a method for reducing the memory footprint of training deep neural networks, at the cost of a small increase in computation time: instead of storing every intermediate activation for the backward pass, only a subset is kept, and the rest are recomputed on the fly during backpropagation.
Source: Training Deep Nets with Sublinear Memory Cost
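The sketch below illustrates the idea using PyTorch's built-in `torch.utils.checkpoint.checkpoint` utility. The `CheckpointedMLP` module, its dimensions, and its depth are hypothetical examples, not from the source paper; the point is only that each block's activations are recomputed in the backward pass rather than cached in the forward pass.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedMLP(nn.Module):
    """A hypothetical deep MLP whose per-block activations are
    recomputed during backprop instead of being stored."""

    def __init__(self, dim=1024, depth=16):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)
        )

    def forward(self, x):
        for block in self.blocks:
            # checkpoint() runs the block without caching its intermediate
            # activations; they are rebuilt when gradients flow back through.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = CheckpointedMLP()
x = torch.randn(8, 1024, requires_grad=True)
loss = model(x).sum()
loss.backward()  # each block's forward is re-run here to recover activations
```

With checkpointing applied at every block, peak activation memory grows with the size of a single block rather than the full depth, which is what buys the sublinear memory cost the paper's title refers to.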
| Task | Papers | Share |
|---|---|---|
| Image Classification | 2 | 20.00% |
| MRI Reconstruction | 1 | 10.00% |
| Classification | 1 | 10.00% |
| Zero-Shot Transfer Image Classification | 1 | 10.00% |
| Language Modelling | 1 | 10.00% |
| Text Generation | 1 | 10.00% |
| Self-Supervised Image Classification | 1 | 10.00% |
| Self-Supervised Learning | 1 | 10.00% |
| Semi-Supervised Image Classification | 1 | 10.00% |