Just as dropout prevents co-adaptation of activations, DropPath prevents co-adaptation of parallel paths in networks such as FractalNets by randomly dropping operands of the join layers. This discourages the network from using one input path as an anchor and another as a corrective term (a configuration that, if not prevented, is prone to overfitting). Two sampling strategies are used:

  * **Local**: a join drops each input with fixed probability, but at least one input is guaranteed to survive.
  * **Global**: a single path is selected for the entire network, restricted to a single column, thereby promoting individual columns as independently strong predictors.
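
To make the join behaviour concrete, here is a minimal NumPy sketch of a drop-path join under both sampling modes. The function name `drop_path_join` and the parameters `p_drop` and `active_index` are illustrative, not taken from the paper; the element-wise mean join follows FractalNet's join operation, and the sketch covers training-time behaviour only (at test time all paths are kept).

```python
import numpy as np

def drop_path_join(inputs, p_drop=0.15, mode="local", active_index=0, rng=None):
    """Join parallel paths by element-wise mean, dropping operands at random.

    inputs       : list of equally shaped arrays, one per incoming path
    p_drop       : per-operand drop probability under local sampling
    mode         : "local"  -> each operand dropped independently, >= 1 kept
                   "global" -> only the operand belonging to the globally
                               chosen column (active_index) is kept
    """
    rng = rng or np.random.default_rng()

    if mode == "local":
        # Drop each operand with probability p_drop ...
        keep = rng.random(len(inputs)) >= p_drop
        # ... but guarantee that at least one operand survives.
        if not keep.any():
            keep[rng.integers(len(inputs))] = True
    else:
        # Global sampling: a single column is active for the whole network;
        # the caller maps that column to this join's operand index.
        keep = np.zeros(len(inputs), dtype=bool)
        keep[active_index] = True

    survivors = [x for x, k in zip(inputs, keep) if k]
    # Join the surviving operands by element-wise mean.
    return np.mean(survivors, axis=0)

# Joining two parallel 8x16x16 feature maps:
x1, x2 = np.random.default_rng(0).normal(size=(2, 8, 16, 16))
y_local  = drop_path_join([x1, x2], p_drop=0.15, mode="local")
y_global = drop_path_join([x1, x2], mode="global", active_index=1)
```

Because the mean is taken only over surviving operands, the expected magnitude of the join output stays comparable regardless of how many paths are dropped, which is one reason the mean (rather than a sum) is a natural choice for the join.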
Task | Papers | Share |
---|---|---|
Image Classification | 6 | 20.00% |
Classification | 2 | 6.67% |
Object Detection | 2 | 6.67% |
Self-Supervised Learning | 2 | 6.67% |
Federated Learning | 2 | 6.67% |
Reinforcement Learning | 2 | 6.67% |
Reinforcement Learning (RL) | 2 | 6.67% |
ECG Classification | 1 | 3.33% |
Time Series Analysis | 1 | 3.33% |