Deep Equilibrium Models (DEQs) are a class of implicit models in which the output of the network is defined as the solution to a fixed-point equation — equivalent to running an infinite-depth, weight-tied network to convergence. Because the output is defined implicitly, gradients can be computed via the implicit function theorem without storing intermediate activations, giving a significantly reduced (constant) memory footprint.
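The idea can be sketched numerically. This is a minimal toy example, not the paper's implementation: the layer `f(z) = tanh(W z + x)`, the matrix `W`, and the input `x` are all illustrative assumptions, and the fixed point is found by plain iteration rather than a Newton/Broyden solver.

```python
import numpy as np

# Toy DEQ layer f(z) = tanh(W z + x); W and x are illustrative assumptions.
rng = np.random.default_rng(0)
n = 4
W = rng.normal(scale=0.2 / np.sqrt(n), size=(n, n))  # small norm -> f is a contraction
x = rng.normal(size=n)

def f(z):
    return np.tanh(W @ z + x)

# Forward pass: solve the fixed-point equation z* = f(z*) by simple iteration.
# No intermediate iterates are kept for the backward pass.
z = np.zeros(n)
for _ in range(100):
    z = f(z)

# Backward pass via the implicit function theorem:
# with D = diag(1 - tanh^2(W z* + x)), the Jacobian of f at z* is J = D W,
# and for a loss L with g = dL/dz*, the input gradient is
#   dL/dx = D (I - J)^{-T} g,
# computed from z* alone, i.e. without any stored activations.
sech2 = 1.0 - np.tanh(W @ z + x) ** 2
J = sech2[:, None] * W
g = np.ones(n)  # e.g. for L = sum(z*)
grad_x = sech2 * np.linalg.solve((np.eye(n) - J).T, g)
```

In practice DEQs use quasi-Newton root solvers and solve the linear system in the backward pass iteratively, but the memory argument is the same: only the equilibrium `z*` is needed.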
Source: Deep Equilibrium Models
Task | Papers | Share |
---|---|---|
Language Modelling | 3 | 11.11% |
Adversarial Robustness | 3 | 11.11% |
Denoising | 3 | 11.11% |
Object Detection | 2 | 7.41% |
Adversarial Defense | 2 | 7.41% |
Image Reconstruction | 2 | 7.41% |
Image Restoration | 1 | 3.70% |
Benchmarking | 1 | 3.70% |
Visual Question Answering (VQA) | 1 | 3.70% |