SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, achieving better theoretical convergence rates, and supports composite objectives, where a proximal operator is applied to the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and it is adaptive to any inherent strong convexity of the problem.
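Concretely, SAGA keeps a table of the most recently evaluated gradient of each component function; each step combines a fresh gradient of one randomly chosen component with the stored table average to form an unbiased, variance-reduced gradient estimate, optionally followed by a proximal step for a composite regulariser. Below is a minimal NumPy sketch of this update, not the authors' reference implementation; the function names and the toy least-squares problem are illustrative, while the step size 1/(3L) follows the paper's recommendation for the non-strongly convex setting.

```python
import numpy as np

def saga(grad_i, x0, n, step, iters, prox=None, seed=0):
    """Minimal SAGA sketch for min_x (1/n) * sum_i f_i(x) + h(x).

    grad_i(i, x): gradient of the i-th component f_i at x.
    prox(x, step): optional proximal operator of step * h
    (composite objectives); identity if omitted.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    # Gradient table: most recently evaluated gradient of each f_i.
    table = np.stack([grad_i(i, x) for i in range(n)])
    table_mean = table.mean(axis=0)
    for _ in range(iters):
        j = rng.integers(n)
        g_new = grad_i(j, x)
        # Unbiased, variance-reduced SAGA gradient estimate.
        x = x - step * (g_new - table[j] + table_mean)
        if prox is not None:
            x = prox(x, step)
        # Keep the table mean current in O(d), then overwrite slot j.
        table_mean += (g_new - table[j]) / n
        table[j] = g_new
    return x

# Toy usage: least squares, f_i(x) = 0.5 * (a_i . x - b_i)^2.
rng = np.random.default_rng(42)
A = rng.standard_normal((200, 5))
b = A @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
grad = lambda i, x: (A[i] @ x - b[i]) * A[i]
L = (A ** 2).sum(axis=1).max()  # smoothness constant of the components
x_hat = saga(grad, np.zeros(5), n=200, step=1.0 / (3.0 * L), iters=30000)
```

Note the O(d) running-mean update before the table slot is overwritten: it avoids re-averaging the full gradient table each iteration, which is what makes the per-step cost comparable to plain SGD.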
Source: SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
| Task | Papers | Share |
|---|---|---|
| BIG-bench Machine Learning | 7 | 25.93% |
| Federated Learning | 3 | 11.11% |
| Bilevel Optimization | 2 | 7.41% |
| Interactive Segmentation | 1 | 3.70% |
| Scene Understanding | 1 | 3.70% |
| Entity Linking | 1 | 3.70% |
| Fact Verification | 1 | 3.70% |
| Knowledge Graph Embeddings | 1 | 3.70% |
| Knowledge Graphs | 1 | 3.70% |