A2C, or Advantage Actor Critic, is a synchronous version of the A3C policy gradient method. Instead of A3C's asynchronous updates, A2C waits for every actor to finish its segment of experience before performing a single update that averages over all of the actors. The larger, synchronized batches make more effective use of GPUs.
Image Credit: OpenAI Baselines
Source: Asynchronous Methods for Deep Reinforcement Learning
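As a concrete illustration of the synchronous update described above, below is a minimal A2C sketch in PyTorch (not the OpenAI Baselines implementation): parallel actors step in lockstep for a fixed number of steps, then a single batched update of a shared actor-critic network follows. The environment (CartPole-v1 through Gymnasium's `SyncVectorEnv`), network, and all hyperparameters are illustrative assumptions, not taken from the source.

```python
# Minimal A2C sketch: synchronous n-step rollouts across parallel actors,
# one batched update per rollout. Environment and hyperparameters are
# illustrative assumptions, not from the original paper or Baselines.
import torch
import torch.nn as nn
import gymnasium as gym
from gymnasium.vector import SyncVectorEnv

num_envs, n_steps, gamma = 8, 5, 0.99
envs = SyncVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(num_envs)])
obs_dim = envs.single_observation_space.shape[0]
n_actions = envs.single_action_space.n

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.policy = nn.Linear(64, n_actions)  # actor head (logits)
        self.value = nn.Linear(64, 1)           # critic head V(s)
    def forward(self, x):
        h = self.body(x)
        return self.policy(h), self.value(h).squeeze(-1)

model = ActorCritic()
optimizer = torch.optim.Adam(model.parameters(), lr=7e-4)
obs, _ = envs.reset(seed=0)
obs = torch.as_tensor(obs, dtype=torch.float32)

for update in range(200):
    log_probs, values, rewards, entropies, dones = [], [], [], [], []
    # All actors step in lockstep for n_steps before a single shared update.
    for _ in range(n_steps):
        logits, value = model(obs)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        next_obs, reward, terminated, truncated, _ = envs.step(action.numpy())
        log_probs.append(dist.log_prob(action))
        entropies.append(dist.entropy())
        values.append(value)
        rewards.append(torch.as_tensor(reward, dtype=torch.float32))
        dones.append(torch.as_tensor(terminated | truncated, dtype=torch.float32))
        obs = torch.as_tensor(next_obs, dtype=torch.float32)

    # Bootstrap n-step returns from the critic's value of the final state.
    with torch.no_grad():
        _, next_value = model(obs)
    returns, R = [], next_value
    for r, d in zip(reversed(rewards), reversed(dones)):
        R = r + gamma * R * (1.0 - d)
        returns.insert(0, R)
    returns = torch.stack(returns)
    values = torch.stack(values)
    advantages = returns - values  # advantage estimate A(s, a)

    # Policy gradient with advantage baseline, critic regression, entropy bonus.
    policy_loss = -(torch.stack(log_probs) * advantages.detach()).mean()
    value_loss = advantages.pow(2).mean()
    entropy_bonus = torch.stack(entropies).mean()
    loss = policy_loss + 0.5 * value_loss - 0.01 * entropy_bonus

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the update is averaged over every actor's rollout in a single batch, there are no stale gradients from lagging workers, which is the main practical difference from A3C.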
Task | Papers | Share |
---|---|---|
Reinforcement Learning (RL) | 52 | 22.32% |
Reinforcement Learning | 49 | 21.03% |
Deep Reinforcement Learning | 30 | 12.88% |
Decision Making | 12 | 5.15% |
Atari Games | 10 | 4.29% |
OpenAI Gym | 5 | 2.15% |
Continuous Control | 5 | 2.15% |
Benchmarking | 4 | 1.72% |
Management | 4 | 1.72% |