Generative Adversarial Imitation Learning (GAIL) is a general framework for extracting a policy directly from expert demonstration data, as if the policy had been obtained by running reinforcement learning after inverse reinforcement learning.
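In practice this is trained adversarially: a discriminator learns to tell expert state-action pairs from policy state-action pairs, and the policy is updated with a reinforcement-learning step that uses the discriminator's output as a surrogate reward. The sketch below illustrates one such alternating update; it is a minimal illustration, not the paper's exact implementation. Names like `Discriminator`, `gail_update`, and the pre-collected batch arguments are hypothetical, and the plain policy-gradient (REINFORCE) step stands in for the TRPO step used in the paper.

```python
# Minimal GAIL-style alternating update (sketch, not the reference code).
# Assumes batches of expert and policy (state, action) pairs have already
# been collected, and that `policy_logp` holds log-probabilities of the
# policy's own actions with gradients attached to the policy parameters.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores (state, action) pairs: expert-like -> 1, policy-like -> 0."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states, actions):
        return torch.sigmoid(self.net(torch.cat([states, actions], dim=-1)))


def gail_update(disc, disc_opt, policy_opt,
                expert_s, expert_a, policy_s, policy_a, policy_logp):
    """One alternating GAIL update on pre-collected batches."""
    bce = nn.BCELoss()

    # 1) Discriminator step: push expert pairs toward 1, policy pairs toward 0.
    d_expert = disc(expert_s, expert_a)
    d_policy = disc(policy_s, policy_a)
    d_loss = (bce(d_expert, torch.ones_like(d_expert))
              + bce(d_policy, torch.zeros_like(d_policy)))
    disc_opt.zero_grad()
    d_loss.backward()
    disc_opt.step()

    # 2) Policy step: surrogate reward r(s, a) = -log(1 - D(s, a)),
    #    maximized here with a simple policy-gradient step
    #    (the paper uses TRPO for this step).
    with torch.no_grad():
        reward = -torch.log(1.0 - disc(policy_s, policy_a) + 1e-8)
    pg_loss = -(policy_logp * reward.squeeze(-1)).mean()
    policy_opt.zero_grad()
    pg_loss.backward()
    policy_opt.step()
```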
Source: Generative Adversarial Imitation Learning
| Task | Papers | Share |
|---|---|---|
| Imitation Learning | 29 | 48.33% |
| Reinforcement Learning (RL) | 11 | 18.33% |
| Continuous Control | 4 | 6.67% |
| Autonomous Driving | 2 | 3.33% |
| Autonomous Navigation | 2 | 3.33% |
| Navigate | 1 | 1.67% |
| Quantization | 1 | 1.67% |
| Denoising | 1 | 1.67% |
| D4RL | 1 | 1.67% |