Card games are played with a deck of cards: the task is to train an agent that plays under the specified rules and beats other players.
The goal of RLCard is to bridge reinforcement learning and imperfect-information games, and to push forward reinforcement learning research in domains with multiple agents, large state and action spaces, and sparse rewards.
We introduce a new virtual environment for simulating a card game known as "Big 2".
When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged.
Many poker systems, whether created with heuristics or machine learning, rely on the probability of winning as a key input.
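The win-probability feature mentioned above is typically estimated by simulation. The sketch below is a minimal, hedged illustration of the idea in a toy high-card game (one card each, higher rank wins); real poker systems use full hand evaluators over multi-card hands, and the function name here is hypothetical.

```python
import random

RANKS = list(range(2, 15))  # 2..14 (ace high); suits are ignored in this toy game


def win_probability(my_rank: int, trials: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(win): sample the opponent's card
    uniformly from the remaining deck; ties count as half a win."""
    rng = random.Random(seed)
    deck = [r for r in RANKS for _ in range(4)]
    deck.remove(my_rank)  # remove the one copy we hold
    wins = 0.0
    for _ in range(trials):
        opp = rng.choice(deck)
        if my_rank > opp:
            wins += 1.0
        elif my_rank == opp:
            wins += 0.5
    return wins / trials
```

A heuristic or learned policy can then consume this scalar (e.g., bet when the estimate exceeds a threshold), which is the role "probability of winning" plays as a key input.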
Jass is a very popular card game in Switzerland and is closely connected with Swiss culture.
Deck building is a crucial component in playing Collectible Card Games (CCGs).
This survey explores Procedural Content Generation via Machine Learning (PCGML), defined as the generation of game content using machine learning models trained on existing content.
The contributions of this paper include: (1) a novel representation for poker games, extendable to different poker variants, (2) a CNN-based learning model that effectively learns the patterns in three different games, and (3) a self-trained system that significantly beats the heuristic-based program it is trained against and is competitive with human expert players.
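A common way to feed cards to a CNN is to encode each group of cards as a binary suit-by-rank plane and stack the planes along a channel axis. The sketch below shows one plausible such encoding; the paper's actual representation may differ, and `encode_cards` is a hypothetical helper used only for illustration.

```python
import numpy as np

SUITS = "cdhs"              # clubs, diamonds, hearts, spades
RANKS = "23456789TJQKA"     # 2 through ace


def encode_cards(cards: list[str]) -> np.ndarray:
    """Map cards like ['Ah', 'Td'] onto a 4x13 binary plane (suit x rank)."""
    plane = np.zeros((4, 13), dtype=np.float32)
    for card in cards:
        rank, suit = card[0], card[1]
        plane[SUITS.index(suit), RANKS.index(rank)] = 1.0
    return plane


# Stacking one plane per card group (hole cards, community cards, ...)
# yields a (channels, 4, 13) tensor that a small CNN can consume.
state = np.stack([
    encode_cards(["Ah", "Kh"]),         # hole cards
    encode_cards(["2c", "7d", "Jd"]),   # community cards
])
```

This grid layout is what lets convolutions pick up rank- and suit-local patterns (pairs, flushes, straights) as spatial features.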