Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent. The task is to train an agent that plays the game and outperforms other players.
We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which better learn the shapes on the board; supervised learning, training on a dataset of 53,000 professional games; and reinforcement learning, training on games played between different versions of the network.
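The non-rectangular convolutions mentioned above can be approximated by zero-masking a square kernel so its support matches a board shape. The sketch below (hypothetical helper names; the paper's actual architecture is not shown here) uses a diamond-shaped 3x3 support, mirroring the orthogonal liberties of a stone in Go:

```python
import numpy as np

def masked_conv2d(board, kernel, mask):
    """Correlate a 2D board with a kernel whose square support is
    restricted to a non-rectangular shape via a binary mask."""
    k = kernel * mask                      # zero out entries outside the shape
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(board, ((ph, ph), (pw, pw)))
    out = np.zeros_like(board, dtype=float)
    for i in range(board.shape[0]):
        for j in range(board.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

# Diamond-shaped support: the centre point plus its four orthogonal
# neighbours, i.e. the liberties of a single stone.
diamond = np.array([[0, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]], dtype=float)

board = np.zeros((5, 5))
board[2, 2] = 1.0                          # a single stone in the centre
weights = np.ones((3, 3))
response = masked_conv2d(board, weights, diamond)
```

A learned version would train `weights` by gradient descent while keeping the mask fixed, so the filter only ever "sees" the diamond-shaped neighbourhood.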
We propose MoËT, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.
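A minimal sketch of that gate-plus-tree-experts structure, under assumed names (this is not the paper's implementation): a softmax gate over linear scores softly partitions the state space, and each region is served by a depth-1 decision tree ("stump") expert.

```python
import numpy as np

def stump(threshold, feature, left, right):
    """A depth-1 decision tree expert: split on a single feature."""
    def predict(x):
        return left if x[feature] <= threshold else right
    return predict

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

class MixtureOfTreeExperts:
    """Hypothetical minimal Mixture-of-Experts: a softmax gate
    partitions the state space; tree experts specialize per region."""
    def __init__(self, gate_weights, experts):
        self.W = np.asarray(gate_weights)   # one score row per expert
        self.experts = experts

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        g = softmax(self.W @ x)             # soft assignment to experts
        return sum(gi * e(x) for gi, e in zip(g, self.experts))

# Gate on the first feature: expert 0 handles x[0] < 0, expert 1 handles
# x[0] > 0; each expert then splits on the second feature.
model = MixtureOfTreeExperts(
    gate_weights=[[-10.0, 0.0], [10.0, 0.0]],
    experts=[stump(0.5, 1, 0.0, 1.0), stump(-0.5, 1, 2.0, 3.0)],
)
```

Because each expert is a small tree, the prediction in any region of the state space remains directly readable, which is the interpretability argument behind the mixture.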
The evaluation function for imperfect-information games is difficult to define, yet it has a significant impact on the playing strength of a program.
Reinforcement learning has seen great advancements in the past five years.
This paper applies a genetic algorithm and Fuzzy Markup Language to construct a human and smart-machine cooperative learning system for the game of Go.
We compare a novel Knowledge-based Reinforcement Learning (KB-RL) approach with the traditional Neural Network (NN) method in solving a classical task of the Artificial Intelligence (AI) field.
This paper presents a semantic brain computer interface (BCI) agent with particle swarm optimization (PSO) based on a Fuzzy Markup Language (FML) for Go learning and prediction applications.
This paper presents a new meta-modeling framework to employ deep reinforcement learning (DRL) to generate mechanical constitutive models for interfaces.