no code implementations • 17 Apr 2019 • Justin Le Louedec, Thomas Guntz, James Crowley, Dominique Vaufreydaz
The visual attention model described in this article generates saliency maps that capture hierarchical and spatial features of a chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, revealing multiple relations between pieces.
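The skip-layer idea can be sketched as follows: an encoder produces features at several spatial scales, and a single decoder upsamples the coarsest features while merging in the skip connections from finer scales before emitting a per-pixel fixation probability. This is a minimal NumPy sketch under assumed details, not the authors' implementation: the 8x8 input, the average-pooling encoder, and the random scalar weights are all placeholders standing in for learned convolutional layers.

```python
import numpy as np

def avg_pool2(x):
    # Encoder step: 2x2 average pooling halves each spatial dimension.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    # Decoder step: nearest-neighbour upsampling doubles each dimension.
    return np.kron(x, np.ones((2, 2)))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def skip_autoencoder_saliency(board, rng):
    """Toy skip-layer autoencoder: multiscale features -> saliency map."""
    # Encoder: features at three scales (full, half, quarter resolution).
    f1 = board            # 8x8
    f2 = avg_pool2(f1)    # 4x4
    f3 = avg_pool2(f2)    # 2x2
    # Placeholder scalar weights; a real model learns convolutional filters.
    w3, w2, w1 = rng.normal(size=3)
    # Unified decoder: upsample coarse features, merge skip connections.
    d2 = upsample2(w3 * f3) + w2 * f2   # 4x4
    d1 = upsample2(d2) + w1 * f1        # 8x8
    # Per-pixel fixation probability in (0, 1).
    return sigmoid(d1)

rng = np.random.default_rng(0)
board = rng.random((8, 8))          # stand-in for an encoded chessboard
saliency = skip_autoencoder_saliency(board, rng)
print(saliency.shape)               # (8, 8), one probability per pixel
```

Because each decoder stage sums an upsampled coarse map with the skip features from the matching encoder scale, the final map mixes board-wide context with local piece-level detail, which is the mechanism the abstract attributes to the multiscale prediction.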