no code implementations • 4 Jul 2018 • Nevrez Imamoglu, Wataru Shimoda, Chi Zhang, Yuming Fang, Asako Kanezaki, Keiji Yanai, Yoshifumi Nishida
Bottom-up and top-down visual cues are two types of information that inform visual saliency models.
1 code implementation • CVPR 2018 • Asako Kanezaki, Yasuyuki Matsushita, Yoshifumi Nishida
We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category.
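The idea of jointly predicting an object's category and its viewpoint from multiple views can be sketched as follows. This is not RotationNet itself, only a minimal NumPy illustration with random, untrained weights: a shared per-view feature extractor (a stand-in for the CNN backbone), a category head whose evidence is pooled across views, and a pose head that assigns each input image to a discrete viewpoint. All names, sizes, and the mean-pooling aggregation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_VIEWS, IMG_DIM, FEAT_DIM = 12, 128, 64  # hypothetical sizes
NUM_CLASSES = 10                            # object categories

# Shared "backbone" weights applied to every view (stand-in for a CNN).
W_feat = rng.standard_normal((IMG_DIM, FEAT_DIM)) * 0.01
W_cls = rng.standard_normal((FEAT_DIM, NUM_CLASSES)) * 0.01
W_pose = rng.standard_normal((FEAT_DIM, NUM_VIEWS)) * 0.01  # viewpoint logits


def relu(x):
    return np.maximum(x, 0.0)


def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def predict(views):
    """views: (NUM_VIEWS, IMG_DIM) flattened images.

    Returns a single category estimate plus a discrete viewpoint
    estimate for each input view.
    """
    feats = relu(views @ W_feat)           # per-view features, shared weights
    cls_scores = softmax(feats @ W_cls)    # per-view category distributions
    pose_scores = softmax(feats @ W_pose)  # per-view viewpoint distributions
    # Joint estimate: pool category evidence over all views (mean pooling
    # here; the aggregation rule is an illustrative choice).
    category = int(cls_scores.mean(axis=0).argmax())
    poses = pose_scores.argmax(axis=1)     # most likely viewpoint per view
    return category, poses


views = rng.standard_normal((NUM_VIEWS, IMG_DIM))
cat, poses = predict(views)
```

The key design point this sketch mirrors is weight sharing: every view passes through the same feature extractor, so category evidence can be pooled across views while pose is predicted per view.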