Transfer Learning via Unsupervised Task Discovery for Visual Question Answering

CVPR 2019 · Hyeonwoo Noh, Taehoon Kim, Jonghwan Mun, Bohyung Han

We study how to leverage off-the-shelf visual and linguistic data to cope with out-of-vocabulary answers in the visual question answering task. Existing large-scale visual datasets with annotations such as image class labels, bounding boxes, and region descriptions are good sources for learning rich and diverse visual concepts...
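The abstract is truncated before the method details, but the visible text already states the transfer idea: reuse visual concepts learned from off-the-shelf annotated image data to score answers that never appear in VQA training. Below is a minimal PyTorch sketch of one way such a transfer can be wired up; the module names, dimensions, fusion scheme, and the frozen shared answer-embedding trick are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VisualConceptClassifier(nn.Module):
    """Pretrained on external visual data (class labels, boxes, region
    descriptions). Each concept gets an embedding; images are classified
    by similarity between projected features and concept embeddings."""
    def __init__(self, feat_dim: int, num_concepts: int, emb_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, emb_dim)
        self.concept_emb = nn.Embedding(num_concepts, emb_dim)

    def forward(self, visual_feat: torch.Tensor) -> torch.Tensor:
        v = self.proj(visual_feat)               # (B, emb_dim)
        return v @ self.concept_emb.weight.t()   # (B, num_concepts)

class VQAHead(nn.Module):
    """VQA answering unit that reuses the pretrained concept embeddings,
    so answers unseen during VQA training can still be scored."""
    def __init__(self, pretrained: VisualConceptClassifier,
                 q_dim: int, emb_dim: int):
        super().__init__()
        self.pretrained = pretrained
        self.q_proj = nn.Linear(q_dim, emb_dim)

    def forward(self, visual_feat, question_feat):
        v = self.pretrained.proj(visual_feat)
        q = self.q_proj(question_feat)
        joint = v * q                            # simple multiplicative fusion
        return joint @ self.pretrained.concept_emb.weight.t()

# Usage: pretrain the classifier on image-label data, then train the VQA
# head on question-answer pairs while freezing the shared concept
# embeddings (one common choice for preserving transferred knowledge).
clf = VisualConceptClassifier(feat_dim=2048, num_concepts=3000, emb_dim=300)
head = VQAHead(clf, q_dim=1024, emb_dim=300)
clf.concept_emb.weight.requires_grad_(False)
scores = head(torch.randn(2, 2048), torch.randn(2, 1024))
print(scores.shape)  # torch.Size([2, 3000])
```

Sharing the concept embedding matrix between the pretrained classifier and the VQA head is what lets out-of-vocabulary answers receive meaningful scores: their embeddings were shaped by the external visual data even if no VQA training example ever uses them.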

