Generating Rationales in Visual Question Answering

Despite recent advances in Visual Question Answering (VQA), it remains a challenge to determine how much success can be attributed to sound reasoning and comprehension ability. We seek to investigate this question by proposing a new task of rationale generation. Essentially, we task a VQA model with generating rationales for the answers it predicts...

No code implementations yet.
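
Since no official implementation is available, the following is a minimal sketch of the idea described in the abstract: a VQA model predicts an answer, and a language model (GPT-2, one of the methods listed below) generates a natural-language rationale conditioned on the question and that answer. The prompt format, model choice, and decoding settings are illustrative assumptions, not the paper's exact setup; the VQA answer predictor itself is not shown.

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# In the paper's setting the answer would come from a VQA model (e.g. ViLBERT);
# here it is hard-coded purely for illustration.
question = "What sport is the man playing?"
predicted_answer = "tennis"
prompt = f"Question: {question} Answer: {predicted_answer} Rationale:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```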


Methods used in the Paper


METHOD (TYPE)
ViLBERT (Representation Learning)
Cosine Annealing (Learning Rate Schedules)
Residual Connection (Skip Connections)
Attention Dropout (Regularization)
Linear Warmup With Cosine Annealing (Learning Rate Schedules)
Discriminative Fine-Tuning (Fine-Tuning)
BPE (Subword Segmentation)
GELU (Activation Functions)
Dense Connections (Feedforward Networks)
Weight Decay (Regularization)
Adam (Stochastic Optimization)
Softmax (Output Functions)
Dropout (Regularization)
Multi-Head Attention (Attention Modules)
Layer Normalization (Normalization)
Scaled Dot-Product Attention (Attention Mechanisms)
GPT-2 (Transformers)
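
Several of the listed components (scaled dot-product attention, multi-head attention, attention dropout, residual connections, layer normalization, GELU, dense feedforward connections, dropout) are the standard building blocks of the Transformer layers used by GPT-2 and ViLBERT. As a rough illustration of how they fit together, and not the paper's specific architecture, a single Transformer block in PyTorch might look like this:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerBlock(nn.Module):
    """One Transformer layer combining several of the listed components:
    scaled dot-product attention, multi-head attention, attention dropout,
    residual connections, layer normalization, GELU, and a dense feedforward."""

    def __init__(self, d_model=768, n_heads=12, p_drop=0.1):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # joint Q, K, V projection
        self.proj = nn.Linear(d_model, d_model)
        self.attn_drop = nn.Dropout(p_drop)          # attention dropout
        self.resid_drop = nn.Dropout(p_drop)         # dropout on the residual branch
        self.ln1 = nn.LayerNorm(d_model)             # layer normalization
        self.ln2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(                     # dense (feedforward) connections
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),                               # GELU activation
            nn.Linear(4 * d_model, d_model),
            nn.Dropout(p_drop),
        )

    def attention(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split into (batch, heads, time, head_dim) for multi-head attention
        q, k, v = (t.reshape(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        # scaled dot-product attention: softmax(QK^T / sqrt(d_head)) V
        scores = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        weights = self.attn_drop(F.softmax(scores, dim=-1))
        out = (weights @ v).transpose(1, 2).reshape(B, T, C)
        return self.resid_drop(self.proj(out))

    def forward(self, x):
        x = x + self.attention(self.ln1(x))   # residual connection around attention
        x = x + self.ff(self.ln2(x))          # residual connection around feedforward
        return x


# Example: a batch of 2 sequences of length 16 with 768-dim features keeps its shape.
# x = torch.randn(2, 16, 768); assert TransformerBlock()(x).shape == x.shape
```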