Chart Question Answering (CQA) is the task of answering natural language questions about the visualisation contained in a chart image.
The resulting segmentation map is fed into a network that computes the extrapolated semantic segmentation and the corresponding panoptic segmentation maps.
In this paper, we propose a novel task, MIMOQA (Multimodal Input Multimodal Output Question Answering), in which both the input and the output are multimodal.
In this paper, we propose a Director-Generator framework to rewrite content in a target author's style, focusing specifically on certain target attributes.
While recent advances in language modeling have produced powerful generation models, their generation style remains implicitly dependent on the training data and cannot emulate a specific target style.
We model FOL parsing as a sequence-to-sequence mapping task: a natural language sentence is encoded into an intermediate representation by an LSTM, and a decoder then sequentially generates the predicates of the corresponding FOL formula.
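As a concrete sketch of such an encoder-decoder (not the authors' implementation; vocabulary sizes, dimensions, and the teacher-forcing setup are illustrative assumptions), the architecture might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class Seq2SeqFOLParser(nn.Module):
    """Illustrative LSTM encoder-decoder mapping a tokenised natural
    language sentence to a sequence of FOL predicate tokens."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=32, hid_dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the sentence; keep only the final (h, c) state as the
        # intermediate representation.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode predicate tokens conditioned on that state
        # (teacher forcing with the gold target sequence here).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # logits over the predicate vocabulary

model = Seq2SeqFOLParser(src_vocab=100, tgt_vocab=50)
src = torch.randint(0, 100, (2, 7))  # batch of 2 sentences, length 7
tgt = torch.randint(0, 50, (2, 5))   # target predicate sequences, length 5
logits = model(src, tgt)             # shape: (2, 5, 50)
```

At inference time, decoding would instead proceed greedily (or with beam search), feeding each predicted predicate token back into the decoder.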
Recently, research efforts to cater to varied user preferences when generating text summaries have gained pace.