Science Question Answering
6 papers with code • 1 benchmark • 2 datasets
Most implemented papers
Multimodal Chain-of-Thought Reasoning in Language Models
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer.
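The two-stage idea behind CoT prompting (generate a rationale first, then condition the answer on it) can be sketched as follows; `generate` is a hypothetical stand-in for any language-model completion call, not an API from the paper.

```python
# Hypothetical stand-in for a language-model completion call.
def generate(prompt: str) -> str:
    return "(model output for: " + prompt[:40] + "...)"

def cot_answer(question: str, options: list[str]) -> str:
    # Stage 1: ask for an intermediate reasoning chain (the rationale).
    rationale = generate(
        f"Question: {question}\nOptions: {', '.join(options)}\n"
        "Let's think step by step."
    )
    # Stage 2: condition the final answer on the generated rationale.
    answer = generate(
        f"Question: {question}\nOptions: {', '.join(options)}\n"
        f"Rationale: {rationale}\nTherefore, the answer is"
    )
    return answer

print(cot_answer("Which material conducts electricity best?",
                 ["wood", "copper", "glass"]))
```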
Unification-based Reconstruction of Multi-hop Explanations for Science Questions
This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA).
Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering
Our framework contains three new ideas: (a) AMR-SG, an AMR-based Semantic Graph, constructed from candidate fact AMRs to uncover any-hop relations among the question, answer and multiple facts.
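A minimal sketch of the graph-construction idea, with simple word overlap standing in for real AMR parsing; the stop-word list, toy sentences and node names are illustrative assumptions, not the paper's pipeline.

```python
from collections import defaultdict, deque

STOP = {"a", "an", "the", "of", "is", "are", "to", "in", "by", "do"}

def concepts(text: str) -> set[str]:
    # Crude proxy for AMR concepts: the content words of a sentence.
    return {w.lower().strip(".,?") for w in text.split()} - STOP

def build_graph(nodes: dict[str, str]) -> dict[str, set[str]]:
    # Link two nodes whenever their concept sets overlap.
    graph = defaultdict(set)
    items = list(nodes.items())
    for i, (n1, t1) in enumerate(items):
        for n2, t2 in items[i + 1:]:
            if concepts(t1) & concepts(t2):
                graph[n1].add(n2)
                graph[n2].add(n1)
    return graph

def connected(graph: dict[str, set[str]], start: str, goal: str) -> bool:
    # Breadth-first search for a multi-hop path from question to answer.
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

nodes = {
    "question": "What do plants need to make food?",
    "fact1": "Plants make food by photosynthesis.",
    "fact2": "Photosynthesis requires sunlight.",
    "answer": "sunlight",
}
graph = build_graph(nodes)
print(connected(graph, "question", "answer"))  # True via fact1 -> fact2
```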
Exploiting Reasoning Chains for Multi-hop Science Question Answering
We propose a novel Chain Guided Retriever-reader (CGR) framework to model the reasoning chain for multi-hop Science Question Answering.
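A rough sketch of a chain-guided retrieve-then-read loop: the retriever greedily extends a fact chain from the question, and the reader scores each option against that chain. Word overlap, the function names, and the tiny corpus are assumptions for illustration, not the paper's models.

```python
def overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_chain(question: str, corpus: list[str], hops: int = 2) -> list[str]:
    chain, query, pool = [], question, list(corpus)
    for _ in range(hops):
        if not pool:
            break
        best = max(pool, key=lambda f: overlap(query, f))
        chain.append(best)
        pool.remove(best)
        query += " " + best  # the chain guides the next retrieval step
    return chain

def read(question: str, chain: list[str], options: list[str]) -> str:
    # Toy reader: pick the option best supported by the retrieved chain.
    evidence = " ".join(chain)
    return max(options, key=lambda o: overlap(o, evidence))

corpus = [
    "A magnet attracts objects made of iron.",
    "A nail is made of iron.",
    "Wood does not conduct electricity.",
]
question = "What object will a magnet attract?"
chain = build_chain(question, corpus)
print(chain)
print(read(question, chain, ["a nail", "a pencil"]))  # "a nail"
```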
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
We further design language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions.
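One way to picture the training setup is as a sequence-to-sequence formatting step whose target chains the answer with a lecture and an explanation; the field names and template below are illustrative guesses, not the dataset's exact schema.

```python
# Hedged sketch: format a ScienceQA-style example so a model learns to emit
# an answer followed by a lecture and explanation as its chain of thought.
def format_example(question, choices, context, answer, lecture, explanation):
    source = (
        f"Question: {question}\n"
        f"Context: {context}\n"
        f"Options: {' '.join(f'({i}) {c}' for i, c in enumerate(choices))}"
    )
    target = (
        f"The answer is ({answer}).\n"
        f"BECAUSE: {lecture} {explanation}"
    )
    return source, target

src, tgt = format_example(
    question="Which property matches this object: a rubber band?",
    choices=["hard", "stretchy"],
    context="The object is a rubber band.",
    answer=1,
    lecture="An object has different properties.",
    explanation="A rubber band can be stretched without breaking.",
)
print(src)
print(tgt)
```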
Two is Better than Many? Binary Classification as an Effective Approach to Multi-Choice Question Answering
We show the efficacy of our proposed approach on different tasks: abductive reasoning, commonsense question answering, science question answering, and sentence completion.
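A minimal sketch of the binary reformulation: score each (question, option) pair independently as correct vs. not, then take the highest-scoring option. `score_pair` is a toy word-overlap stand-in for a trained binary classifier, not the paper's model.

```python
def score_pair(question: str, option: str) -> float:
    # Toy scorer: normalised word overlap; a real binary classifier would
    # output P(correct | question, option).
    q, o = set(question.lower().split()), set(option.lower().split())
    return len(q & o) / max(len(o), 1)

def answer_mcq(question: str, options: list[str]) -> str:
    # Score every option independently and pick the arg-max.
    scores = {opt: score_pair(question, opt) for opt in options}
    return max(scores, key=scores.get)

print(answer_mcq(
    "Which gas do plants absorb from the air for photosynthesis?",
    ["carbon dioxide from the air", "pure oxygen only", "liquid water"],
))  # "carbon dioxide from the air"
```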