Multiple-choice

228 papers with code • 2 benchmarks • 7 datasets

Multiple-choice question answering requires a model to select the correct answer from a fixed set of candidate options, given a question and, in many datasets, a supporting context passage.
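
As a concrete illustration of the input/output format, here is a minimal sketch of scoring one question's candidate answers with a Hugging Face multiple-choice head. The bert-base-uncased checkpoint and the example question are placeholders (assumptions, not taken from any paper on this page), and in practice the classification head would first be fine-tuned on a multiple-choice dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

# Placeholder checkpoint: its multiple-choice head is untrained, so the
# prediction is only meaningful after fine-tuning on a task dataset.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")

question = "Where would you most likely find a penguin?"
options = ["in a desert", "in Antarctica", "on the moon", "in a rainforest"]

# Encode each option paired with the question, then add the batch axis the
# multiple-choice head expects: (batch, num_choices, seq_len).
enc = tokenizer([question] * len(options), options,
                padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(options[logits.argmax(-1).item()])
```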

Most implemented papers

A Simple Method for Commonsense Reasoning

tensorflow/models 7 Jun 2018

Commonsense reasoning is a long-standing challenge for deep learning.
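
The paper resolves Winograd-style multiple-choice questions by substituting each candidate into the sentence and letting a language model judge which completed sentence is more probable. Below is a minimal sketch of that scoring idea; GPT-2 stands in for the ensemble of recurrent language models used in the paper, and the example sentence is illustrative.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability the LM assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean next-token NLL.
        mean_nll = model(ids, labels=ids).loss
    return -mean_nll.item() * (ids.shape[1] - 1)

# Winograd-style question: resolve the referent by scoring both substitutions.
template = "The trophy doesn't fit in the suitcase because {} is too big."
candidates = ["the trophy", "the suitcase"]
scores = {c: sentence_log_prob(template.format(c)) for c in candidates}
print(max(scores, key=scores.get))  # the higher-probability substitution wins
```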

A Joint Sequence Fusion Model for Video Question Answering and Retrieval

antoine77340/howto100m ECCV 2018

We present an approach named JSFusion (Joint Sequence Fusion) that can measure semantic similarity between any pair of multimodal sequence data (e.g., a video clip and a language sentence).

Generating Distractors for Reading Comprehension Questions from Real Examinations

Evan-Gao/Distractor-Generation-RACE 8 Sep 2018

We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations.

Abductive Commonsense Reasoning

allenai/abductive-commonsense-reasoning ICLR 2020

Abductive reasoning is inference to the most plausible explanation.

MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension

jind11/MMM-MCQA 1 Oct 2019

Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language.

UnifiedQA: Crossing Format Boundaries With a Single QA System

allenai/unifiedqa Findings of the Association for Computational Linguistics 2020

As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats.
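
The released UnifiedQA checkpoints can be queried as plain text-to-text models. The sketch below loads one with Hugging Face Transformers; the checkpoint name and the lowercased question-plus-options layout (fields separated by a literal "\n") follow the allenai/unifiedqa README, but treat the exact formatting as an assumption worth checking against the repo.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "allenai/unifiedqa-t5-small"  # small variant for a quick check
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def answer(question: str, options: list[str]) -> str:
    # UnifiedQA inlines the options as "(a) ... (b) ..." and separates fields
    # with the two-character sequence "\n" (a literal backslash and "n").
    choices = " ".join(f"({chr(97 + i)}) {o}" for i, o in enumerate(options))
    prompt = f"{question} \\n {choices}".lower()
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=16)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(answer("Which gas do plants absorb during photosynthesis?",
             ["oxygen", "carbon dioxide", "nitrogen"]))
```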

LiveQA: A Question Answering Dataset over Sports Live

PKU-TANGENT/LiveQA CCL 2020

In this paper, we introduce LiveQA, a new question answering dataset constructed from play-by-play live broadcast.

Surface Form Competition: Why the Highest Probability Answer Isn't Always Right

peterwestuw/surface-form-competition 16 Apr 2021

Large language models have shown promising results in zero-shot settings (Brown et al., 2020; Radford et al., 2019).
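
The paper's core point is that many valid surface forms of the same answer compete for probability mass, so ranking options by raw conditional probability can be misleading; scoring by pointwise mutual information divides out each option's prior probability instead. A rough sketch of the two scores is below, with GPT-2 and the neutral prompt "Answer:" standing in for the models and domain premises used in the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def cond_log_prob(prompt: str, answer: str) -> float:
    """log P(answer | prompt), summed over the answer's tokens.

    Assumes the prompt's tokens remain a prefix of the tokenized
    prompt+answer string, which holds when the answer starts with a space.
    """
    prompt_ids = tokenizer(prompt).input_ids
    full_ids = tokenizer(prompt + answer).input_ids
    with torch.no_grad():
        logits = model(torch.tensor([full_ids])).logits.log_softmax(-1)
    return sum(logits[0, pos - 1, full_ids[pos]].item()
               for pos in range(len(prompt_ids), len(full_ids)))

question = "Q: What is the capital of France?\nA:"
for option in [" Paris", " The city of Paris"]:
    raw = cond_log_prob(question, option)
    pmi = raw - cond_log_prob("Answer:", option)  # divide out the answer prior
    print(f"{option!r}: raw={raw:.2f}  pmi={pmi:.2f}")
```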

When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset

reglab/casehold 18 Apr 2021

While a Transformer architecture (BERT) pretrained on a general corpus (Google Books and Wikipedia) improves performance, domain pretraining (using a corpus of approximately 3.5M decisions across all courts in the U.S. that is larger than BERT's) with a custom legal vocabulary exhibits the most substantial performance gains with CaseHOLD (gain of 7.2% on F1, representing a 12% improvement on BERT) and consistent performance gains across two other legal tasks.

Option Tracing: Beyond Correctness Analysis in Knowledge Tracing

arghosh/OptionTracing 19 Apr 2021

Knowledge tracing refers to a family of methods that estimate each student's knowledge component/skill mastery level from their past responses to questions.
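
For orientation, classical Bayesian Knowledge Tracing is the simplest member of this family: it maintains a per-skill mastery probability and updates it after each graded response. The sketch below shows that update only as background; the Option Tracing paper goes beyond correctness and models which answer option a student picks, and the parameter values here are illustrative rather than fitted.

```python
from dataclasses import dataclass

@dataclass
class BKT:
    """Classical Bayesian Knowledge Tracing with illustrative parameters."""
    p_init: float = 0.3    # P(skill mastered before any practice)
    p_learn: float = 0.2   # P(unmastered -> mastered between attempts)
    p_slip: float = 0.1    # P(wrong answer despite mastery)
    p_guess: float = 0.25  # P(correct answer without mastery)

    def update(self, p_mastery: float, correct: bool) -> float:
        """Posterior P(mastery) after observing one graded response."""
        if correct:
            mastered = p_mastery * (1 - self.p_slip)
            unmastered = (1 - p_mastery) * self.p_guess
        else:
            mastered = p_mastery * self.p_slip
            unmastered = (1 - p_mastery) * (1 - self.p_guess)
        posterior = mastered / (mastered + unmastered)
        # Allow for learning between one attempt and the next.
        return posterior + (1 - posterior) * self.p_learn

model = BKT()
p = model.p_init
for outcome in [True, False, True, True]:  # one student's response sequence
    p = model.update(p, outcome)
    print(f"correct={outcome!s:5}  estimated mastery={p:.3f}")
```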