Distractor Generation
13 papers with code • 1 benchmark • 2 datasets
Given a passage, a question, and an answer phrase, the goal of distractor generation (DG) is to generate context-related wrong options (i.e., distractors) for multiple-choice questions (MCQs).
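The task setup can be made concrete with a small data sketch. The field names and the example content below are illustrative assumptions, not drawn from any particular dataset:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MCQExample:
    """One multiple-choice question in the DG setting: the model
    receives the passage, question, and answer phrase, and must
    produce the context-related wrong options (distractors)."""
    passage: str
    question: str
    answer: str
    distractors: List[str] = field(default_factory=list)

# Hypothetical example for illustration:
ex = MCQExample(
    passage="The Nile is the longest river in Africa.",
    question="Which is the longest river in Africa?",
    answer="The Nile",
)
# A DG system fills in plausible but incorrect options:
ex.distractors = ["The Congo", "The Niger", "The Zambezi"]
```

Good distractors are wrong but plausible given the passage; options unrelated to the context are trivially eliminated by test-takers.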
Most implemented papers
Generating Distractors for Reading Comprehension Questions from Real Examinations
We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations.
Distractor Generation for Multiple Choice Questions Using Learning to Rank
We investigate how machine learning models, specifically ranking models, can be used to select useful distractors for multiple choice questions.
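The selection-by-ranking idea can be sketched as scoring each candidate distractor against the answer and keeping the top-k. The features below (token overlap, length similarity) are toy stand-ins chosen for illustration; a real ranking model would learn weights over richer features:

```python
def score_distractor(answer: str, candidate: str) -> float:
    """Toy feature-based score: prefer candidates that resemble the
    answer in surface form but are not identical to it.
    (Hypothetical features, for illustration only.)"""
    if candidate.lower() == answer.lower():
        return float("-inf")  # never rank the correct answer itself
    a = set(answer.lower().split())
    c = set(candidate.lower().split())
    overlap = len(a & c) / max(len(a | c), 1)          # token overlap
    len_sim = 1.0 - abs(len(answer) - len(candidate)) / max(
        len(answer), len(candidate))                    # length similarity
    return 0.5 * overlap + 0.5 * len_sim

def rank_distractors(answer: str, candidates: list, k: int = 3) -> list:
    """Select the k highest-scoring candidates as distractors."""
    return sorted(candidates,
                  key=lambda c: score_distractor(answer, c),
                  reverse=True)[:k]

picked = rank_distractors(
    "The Nile",
    ["The Nile", "The Congo", "The Niger River", "Paris"])
```

Here `picked` excludes the answer itself and keeps the three remaining candidates ordered by score.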
A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies
In this paper, we investigate the following two limitations of existing distractor generation (DG) methods.
Quiz-Style Question Generation for News Stories
As a first step towards measuring news informedness at scale, we study the problem of quiz-style multiple-choice question generation, which may be used to survey users about their knowledge of recent news.
ZmBART: An Unsupervised Cross-lingual Transfer Framework for Language Generation
In this framework, we further pre-train mBART sequence-to-sequence denoising auto-encoder model with an auxiliary task using monolingual data of three languages.
BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset
An important part of constructing multiple-choice questions (MCQs) for reading comprehension assessment is the distractors: the incorrect but preferably plausible answer options.
EduQG: A Multi-format Multiple Choice Dataset for the Educational Domain
Thus, our versatile dataset can be used for both question and distractor generation, as well as to explore new challenges such as question format conversion.
Distractor generation for multiple-choice questions with predictive prompting and large language models
We also show the gains of our approach in generating high-quality distractors by comparing it with zero-shot ChatGPT and few-shot ChatGPT prompted with static examples.
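The zero-shot versus few-shot comparison mentioned above comes down to whether the prompt includes worked examples. A minimal sketch of assembling both prompt styles, where the template wording is an assumption of ours rather than the paper's actual prompt:

```python
def build_dg_prompt(passage: str, question: str, answer: str,
                    examples: list = None) -> str:
    """Assemble a distractor-generation prompt for an LLM.
    With examples=None the prompt is zero-shot; passing worked
    examples makes it few-shot. Template text is illustrative."""
    parts = ["Generate three plausible but incorrect options "
             "for the question below."]
    for ex in examples or []:  # few-shot demonstrations, if any
        parts.append(
            f"Passage: {ex['passage']}\nQuestion: {ex['question']}\n"
            f"Answer: {ex['answer']}\n"
            f"Distractors: {'; '.join(ex['distractors'])}")
    # The target item, left open for the model to complete:
    parts.append(f"Passage: {passage}\nQuestion: {question}\n"
                 f"Answer: {answer}\nDistractors:")
    return "\n\n".join(parts)

zero_shot = build_dg_prompt(
    "The Nile is the longest river in Africa.",
    "Which is the longest river in Africa?", "The Nile")

few_shot = build_dg_prompt(
    "The Nile is the longest river in Africa.",
    "Which is the longest river in Africa?", "The Nile",
    examples=[{"passage": "Paris is the capital of France.",
               "question": "What is the capital of France?",
               "answer": "Paris",
               "distractors": ["Lyon", "Nice", "Lille"]}])
```

"Predictive prompting" as described in the paper additionally selects which examples to include per item, rather than using a fixed (static) set as in the few-shot baseline above.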
BRAINTEASER: Lateral Thinking Puzzles for Large Language Models
The success of language models has inspired the NLP community to attend to tasks that require implicit and complex reasoning, relying on human-like commonsense mechanisms.