Search Results for author: Victoria Yaneva

Found 19 papers, 1 paper with code

Using Linguistic Features to Predict the Response Process Complexity Associated with Answering Clinical MCQs

no code implementations EACL (BEA) 2021 Victoria Yaneva, Daniel Jurich, Le An Ha, Peter Baldwin

This study examines the relationship between the linguistic characteristics of a test item and the complexity of the response process required to answer it correctly.

Clustering · Descriptive

The USMLE® Step 2 Clinical Skills Patient Note Corpus

no code implementations NAACL 2022 Victoria Yaneva, Janet Mee, Le An Ha, Polina Harik, Michael Jodoin, Alex Mechaber

This paper presents a corpus of 43,985 clinical patient notes (PNs) written by 35,156 examinees during the high-stakes USMLE® Step 2 Clinical Skills examination.

Automated Prediction of Examinee Proficiency from Short-Answer Questions

no code implementations COLING 2020 Le An Ha, Victoria Yaneva, Polina Harik, Ravi Pandian, Amy Morales, Brian Clauser

This paper brings together approaches from the fields of NLP and psychometric measurement to address the problem of predicting examinee proficiency from responses to short-answer questions (SAQs).

Multiple-choice
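
A minimal sketch of one plausible baseline for the task described in this entry, not the paper's actual method: mapping free-text short answers to a continuous proficiency measure with TF-IDF features and ridge regression. The toy responses and scores are invented for illustration.

```python
# Hypothetical baseline: TF-IDF + ridge regression over short answers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

responses = [
    "acute pancreatitis due to gallstones",
    "pancreatitis",
    "viral gastroenteritis",
    "cholecystitis with possible biliary obstruction",
]
proficiency = [0.9, 0.5, 0.2, 0.8]  # invented examinee-level scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(responses, proficiency)
print(model.predict(["gallstone pancreatitis"]))
```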

Predicting the Difficulty and Response Time of Multiple Choice Questions Using Transfer Learning

no code implementations WS 2020 Kang Xue, Victoria Yaneva, Christopher Runyon, Peter Baldwin

The results indicate that, for our sample, transfer learning can improve the prediction of item difficulty when response time is used as an auxiliary task but not the other way around.

Multiple-choice · Transfer Learning
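
A minimal sketch of the kind of setup this finding implies, assuming hard parameter sharing (the paper does not publish code, and the architecture here is illustrative): a shared encoder over item-text features with a main head for difficulty and an auxiliary head for response time, with a hypothetical weight on the auxiliary loss.

```python
# Illustrative multi-task model: difficulty is the main task,
# response time the auxiliary task; both share one encoder.
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.difficulty_head = nn.Linear(hidden, 1)  # main task
        self.time_head = nn.Linear(hidden, 1)        # auxiliary task

    def forward(self, x):
        h = self.encoder(x)
        return self.difficulty_head(h), self.time_head(h)

# Toy data standing in for item-text feature vectors.
x = torch.randn(128, 300)
y_diff = torch.randn(128, 1)
y_time = torch.randn(128, 1)

model = SharedEncoderMTL(in_dim=300)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
aux_weight = 0.3  # hypothetical weighting of the auxiliary loss

for _ in range(50):
    opt.zero_grad()
    pred_diff, pred_time = model(x)
    loss = mse(pred_diff, y_diff) + aux_weight * mse(pred_time, y_time)
    loss.backward()
    opt.step()
```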

Classifying Referential and Non-referential It Using Gaze

1 code implementation EMNLP 2018 Victoria Yaneva, Le An Ha, Richard Evans, Ruslan Mitkov

When processing a text, humans and machines must disambiguate between different uses of the pronoun "it", including non-referential, nominal-anaphoric, and clause-anaphoric uses.

POS
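
A minimal sketch of the task framing, not the released EMNLP 2018 implementation: classifying occurrences of "it" as referential vs. non-referential from simple gaze measures. The feature names and values below are assumed for illustration.

```python
# Illustrative gaze-feature classifier for referential vs. non-referential "it".
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns (hypothetical): [first_fixation_ms, total_fixation_ms, regressions_in]
X = np.array([
    [180, 420, 2],
    [210, 530, 3],
    [150, 200, 0],
    [140, 180, 1],
])
y = np.array([1, 1, 0, 0])  # 1 = referential, 0 = non-referential

clf = LogisticRegression().fit(X, y)
print(clf.predict([[170, 400, 2]]))
```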

Predicting Item Survival for Multiple Choice Questions in a High-Stakes Medical Exam

no code implementations LREC 2020 Victoria Yaneva, Le An Ha, Peter Baldwin, Janet Mee

One of the most resource-intensive problems in the educational testing industry relates to ensuring that newly developed exam questions can adequately distinguish between students of high and low ability.

Information Retrieval · Multiple-choice +1

Automatic Question Answering for Medical MCQs: Can It go Further than Information Retrieval?

no code implementations RANLP 2019 Le An Ha, Victoria Yaneva

We present a novel approach to automatic question answering that does not depend on the performance of an information retrieval (IR) system and does not require that the training data come from the same source as the questions.

Information Retrieval · Multiple-choice +2

A Survey of the Perceived Text Adaptation Needs of Adults with Autism

no code implementations RANLP 2019 Victoria Yaneva, Constantin Orasan, Le An Ha, Natalia Ponomareva

NLP approaches to automatic text adaptation often rely on user-need guidelines which are generic and do not account for the differences between various types of target groups.

Predicting the Difficulty of Multiple Choice Questions in a High-stakes Medical Exam

no code implementations WS 2019 Le An Ha, Victoria Yaneva, Peter Baldwin, Janet Mee

To accomplish this, we extract a large number of linguistic features and embedding types, as well as features quantifying the difficulty of the items for an automatic question-answering system.

Multiple-choice · Question Answering
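
A minimal sketch of the feature-combination idea this entry describes, with invented feature names and stand-in data: hand-crafted linguistic features, an embedding vector, and a signal from an automatic question-answering system are concatenated and fed to a single regressor for item difficulty.

```python
# Illustrative difficulty regressor over concatenated feature groups.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_items = 200
linguistic = rng.normal(size=(n_items, 10))         # e.g., length, readability
embeddings = rng.normal(size=(n_items, 50))         # e.g., averaged word vectors
qa_correct = rng.integers(0, 2, size=(n_items, 1))  # QA-system success flag
X = np.hstack([linguistic, embeddings, qa_correct])
y = rng.normal(size=n_items)  # stand-in difficulty values

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
```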

Automatic Distractor Suggestion for Multiple-Choice Tests Using Concept Embeddings and Information Retrieval

no code implementations WS 2018 Le An Ha, Victoria Yaneva

We frame the evaluation as a prediction task in which we aim to "predict" the human-produced distractors used in large sets of medical questions, i.e., a distractor generated by our system is considered good if it is likely to feature among the distractors produced by the human item-writers.

Information Retrieval · Multiple-choice +1
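
A minimal sketch of the evaluation framing described above, under the assumption that matching is done on normalized strings (the paper's exact matching procedure is not shown here): a system-suggested distractor counts as a hit if it appears among the human-written distractors for the item.

```python
# Illustrative hits@k metric for distractor suggestion.
def hits_at_k(suggested, human, k=10):
    """Fraction of human distractors recovered in the top-k suggestions."""
    top_k = {d.lower().strip() for d in suggested[:k]}
    gold = {d.lower().strip() for d in human}
    return len(top_k & gold) / len(gold)

# Hypothetical example item
suggested = ["myocardial infarction", "angina", "pericarditis", "costochondritis"]
human = ["angina", "pericarditis", "pulmonary embolism"]
print(hits_at_k(suggested, human, k=4))  # 2/3 recovered
```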

Effects of Lexical Properties on Viewing Time per Word in Autistic and Neurotypical Readers

no code implementations WS 2017 Sanja Štajner, Victoria Yaneva, Ruslan Mitkov, Simone Paolo Ponzetto

Eye tracking studies from the past few decades have shaped the way we think of word complexity and cognitive load: words that are long, rare and ambiguous are more difficult to read.

Lexical Simplification

Using Gaze Data to Predict Multiword Expressions

no code implementations RANLP 2017 Omid Rohanian, Shiva Taslimipoor, Victoria Yaneva, Le An Ha

In recent years, gaze data has been increasingly used to improve and evaluate NLP models because it carries information about the cognitive processing of linguistic phenomena.

Part-Of-Speech Tagging · POS +1

Combining Multiple Corpora for Readability Assessment for People with Cognitive Disabilities

no code implementations WS 2017 Victoria Yaneva, Constantin Orăsan, Richard Evans, Omid Rohanian

Given the lack of large user-evaluated corpora in disability-related NLP research (e.g., text simplification or readability assessment for people with cognitive disabilities), the question of choosing suitable training data for NLP models is not straightforward.

Text Simplification

Evaluating the Readability of Text Simplification Output for Readers with Cognitive Disabilities

no code implementations LREC 2016 Victoria Yaneva, Irina Temnikova, Ruslan Mitkov

This paper presents an approach for automatic evaluation of the readability of text simplification output for readers with cognitive disabilities.

Reading Comprehension · Text Simplification

A Corpus of Text Data and Gaze Fixations from Autistic and Non-Autistic Adults

no code implementations LREC 2016 Victoria Yaneva, Irina Temnikova, Ruslan Mitkov

This division of the groups tells researchers whether particular fixations were elicited from skillful or less-skillful readers and allows a fair between-group comparison across two levels of reading ability.

Multiple-choice · POS +2
