Search Results for author: Yuichiroh Matsubayashi

Found 18 papers, 4 papers with code

Tell Me Who Your Students Are: GPT Can Generate Valid Multiple-Choice Questions When Students' (Mis)Understanding Is Hinted

no code implementations 9 May 2025 Machi Shimmei, Masaki Uto, Yuichiroh Matsubayashi, Kentaro Inui, Aditi Mallavarapu, Noboru Matsuda

To evaluate the validity of the generated MCQs, Item Response Theory (IRT) was applied to compare item characteristics between MCQs generated by AnaQuest, a baseline ChatGPT prompt, and human-crafted items.
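The abstract does not say which IRT model was fitted; as an illustration only, the two-parameter logistic (2PL) model, one standard IRT formulation of an item characteristic curve, can be sketched as:

```python
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """2PL item characteristic curve: probability that an examinee of
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A more able examinee has a higher success probability on the same item.
p_low = icc_2pl(theta=-1.0, a=1.2, b=0.0)
p_high = icc_2pl(theta=1.0, a=1.2, b=0.0)
```

Comparing the estimated discrimination and difficulty parameters across item sources (AnaQuest, a baseline prompt, human-crafted) is one common way such a validity comparison is carried out.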

Language Modeling Language Modelling +4

Reducing the Cost: Cross-Prompt Pre-Finetuning for Short Answer Scoring

1 code implementation 26 Aug 2024 Hiroaki Funayama, Yuya Asazuma, Yuichiroh Matsubayashi, Tomoya Mizumoto, Kentaro Inui

Specifically, since scoring rubrics and reference answers differ for each prompt, we utilize key phrases, i.e., representative expressions that an answer should contain to receive a high score, and train a SAS model to learn the relationship between key phrases and answers using already annotated prompts (i.e., cross-prompt data).
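As a rough sketch of the data layout this implies (the field names `key_phrases`, `answer`, and `score` are assumptions for illustration, not the paper's actual schema), cross-prompt training instances pairing key phrases with annotated answers might be assembled like:

```python
def make_instances(prompts):
    """Pair each annotated answer with its prompt's key phrases so a
    scorer can learn key-phrase/answer relations across prompts."""
    instances = []
    for p in prompts:
        for ans in p["answers"]:
            instances.append({
                "key_phrases": p["key_phrases"],   # rubric expressions for this prompt
                "answer": ans["text"],             # student answer text
                "score": ans["score"],             # gold score from annotation
            })
    return instances
```

A model pre-finetuned on such instances from existing prompts can then be adapted to a new prompt with fewer annotations, which is the cost reduction the title refers to.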

Japanese-English Sentence Translation Exercises Dataset for Automatic Grading

no code implementations 6 Mar 2024 Naoki Miura, Hiroaki Funayama, Seiya Kikuchi, Yuichiroh Matsubayashi, Yuya Iwase, Kentaro Inui

Using this dataset, we demonstrate the performance of baselines, including fine-tuned BERT models and GPT models with few-shot in-context learning.

Few-Shot Learning In-Context Learning +2

Balancing Cost and Quality: An Exploration of Human-in-the-loop Frameworks for Automated Short Answer Scoring

no code implementations 16 Jun 2022 Hiroaki Funayama, Tasuku Sato, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui

Towards guaranteeing high-quality predictions, we present the first study exploring the use of a human-in-the-loop framework that minimizes grading cost while guaranteeing grading quality, by allowing a SAS model to share the grading task with a human grader.
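The core idea of such frameworks, routing low-confidence predictions to a human grader while accepting confident model scores automatically, can be sketched as follows (the `threshold` value and the model's score/confidence interface are hypothetical, not the paper's actual design):

```python
def grade_with_deferral(answers, model, threshold=0.9):
    """Split answers into auto-graded and human-deferred sets.

    `model(answer)` is assumed to return a (score, confidence) pair;
    answers below the confidence threshold are deferred to a human.
    """
    auto, deferred = [], []
    for ans in answers:
        score, conf = model(ans)
        if conf >= threshold:
            auto.append((ans, score, conf))
        else:
            deferred.append((ans, score, conf))
    return auto, deferred
```

Raising the threshold shifts more answers to the human grader, trading cost for quality, which is the balance the title refers to.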

Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution

1 code implementation EMNLP 2021 Ryuto Konno, Shun Kiyono, Yuichiroh Matsubayashi, Hiroki Ouchi, Kentaro Inui

Masked language models (MLMs) have contributed to drastic performance improvements with regard to zero anaphora resolution (ZAR).

An Empirical Study of Contextual Data Augmentation for Japanese Zero Anaphora Resolution

no code implementations COLING 2020 Ryuto Konno, Yuichiroh Matsubayashi, Shun Kiyono, Hiroki Ouchi, Ryo Takahashi, Kentaro Inui

This study addresses two underexplored issues in contextual data augmentation (CDA): how to reduce the computational cost of data augmentation, and how to ensure the quality of the generated data.

Data Augmentation Language Modeling +5

Distance-Free Modeling of Multi-Predicate Interactions in End-to-End Japanese Predicate-Argument Structure Analysis

no code implementations COLING 2018 Yuichiroh Matsubayashi, Kentaro Inui

Capturing interactions among multiple predicate-argument structures (PASs) is a crucial issue in the task of analyzing PAS in Japanese.

Revisiting the Design Issues of Local Models for Japanese Predicate-Argument Structure Analysis

no code implementations IJCNLP 2017 Yuichiroh Matsubayashi, Kentaro Inui

The research trend in Japanese predicate-argument structure (PAS) analysis is shifting from pointwise prediction models with local features to global models designed to search for globally optimal solutions.

Modeling Context-sensitive Selectional Preference with Distributed Representations

no code implementations COLING 2016 Naoya Inoue, Yuichiroh Matsubayashi, Masayuki Ono, Naoaki Okazaki, Kentaro Inui

This paper proposes a novel problem setting of selectional preference (SP) between a predicate and its arguments, called context-sensitive SP (CSP).

Semantic Role Labeling
