Search Results for author: Hiroaki Funayama

Found 6 papers, 1 paper with code

Reducing the Cost: Cross-Prompt Pre-Finetuning for Short Answer Scoring

1 code implementation • 26 Aug 2024 • Hiroaki Funayama, Yuya Asazuma, Yuichiroh Matsubayashi, Tomoya Mizumoto, Kentaro Inui

Specifically, given that scoring rubrics and reference answers differ for each prompt, we utilize key phrases, i.e., representative expressions that an answer should contain to receive a higher score, and train a SAS model to learn the relationship between key phrases and answers using already-annotated prompts (i.e., cross-prompts).
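The entry above links one code implementation; the following is only a minimal sketch of the cross-prompt pre-finetuning idea, assuming a BERT-style regression scorer. The model name, data layout, and training loop are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of cross-prompt pre-finetuning for short answer
# scoring (SAS): each example pairs a prompt's key phrases with a
# student answer, so the model learns a prompt-independent relationship
# between key phrases and scores. Names and data are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # single-output regression head
)

# Already-annotated prompts (i.e., cross-prompts); layout is assumed.
train_data = [
    {"key_phrases": "supply exceeds demand", "answer": "...", "score": 0.8},
    # ... examples drawn from many different, already-scored prompts
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for ex in train_data:
    # Encode the key phrases and the answer as a sentence pair.
    enc = tokenizer(ex["key_phrases"], ex["answer"],
                    truncation=True, return_tensors="pt")
    # With num_labels=1 and float labels, HF uses an MSE regression loss.
    loss = model(**enc, labels=torch.tensor([[ex["score"]]])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
# The pre-finetuned model would then be finetuned on the target prompt.
```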

Japanese-English Sentence Translation Exercises Dataset for Automatic Grading

no code implementations • 6 Mar 2024 • Naoki Miura, Hiroaki Funayama, Seiya Kikuchi, Yuichiroh Matsubayashi, Yuya Iwase, Kentaro Inui

Using this dataset, we demonstrate the performance of baselines, including a finetuned BERT model and GPT models with few-shot in-context learning.

Few-Shot Learning · In-Context Learning · +2
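As a rough illustration of the few-shot in-context grading baseline mentioned in the abstract above, the sketch below assembles graded exemplars into a prompt for an instruction-following LLM. The exercise sentence, grading scale, prompt format, and query_llm client are invented for demonstration and are not from the paper's dataset.

```python
# Hedged sketch of few-shot in-context grading of sentence translation
# exercises: graded exemplars are placed in the prompt, then the model
# is asked to grade a new student translation. All data is made up.
few_shot_examples = [
    {"source": "彼は毎朝走る。", "translation": "He runs every morning.", "grade": 3},
    {"source": "彼は毎朝走る。", "translation": "He run every morning.", "grade": 2},
]

def build_prompt(source, student_translation):
    """Assemble a few-shot grading prompt (format is an assumption)."""
    lines = ["Grade each English translation of the Japanese sentence from 0-3."]
    for ex in few_shot_examples:
        lines.append(f"Source: {ex['source']}")
        lines.append(f"Translation: {ex['translation']}")
        lines.append(f"Grade: {ex['grade']}")
    lines += [f"Source: {source}",
              f"Translation: {student_translation}",
              "Grade:"]
    return "\n".join(lines)

# `query_llm` stands in for any chat/completions client; it is not a
# real API from the paper.
# grade = query_llm(build_prompt("彼は毎朝走る。", "He running every morning."))
```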

Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism

no code implementations • 23 Oct 2023 • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama

Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting.

Logical Reasoning · Negation
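The abstract above refers to chain-of-thought (CoT) prompting; a minimal illustrative prompt for a syllogism involving lexical negation might look like the following. The premises and exemplar are made up for demonstration, not drawn from the paper's evaluation data.

```python
# Illustrative CoT prompt: one worked exemplar, then a syllogism whose
# conclusion hinges on lexical negation ("No mammals are ...").
cot_prompt = """Q: All birds can fly. Penguins are birds. Can penguins fly?
A: Let's think step by step. All birds can fly. Penguins are birds.
Therefore, penguins can fly. The answer is yes.

Q: No mammals are cold-blooded. Whales are mammals.
Are whales cold-blooded?
A: Let's think step by step."""

# The model is expected to continue the reasoning chain, e.g.:
# "No mammals are cold-blooded. Whales are mammals. Therefore, whales
#  are not cold-blooded. The answer is no."
```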

Balancing Cost and Quality: An Exploration of Human-in-the-loop Frameworks for Automated Short Answer Scoring

no code implementations • 16 Jun 2022 • Hiroaki Funayama, Tasuku Sato, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui

Towards guaranteeing high-quality predictions, we present the first study exploring the use of a human-in-the-loop framework to minimize grading cost while guaranteeing grading quality, by allowing a SAS model to share the grading task with a human grader.
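A minimal sketch of how such a human-in-the-loop split might be implemented: the SAS model auto-grades answers it is confident about and defers the rest to a human grader. The scoring function, threshold, and data shapes below are assumptions, not the paper's actual framework.

```python
# Route each answer either to the SAS model's grade or to a human
# grader, based on the model's confidence. Illustrative sketch only.
def route_answers(answers, model_score_fn, confidence_threshold=0.9):
    """Split answers into auto-graded and human-graded queues."""
    auto_graded, needs_human = [], []
    for answer in answers:
        score, confidence = model_score_fn(answer)  # hypothetical scorer
        if confidence >= confidence_threshold:
            auto_graded.append((answer, score))     # accept model grade
        else:
            needs_human.append(answer)              # defer to human
    return auto_graded, needs_human
```

Raising confidence_threshold shifts more answers back to the human grader, trading higher grading cost for stronger quality guarantees, which is the cost-quality balance the paper explores.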
