Search Results for author: Shisen Yue

Found 2 papers, 2 papers with code

Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times

1 code implementation · 3 Feb 2024 · Byung-Doh Oh, Shisen Yue, William Schuler

Additionally, training dynamics reveal that during later training steps, all model variants learn to predict rare words and that larger model variants do so more accurately, which explains the detrimental effect of both training data amount and model size on fit to reading times.

Language Modelling

ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models

2 code implementations · 16 Apr 2023 · Yikang Liu, Ziyin Zhang, Wanyang Zhang, Shisen Yue, Xiaojing Zhao, Xinyuan Cheng, Yiwen Zhang, Hai Hu

To address these challenges in English language teaching, we first present ArguGPT, a balanced corpus of 4,038 argumentative essays generated by 7 GPT models in response to essay prompts from three sources: (1) in-class or homework exercises, (2) TOEFL and (3) GRE writing tasks.

Sentence
