no code implementations • 4 Mar 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, John Sabatini, John Hollander, Meng Cao, Xiangen Hu
This research is motivated by the potential of LLMs to predict learning performance based on their inherent reasoning and computational capabilities.
no code implementations • 4 Feb 2024 • Zifei Han, Jionghao Lin, Ashish Gurung, Danielle R. Thomas, Eason Chen, Conrad Borchers, Shivang Gupta, Kenneth R. Koedinger
The results indicate that the RAG prompt achieved more accurate performance (assessed by the level of hallucination and correctness in the generated assessment texts) at lower financial cost than the other strategies evaluated.
no code implementations • 29 Jan 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, Meng Cao, Xiangen Hu
Learning performance data (e.g., quiz scores and attempts) are significant for understanding learner engagement and knowledge mastery levels.
1 code implementation • 17 Dec 2023 • Conrad Borchers, Yeyu Wang, Shamya Karumbaiah, Muhammad Ashiq, David Williamson Shaffer, Vincent Aleven
Taken together, these findings suggest that offering early conceptual support to students with low learning rates could make classroom practice with AI tutors more effective.
1 code implementation • 9 Dec 2023 • Conrad Borchers, Jiayi Zhang, Ryan S. Baker, Vincent Aleven
We discuss opportunities to re-design the system to add SRL support during these processing stages, as well as paths forward for using machine learning to accelerate research that depends on assessing SRL from transcriptions of think-aloud data.
1 code implementation • 20 Dec 2022 • Conrad Borchers, Zachary A. Pardos
Course load analytics (CLA) inferred from LMS and enrollment features can offer a more accurate representation of course workload to students than credit hours and potentially aid in their course selection decisions.
1 code implementation • NAACL (GeBNLP) 2022 • Conrad Borchers, Dalia Sara Gala, Benjamin Gilburt, Eduard Oravkin, Wilfried Bounsi, Yuki M. Asano, Hannah Rose Kirk
The growing capability and availability of generative language models have enabled a wide range of new downstream tasks.