no code implementations • 21 Jun 2025 • Meng Xia, Robin Schmucker, Conrad Borchers, Vincent Aleven
Mastery learning improves learning proficiency and efficiency.
no code implementations • 20 Jun 2025 • Danielle R. Thomas, Conrad Borchers, Shambhavi Bhushan, Erin Gatz, Shivang Gupta, Kenneth R. Koedinger
After adjusting for this effect, two out of seven lessons showed statistically significant learning benefits from LLM feedback with standardized effect sizes of 0.28 and 0.33.
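Standardized effect sizes like those reported here are typically computed as a standardized mean difference. The following is a minimal illustrative sketch of Cohen's d with a pooled standard deviation; the function name and inputs are generic, not taken from the study's analysis code.

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) using the pooled SD
    of the two groups' scores."""
    def mean(xs):
        return sum(xs) / len(xs)

    def sample_var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n1, n2 = len(treatment), len(control)
    pooled_sd = math.sqrt(
        ((n1 - 1) * sample_var(treatment) + (n2 - 1) * sample_var(control))
        / (n1 + n2 - 2)
    )
    return (mean(treatment) - mean(control)) / pooled_sd
```

An effect size of 0.28 or 0.33 on this scale means the treatment group's mean posttest score sits roughly a quarter to a third of a pooled standard deviation above the comparison group's.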
no code implementations • 20 Jun 2025 • Danielle R. Thomas, Conrad Borchers, Jionghao Lin, Sanjit Kakarla, Shambhavi Bhushan, Erin Gatz, Shivang Gupta, Ralph Abboud, Kenneth R. Koedinger
Tutoring improves student achievement, but identifying which tutoring actions are most associated with student learning, at scale and from audio transcriptions, remains an open research problem.
no code implementations • 17 Jan 2025 • Vincent Aleven, Conrad Borchers, Yun Huang, Tomohiro Nagashima, Bruce McLaren, Paulo Carvalho, Octav Popescu, Jonathan Sewall, Kenneth Koedinger
This platform has been used to develop and conduct an estimated 147 research studies that have run in a wide variety of laboratory and real-world educational settings, including K-12 and higher education, and have addressed a wide range of research questions.
no code implementations • 15 Jan 2025 • Conrad Borchers, Danielle R. Thomas, Jionghao Lin, Ralph Abboud, Kenneth R. Koedinger
We conclude that integrating human and LLM-generated data to improve text classification models in assessment offers a scalable solution that leverages both the accuracy of human coding and the variety of LLM outputs.
no code implementations • 8 Jan 2025 • Conrad Borchers
Specifically, we address (1) whether ABROCA follows any known distribution, (2) how to reliably test for algorithmic bias using ABROCA, and (3) the statistical power achievable with ABROCA-based bias assessments under typical EDM sample specifications.
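ABROCA (Absolute Between-ROC Area) measures algorithmic bias as the area between two groups' ROC curves. The sketch below is an illustrative pure-Python approximation on a uniform false-positive-rate grid; the function names and grid-based approach are this example's own, not the paper's implementation.

```python
def tpr_at_fpr(scores, labels, fpr_grid):
    """Evaluate a group's empirical ROC curve (TPR as a function of
    FPR) at each point of fpr_grid."""
    pos = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    neg = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    tprs = []
    for f in fpr_grid:
        k = int(f * len(neg))  # number of false positives allowed at this FPR
        if k < len(neg):
            tp = sum(1 for s in pos if s > neg[k])
        else:
            tp = len(pos)
        tprs.append(tp / len(pos))
    return tprs

def abroca(scores_a, labels_a, scores_b, labels_b, steps=1000):
    """Approximate the absolute area between the ROC curves of
    groups A and B on a uniform FPR grid."""
    grid = [i / steps for i in range(steps + 1)]
    tpr_a = tpr_at_fpr(scores_a, labels_a, grid)
    tpr_b = tpr_at_fpr(scores_b, labels_b, grid)
    return sum(abs(a - b) for a, b in zip(tpr_a, tpr_b)) / (steps + 1)
```

Because ABROCA integrates an absolute difference, it is bounded below by zero, which is one reason its sampling distribution is non-standard and warrants the dedicated statistical treatment the paper investigates.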
1 code implementation • 16 Dec 2024 • Devika Venugopalan, Ziwen Yan, Conrad Borchers, Jionghao Lin, Vincent Aleven
We contribute insights into how tutoring systems can best be merged with LLMs to support hybrid tutoring settings through conversational assistance, facilitating effective caregiver involvement in tutoring systems.
1 code implementation • 15 Dec 2024 • Danielle R. Thomas, Conrad Borchers, Sanjit Kakarla, Jionghao Lin, Shambhavi Bhushan, Boyuan Guo, Erin Gatz, Kenneth R. Koedinger
Generative AI based on large language models (LLMs) is being used in a wide range of applications; the present work assesses its use in the equity domain.
1 code implementation • 13 Dec 2024 • Danielle R. Thomas, Conrad Borchers, Sanjit Kakarla, Jionghao Lin, Shambhavi Bhushan, Boyuan Guo, Erin Gatz, Kenneth R. Koedinger
Using a posttest-only randomized control design, we compare the performance of 234 tutors (790 lesson completions) across three conditions: MCQ only, open response only, and a combination of both.
no code implementations • 3 Dec 2024 • Valdemar Švábenský, Conrad Borchers, Elizabeth B. Cloude, Atsushi Shimada
This paper systematically compares data augmentation techniques and their impact on prediction performance in a typical LA task: prediction of academic outcomes.
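One common family of augmentation techniques for tabular learning-analytics data is feature jittering: creating synthetic rows by adding small Gaussian noise to numeric features while preserving labels. The sketch below is a generic illustration of that baseline, with names and parameters invented for this example rather than drawn from the paper's comparison.

```python
import random

def augment_jitter(rows, noise=0.05, copies=1, seed=0):
    """Augment (features, label) rows by adding small Gaussian noise
    to each numeric feature; labels are carried over unchanged."""
    rng = random.Random(seed)
    augmented = list(rows)
    for _ in range(copies):
        for features, label in rows:
            jittered = [x + rng.gauss(0, noise) for x in features]
            augmented.append((jittered, label))
    return augmented
```

In an outcome-prediction task, augmented rows like these are added to the training split only; the evaluation split must remain untouched so that reported performance reflects real students.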
1 code implementation • 28 Nov 2024 • Conrad Borchers, Ryan S. Baker
When assessing whether a classifier is biased, this skewness inflates ABROCA values by chance, even when data is drawn (by simulation) from populations with equivalent ROC curves.
1 code implementation • 24 Sep 2024 • Liang Zhang, Jionghao Lin, John Sabatini, Conrad Borchers, Daniel Weitekamp, Meng Cao, John Hollander, Xiangen Hu, Arthur C. Graesser
Second, a tensor factorization method is used to impute missing values in sparse tensors of collected learner data, thereby grounding the imputation on knowledge tracing tasks that predict missing performance values based on real observations.
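The core idea of factorization-based imputation is to fit low-rank latent factors only on observed learner-performance entries and then reconstruct the missing ones. The sketch below is a deliberately simplified two-mode (matrix) version using stochastic gradient descent; the tensor case adds further modes, and none of these names or hyperparameters come from the paper.

```python
import random

def factorize_impute(matrix, rank=1, steps=5000, lr=0.05, reg=0.01, seed=0):
    """Fill missing entries (None) of a learners-by-items performance
    matrix via low-rank factorization fit only on observed entries."""
    rng = random.Random(seed)
    n, m = len(matrix), len(matrix[0])
    U = [[rng.gauss(0, 0.1) for _ in range(rank)] for _ in range(n)]
    V = [[rng.gauss(0, 0.1) for _ in range(rank)] for _ in range(m)]
    observed = [(i, j, matrix[i][j]) for i in range(n) for j in range(m)
                if matrix[i][j] is not None]
    for _ in range(steps):
        i, j, x = observed[rng.randrange(len(observed))]
        pred = sum(U[i][k] * V[j][k] for k in range(rank))
        err = x - pred
        for k in range(rank):
            u, v = U[i][k], V[j][k]
            U[i][k] += lr * (err * v - reg * u)  # gradient step with L2 penalty
            V[j][k] += lr * (err * u - reg * v)
    return [[sum(U[i][k] * V[j][k] for k in range(rank)) for j in range(m)]
            for i in range(n)]
```

Because every entry is reconstructed from shared latent factors, the imputed values are grounded in the same structure that explains the observed performance, which is the property the knowledge-tracing grounding in the paper builds on.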
no code implementations • 4 Mar 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, John Sabatini, John Hollander, Meng Cao, Xiangen Hu
This research is motivated by the potential of LLMs to predict learning performance based on their inherent reasoning and computational capabilities.
no code implementations • 4 Feb 2024 • Zifei Han, Jionghao Lin, Ashish Gurung, Danielle R. Thomas, Eason Chen, Conrad Borchers, Shivang Gupta, Kenneth R. Koedinger
The results indicate that the RAG prompt demonstrated more accurate performance (assessed by the level of hallucination and correctness in the generated assessment texts) and lower financial costs than the other strategies evaluated.
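A retrieval-augmented generation (RAG) prompt grounds the model in source material by retrieving the most relevant snippets before generating an assessment. The sketch below illustrates the pattern with a simple bag-of-words cosine retriever and a prompt template; all names, the similarity measure, and the template wording are this example's assumptions, not the paper's prompting code.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def build_rag_prompt(response, documents, k=2):
    """Retrieve the k snippets most similar to the tutor response and
    prepend them as context so the model grades against source text."""
    query = Counter(response.lower().split())
    ranked = sorted(documents,
                    key=lambda d: cosine(query, Counter(d.lower().split())),
                    reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nAssess the tutor response:\n{response}"
```

Grounding the prompt this way constrains the model to the retrieved lesson text, which is the mechanism behind the lower hallucination rates the paper reports for the RAG condition.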
1 code implementation • 29 Jan 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, Meng Cao, Xiangen Hu
Learning performance data (e.g., quiz scores and attempts) is significant for understanding learner engagement and knowledge mastery level.
1 code implementation • 17 Dec 2023 • Conrad Borchers, Yeyu Wang, Shamya Karumbaiah, Muhammad Ashiq, David Williamson Shaffer, Vincent Aleven
Taken together, offering early conceptual support to students with low learning rates could make classroom practice with AI tutors more effective.
1 code implementation • 9 Dec 2023 • Conrad Borchers, Jiayi Zhang, Ryan S. Baker, Vincent Aleven
We discuss system re-design opportunities to add SRL support during stages of processing, and paths forward for using machine learning to speed up research that relies on assessing SRL from transcribed think-aloud data.
1 code implementation • 20 Dec 2022 • Conrad Borchers, Zachary A. Pardos
Course load analytics (CLA) inferred from LMS and enrollment features can offer a more accurate representation of course workload to students than credit hours and potentially aid in their course selection decisions.
1 code implementation • NAACL (GeBNLP) 2022 • Conrad Borchers, Dalia Sara Gala, Benjamin Gilburt, Eduard Oravkin, Wilfried Bounsi, Yuki M. Asano, Hannah Rose Kirk
The growing capability and availability of generative language models have enabled a wide range of new downstream tasks.