no code implementations • 15 May 2025 • Daniel Weitekamp, Christopher MacLellan, Erik Harpstead, Kenneth Koedinger
Human learning relies on specialization -- distinct cognitive mechanisms working together to enable rapid learning.
no code implementations • 17 Jan 2025 • Vincent Aleven, Conrad Borchers, Yun Huang, Tomohiro Nagashima, Bruce McLaren, Paulo Carvalho, Octav Popescu, Jonathan Sewall, Kenneth Koedinger
This platform has been used to develop and conduct an estimated 147 research studies, which have been run in a wide variety of laboratory and real-world educational settings, including K-12 and higher education, and have addressed a wide range of research questions.
no code implementations • 26 Nov 2024 • Daniel Weitekamp, Erik Harpstead, Kenneth Koedinger
As AI2T learns, it can accurately estimate its certainty of performing correctly on unseen problem steps using STAND, a self-aware precondition learning algorithm that outperforms state-of-the-art methods like XGBoost.
1 code implementation • 13 Sep 2024 • Qianou Ma, Weirui Peng, Chenyang Yang, Hua Shen, Kenneth Koedinger, Tongshuang Wu
Prompting LLMs for complex tasks (e.g., building a trip-advisor chatbot) requires humans to clearly articulate customized requirements (e.g., "start the response with a tl;dr").
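As a concrete illustration of the kind of explicit requirement this abstract points to, the sketch below assembles a prompt from a checklist of customized requirements and verifies one of them after the fact. The requirement list, prompt wording, and `call_llm` placeholder are illustrative assumptions, not an API from the paper.

```python
# Minimal sketch: composing a prompt from explicit, checkable requirements.
# `call_llm` is a hypothetical placeholder, not an interface from the paper.

REQUIREMENTS = [
    'Start the response with a "tl;dr" summary line.',
    "Recommend exactly three destinations.",
    "Keep the total response under 200 words.",
]

def build_prompt(task: str) -> str:
    bullet_list = "\n".join(f"- {r}" for r in REQUIREMENTS)
    return f"{task}\n\nFollow these requirements:\n{bullet_list}"

def meets_tldr_requirement(response: str) -> bool:
    # Check only the first, easily verifiable requirement.
    return response.lstrip().lower().startswith("tl;dr")

prompt = build_prompt("Plan a weekend trip to Lisbon for two people.")
# response = call_llm(prompt)                 # hypothetical LLM call
# print(meets_tldr_requirement(response))     # check one requirement automatically
```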
no code implementations • 11 Sep 2024 • Daniel Weitekamp, Kenneth Koedinger
STAND is a data-efficient and computationally efficient machine learning approach that produces better classification accuracy than popular approaches like XGBoost on small-data tabular classification problems, such as learning rule preconditions from interactive training.
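To make the comparison setting concrete, here is a minimal sketch of the kind of small-data tabular evaluation the claim refers to, with XGBoost standing in as the popular baseline; STAND itself is not shown here, and the synthetic dataset and split sizes are illustrative assumptions.

```python
# Minimal sketch of a small-data tabular classification baseline (XGBoost).
# STAND is not included; this only illustrates the evaluation setting.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Tiny synthetic tabular problem standing in for "rule precondition" data.
X = rng.normal(size=(60, 8))                    # 60 examples, 8 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # simple decision boundary

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y
)

baseline = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
baseline.fit(X_train, y_train)
print("XGBoost accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
```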
no code implementations • 8 Jun 2023 • Qianou Ma, Tongshuang Wu, Kenneth Koedinger
The emergence of large language models (LLMs) that excel at code generation, along with commercial products such as GitHub Copilot, has sparked interest in human-AI pair programming (referred to as "pAIr programming"), where an AI system collaborates with a human programmer.
no code implementations • 25 Oct 2021 • Daniel Weitekamp, Christopher MacLellan, Erik Harpstead, Kenneth Koedinger
Recent advances in machine learning have made it possible to train artificially intelligent agents that perform with super-human accuracy on a great diversity of complex tasks.
no code implementations • 21 Jun 2018 • Devendra Singh Chaplot, Christopher MacLellan, Ruslan Salakhutdinov, Kenneth Koedinger
Secondly, for domains where a cognitive model is available, we show that representations learned through CogRL can be used to get accurate estimates of skill difficulty and learning rate parameters without using any student performance data.
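For context on what "skill difficulty and learning rate parameters" typically denote in this literature, the sketch below computes an Additive Factors Model (AFM)-style probability of a correct response from such parameters. This is conventional background rather than the CogRL method itself, and the parameter values are made up for illustration.

```python
# Minimal AFM-style sketch: probability of a correct response given a skill's
# difficulty, learning rate, and practice opportunity count. Conventional
# background only; CogRL's representation learning is not shown, and the
# numbers below are illustrative assumptions.
import math

def p_correct(student_ability: float, skill_difficulty: float,
              learning_rate: float, opportunities: int) -> float:
    logit = student_ability - skill_difficulty + learning_rate * opportunities
    return 1.0 / (1.0 + math.exp(-logit))

# Practice on a moderately difficult skill: performance rises with opportunities.
for t in range(5):
    print(t, round(p_correct(0.0, 1.0, 0.3, t), 3))
```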