Search Results for author: Naiming Liu

Found 15 papers, 8 papers with code

LLM-based Cognitive Models of Students with Misconceptions

no code implementations • 16 Oct 2024 • Shashank Sonkar, Xinghe Chen, Naiming Liu, Richard G. Baraniuk, Mrinmaya Sachan

Our findings reveal that LLMs trained on misconception examples can efficiently learn to replicate errors (see the data-construction sketch below).

Misconceptions
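
A minimal sketch of how such misconception-conditioned training data could be assembled; the field names, system-prompt wording, and example misconception below are illustrative assumptions, not the paper's actual dataset format.

```python
# Hypothetical sketch: building fine-tuning examples whose supervised target is a
# misconception-consistent (incorrect) student answer, so the tuned model learns to
# reproduce the error pattern rather than the correct solution.

def make_student_example(problem: str, misconception: str, erroneous_answer: str) -> dict:
    """Package one (problem, misconception, wrong answer) triple as a chat-style record."""
    return {
        "messages": [
            {"role": "system",
             "content": f"You are a student who holds the misconception: {misconception}"},
            {"role": "user", "content": problem},
            # Target is the *erroneous* answer, not the correct one.
            {"role": "assistant", "content": erroneous_answer},
        ]
    }

example = make_student_example(
    problem="Solve for x: 2x + 3 = 11",
    misconception="when moving a term across the equals sign, its sign is not flipped",
    erroneous_answer="2x = 11 + 3 = 14, so x = 7",
)
print(example)
```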

MalAlgoQA: Pedagogical Evaluation of Counterfactual Reasoning in Large Language Models and Implications for AI in Education

1 code implementation • 1 Jul 2024 • Naiming Liu, Shashank Sonkar, MyCo Le, Richard Baraniuk

We propose the Malgorithm Identification task, in which LLMs are assessed on their ability to identify the corresponding malgorithm given an incorrect answer choice (see the prompt sketch below).

Counterfactual · Counterfactual Reasoning +2
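
For illustration, here is one way such an evaluation query might be assembled; the prompt wording, helper name, and the fraction-addition example are assumptions, not the MalAlgoQA benchmark's actual format.

```python
# Illustrative sketch of a malgorithm-identification query: given an incorrect answer
# choice, the model must pick the flawed reasoning chain ("malgorithm") that produces it.

def build_malgorithm_prompt(question: str, incorrect_choice: str, rationales: list[str]) -> str:
    options = "\n".join(f"({chr(65 + i)}) {r}" for i, r in enumerate(rationales))
    return (
        f"Question: {question}\n"
        f"A student selected the incorrect answer: {incorrect_choice}\n"
        f"Which of the following reasoning chains would lead to that answer?\n"
        f"{options}\n"
        "Respond with a single letter."
    )

prompt = build_malgorithm_prompt(
    question="What is 1/2 + 1/3?",
    incorrect_choice="2/5",
    rationales=[
        "Add the numerators together and add the denominators together.",  # -> 2/5
        "Convert both fractions to a common denominator of 6 and add the numerators.",
    ],
)
print(prompt)
```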

Synthetic Context Generation for Question Generation

no code implementations • 19 Jun 2024 • Naiming Liu, Zichao Wang, Richard Baraniuk

Despite rapid advancements in large language models (LLMs), question generation (QG) remains a challenging problem due to its complicated process, open-ended nature, and the diverse settings in which it occurs.

Question Generation · Question-Generation

Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning

no code implementations • 23 Apr 2024 • Shashank Sonkar, Naiming Liu, Richard G. Baraniuk

Our findings reveal significant declines in the models' performance across these diverse benchmarks, indicating a broad impact on their capabilities when trained to model student behavior.

ARC · Common Sense Reasoning +4

Marking: Visual Grading with Highlighting Errors and Annotating Missing Bits

no code implementations • 22 Apr 2024 • Shashank Sonkar, Naiming Liu, Debshila B. Mallick, Richard G. Baraniuk

We subsequently train language models to identify entailment, contradiction, and neutrality from student responses, akin to NLI, with the added dimension of identifying omissions relative to gold answers (see the sketch below).

Natural Language Inference
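
As a rough approximation of the idea (not the paper's trained Marking model), an off-the-shelf NLI model can check each gold-answer point against the student response and flag points that are not entailed as candidate omissions; the model choice and example texts below are assumptions.

```python
# Approximate the Marking labels with a pretrained NLI classifier:
# ENTAILMENT -> covered, CONTRADICTION -> error, NEUTRAL -> likely omission.
from transformers import pipeline  # pip install transformers torch

nli = pipeline("text-classification", model="roberta-large-mnli")

student_response = "Photosynthesis uses sunlight to make glucose."
gold_answer_points = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "It takes place in the chloroplasts.",
    "Oxygen is released as a by-product.",
]

for point in gold_answer_points:
    # premise = student response, hypothesis = one point from the gold answer
    result = nli([{"text": student_response, "text_pair": point}])[0]
    print(f"{result['label']:>13}: {point}")
```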

Code Soliloquies for Accurate Calculations in Large Language Models

1 code implementation • 21 Sep 2023 • Shashank Sonkar, MyCo Le, Xinghe Chen, Naiming Liu, Debshila Basu Mallick, Richard G. Baraniuk

Our approach notably enhances the quality of synthetic conversation datasets, especially for subjects that are calculation-intensive.

Language Modelling · Large Language Model +1

CLASS: A Design Framework for building Intelligent Tutoring Systems based on Learning Science principles

1 code implementation • 22 May 2023 • Shashank Sonkar, Naiming Liu, Debshila Basu Mallick, Richard G. Baraniuk

We present a design framework called Conversational Learning with Analytical Step-by-Step Strategies (CLASS) for building advanced Intelligent Tutoring Systems (ITS) powered by high-performance Large Language Models (LLMs).

Chatbot · Decision Making

A Visual Tour Of Current Challenges In Multimodal Language Models

no code implementations • 22 Oct 2022 • Shashank Sonkar, Naiming Liu, Richard G. Baraniuk

Transformer models trained on massive text corpora have become the de facto models for a wide range of natural language processing tasks.

Text-to-Image Generation · Visual Grounding

Automated Scoring for Reading Comprehension via In-context BERT Tuning

1 code implementation • 19 May 2022 • Nigel Fernandez, Aritra Ghosh, Naiming Liu, Zichao Wang, Benoît Choffin, Richard Baraniuk, Andrew Lan

Our approach, in-context BERT fine-tuning, produces a single shared scoring model for all items, with a carefully designed input structure that provides contextual information on each item (see the sketch below).

Reading Comprehension
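
A minimal sketch of the general idea, assuming an illustrative input template: item-specific context (question text plus a few already-scored example responses) is concatenated with the response to grade, so one shared BERT classifier can score responses to all items. The template, label count, and example data are assumptions, not the paper's.

```python
# One shared scoring model; per-item context is supplied in the input rather than
# training a separate model per item. The classification head here is untrained,
# so the printed score is a placeholder.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)  # e.g. scores 0-3

question = "Why does the author mention the drought in paragraph 2?"
scored_examples = [
    ("It shows how hard life was for the farmers.", 3),
    ("Because it was dry.", 1),
]
response_to_grade = "To explain why the family had to leave their farm."

context = question + " " + " ".join(f"[score {s}] {text}" for text, s in scored_examples)
inputs = tokenizer(context, response_to_grade, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print("predicted score:", logits.argmax(dim=-1).item())
```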

NeuroView-RNN: It's About Time

no code implementations • 23 Feb 2022 • CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk

A key interpretability issue with RNNs is that it is not clear how much each per-time-step hidden state contributes, quantitatively, to the decision-making process (see the sketch below).

Decision Making · Time Series +1
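
A simplified sketch of the underlying idea (not the exact NeuroView-RNN architecture): make the class logits an explicit sum of per-time-step readouts, so each hidden state's contribution to the decision can be read off quantitatively.

```python
import torch
import torch.nn as nn

class PerStepReadoutRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        h, _ = self.rnn(x)                     # (batch, time, hidden)
        per_step_logits = self.readout(h)      # contribution of each time step
        logits = per_step_logits.sum(dim=1)    # decision = sum of per-step contributions
        return logits, per_step_logits

model = PerStepReadoutRNN(input_dim=8, hidden_dim=16, num_classes=3)
x = torch.randn(2, 10, 8)                      # batch of 2 sequences, 10 time steps
logits, per_step = model(x)
print(per_step.shape)                          # torch.Size([2, 10, 3]): per-step attribution
```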

GPT-based Open-Ended Knowledge Tracing

1 code implementation • 21 Feb 2022 • Naiming Liu, Zichao Wang, Richard G. Baraniuk, Andrew Lan

In education applications, knowledge tracing refers to the problem of estimating students' time-varying concept/skill mastery level from their past responses to questions and predicting their future performance (see the sketch below).

Code Generation · Knowledge Tracing +3
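
The paper itself proposes GPT-based open-ended knowledge tracing; purely to illustrate the knowledge tracing problem stated above, here is classical Bayesian Knowledge Tracing (BKT), which updates a scalar mastery estimate from binary correct/incorrect responses. The parameter values are arbitrary.

```python
def bkt_mastery(responses, p_init=0.3, p_learn=0.2, p_slip=0.1, p_guess=0.2):
    """Return the posterior probability of mastery after each observed response."""
    p_mastery = p_init
    trajectory = []
    for correct in responses:
        if correct:
            evidence = p_mastery * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
        else:
            evidence = p_mastery * p_slip
            posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
        p_mastery = posterior + (1 - posterior) * p_learn  # chance of learning after practice
        trajectory.append(p_mastery)
    return trajectory

print(bkt_mastery([0, 0, 1, 1, 1]))  # mastery estimate rises as correct answers accumulate
```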

NFT-K: Non-Fungible Tangent Kernels

1 code implementation • 11 Oct 2021 • Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk

To further improve interpretability with respect to classification and individual layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network via a single neural tangent kernel.
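
A simplified sketch of a per-layer empirical tangent kernel (illustrative only, not the full NFT-K construction): for each linear layer l, K_l(x, x') is the inner product of the gradients of the scalar network output with respect to that layer's parameters.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))

def layer_gradients(x):
    """Gradient of f(x) w.r.t. each linear layer's parameters, flattened per layer."""
    net.zero_grad()
    net(x.unsqueeze(0)).squeeze().backward()
    return [torch.cat([p.grad.flatten() for p in layer.parameters()])
            for layer in net if isinstance(layer, nn.Linear)]

x1, x2 = torch.randn(4), torch.randn(4)
g1, g2 = layer_gradients(x1), layer_gradients(x2)

# One tangent-kernel value per layer, instead of a single kernel for the whole network.
for idx, (a, b) in enumerate(zip(g1, g2)):
    print(f"layer {idx}: K_l(x1, x2) = {torch.dot(a, b).item():.4f}")
```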
