Search Results for author: Cristina Conati

Found 6 papers, 2 papers with code

Cascading Convolutional Temporal Colour Constancy

1 code implementation • 15 Jun 2021 • Matteo Rizzo, Cristina Conati, Daesik Jang, Hui Hu

We extend this architecture with different models obtained by (i) substituting the TCCNet submodules with C4, the state-of-the-art method for CCC targeting images; (ii) adding a cascading strategy to perform an iterative improvement of the estimate of the illuminant.
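
The cascading strategy mentioned above (iteratively refining the illuminant estimate stage by stage) can be sketched in a few lines. Below is a minimal, illustrative PyTorch loop, not the paper's implementation: `make_stage` is a hypothetical factory for a per-stage network that maps an image to an RGB illuminant, and the multiplicative accumulation and white-balancing step are assumptions about how such cascades are commonly wired.

```python
import torch
import torch.nn as nn

class CascadedCCC(nn.Module):
    """Illustrative cascade: each stage refines the illuminant estimate of the previous one."""

    def __init__(self, make_stage, num_stages: int = 3):
        super().__init__()
        # make_stage() is assumed to return a network mapping (B, 3, H, W) -> (B, 3)
        self.stages = nn.ModuleList([make_stage() for _ in range(num_stages)])

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # Running illuminant estimate starts as neutral white.
        estimate = torch.ones(img.size(0), 3, device=img.device)
        current = img
        for stage in self.stages:
            pred = stage(current)                                    # per-stage illuminant, (B, 3)
            pred = pred / pred.norm(dim=1, keepdim=True).clamp(min=1e-8)
            estimate = estimate * pred                               # accumulate the refinement
            # White-balance the input with the current estimate before the next stage.
            current = img / estimate.clamp(min=1e-8).view(-1, 3, 1, 1)
        return estimate / estimate.norm(dim=1, keepdim=True).clamp(min=1e-8)
```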

A Framework to Counteract Suboptimal User-Behaviors in Exploratory Learning Environments: an Application to MOOCs

no code implementations • 14 Jun 2021 • Sébastien Lallé, Cristina Conati

While there is evidence that user-adaptive support can greatly enhance the effectiveness of educational systems, designing such support for exploratory learning environments (e.g., simulations) is still challenging due to the open-ended nature of their interaction.

A Neural Architecture for Detecting Confusion in Eye-tracking Data

no code implementations • 13 Mar 2020 • Shane Sims, Cristina Conati

Encouraged by the success of deep learning in a variety of domains, we investigate a novel application of its methods to detecting user confusion in eye-tracking data.

Eye Tracking

Toward Personalized XAI: A Case Study in Intelligent Tutoring Systems

no code implementations • 10 Dec 2019 • Cristina Conati, Oswald Barral, Vanessa Putnam, Lea Rieger

In addition, we show that students' access to the explanation and their learning gains are modulated by user characteristics, providing insights toward designing personalized Explainable AI (XAI) for ITS.

Predicting Confusion from Eye-Tracking Data with Recurrent Neural Networks

1 code implementation • 19 Jun 2019 • Shane D. Sims, Vanessa Putnam, Cristina Conati

Encouraged by the success of deep learning in a variety of domains, we investigate the suitability and effectiveness of Recurrent Neural Networks (RNNs) in a domain where deep learning has not yet been used, namely detecting confusion from eye-tracking data.

Data Augmentation • Eye Tracking
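
As a rough illustration of the approach described in the abstract, the sketch below shows an LSTM classifier over eye-tracking sequences. The feature count, hidden size, window length, and single-layer LSTM are assumptions for the example and do not reflect the paper's reported configuration.

```python
import torch
import torch.nn as nn

class GazeConfusionRNN(nn.Module):
    """Illustrative RNN that maps a gaze-feature sequence to a confusion logit."""

    def __init__(self, num_features: int = 4, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, gaze_seq: torch.Tensor) -> torch.Tensor:
        # gaze_seq: (batch, time, num_features), e.g. gaze x/y, pupil size, fixation duration
        _, (h_n, _) = self.lstm(gaze_seq)
        return self.classifier(h_n[-1]).squeeze(-1)    # last hidden state -> confusion logit

# Usage with made-up dimensions: 8 windows of 100 time steps, 4 features each.
model = GazeConfusionRNN()
probs = torch.sigmoid(model(torch.randn(8, 100, 4)))  # per-window confusion probability
```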

AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling

no code implementations • 30 Jun 2018 • Cristina Conati, Kaska Porayska-Pomsta, Manolis Mavrikis

We argue that this work can provide a valuable starting point for a framework of interpretable AI, and as such is of relevance to the application of both knowledge-based and machine learning systems in other high-stakes contexts, beyond education.

Interpretable Machine Learning