Search Results for author: Cristina Conati

Found 12 papers, 4 papers with code

AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling

no code implementations • 30 Jun 2018 • Cristina Conati, Kaska Porayska-Pomsta, Manolis Mavrikis

We argue that this work can provide a valuable starting point for a framework of interpretable AI, and as such is of relevance to the application of both knowledge-based and machine learning systems in other high-stakes contexts, beyond education.

BIG-bench Machine Learning · Interpretable Machine Learning

Predicting Confusion from Eye-Tracking Data with Recurrent Neural Networks

1 code implementation • 19 Jun 2019 • Shane D. Sims, Vanessa Putnam, Cristina Conati

Encouraged by the success of deep learning in a variety of domains, we investigate the suitability and effectiveness of Recurrent Neural Networks (RNNs) in a domain where deep learning has not yet been used: detecting confusion from eye-tracking data (a toy sketch of this setup follows the tags below).

Data Augmentation · Specificity
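The paper's released code is linked from the entry above; as a rough illustration of the setup it describes, here is a minimal PyTorch sketch of an RNN classifier over eye-tracking sequences. The feature count, sequence length, and hyperparameters are illustrative assumptions, not the authors' configuration.

    # Minimal sketch: LSTM over gaze sequences -> binary confusion label.
    # Feature layout and sizes are assumptions, not the paper's released code.
    import torch
    import torch.nn as nn

    class ConfusionRNN(nn.Module):
        def __init__(self, n_features=6, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)   # logit: confused vs. not

        def forward(self, x):                  # x: (batch, time, n_features)
            _, (h, _) = self.lstm(x)           # h: (num_layers, batch, hidden)
            return self.head(h[-1])            # (batch, 1) logits

    model = ConfusionRNN()
    gaze = torch.randn(8, 100, 6)              # 8 windows of 100 timesteps each
    loss = nn.BCEWithLogitsLoss()(model(gaze), torch.ones(8, 1))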

Toward Personalized XAI: A Case Study in Intelligent Tutoring Systems

no code implementations • 10 Dec 2019 • Cristina Conati, Oswald Barral, Vanessa Putnam, Lea Rieger

In addition, we show that students' access to the explanations and their learning gains are modulated by user characteristics, providing insights toward designing personalized Explainable AI (XAI) for ITS.

Explainable Artificial Intelligence (XAI)

A Neural Architecture for Detecting Confusion in Eye-tracking Data

no code implementations • 13 Mar 2020 • Shane Sims, Cristina Conati

Encouraged by the success of deep learning in a variety of domains, we investigate a novel application of its methods to detecting user confusion in eye-tracking data.

Specificity

A Framework to Counteract Suboptimal User-Behaviors in Exploratory Learning Environments: an Application to MOOCs

no code implementations • 14 Jun 2021 • Sébastien Lallé, Cristina Conati

While there is evidence that user-adaptive support can greatly enhance the effectiveness of educational systems, designing such support for exploratory learning environments (e.g., simulations) is still challenging due to the open-ended nature of their interaction.

Cascading Convolutional Temporal Colour Constancy

1 code implementation • 15 Jun 2021 • Matteo Rizzo, Cristina Conati, Daesik Jang, Hui Hu

We extend this architecture with different models obtained by (i) substituting the TCCNet submodules with C4, the state-of-the-art method for CCC targeting images; and (ii) adding a cascading strategy that iteratively refines the estimate of the illuminant.
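To make the cascading idea of point (ii) concrete, here is a minimal PyTorch sketch: each stage re-estimates the illuminant on an image corrected by the running estimate, and the per-stage estimates are composed multiplicatively. StageNet is a stand-in for a TCCNet/C4-style submodule, not the authors' architecture.

    # Sketch of a cascading illuminant estimator. StageNet is a toy
    # stand-in; the paper uses TCCNet/C4 submodules per stage.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StageNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3))

        def forward(self, img):
            # Positive, unit-norm RGB illuminant estimate.
            return F.normalize(self.net(img).abs() + 1e-4, dim=1)

    def cascade_estimate(img, stages):
        est = torch.ones(img.size(0), 3)              # neutral starting illuminant
        for stage in stages:
            corrected = img / est.view(-1, 3, 1, 1)   # undo current estimate
            est = est * stage(corrected)              # refine multiplicatively
        return F.normalize(est, dim=1)

    stages = nn.ModuleList(StageNet() for _ in range(3))
    print(cascade_estimate(torch.rand(2, 3, 64, 64), stages))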

A Theoretical Framework for AI Models Explainability with Application in Biomedicine

no code implementations • 29 Dec 2022 • Matteo Rizzo, Alberto Veneri, Andrea Albarelli, Claudio Lucchese, Marco Nobile, Cristina Conati

EXplainable Artificial Intelligence (XAI) is a vibrant research topic in the artificial intelligence community, with growing interest across methods and domains.

Decision Making · Explainable Artificial Intelligence +1

GANonymization: A GAN-based Face Anonymization Framework for Preserving Emotional Expressions

1 code implementation • 3 May 2023 • Fabio Hellmann, Silvan Mertes, Mohamed Benouis, Alexander Hustinx, Tzung-Chien Hsieh, Cristina Conati, Peter Krawitz, Elisabeth André

We assessed the effectiveness of the approach by evaluating how well it removes identifiable facial attributes, thereby increasing the anonymity of a given individual face (a toy sketch of this evaluation idea follows the tags below).

Face Anonymization · Generative Adversarial Network
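The released code is linked from the entry above; the following toy sketch only illustrates the evaluation idea: an anonymized face should land far from its source in a face-recognition embedding space. The random-projection embed function is a placeholder for a real recognition model, and the threshold is arbitrary.

    # Toy anonymity check: low embedding similarity between the original
    # and anonymized face suggests the identity was removed. The embedder
    # here is a random projection, a placeholder for a real FR model.
    import numpy as np

    rng = np.random.default_rng(0)
    proj = rng.standard_normal((64 * 64, 128))

    def embed(face):                          # face: (64, 64) grayscale array
        v = face.reshape(-1) @ proj
        return v / np.linalg.norm(v)

    def is_anonymized(original, anonymized, threshold=0.5):
        # Cosine similarity of unit vectors; below threshold = "anonymized".
        return float(embed(original) @ embed(anonymized)) < threshold

    print(is_anonymized(rng.random((64, 64)), rng.random((64, 64))))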

Evaluating the overall sensitivity of saliency-based explanation methods

no code implementations • 21 Jun 2023 • Harshinee Sriram, Cristina Conati

We address the need to generate faithful explanations of "black box" Deep Learning models.
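As one concrete instance of the kind of sensitivity being evaluated, here is a small PyTorch sketch of a max-sensitivity-style check: perturb the input slightly and measure how much a plain gradient saliency map changes. The saliency definition, perturbation radius, and sample count are illustrative choices, not the paper's protocol.

    # Toy sensitivity check for a gradient saliency explanation.
    import torch

    def saliency(model, x):
        # Plain gradient saliency: |d output / d input|.
        x = x.clone().requires_grad_(True)
        model(x).sum().backward()
        return x.grad.abs()

    def max_sensitivity(model, x, radius=0.02, n_samples=10):
        base = saliency(model, x)
        worst = 0.0
        for _ in range(n_samples):
            noisy = x + radius * torch.randn_like(x)
            change = (saliency(model, noisy) - base).norm() / base.norm()
            worst = max(worst, change.item())
        return worst                      # higher => less robust explanation

    model = torch.nn.Sequential(
        torch.nn.Linear(10, 5), torch.nn.Tanh(), torch.nn.Linear(5, 1))
    print(max_sensitivity(model, torch.randn(1, 10)))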

Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data

no code implementations • 22 Sep 2023 • Harshinee Sriram, Cristina Conati, Thalia Field

Existing research has shown the potential of classifying Alzheimer's Disease (AD) from eye-tracking (ET) data with classifiers that rely on task-specific engineered features.
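For contrast with the deep-learning approach the paper pursues, here is a toy scikit-learn sketch of the engineered-feature baseline it refers to; the feature choices and data are illustrative, not the study's protocol.

    # Toy engineered-feature baseline: summary gaze statistics fed to a
    # standard classifier. Features and labels are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # e.g., mean fixation duration, mean saccade length, pupil-size variance
    X = rng.random((40, 3))
    y = rng.integers(0, 2, 40)            # 1 = AD, 0 = control (toy labels)
    clf = LogisticRegression().fit(X, y)
    print(clf.score(X, y))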

Personalizing explanations of AI-driven hints to users' cognitive abilities: an empirical evaluation

no code implementations • 6 Mar 2024 • Vedant Bahel, Harshinee Sriram, Cristina Conati

We investigate personalizing the explanations that an Intelligent Tutoring System generates to justify the hints it provides to students to foster their learning.
