no code implementations • 30 Jun 2018 • Cristina Conati, Kaska Porayska-Pomsta, Manolis Mavrikis
We argue that this work can provide a valuable starting point for a framework of interpretable AI, and as such is of relevance to the application of both knowledge-based and machine learning systems in other high-stakes contexts, beyond education.
1 code implementation • 19 Jun 2019 • Shane D. Sims, Vanessa Putnam, Cristina Conati
Encouraged by the success of deep learning in a variety of domains, we investigate the suitability and effectiveness of Recurrent Neural Networks (RNNs) in a domain where deep learning has not yet been used; namely detecting confusion from eye-tracking data.
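The idea can be illustrated with a minimal sketch: a vanilla RNN that consumes a sequence of eye-tracking samples and emits a confusion probability from its final hidden state. This is not the paper's model; the feature set (gaze x, gaze y, pupil diameter), dimensions, and class names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, not the paper's architecture: a tiny untrained
# vanilla RNN mapping a sequence of eye-tracking samples to a
# confusion probability. Feature choice and sizes are assumptions.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyRNNClassifier:
    def __init__(self, n_features=3, n_hidden=8):
        s = 0.1
        self.Wxh = rng.normal(0, s, (n_hidden, n_features))  # input -> hidden
        self.Whh = rng.normal(0, s, (n_hidden, n_hidden))    # hidden -> hidden
        self.bh = np.zeros(n_hidden)
        self.Why = rng.normal(0, s, n_hidden)                # hidden -> output
        self.by = 0.0

    def predict_proba(self, sequence):
        """sequence: (T, n_features) array, one row per eye-tracking sample."""
        h = np.zeros(self.Wxh.shape[0])
        for x_t in sequence:                      # unroll over time steps
            h = np.tanh(self.Wxh @ x_t + self.Whh @ h + self.bh)
        # Classify from the final hidden state.
        return sigmoid(self.Why @ h + self.by)

# Usage on a fake 50-sample gaze sequence (gaze x, gaze y, pupil diameter):
seq = rng.normal(size=(50, 3))
p_confused = TinyRNNClassifier().predict_proba(seq)
```

A trained version would learn the weight matrices from labeled confusion episodes; the sketch only shows the sequence-to-probability data flow.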
no code implementations • 10 Dec 2019 • Cristina Conati, Oswald Barral, Vanessa Putnam, Lea Rieger
In addition, we show that students' access of the explanation and learning gains are modulated by user characteristics, providing insights toward designing personalized Explainable AI (XAI) for ITS.
no code implementations • 13 Mar 2020 • Shane Sims, Cristina Conati
Encouraged by the success of deep learning in a variety of domains, we investigate a novel application of its methods to detecting user confusion from eye-tracking data.
no code implementations • 14 Jun 2021 • Sébastien Lallé, Cristina Conati
While there is evidence that user-adaptive support can greatly enhance the effectiveness of educational systems, designing such support for exploratory learning environments (e.g., simulations) is still challenging due to the open-ended nature of their interaction.
1 code implementation • 15 Jun 2021 • Matteo Rizzo, Cristina Conati, Daesik Jang, Hui Hu
We extend this architecture with different models obtained by (i) substituting the TCCNet submodules with C4, the state-of-the-art method for CCC targeting images; (ii) adding a cascading strategy to perform an iterative improvement of the estimate of the illuminant.
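The cascading strategy can be sketched as iterative refinement: each stage estimates an illuminant from the current image, corrects the image with it, and the per-stage estimates are composed into the final answer. Here a simple gray-world estimator stands in for the learned submodules (TCCNet / C4); that substitution, and the stage count, are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of a cascading illuminant-estimation loop.
# The gray-world estimator below is a stand-in for the learned
# submodules described in the paper; it is not their method.

def gray_world_estimate(img):
    """img: (H, W, 3) array. Returns a unit-norm RGB illuminant estimate."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def cascaded_estimate(img, n_stages=3):
    total = np.ones(3)
    current = img.astype(float)
    for _ in range(n_stages):
        est = gray_world_estimate(current)
        current = current / (est * np.sqrt(3))  # correct with this stage's estimate
        total *= est                            # compose stage estimates
    return total / np.linalg.norm(total)

# Usage on a flat test image lit by a warm (red-heavy) illuminant:
img = np.ones((8, 8, 3)) * np.array([0.8, 0.5, 0.2])
estimate = cascaded_estimate(img)
```

Each later stage sees a partially corrected image, so it only needs to fix the residual cast left by earlier stages; the elementwise product recombines the partial estimates.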
2 code implementations • 15 Nov 2022 • Matteo Rizzo, Cristina Conati, Daesik Jang, Hui Hu
The opacity of deep learning models constrains their debugging and improvement.
no code implementations • 29 Dec 2022 • Matteo Rizzo, Alberto Veneri, Andrea Albarelli, Claudio Lucchese, Marco Nobile, Cristina Conati
EXplainable Artificial Intelligence (XAI) is a vibrant research topic in the artificial intelligence community, with growing interest across methods and domains.
1 code implementation • 3 May 2023 • Fabio Hellmann, Silvan Mertes, Mohamed Benouis, Alexander Hustinx, Tzung-Chien Hsieh, Cristina Conati, Peter Krawitz, Elisabeth André
We assessed the effectiveness of the approach by evaluating how well it removes identifiable facial attributes, thereby increasing the anonymity of the given individual face.
no code implementations • 21 Jun 2023 • Harshinee Sriram, Cristina Conati
We address the need to generate faithful explanations of "black box" Deep Learning models.
no code implementations • 22 Sep 2023 • Harshinee Sriram, Cristina Conati, Thalia Field
Existing research has shown the potential of classifying Alzheimer's Disease (AD) from eye-tracking (ET) data with classifiers that rely on task-specific engineered features.
no code implementations • 6 Mar 2024 • Vedant Bahel, Harshinee Sriram, Cristina Conati
We investigate personalizing the explanations that an Intelligent Tutoring System generates to justify the hints it provides to students, with the goal of fostering their learning.