no code implementations • 12 Sep 2024 • Vinitra Swamy, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, Tanja Käser
iLLuMinaTE navigates three main stages - causal connection, explanation selection, and explanation presentation - with variations drawing from eight social science theories (e.g., Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation).
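A minimal sketch of how such a three-stage prompting pipeline could be wired together; `call_llm`, the prompt templates, and the default theory name are illustrative placeholders under assumed interfaces, not the authors' implementation.

```python
# Hypothetical sketch of a three-stage LLM-XAI pipeline in the spirit of iLLuMinaTE.
# `call_llm` stands in for any chat-completion backend (API or local model).

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call; replace with a real backend."""
    raise NotImplementedError

def explain(prediction: str, attributions: dict,
            theory: str = "Contrastive Explanation") -> str:
    # Stage 1: causal connection -- relate feature attributions to the prediction.
    causal = call_llm(
        f"Prediction: {prediction}\nAttributions: {attributions}\n"
        "Describe the causal links between these features and the prediction."
    )
    # Stage 2: explanation selection -- filter causes by a social-science theory.
    selected = call_llm(
        f"Using the selection criteria of {theory}, keep only the most relevant causes:\n{causal}"
    )
    # Stage 3: explanation presentation -- phrase the result for the end user.
    return call_llm(
        f"Rewrite the following as a short, actionable explanation for a student:\n{selected}"
    )
```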
1 code implementation • 30 May 2024 • Elena Grazia Gado, Tommaso Martorella, Luca Zunino, Paola Mejia-Domenzain, Vinitra Swamy, Jibril Frej, Tanja Käser
Intelligent Tutoring Systems (ITS) enhance personalized learning by predicting student answers to provide immediate and customized instruction.
1 code implementation • 5 Feb 2024 • Vinitra Swamy, Syrielle Montariol, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Käser
Interpretability for neural networks is a trade-off between three key requirements: 1) faithfulness of the explanation (i.e., how accurately it explains the prediction), 2) understandability of the explanation by humans, and 3) model performance.
1 code implementation • 27 Nov 2023 • Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, Alexandre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet, Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, Antoine Bosselut
Large language models (LLMs) can potentially democratize access to medical knowledge.
Ranked #1 on Multiple Choice Question Answering (MCQA) on MedMCQA (dev set, accuracy %)
1 code implementation • 6 Nov 2023 • Thiemo Wambsganss, Xiaotian Su, Vinitra Swamy, Seyed Parsa Neshaei, Roman Rietsche, Tanja Käser
Our results demonstrate that there is no significant difference in gender bias between the resulting peer reviews of groups with and without LLM suggestions.
1 code implementation • 25 Sep 2023 • Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley
Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space.
no code implementations • 1 Jul 2023 • Vinitra Swamy, Jibril Frej, Tanja Käser
We propose a shift from post-hoc explainability to designing interpretable neural network architectures.
Explainable Artificial Intelligence (XAI)
1 code implementation • 17 Dec 2022 • Vinitra Swamy, Sijia Du, Mirko Marras, Tanja Käser
Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency.
1 code implementation • 2 Dec 2022 • Mohammad Asadi, Vinitra Swamy, Jibril Frej, Julien Vignoud, Mirko Marras, Tanja Käser
Time series data is the most prevalent form of input for educational prediction tasks.
2 code implementations • COLING 2022 • Thiemo Wambsganss, Vinitra Swamy, Roman Rietsche, Tanja Käser
We conduct a Word Embedding Association Test (WEAT) analysis on (1) our collected corpus in connection with the clustered labels, (2) the most common pre-trained German language models (T5, BERT, and GPT-2) and GloVe embeddings, and (3) the language models after fine-tuning on our collected dataset.
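For reference, the core WEAT statistic reduces to a difference of mean cosine similarities, normalized into an effect size. Below is a minimal sketch of that computation; the word lists and random vectors are illustrative stand-ins for the corpus labels and GloVe/BERT embeddings analyzed in the paper.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    """s(w, A, B): mean cosine similarity of w to attribute set A minus set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """WEAT effect size between target sets X, Y and attribute sets A, B."""
    s_X = [association(x, A, B, emb) for x in X]
    s_Y = [association(y, A, B, emb) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Illustrative usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
words = ["engineer", "nurse", "he", "him", "she", "her"]
emb = {w: rng.normal(size=50) for w in words}
X, Y = ["engineer"], ["nurse"]          # target sets (illustrative)
A, B = ["he", "him"], ["she", "her"]    # attribute sets (illustrative)
print(weat_effect_size(X, Y, A, B, emb))
```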
1 code implementation • 1 Jul 2022 • Vinitra Swamy, Bahar Radmehr, Natasa Krco, Mirko Marras, Tanja Käser
Neural networks are ubiquitous in applied machine learning for education.
2 code implementations • 25 Apr 2022 • Vinitra Swamy, Mirko Marras, Tanja Käser
Despite the increasing popularity of massive open online courses (MOOCs), many suffer from high dropout and low success rates.
1 code implementation • 16 Nov 2021 • Vinitra Swamy, Angelika Romanou, Martin Jaggi
In this paper, we compare BERT-based language models through snapshots of acquired knowledge at sequential stages of the training process.
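A minimal sketch of probing sequential training snapshots with fill-mask queries via Hugging Face transformers; the checkpoint names and the probe sentence are assumptions for illustration, not the snapshots or probes used in the paper.

```python
# Compare what two training snapshots of a BERT-style model "know" by querying
# the same masked sentence on each. Checkpoint names below are illustrative.
from transformers import pipeline

snapshots = [
    "google/multiberts-seed_0-step_20k",    # assumed early-training snapshot
    "google/multiberts-seed_0-step_2000k",  # assumed late-training snapshot
]

query = "The capital of France is [MASK]."
for name in snapshots:
    fill = pipeline("fill-mask", model=name)
    top = fill(query, top_k=3)
    print(name, [(t["token_str"], round(t["score"], 3)) for t in top])
```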