Search Results for author: Vinitra Swamy

Found 11 papers, 10 papers with code

InterpretCC: Conditional Computation for Inherently Interpretable Neural Networks

1 code implementation • 5 Feb 2024 • Vinitra Swamy, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Käser

Real-world interpretability for neural networks is a tradeoff between three concerns: 1) requiring humans to trust the explanation approximation (e.g. post-hoc approaches), 2) compromising the understandability of the explanation (e.g. automatically identified feature masks), and 3) compromising model performance (e.g. decision trees).

News Classification

Unraveling Downstream Gender Bias from Large Language Models: A Study on AI Educational Writing Assistance

1 code implementation • 6 Nov 2023 • Thiemo Wambsganss, Xiaotian Su, Vinitra Swamy, Seyed Parsa Neshaei, Roman Rietsche, Tanja Käser

Our results demonstrate that there is no significant difference in gender bias between the resulting peer reviews of groups with and without LLM suggestions.

Sentence • Sentence Embedding +1

MultiModN - Multimodal, Multi-Task, Interpretable Modular Networks

1 code implementation • 25 Sep 2023 • Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley

Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space.

The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations

no code implementations • 1 Jul 2023 • Vinitra Swamy, Jibril Frej, Tanja Käser

Explainable Artificial Intelligence (XAI) plays a crucial role in enabling human understanding of and trust in deep learning systems; it is often framed as determining which features are most important to a model's prediction.

Explainable artificial intelligence • Explainable Artificial Intelligence (XAI)

Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

1 code implementation • 17 Dec 2022 • Vinitra Swamy, Sijia Du, Mirko Marras, Tanja Käser

Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency.

Explainable artificial intelligence

Bias at a Second Glance: A Deep Dive into Bias for German Educational Peer-Review Data Modeling

2 code implementations • COLING 2022 • Thiemo Wambsganss, Vinitra Swamy, Roman Rietsche, Tanja Käser

We conduct a Word Embedding Association Test (WEAT) analysis on (1) our collected corpus in connection with the clustered labels, (2) the most common pre-trained German language models (T5, BERT, and GPT-2) and GloVe embeddings, and (3) the language models after fine-tuning on our collected dataset.
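
For readers unfamiliar with WEAT, the core computation is an effect size comparing how strongly two target word sets (e.g. male vs. female terms) associate with two attribute sets (e.g. career vs. family terms) in an embedding space. The sketch below is not the paper's code; the toy 2-D vectors are invented solely to make it self-contained:

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size d for target sets X, Y and attribute sets A, B.

    s(w) measures how much closer w is to attributes A than to B;
    d compares the mean s over X vs. Y, normalized by the pooled std.
    """
    def s(w):
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy embeddings (illustrative only): X-words lean toward attribute A,
# Y-words toward attribute B, so the effect size comes out strongly positive.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
Y = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]
d = weat_effect_size(X, Y, A, B)
```

In practice the vectors would come from the GloVe embeddings or contextual representations of the language models the paper tests, and significance is typically assessed with a permutation test over the target-set partition.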

Meta Transfer Learning for Early Success Prediction in MOOCs

2 code implementations • 25 Apr 2022 • Vinitra Swamy, Mirko Marras, Tanja Käser

Despite the increasing popularity of massive open online courses (MOOCs), many suffer from high dropout and low success rates.

Transfer Learning

Interpreting Language Models Through Knowledge Graph Extraction

1 code implementation • 16 Nov 2021 • Vinitra Swamy, Angelika Romanou, Martin Jaggi

In this paper, we compare BERT-based language models through snapshots of acquired knowledge at sequential stages of the training process.
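
One way to compare such snapshots is to probe each checkpoint with cloze-style prompts and record its predictions as knowledge-graph triples, then measure graph overlap across training stages. The sketch below is a minimal illustration, not the authors' implementation; `toy_fill_mask` is a stand-in for a real fill-mask model (e.g. a BERT checkpoint), and the facts and templates are invented:

```python
def extract_graph(fill_mask, probes):
    """Build a set of (subject, relation, object) triples from a masked LM.

    fill_mask(prompt) -> predicted token for the [MASK] slot;
    probes: list of (subject, relation, cloze template with {subj}).
    """
    graph = set()
    for subj, rel, template in probes:
        obj = fill_mask(template.format(subj=subj))
        graph.add((subj, rel, obj))
    return graph

def jaccard(g1, g2):
    # Overlap between two extracted graphs, to compare training snapshots.
    return len(g1 & g2) / len(g1 | g2)

# Stand-in for one model checkpoint's fill-mask predictions (illustrative).
def toy_fill_mask(prompt):
    answers = {"Paris is the capital of [MASK].": "France",
               "Rome is the capital of [MASK].": "Italy"}
    return answers.get(prompt, "unknown")

probes = [("Paris", "capital_of", "{subj} is the capital of [MASK]."),
          ("Rome", "capital_of", "{subj} is the capital of [MASK].")]
graph = extract_graph(toy_fill_mask, probes)
```

Running the same probes against checkpoints from sequential training stages and comparing the resulting graphs (e.g. via `jaccard`) gives a picture of when particular facts are acquired.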

Language Modelling
