Search Results for author: Inga Strümke

Found 24 papers, 9 papers with code

Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review

no code implementations • 7 Nov 2024 • Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert

The joint implementation of Federated learning (FL) and Explainable artificial intelligence (XAI) will allow training models from distributed data and explaining their inner workings while preserving important aspects of privacy.

Explainable artificial intelligence, Explainable Artificial Intelligence (XAI) +1
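As background for the federated training mentioned above (general FL background, not material from the review itself), here is a minimal sketch of the federated averaging (FedAvg) aggregation step, in which a server combines locally trained client models weighted by their local data sizes:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg aggregation step).

    client_weights: one list of numpy arrays (layer parameters) per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    averaged = []
    for layer in range(len(client_weights[0])):
        averaged.append(sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        ))
    return averaged

# Toy example: two clients holding a single-layer "model"
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
print(federated_average(clients, client_sizes=[100, 300]))  # [array([2.5, 3.5])]
```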

A transformer-based deep reinforcement learning approach to spatial navigation in a partially observable Morris Water Maze

no code implementations • 1 Oct 2024 • Marte Eggen, Inga Strümke

Navigation is a fundamental cognitive skill extensively studied in neuroscientific experiments and has lately gained substantial interest in artificial intelligence research.

Decision Making, Decoder +1

Lightweight Neural Architecture Search for Cerebral Palsy Detection

1 code implementation • 30 Sep 2024 • Felix Tempel, Espen Alexander F. Ihlen, Inga Strümke

Our method performs better on a real-world CP dataset than other approaches in the field, which rely on large ensembles.

Neural Architecture Search

From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition

no code implementations • 20 Feb 2024 • Kimji N. Pellano, Inga Strümke, Espen Alexander F. Ihlen

The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human-computer interaction.

Human Activity Recognition

AutoGCN -- Towards Generic Human Activity Recognition with Neural Architecture Search

1 code implementation • 2 Feb 2024 • Felix Tempel, Inga Strümke, Espen Alexander F. Ihlen

This paper introduces AutoGCN, a generic Neural Architecture Search (NAS) algorithm for Human Activity Recognition (HAR) using Graph Convolution Networks (GCNs).

Action Recognition, Human Activity Recognition +2
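For readers unfamiliar with graph convolutions over skeleton data, the sketch below shows a single GCN layer on a joint graph; it illustrates the basic building block only and is not the AutoGCN search space or implementation (channel and joint counts are made up):

```python
import torch
import torch.nn as nn

class SkeletonGCNLayer(nn.Module):
    """One graph convolution over skeleton joints: X' = relu(A_norm X W)."""
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Normalised adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adjacency + torch.eye(adjacency.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        self.register_buffer("a_norm", d_inv_sqrt @ a_hat @ d_inv_sqrt)
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, num_joints, in_channels)
        return torch.relu(self.linear(self.a_norm @ x))

# Toy skeleton with 3 joints connected in a chain
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
layer = SkeletonGCNLayer(in_channels=2, out_channels=4, adjacency=adj)
out = layer(torch.randn(8, 3, 2))  # shape (8, 3, 4)
```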

Lecture Notes in Probabilistic Diffusion Models

no code implementations • 16 Dec 2023 • Inga Strümke, Helge Langseth

By training on a large number of data points, the diffusion model learns the data manifold to which the original, and thus the reconstructed, data samples belong.
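For context, the standard formulation of the forward (noising) and learned reverse (denoising) processes from the diffusion literature is shown below; notation may differ from the lecture notes themselves:

```latex
% Forward (noising) process: Gaussian corruption of a data sample x_0
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)

% Closed form after t steps, with \alpha_t = 1 - \beta_t and \bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t) I\right)

% Learned reverse (denoising) process
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```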

Information based explanation methods for deep learning agents -- with applications on large open-source chess models

1 code implementation • 18 Sep 2023 • Patrik Hammersborg, Inga Strümke

With large chess-playing neural network models like AlphaZero contesting the state of the art within the world of computerised chess, two challenges present themselves: the question of how to explain the domain knowledge internalised by such models, and the problem that such models are not made openly available.

Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models

1 code implementation • 24 Jul 2023 • Patrik Hammersborg, Inga Strümke

Neural network models are widely used in a variety of domains, often as black-box solutions, since they are not directly interpretable for humans.

Explainable artificial intelligence
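To convey the general idea of visualising a learned concept, the sketch below optimises a model input so that a linear concept probe attached to a hidden layer fires strongly; it is an illustrative stand-in with made-up toy shapes, not the paper's concept backpropagation implementation:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: given a trained feature extractor and a linear
# probe detecting a concept in its output, optimise the input so that the
# probe's detection score is maximised, then inspect the resulting input.

def visualise_concept(feature_extractor, probe, input_shape, steps=200, lr=0.1):
    x = torch.randn(1, *input_shape, requires_grad=True)
    optimiser = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        score = probe(feature_extractor(x))   # concept detection score
        (-score.mean()).backward()            # gradient ascent on the score
        optimiser.step()
    return x.detach()

# Toy example with made-up layer sizes
feature_extractor = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
probe = nn.Linear(16, 1)   # assumed trained to detect a single concept
concept_input = visualise_concept(feature_extractor, probe, input_shape=(8,))
```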

Against Algorithmic Exploitation of Human Vulnerabilities

no code implementations • 12 Jan 2023 • Inga Strümke, Marija Slavkovik, Clemens Stachl

Decisions such as which movie to watch next, which song to listen to, or which product to buy online, are increasingly influenced by recommender systems and user models that incorporate information on users' past behaviours, preferences, and digitally created content.

Decision Making, Ethics +2

Reinforcement Learning in an Adaptable Chess Environment for Detecting Human-understandable Concepts

1 code implementation • 10 Nov 2022 • Patrik Hammersborg, Inga Strümke

Self-trained autonomous agents developed using machine learning are showing great promise in a variety of control settings, perhaps most remarkably in applications involving autonomous vehicles.

Autonomous Vehicles, Reinforcement Learning +2

Predicting tacrolimus exposure in kidney transplanted patients using machine learning

no code implementations • 9 May 2022 • Andrea M. Storås, Anders Åsberg, Pål Halvorsen, Michael A. Riegler, Inga Strümke

Tacrolimus is one of the cornerstone immunosuppressive drugs in most transplantation centers worldwide following solid organ transplantation.

Machine Learning

Visual explanations for polyp detection: How medical doctors assess intrinsic versus extrinsic explanations

1 code implementation • 23 Mar 2022 • Steven Hicks, Andrea Storås, Michael Riegler, Cise Midoglu, Malek Hammou, Thomas de Lange, Sravanthi Parasa, Pål Halvorsen, Inga Strümke

Deep learning has in recent years achieved immense success in all areas of computer vision and has the potential of assisting medical doctors in analyzing visual content for disease and other abnormalities.

Explainable artificial intelligence

Interpretable machine learning in Physics

no code implementations • 11 Mar 2022 • Christophe Grojean, Ayan Paul, Zhuoni Qian, Inga Strümke

Adding interpretability to multivariate methods creates a powerful synergy for exploring complex physical systems with higher order correlations while bringing about a degree of clarity in the underlying dynamics of the system.

Machine Learning, Interpretable Machine Learning

Explaining a Deep Reinforcement Learning Docking Agent Using Linear Model Trees with User Adapted Visualization

no code implementations • 1 Mar 2022 • Vilde B. Gjærum, Inga Strümke, Ole Andreas Alsos, Anastasios M. Lekkas

The main contributions of this work are: (1) significantly improving both the accuracy and the build time of a greedy approach for building LMTs by introducing an ordering of features in the splitting of the tree; (2) giving an overview of the characteristics of the seafarer/operator and the developer as two different end-users of the agent and receivers of the explanations; and (3) suggesting one visualization of the docking agent, the environment, and the feature attributions given by the LMT for when the developer is the end-user of the system, and another visualization for when the seafarer or operator is the end-user, based on their different characteristics.

Explainable artificial intelligence, Reinforcement Learning (RL)
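As a point of reference for what a linear model tree (LMT) is, the sketch below fits a depth-one tree that greedily chooses the feature/threshold split minimising the squared error of linear models fitted in the two leaves; the paper's build procedure (including its feature ordering) is more involved than this toy version:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_depth_one_lmt(X, y):
    """Greedy depth-one linear model tree: one split, a linear model per leaf."""
    best = None
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature])[1:]:
            left, right = X[:, feature] < threshold, X[:, feature] >= threshold
            if left.sum() < 2 or right.sum() < 2:
                continue
            models = (LinearRegression().fit(X[left], y[left]),
                      LinearRegression().fit(X[right], y[right]))
            sse = (np.sum((y[left] - models[0].predict(X[left])) ** 2) +
                   np.sum((y[right] - models[1].predict(X[right])) ** 2))
            if best is None or sse < best[0]:
                best = (sse, feature, threshold, models)
    return best  # (error, split feature, threshold, (left model, right model))

# Toy data that is piecewise linear around feature 0 = 0.5
X = np.random.rand(200, 3)
y = np.where(X[:, 0] < 0.5, 2 * X[:, 1], -3 * X[:, 2])
print(fit_depth_one_lmt(X, y)[1:3])  # expected split: feature 0, threshold near 0.5
```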

Inferring feature importance with uncertainties in high-dimensional data

1 code implementation • 2 Sep 2021 • Pål Vegard Johnsen, Inga Strümke, Signe Riemer-Sørensen, Andrew Thomas DeWan, Mette Langaas

We build upon the recently published feature importance measure of SAGE (Shapley additive global importance) and introduce sub-SAGE which can be estimated without resampling for tree-based models.

Feature Importance
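For flavour, the sketch below aggregates local tree SHAP values into a global importance score using the shap package; SAGE and the paper's sub-SAGE are defined differently and additionally provide uncertainty estimates, so this is only a loose illustration of Shapley-based global importance for tree models:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data where features 0 and 1 drive the target
X = np.random.rand(500, 10)
y = 3 * X[:, 0] - 2 * X[:, 1] + np.random.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # shape (n_samples, n_features)
global_importance = np.abs(shap_values).mean(axis=0)
print(global_importance.argsort()[::-1][:3])      # features 0 and 1 should rank highest
```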

Artificial Intelligence in Dry Eye Disease

no code implementations • 2 Sep 2021 • Andrea M. Storås, Inga Strümke, Michael A. Riegler, Jakob Grauslund, Hugo L. Hammer, Anis Yazidi, Pål Halvorsen, Kjell G. Gundersen, Tor P. Utheim, Catherine Jackson

Although the term "AI" is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes.

Beyond Cuts in Small Signal Scenarios -- Enhanced Sneutrino Detectability Using Machine Learning

1 code implementation • 6 Aug 2021 • Daniel Alvestad, Nikolai Fomin, Jörn Kersten, Steffen Maeland, Inga Strümke

We investigate enhancing the sensitivity of new physics searches at the LHC by machine learning in the case of background dominance and a high degree of overlap between the observables for signal and background.

Machine Learning
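To illustrate the general setting of background dominance with overlapping observables (a generic sketch, not the paper's analysis or data), a classifier's output can serve as a single discriminant on which a selection cut is placed:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
background = rng.normal(loc=0.0, scale=1.0, size=(20000, 4))
signal = rng.normal(loc=0.3, scale=1.0, size=(200, 4))   # small, heavily overlapping signal
X = np.vstack([background, signal])
y = np.concatenate([np.zeros(len(background)), np.ones(len(signal))])

clf = GradientBoostingClassifier().fit(X, y)
scores = clf.predict_proba(X)[:, 1]                      # per-event discriminant
selected = scores > np.quantile(scores, 0.99)            # signal-enriched selection
print(y[selected].mean())                                # signal fraction after the cut
```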

The social dilemma in artificial intelligence development and why we have to solve it

no code implementations • 27 Jul 2021 • Inga Strümke, Marija Slavkovik, Vince I. Madai

While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI accelerates, even though there is no shortage of ethical guidelines.

Ethics

Shapley values for feature selection: The good, the bad, and the axioms

no code implementations • 22 Feb 2021 • Daniel Fryer, Inga Strümke, Hien Nguyen

The Shapley value has become popular in the Explainable AI (XAI) literature, thanks, to a large extent, to a solid theoretical foundation, including four "favourable and fair" axioms for attribution in transferable utility games.

Explainable Artificial Intelligence (XAI), feature selection
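As standard game-theoretic background (not new material from the paper), the Shapley value of player i in a transferable utility game (N, v) with v(∅) = 0, together with the four axioms that characterise it, reads:

```latex
% Shapley value of player (feature) i in a transferable utility game (N, v)
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,
  \bigl(v(S \cup \{i\}) - v(S)\bigr)

% The four characterising axioms:
% efficiency:   \sum_{i \in N} \phi_i(v) = v(N)
% symmetry:     interchangeable players receive equal value
% null player:  a player who never changes v receives zero
% additivity:   \phi_i(v + w) = \phi_i(v) + \phi_i(w)
```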

Explaining the data or explaining a model? Shapley values that uncover non-linear dependencies

no code implementations • 12 Jul 2020 • Daniel Vidali Fryer, Inga Strümke, Hien Nguyen

To the best of our knowledge, all existing game formulations in the machine learning and statistics literature fall into a category which we name the model-dependent category of game formulations.

Attribute, Machine Learning +1
