Search Results for author: Inga Strümke

Found 20 papers, 7 papers with code

From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition

no code implementations • 20 Feb 2024 • Kimji N. Pellano, Inga Strümke, Espen Alexander F. Ihlen

The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human-computer interaction.

Human Activity Recognition

AutoGCN -- Towards Generic Human Activity Recognition with Neural Architecture Search

1 code implementation • 2 Feb 2024 • Felix Tempel, Inga Strümke, Espen Alexander F. Ihlen

This paper introduces AutoGCN, a generic Neural Architecture Search (NAS) algorithm for Human Activity Recognition (HAR) using Graph Convolution Networks (GCNs).

Action Recognition • Human Activity Recognition +2

Lecture Notes in Probabilistic Diffusion Models

no code implementations • 16 Dec 2023 • Inga Strümke, Helge Langseth

The diffusion model learns the data manifold to which the original and thus the reconstructed data samples belong, by training on a large number of data points.
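
For orientation, the setup these notes cover can be summarised in three equations. The following is the common DDPM-style notation from the diffusion literature, not a quotation from the notes themselves:

```latex
% Forward (noising) process: a fixed Markov chain with variance schedule \beta_t
q(x_t \mid x_{t-1}) = \mathcal{N}\!\bigl(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\bigr)

% Marginal after t steps, with \alpha_t = 1-\beta_t and \bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s
q(x_t \mid x_0) = \mathcal{N}\!\bigl(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\bigr)

% Simplified training objective: a network \epsilon_\theta learns to predict the injected noise
L_{\text{simple}} = \mathbb{E}_{t,\,x_0,\,\epsilon}\Bigl[\bigl\|\epsilon - \epsilon_\theta\bigl(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\bigr)\bigr\|^2\Bigr]
```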

Information based explanation methods for deep learning agents -- with applications on large open-source chess models

1 code implementation • 18 Sep 2023 • Patrik Hammersborg, Inga Strümke

With large chess-playing neural network models like AlphaZero contesting the state of the art in computerised chess, two challenges present themselves: how to explain the domain knowledge internalised by such models, and the fact that such models are not made openly available.

Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models

1 code implementation • 24 Jul 2023 • Patrik Hammersborg, Inga Strümke

Neural network models are widely used in a variety of domains, often as black-box solutions, since they are not directly interpretable by humans.

Explainable artificial intelligence

Against Algorithmic Exploitation of Human Vulnerabilities

no code implementations • 12 Jan 2023 • Inga Strümke, Marija Slavkovik, Clemens Stachl

Decisions such as which movie to watch next, which song to listen to, or which product to buy online, are increasingly influenced by recommender systems and user models that incorporate information on users' past behaviours, preferences, and digitally created content.

Decision Making • Ethics +2

Reinforcement Learning in an Adaptable Chess Environment for Detecting Human-understandable Concepts

1 code implementation • 10 Nov 2022 • Patrik Hammersborg, Inga Strümke

Self-trained autonomous agents developed using machine learning are showing great promise in a variety of control settings, perhaps most remarkably in applications involving autonomous vehicles.

Autonomous Vehicles • reinforcement-learning +2

Predicting tacrolimus exposure in kidney transplanted patients using machine learning

no code implementations • 9 May 2022 • Andrea M. Storås, Anders Åsberg, Pål Halvorsen, Michael A. Riegler, Inga Strümke

Tacrolimus is one of the cornerstone immunosuppressive drugs used after solid organ transplantation in most transplantation centers worldwide.

Machine Learning

Visual explanations for polyp detection: How medical doctors assess intrinsic versus extrinsic explanations

1 code implementation • 23 Mar 2022 • Steven Hicks, Andrea Storås, Michael Riegler, Cise Midoglu, Malek Hammou, Thomas de Lange, Sravanthi Parasa, Pål Halvorsen, Inga Strümke

Deep learning has in recent years achieved immense success in all areas of computer vision and has the potential to assist medical doctors in analyzing visual content for disease and other abnormalities.

Explainable artificial intelligence

Interpretable machine learning in Physics

no code implementations • 11 Mar 2022 • Christophe Grojean, Ayan Paul, Zhuoni Qian, Inga Strümke

Adding interpretability to multivariate methods creates a powerful synergy for exploring complex physical systems with higher-order correlations while bringing a degree of clarity to the underlying dynamics of the system.

Machine Learning • Interpretable Machine Learning

Explaining a Deep Reinforcement Learning Docking Agent Using Linear Model Trees with User Adapted Visualization

no code implementations • 1 Mar 2022 • Vilde B. Gjærum, Inga Strümke, Ole Andreas Alsos, Anastasios M. Lekkas

The main contributions of this work are: (1) significantly improving both the accuracy and the build time of a greedy approach for building LMTs by introducing an ordering of features in the splitting of the tree; (2) characterising the seafarer/operator and the developer as two different end-users of the agent and recipients of its explanations; and (3) proposing one visualization of the docking agent, the environment, and the LMT's feature attributions for when the developer is the end-user, and another for when the seafarer or operator is the end-user, based on their different characteristics.

Explainable artificial intelligence • Reinforcement Learning (RL)
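
To make the LMT term concrete: a linear model tree is a decision tree whose leaves hold linear regression models rather than constant predictions. The sketch below (class name and median-split heuristic are illustrative, not from the paper) shows the general idea; it is not the authors' improved build algorithm, whose contribution is precisely a smarter ordering of features in the splitting step.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class LinearModelTree:
    """Minimal linear model tree: greedy binary splits, linear models in the leaves."""

    def __init__(self, max_depth=3, min_samples=20):
        self.max_depth = max_depth
        self.min_samples = min_samples

    def fit(self, X, y, depth=0):
        self.model = LinearRegression().fit(X, y)   # linear model for this node
        self.split = None
        if depth >= self.max_depth or len(y) < 2 * self.min_samples:
            return self
        base_sse = np.sum((y - self.model.predict(X)) ** 2)
        best = (base_sse, None)
        for j in range(X.shape[1]):                  # try a median split on each feature
            t = np.median(X[:, j])
            left = X[:, j] <= t
            if left.sum() < self.min_samples or (~left).sum() < self.min_samples:
                continue
            sse = sum(
                np.sum((y[m] - LinearRegression().fit(X[m], y[m]).predict(X[m])) ** 2)
                for m in (left, ~left)
            )
            if sse < best[0]:
                best = (sse, (j, t, left))
        if best[1] is not None:                      # recurse only if the split reduces error
            j, t, left = best[1]
            self.split = (j, t)
            self.left = LinearModelTree(self.max_depth, self.min_samples).fit(X[left], y[left], depth + 1)
            self.right = LinearModelTree(self.max_depth, self.min_samples).fit(X[~left], y[~left], depth + 1)
        return self

    def predict(self, X):
        if self.split is None:
            return self.model.predict(X)
        j, t = self.split
        out = np.empty(len(X))
        mask = X[:, j] <= t
        if mask.any():
            out[mask] = self.left.predict(X[mask])
        if (~mask).any():
            out[~mask] = self.right.predict(X[~mask])
        return out

# Usage: fit a piecewise-linear function a single linear model cannot capture
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.where(X[:, 0] > 0, 3 * X[:, 0], -X[:, 1]) + rng.normal(0, 0.1, 500)
tree = LinearModelTree().fit(X, y)
print("RMSE:", np.sqrt(np.mean((tree.predict(X) - y) ** 2)))
```

Each leaf's coefficients are directly readable, which is what makes LMTs attractive as surrogate explanations for a black-box RL agent.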

Artificial Intelligence in Dry Eye Disease

no code implementations • 2 Sep 2021 • Andrea M. Storås, Inga Strümke, Michael A. Riegler, Jakob Grauslund, Hugo L. Hammer, Anis Yazidi, Pål Halvorsen, Kjell G. Gundersen, Tor P. Utheim, Catherine Jackson

Although the term 'AI' is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes.

Inferring feature importance with uncertainties in high-dimensional data

1 code implementation • 2 Sep 2021 • Pål Vegard Johnsen, Inga Strümke, Signe Riemer-Sørensen, Andrew Thomas DeWan, Mette Langaas

We build upon the recently published feature importance measure of SAGE (Shapley additive global importance) and introduce sub-SAGE which can be estimated without resampling for tree-based models.

Feature Importance
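
SAGE assigns each feature a Shapley value for its contribution to the model's global predictive performance. The sketch below (function name `sage_estimate` is illustrative) is a generic Monte Carlo permutation estimator of that idea, marginalizing unrevealed features by sampling rows from the data; the paper's sub-SAGE estimator for tree-based models avoids exactly this resampling and is not reproduced here.

```python
import numpy as np

def sage_estimate(model, X, y, loss, n_perms=100, rng=None):
    """Monte Carlo SAGE-style estimate: Shapley values of predictive performance.

    Features not yet revealed are marginalized by replacing them with values
    from randomly drawn data rows (a marginal-distribution approximation).
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    phi = np.zeros(d)
    for _ in range(n_perms):
        perm = rng.permutation(d)
        X_masked = X[rng.integers(n, size=n)].copy()   # all features marginalized
        prev_loss = loss(y, model(X_masked))
        for j in perm:                                  # reveal features one by one
            X_masked[:, j] = X[:, j]
            cur_loss = loss(y, model(X_masked))
            phi[j] += prev_loss - cur_loss              # performance gained by revealing j
            prev_loss = cur_loss
    return phi / n_perms

# Usage with a simple least-squares model as a stand-in and squared-error loss
mse = lambda y, p: np.mean((y - p) ** 2)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 300)
w = np.linalg.lstsq(X, y, rcond=None)[0]
print(sage_estimate(lambda Z: Z @ w, X, y, mse, rng=rng))
```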

Beyond Cuts in Small Signal Scenarios -- Enhanced Sneutrino Detectability Using Machine Learning

1 code implementation • 6 Aug 2021 • Daniel Alvestad, Nikolai Fomin, Jörn Kersten, Steffen Maeland, Inga Strümke

We investigate enhancing the sensitivity of new physics searches at the LHC by machine learning in the case of background dominance and a high degree of overlap between the observables for signal and background.

Machine Learning

The social dilemma in artificial intelligence development and why we have to solve it

no code implementations • 27 Jul 2021 • Inga Strümke, Marija Slavkovik, Vince I. Madai

While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI keeps growing, even though there is no shortage of ethical guidelines.

Ethics

Shapley values for feature selection: The good, the bad, and the axioms

no code implementations • 22 Feb 2021 • Daniel Fryer, Inga Strümke, Hien Nguyen

The Shapley value has become popular in the Explainable AI (XAI) literature, thanks in large part to a solid theoretical foundation, including four "favourable and fair" axioms for attribution in transferable utility games.

Explainable Artificial Intelligence (XAI) • feature selection
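
For reference, the Shapley value assigns player $i$ in a transferable utility game $(N, v)$ its average marginal contribution over all coalitions; the four axioms in question are efficiency, symmetry, the null-player (dummy) axiom, and additivity:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)
```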

Explaining the data or explaining a model? Shapley values that uncover non-linear dependencies

no code implementations • 12 Jul 2020 • Daniel Vidali Fryer, Inga Strümke, Hien Nguyen

To the best of our knowledge, all existing game formulations in the machine learning and statistics literature fall into what we name the model-dependent category of game formulations.

Attribute • Machine Learning +1
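
The model-dependent formulations referred to differ mainly in the value function assigned to a coalition $S$ of features. Two standard choices from the Shapley-explanation literature (schematic notation, not necessarily the paper's) are the conditional and the marginal expectation of the model output:

```latex
v^{\text{cond}}_{f,x}(S) = \mathbb{E}\bigl[f(X) \mid X_S = x_S\bigr], \qquad
v^{\text{marg}}_{f,x}(S) = \mathbb{E}_{X_{\bar S}}\bigl[f(x_S, X_{\bar S})\bigr]
```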
