no code implementations • 7 Nov 2024 • Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert
The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) will allow training models from distributed data and explaining their inner workings while preserving important aspects of privacy.
Explainable Artificial Intelligence (XAI) +1
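The abstract above pairs federated learning with explainability only at a high level. As a minimal, illustrative sketch of the FL side, here is plain federated averaging (FedAvg) in Python; this is not the paper's method, and the logistic-regression clients and all names are hypothetical.

```python
# Minimal federated averaging (FedAvg) sketch in NumPy -- illustrative only,
# not the specific FL+XAI pipeline discussed in the paper above.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)          # logistic-loss gradient
        w -= lr * grad
    return w

def fedavg(global_w, client_data, rounds=10):
    """Server loop: broadcast weights, collect local updates, average them."""
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        # Only model weights travel; raw data stays on the clients.
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(4)]
print(fedavg(np.zeros(3), clients))
```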
1 code implementation • 6 Nov 2024 • Felix Tempel, Espen Alexander F. Ihlen, Lars Adde, Inga Strümke
This perturbation makes it possible to judge, based on the SHAP values, whether the body key points are truly influential or non-influential.
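As a toy illustration of the perturbation idea (not the paper's implementation), the sketch below jitters one body key point at a time and measures how much a stand-in classifier's output changes; an influential point should move the output, and a faithful SHAP explanation should assign it a correspondingly large value. All names here are hypothetical.

```python
# Hedged sketch: jitter one body key point at a time and check how much a
# (hypothetical) activity classifier's output moves. Points whose perturbation
# barely changes the prediction should likewise get near-zero SHAP values.
import numpy as np

def perturbation_effect(model, skeleton, keypoint, sigma=0.05, trials=100):
    """Mean absolute change in model output when `keypoint` is jittered.

    skeleton: array of shape (num_keypoints, 3) with x, y, z coordinates.
    model:    any callable mapping a skeleton to a scalar class score.
    """
    rng = np.random.default_rng(0)
    base = model(skeleton)
    deltas = []
    for _ in range(trials):
        noisy = skeleton.copy()
        noisy[keypoint] += rng.normal(scale=sigma, size=3)  # jitter x, y, z
        deltas.append(abs(model(noisy) - base))
    return float(np.mean(deltas))

# Toy stand-in model: the score depends only on key point 0, so perturbing
# point 1 has (near) zero effect -- mirroring an "uninfluential" point.
toy_model = lambda s: s[0].sum()
skel = np.zeros((17, 3))
print(perturbation_effect(toy_model, skel, keypoint=0))  # clearly non-zero
print(perturbation_effect(toy_model, skel, keypoint=1))  # ~0
```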
no code implementations • 1 Oct 2024 • Marte Eggen, Inga Strümke
Navigation is a fundamental cognitive skill extensively studied in neuroscientific experiments and has lately gained substantial interest in artificial intelligence research.
1 code implementation • 30 Sep 2024 • Felix Tempel, Espen Alexander F. Ihlen, Inga Strümke
Our method performs better on a real-world CP dataset than other approaches in the field, which rely on large ensembles.
no code implementations • 20 Feb 2024 • Kimji N. Pellano, Inga Strümke, Espen Alexander F. Ihlen
The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human-computer interaction.
1 code implementation • 2 Feb 2024 • Felix Tempel, Inga Strümke, Espen Alexander F. Ihlen
This paper introduces AutoGCN, a generic Neural Architecture Search (NAS) algorithm for Human Activity Recognition (HAR) using Graph Convolution Networks (GCNs).
Ranked #61 on Skeleton Based Action Recognition on NTU RGB+D
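To make the NAS idea concrete, here is a deliberately simple random-search sketch over a GCN-style hyperparameter space. It is a stand-in for the general setting, not the actual AutoGCN search algorithm, and every name in it is made up.

```python
# Minimal random-search NAS sketch over a GCN-style search space -- a toy
# stand-in for the idea behind AutoGCN, not its actual algorithm.
import random

SEARCH_SPACE = {                       # hypothetical architecture choices
    "num_layers": [2, 4, 6],
    "hidden_dim": [64, 128, 256],
    "temporal_kernel": [3, 5, 9],
    "attention": [True, False],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stub for 'train on HAR data, return validation accuracy'."""
    score = 0.5 + 0.01 * arch["num_layers"] + (0.05 if arch["attention"] else 0)
    return score + random.gauss(0, 0.02)   # noise mimics training variance

best_arch, best_score = None, float("-inf")
for _ in range(20):                        # NAS loop: sample, evaluate, keep best
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, round(best_score, 3))
```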
no code implementations • 16 Dec 2023 • Inga Strümke, Helge Langseth
The diffusion model learns the data manifold to which the original and thus the reconstructed data samples belong, by training on a large number of data points.
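For readers unfamiliar with how such a model is trained, the sketch below shows the standard DDPM forward-noising process and noise-prediction loss that this family of models typically uses; the schedule values and the stub network are illustrative assumptions, not taken from the paper.

```python
# Sketch of the DDPM forward process and training objective: noise the data,
# train a model to predict the noise, so the reverse process can walk samples
# back onto the learned data manifold. The "network" here is a stub.
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)            # noise schedule (assumed values)
alphas_bar = np.cumprod(1.0 - betas)          # cumulative product \bar{alpha}_t

def q_sample(x0, t, rng):
    """Forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps
    return xt, eps

def noise_predictor(xt, t):
    """Stub for a trained eps-prediction network."""
    return np.zeros_like(xt)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 2))                  # a batch of 2-D data points
t = rng.integers(0, T)
xt, eps = q_sample(x0, t, rng)
loss = np.mean((noise_predictor(xt, t) - eps) ** 2)   # DDPM training loss
print(loss)
```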
1 code implementation • 18 Sep 2023 • Patrik Hammersborg, Inga Strümke
With large chess-playing neural network models like AlphaZero contesting the state of the art in computerised chess, two challenges present themselves: how to explain the domain knowledge internalised by such models, and the fact that such models are not made openly available.
1 code implementation • 24 Jul 2023 • Patrik Hammersborg, Inga Strümke
Neural network models are widely used in a variety of domains, often as black-box solutions, since they are not directly interpretable for humans.
no code implementations • 12 Jan 2023 • Inga Strümke, Marija Slavkovik, Clemens Stachl
Decisions such as which movie to watch next, which song to listen to, or which product to buy online, are increasingly influenced by recommender systems and user models that incorporate information on users' past behaviours, preferences, and digitally created content.
1 code implementation • 10 Nov 2022 • Patrik Hammersborg, Inga Strümke
Self-trained autonomous agents developed using machine learning are showing great promise in a variety of control settings, perhaps most remarkably in applications involving autonomous vehicles.
no code implementations • 9 May 2022 • Andrea M. Storås, Anders Åsberg, Pål Halvorsen, Michael A. Riegler, Inga Strümke
Tacrolimus is one of the cornerstone immunosuppressive drugs in most transplantation centers worldwide following solid organ transplantation.
1 code implementation • 23 Mar 2022 • Steven Hicks, Andrea Storås, Michael Riegler, Cise Midoglu, Malek Hammou, Thomas de Lange, Sravanthi Parasa, Pål Halvorsen, Inga Strümke
Deep learning has in recent years achieved immense success in all areas of computer vision and has the potential of assisting medical doctors in analyzing visual content for disease and other abnormalities.
no code implementations • 11 Mar 2022 • Christophe Grojean, Ayan Paul, Zhuoni Qian, Inga Strümke
Adding interpretability to multivariate methods creates a powerful synergy for exploring complex physical systems with higher-order correlations while bringing a degree of clarity to their underlying dynamics.
no code implementations • 1 Mar 2022 • Vilde B. Gjærum, Inga Strümke, Ole Andreas Alsos, Anastasios M. Lekkas
The main contributions of this work are (1) significantly improving both the accuracy and the build time of a greedy approach for building LMTs by introducing an ordering of features in the splitting of the tree; (2) characterising the seafarer/operator and the developer as two distinct end-users of the agent and recipients of the explanations; and (3) proposing one visualization of the docking agent, the environment, and the LMT's feature attributions for when the developer is the end-user, and another for when the seafarer or operator is, based on their different characteristics.
Explainable artificial intelligence • Reinforcement Learning (RL)
no code implementations • 1 Mar 2022 • Inga Strümke, Marija Slavkovik
But can we know that this is what a model does?
BIG-bench Machine Learning • Explainable artificial intelligence +2
no code implementations • 18 Jan 2022 • Tannista Banerjee, Ayan Paul, Vishak Srikanth, Inga Strümke
The analysis of causation is a challenging task that can be approached in various ways.
no code implementations • 4 Nov 2021 • Sindre Benjamin Remman, Inga Strümke, Anastasios M. Lekkas
This partial causal ordering defines the causal relations between the features, and we specify it using domain knowledge about the lever control task.
Explainable artificial intelligence • Reinforcement Learning (RL)
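A common way to encode such a partial causal ordering in Shapley-based attribution is to average marginal contributions only over feature permutations consistent with the ordering (as in asymmetric Shapley values). The sketch below does this for a toy three-feature game; the value function and the precedence constraint are invented for illustration and are not the paper's lever-control setup.

```python
# Hedged sketch: restrict Shapley attribution to feature permutations
# consistent with a known partial causal ordering (here the toy constraint
# "feature 0 precedes feature 2"). The value function is a stub.
from itertools import permutations

def value(coalition):
    """Stub for v(S): model payoff when only features in S are known."""
    return len(coalition) + (1.0 if {0, 2} <= set(coalition) else 0.0)

def respects_order(perm, precedence):
    return all(perm.index(a) < perm.index(b) for a, b in precedence)

def asymmetric_shapley(n_features, precedence):
    """Average marginal contributions over order-consistent permutations only."""
    phis = [0.0] * n_features
    valid = [p for p in permutations(range(n_features))
             if respects_order(p, precedence)]
    for perm in valid:
        seen = []
        for f in perm:
            phis[f] += value(seen + [f]) - value(seen)  # marginal contribution
            seen.append(f)
    return [phi / len(valid) for phi in phis]

print(asymmetric_shapley(3, precedence=[(0, 2)]))
```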
1 code implementation • 2 Sep 2021 • Pål Vegard Johnsen, Inga Strümke, Signe Riemer-Sørensen, Andrew Thomas DeWan, Mette Langaas
We build upon the recently published feature importance measure of SAGE (Shapley additive global importance) and introduce sub-SAGE which can be estimated without resampling for tree-based models.
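sub-SAGE itself is not sketched here; instead, the following naive Monte Carlo SAGE-style estimator makes explicit the resampling cost that sub-SAGE removes for tree-based models. The model and data are synthetic assumptions.

```python
# Not the paper's sub-SAGE estimator -- a naive Monte Carlo SAGE-style
# baseline that shows the repeated resampling sub-SAGE avoids for trees.
import numpy as np

def sage_estimate(model, X, y, n_perms=50, rng=None):
    """Shapley-weighted loss reduction per feature (squared-error loss).

    Unknown features are marginalised by resampling their values from X,
    which is exactly the sampling step sub-SAGE eliminates for tree models.
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    phi = np.zeros(d)
    for _ in range(n_perms):
        perm = rng.permutation(d)
        i = rng.integers(n)                       # one evaluation point
        x_mix = X[rng.integers(n)].copy()         # start: all features resampled
        prev_loss = (model(x_mix) - y[i]) ** 2
        for f in perm:                            # reveal features one by one
            x_mix[f] = X[i, f]
            loss = (model(x_mix) - y[i]) ** 2
            phi[f] += prev_loss - loss            # marginal loss reduction
            prev_loss = loss
    return phi / n_perms

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X[:, 0] * 2.0                                 # only feature 0 matters
print(sage_estimate(lambda x: 2.0 * x[0], X, y, rng=rng).round(2))
```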
no code implementations • 2 Sep 2021 • Andrea M. Storås, Inga Strümke, Michael A. Riegler, Jakob Grauslund, Hugo L. Hammer, Anis Yazidi, Pål Halvorsen, Kjell G. Gundersen, Tor P. Utheim, Catherine Jackson
Although the term 'AI' is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes.
1 code implementation • 6 Aug 2021 • Daniel Alvestad, Nikolai Fomin, Jörn Kersten, Steffen Maeland, Inga Strümke
We investigate enhancing the sensitivity of new physics searches at the LHC by machine learning in the case of background dominance and a high degree of overlap between the observables for signal and background.
no code implementations • 27 Jul 2021 • Inga Strümke, Marija Slavkovik, Vince I. Madai
While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI keeps growing, even though there is no shortage of ethical guidelines.
no code implementations • 22 Feb 2021 • Daniel Fryer, Inga Strümke, Hien Nguyen
The Shapley value has become popular in the Explainable AI (XAI) literature, thanks in large part to its solid theoretical foundation, including four "favourable and fair" axioms for attribution in transferable utility games.
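For concreteness, here is a worked toy computation of the Shapley value in a three-player transferable utility game by direct enumeration, with an assertion checking the efficiency axiom; the characteristic function is made up for illustration.

```python
# Worked toy example: Shapley values of a 3-player transferable utility game,
# computed by enumerating all coalitions. The game itself is invented.
from itertools import combinations
from math import factorial

players = [0, 1, 2]
v = {frozenset(): 0, frozenset({0}): 10, frozenset({1}): 20, frozenset({2}): 30,
     frozenset({0, 1}): 40, frozenset({0, 2}): 50, frozenset({1, 2}): 60,
     frozenset({0, 1, 2}): 90}

def shapley(i):
    n = len(players)
    total = 0.0
    others = [p for p in players if p != i]
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            S = frozenset(S)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v[S | {i}] - v[S])   # weighted marginal contribution
    return total

phis = [shapley(i) for i in players]
print(phis)                                          # [20.0, 30.0, 40.0]
assert abs(sum(phis) - v[frozenset(players)]) < 1e-9  # efficiency axiom holds
```

The printed attributions sum to v(N) = 90, which is exactly the efficiency axiom the asserted check verifies.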
no code implementations • 12 Jul 2020 • Daniel Vidali Fryer, Inga Strümke, Hien Nguyen
To the best of our knowledge, all existing game formulations in the machine learning and statistics literature fall into a category which we name the model-dependent category of game formulations.