no code implementations • ECCV 2020 • Van Nhan Nguyen, Sigurd Løkse, Kristoffer Wickstrøm, Michael Kampffmeyer, Davide Roverso, Robert Jenssen
In this paper, we equip Prototypical Networks (PNs) with a novel dissimilarity measure to enable discriminative feature normalization for few-shot learning.
1 code implementation • 7 Dec 2024 • Kristoffer Wickstrøm, Marina Marie-Claire Höhne, Anna Hedström
The lack of ground truth explanation labels is a fundamental challenge for quantitative evaluation in explainable artificial intelligence (XAI).
1 code implementation • 18 May 2022 • Kristoffer Wickstrøm, J. Emmanuel Johnson, Sigurd Løkse, Gustau Camps-Valls, Karl Øyvind Mikalsen, Michael Kampffmeyer, Robert Jenssen
Our proposed kernelized Taylor diagram is capable of visualizing similarities between populations with minimal assumptions of the data distributions.
1 code implementation • 17 Mar 2022 • Kristoffer Wickstrøm, Michael Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen
The lack of labeled data is a key challenge for learning useful representations from time series data.
1 code implementation • 16 Oct 2020 • Kristoffer Wickstrøm, Karl Øyvind Mikalsen, Michael Kampffmeyer, Arthur Revhaug, Robert Jenssen
Uncertainty in the relevance scores is quantified as the standard deviation of the scores produced by the individual models in the ensemble, and this uncertainty estimate is used to make the explanations more reliable.
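The ensemble aggregation described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: it assumes the per-model relevance scores have already been computed (e.g., by a gradient-based attribution method run on each ensemble member) and are stacked into an array with one row per model.

```python
import numpy as np

def ensemble_relevance_uncertainty(relevances):
    """Aggregate per-model relevance scores into a consensus explanation
    plus a per-feature uncertainty estimate.

    relevances : array of shape (n_models, n_features), one row of
                 relevance scores per ensemble member (hypothetical layout).
    """
    relevances = np.asarray(relevances, dtype=float)
    mean_relevance = relevances.mean(axis=0)  # consensus explanation
    uncertainty = relevances.std(axis=0)      # disagreement across the ensemble
    return mean_relevance, uncertainty

# Toy example: 5 ensemble members scoring 4 input features.
rng = np.random.default_rng(0)
scores = rng.normal(loc=[1.0, 0.0, -1.0, 2.0], scale=0.1, size=(5, 4))
mean_rel, unc = ensemble_relevance_uncertainty(scores)
```

Features where the models disagree get a large standard deviation, flagging those parts of the explanation as less trustworthy.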
no code implementations • 25 Sep 2019 • Kristoffer Wickstrøm, Sigurd Løkse, Michael Kampffmeyer, Shujian Yu, Jose Principe, Robert Jenssen
In this paper, we propose an IP analysis using the new matrix-based Rényi entropy coupled with tensor kernels over convolutional layers, leveraging the power of kernel methods to represent properties of the probability distribution independently of the dimensionality of the data.
no code implementations • 18 Apr 2018 • Shujian Yu, Kristoffer Wickstrøm, Robert Jenssen, Jose C. Principe
The matrix-based Rényi α-entropy functional and its multivariate extension were recently developed in terms of the normalized eigenspectrum of a Hermitian matrix of the projected data in a reproducing kernel Hilbert space (RKHS).
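The functional can be sketched directly from that description: build a Gram matrix, normalize it by its trace so the eigenvalues form a probability-like spectrum, and apply the Rényi formula to that spectrum. The Gaussian kernel width `sigma` below is a hypothetical choice for illustration, not a prescription from the paper.

```python
import numpy as np

def matrix_based_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Rényi alpha-entropy of a sample X of shape (n, d),
    computed from the normalized eigenspectrum of a Gaussian Gram matrix.
    S_alpha(A) = log2(sum_i lambda_i(A)^alpha) / (1 - alpha).
    """
    X = np.asarray(X, dtype=float)
    # Pairwise squared Euclidean distances and Gaussian kernel Gram matrix.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    # Normalize by the trace so the eigenvalues are nonnegative and sum to 1.
    A = K / np.trace(K)
    # Hermitian matrix -> real eigenspectrum; clip tiny negative noise.
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)
```

As a sanity check, n identical points give entropy 0 (a rank-one spectrum), while n well-separated points give the maximum value log2(n), since the normalized Gram matrix approaches (1/n)·I.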