Search Results for author: Ahcène Boubekki

Found 4 papers, 3 papers with code

Pantypes: Diverse Representatives for Self-Explainable Models

1 code implementation · 14 Mar 2024 · Rune Kjærsgaard, Ahcène Boubekki, Line Clemmensen

Prototypical self-explainable classifiers have emerged to meet the growing demand for interpretable AI systems.

Explainable Models · Fairness

RELAX: Representation Learning Explainability

1 code implementation · 19 Dec 2021 · Kristoffer K. Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen

Our approach can also model the uncertainty in its explanations, which is essential to produce trustworthy explanations.

Representation Learning
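
The abstract hints at the core mechanism: explain a learned representation by occluding parts of the input and measuring how much the representation changes, aggregating over many random masks so that both an importance map and an uncertainty map fall out. Below is a minimal sketch of that masking idea, assuming a generic `encoder` callable, Bernoulli masks, and cosine similarity as the agreement score; these choices are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def masked_explanation(x, encoder, n_masks=200, keep_prob=0.5, seed=0):
    """Masking-based importance and uncertainty maps for a representation.

    x       : (H, W) single-channel input.
    encoder : callable mapping an (H, W) array to a 1-D representation.
    Returns a per-pixel importance map and a per-pixel variance map.
    """
    rng = np.random.default_rng(seed)
    h_full = encoder(x)
    h_full = h_full / (np.linalg.norm(h_full) + 1e-12)

    importance = np.zeros_like(x, dtype=float)
    second_moment = np.zeros_like(x, dtype=float)
    mask_sum = np.zeros_like(x, dtype=float)

    for _ in range(n_masks):
        mask = (rng.random(x.shape) < keep_prob).astype(float)
        h_masked = encoder(x * mask)
        h_masked = h_masked / (np.linalg.norm(h_masked) + 1e-12)
        sim = float(h_full @ h_masked)            # cosine similarity of representations
        importance += sim * mask
        second_moment += (sim ** 2) * mask
        mask_sum += mask

    mask_sum = np.maximum(mask_sum, 1.0)
    mean = importance / mask_sum                  # importance: average agreement when a pixel is kept
    var = second_moment / mask_sum - mean ** 2    # uncertainty: variance of that agreement
    return mean, var

# Toy usage with a fixed random linear map standing in for a trained encoder.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(size=(32, 64))                 # projects flattened 8x8 inputs to 32 dims
    encoder = lambda img: W @ img.reshape(-1)
    x = rng.random((8, 8))
    imp, unc = masked_explanation(x, encoder)
    print(imp.shape, unc.shape)                   # (8, 8) (8, 8)
```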

Joint Optimization of an Autoencoder for Clustering and Embedding

1 code implementation · 7 Dec 2020 · Ahcène Boubekki, Michael Kampffmeyer, Robert Jenssen, Ulf Brefeld

This simple neural network, referred to as the clustering module, can be integrated into a deep autoencoder, resulting in a deep clustering model able to jointly learn a clustering and an embedding.

Clustering · Deep Clustering
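
As a rough illustration of a clustering head trained jointly with an autoencoder, here is a minimal PyTorch sketch: soft cluster assignments are computed from distances between the bottleneck embedding and learnable centroids, and a weighted clustering penalty is added to the reconstruction loss. The layer sizes, the distance-based assignment, and the loss weighting are assumptions for illustration, not the paper's clustering module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusteringAutoencoder(nn.Module):
    """Autoencoder with a clustering head on the bottleneck (sketch)."""

    def __init__(self, in_dim=784, embed_dim=10, n_clusters=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )
        # Learnable cluster centroids living in the embedding space.
        self.centroids = nn.Parameter(torch.randn(n_clusters, embed_dim))

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        # Soft assignments from (negative squared) distances to the centroids.
        dists = torch.cdist(z, self.centroids) ** 2
        assign = F.softmax(-dists, dim=1)
        return x_hat, z, assign

def joint_loss(x, x_hat, z, assign, centroids, alpha=0.1):
    recon = F.mse_loss(x_hat, x)
    # Pull each embedding toward its softly assigned centroid mixture.
    target = assign @ centroids
    cluster = F.mse_loss(z, target)
    return recon + alpha * cluster

# Toy training step on random data.
model = ClusteringAutoencoder(in_dim=20, embed_dim=3, n_clusters=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 20)
opt.zero_grad()
x_hat, z, assign = model(x)
loss = joint_loss(x, x_hat, z, assign, model.centroids)
loss.backward()
opt.step()
```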

Frame-based Data Factorizations

no code implementations · ICML 2017 · Sebastian Mair, Ahcène Boubekki, Ulf Brefeld

Archetypal Analysis is the method of choice to compute interpretable matrix factorizations.
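
Frame-based factorization builds on the fact that archetypes are convex combinations of data points and lie on the boundary of the data's convex hull (the frame), so the search can be restricted to hull points. The sketch below illustrates that idea under several stated assumptions: scipy's ConvexHull (exact only in low dimension), a furthest-point heuristic for choosing archetypes among frame points, and an NNLS trick for the convex coding step; the paper's actual algorithm differs.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.optimize import nnls

def frame_restricted_archetypes(X, n_archetypes=4, seed=0):
    """Pick archetypes on the frame (convex hull) and encode the data.

    X : (n, d) data matrix with small d.
    Returns archetypes Z and a row-stochastic matrix A with X ~= A @ Z.
    """
    rng = np.random.default_rng(seed)
    # 1) The frame: vertices of the convex hull of the data.
    frame = X[ConvexHull(X).vertices]

    # 2) Greedy furthest-point selection of archetypes among frame points
    #    (a simple heuristic, not the paper's selection procedure).
    chosen = [int(rng.integers(len(frame)))]
    for _ in range(n_archetypes - 1):
        d = np.min(
            np.linalg.norm(frame[:, None] - frame[chosen][None], axis=2), axis=1
        )
        chosen.append(int(np.argmax(d)))
    Z = frame[chosen]                                   # (k, d) archetypes

    # 3) Encode each point as a convex combination of archetypes:
    #    nonnegative least squares with a soft sum-to-one constraint row.
    big = 1e3
    M = np.vstack([Z.T, big * np.ones(len(Z))])
    A = np.zeros((len(X), len(Z)))
    for i, x in enumerate(X):
        y = np.concatenate([x, [big]])
        a, _ = nnls(M, y)
        A[i] = a / max(a.sum(), 1e-12)
    return Z, A

# Toy usage on 2-D data.
X = np.random.default_rng(1).random((200, 2))
Z, A = frame_restricted_archetypes(X, n_archetypes=4)
print(Z.shape, A.shape, np.abs(X - A @ Z).mean())
```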
