[Re] Explaining Groups of Points in Low-Dimensional Representations

Scope of Reproducibility
In this paper we present an analysis and elaboration of [1], in which Plumb et al. propose an algorithm that explains differences between groups of points occurring in a low-dimensional representation of input data, expressed in terms of given explainable features of that input data.
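To make the problem setting concrete, the following is a minimal sketch of one naive way to produce such a group-difference explanation: the difference between group means over interpretable input features, thresholded to keep only the few largest-magnitude features. This is purely an illustrative baseline, not the algorithm proposed by Plumb et al.; all names and the toy data below are hypothetical.

```python
import numpy as np

# Toy data: two groups of points with 5 interpretable input features.
# By construction the groups differ only in features 0 and 3, so a good
# explanation should single out exactly those features.
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.1, size=(50, 5))
group_b = rng.normal(0.0, 0.1, size=(50, 5))
group_b[:, 0] += 1.0   # group b is shifted along feature 0 ...
group_b[:, 3] -= 0.5   # ... and along feature 3

def sparse_mean_difference(a, b, k=2):
    """Explain group b relative to group a as a k-sparse translation of
    the input features (difference-between-means baseline, hypothetical)."""
    delta = b.mean(axis=0) - a.mean(axis=0)
    keep = np.argsort(np.abs(delta))[-k:]   # indices of the k largest shifts
    explanation = np.zeros_like(delta)
    explanation[keep] = delta[keep]
    return explanation

print(sparse_mean_difference(group_a, group_b))
```

On this toy example the baseline recovers a sparse translation that is nonzero only on features 0 and 3; the actual algorithm in [1] instead searches for such translations so that they remain consistent across groups under the learned dimensionality reduction.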

Methodology
We have upgraded the original code provided by the authors so that it is compatible with recent versions of popular deep learning frameworks, namely TensorFlow 2.x and PyTorch 1.7.x. Furthermore, we have created our own implementation of the algorithm, into which we have incorporated additional experiments to evaluate the algorithmʼs relevance across different dimensionality reduction techniques and differently structured data. We have performed the same experiments as described in the original paper using both the upgraded version of the code and our own implementation, taking the authorsʼ code and paper as references.

Results
The results presented in [1] were reproducible, both with the provided code and with our own implementation. Our additional experiments highlighted several limitations of the explanatory algorithm in question: it depends heavily on the shape and variance of the clusters present in the data (and, if applicable, on the method used to label these clusters), and highly non-linear dimensionality reduction algorithms perform worse in terms of explainability.

What was easy
The authors have provided an implementation that cleanly separates the different experiments on different datasets from the core methodology. Given a working environment, it is easy to reproduce the experiments performed in [1].

What was difficult
Minor difficulties were experienced in setting up the required environment for running the code provided by Plumb et al. locally (i.e. trivial changes in the code, such as the usage of absolute paths, and obtaining external dependencies). More substantially, it was time-consuming to rewrite all corresponding code, including the architecture for the variational auto-encoder provided by an external package, scvis 0.1.02.

Communication with original authors
No communication with the original authors was required to reproduce their work.
