Search Results for author: Anna Saranti

Found 6 papers, 4 papers with code

Be Careful When Evaluating Explanations Regarding Ground Truth

1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek

Evaluating explanations of image classifiers against ground truth, e.g., segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.

Explaining and visualizing black-box models through counterfactual paths

1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek

Explainable AI (XAI) is an increasingly important area of machine learning research that aims to make black-box models transparent and interpretable.

Tasks: counterfactual · Explainable Artificial Intelligence (XAI) +2

Graph-guided random forest for gene set selection

1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger

To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.

KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence

no code implementations • 28 Feb 2021 • Andreas Holzinger, Anna Saranti, Heimo Mueller

Machine intelligence is very successful at standard recognition tasks when high-quality training data is available.
