Search Results for author: Przemyslaw Biecek

Found 35 papers, 27 papers with code

Red-Teaming Segment Anything Model

1 code implementation 2 Apr 2024 Krzysztof Jankowski, Bartlomiej Sobieski, Mateusz Kwiatkowski, Jakub Szulc, Michal Janik, Hubert Baniecki, Przemyslaw Biecek

Foundation models have emerged as pivotal tools, tackling many complex tasks through pre-training on vast datasets and subsequent fine-tuning for specific applications.

Image Segmentation · Segmentation +2

Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI

no code implementations 12 Mar 2024 Vladimir Zaigrajew, Hubert Baniecki, Lukasz Tulczyjew, Agata M. Wijata, Jakub Nalepa, Nicolas Longépé, Przemyslaw Biecek

Remote sensing (RS) applications in the space domain demand machine learning (ML) models that are reliable, robust, and quality-assured, making red teaming a vital approach for identifying and exposing potential flaws and biases.

Hyperspectral image analysis · HYPERVIEW Challenge

Be Careful When Evaluating Explanations Regarding Ground Truth

1 code implementation 8 Nov 2023 Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek

Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.
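
To make the point concrete, here is a minimal sketch (not the metric used in the paper; the function name and the mass-based formulation are illustrative assumptions) of a localization score that compares a saliency map against a ground-truth mask. If the classifier itself relies on the wrong regions, even a faithful explanation of that classifier scores poorly, so the number reflects the model at least as much as the explanation method.

```python
import numpy as np

def mass_inside_mask(saliency: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of absolute attribution mass falling inside the ground-truth mask.

    saliency : (H, W) attribution map produced by an explanation method
    mask     : (H, W) binary ground-truth segmentation mask
    """
    s = np.abs(saliency)
    return float(s[mask.astype(bool)].sum() / (s.sum() + 1e-12))

# A faithful explanation of a model that attends to the background will place
# most of its mass outside the mask, so a low score can indicate a weak model
# rather than a weak explanation method.
```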

Glocal Explanations of Expected Goal Models in Soccer

1 code implementation 29 Aug 2023 Mustafa Cavus, Adrian Stando, Przemyslaw Biecek

This paper introduces glocal explanations (between the local and global levels) of expected goal models, enabling performance analysis at the team and player levels by proposing aggregated versions of SHAP values and partial dependence profiles.

Descriptive · Explainable artificial intelligence
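
A minimal sketch of the aggregation idea: average per-shot SHAP values within groups of shots to obtain a player-level (or team-level) view. The column name "player", the tree-based xG classifier, and the use of the shap package are assumptions for illustration, not the paper's exact setup; for some model types shap_values returns one array per class rather than a single matrix.

```python
import pandas as pd
import shap  # local SHAP attributions

def player_level_shap(model, shots: pd.DataFrame, features: list) -> pd.DataFrame:
    """Aggregate per-shot SHAP values of an xG model to the player level.

    model    : fitted tree-based classifier predicting goal probability (xG)
    shots    : one row per shot; feature columns plus a "player" column
    features : names of the model's feature columns
    """
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(shots[features])  # (n_shots, n_features) for a single-output model
    sv = pd.DataFrame(sv, columns=features, index=shots.index)
    # Averaging local attributions within a group gives the "glocal" view:
    # more specific than a global summary, more stable than a single shot.
    return sv.groupby(shots["player"]).mean()
```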

Explaining and visualizing black-box models through counterfactual paths

1 code implementation 15 Jul 2023 Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek

Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable.

counterfactual · Explainable Artificial Intelligence (XAI) +2

Adversarial attacks and defenses in explainable artificial intelligence: A survey

1 code implementation 6 Jun 2023 Hubert Baniecki, Przemyslaw Biecek

Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions.

Decision Making · Explainable artificial intelligence +2

Interpretable machine learning for time-to-event prediction in medicine and healthcare

3 code implementations 17 Mar 2023 Hubert Baniecki, Bartlomiej Sobieski, Patryk Szatkowski, Przemyslaw Bombinski, Przemyslaw Biecek

Time-to-event prediction, e.g. cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.

Decision Making · Feature Importance +4

Performance is not enough: the story told by a Rashomon quartet

1 code implementation 26 Feb 2023 Przemyslaw Biecek, Hubert Baniecki, Mateusz Krzyzinski, Dianne Cook

The usual goal of supervised learning is to find the best model, the one that optimizes a particular performance measure.

Performance, Opaqueness, Consequences, and Assumptions: Simple questions for responsible planning of machine learning solutions

no code implementations 21 Aug 2022 Przemyslaw Biecek

With the help of the POCA method, preliminary requirements can be defined for the model-building process, so that costly model misspecification errors can be identified as soon as possible or even avoided.

Math

Exploring Local Explanations of Nonlinear Models Using Animated Linear Projections

no code implementations 11 May 2022 Nicholas Spyrison, Dianne Cook, Przemyslaw Biecek

To understand how the interaction between predictors affects the variable importance estimate, we can convert LVAs into linear projections and use the radial tour.

Explainable Artificial Intelligence (XAI) · Feature Importance
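
A tiny Python analogue of the first step only (the radial tour itself lives in the R packages tourr and spinifex, which are not reproduced here): a local variable attribution (LVA), read as a direction in feature space, defines a 1-D linear projection that a tour can then rotate variable by variable. Function and argument names are illustrative.

```python
import numpy as np

def attribution_projection(X: np.ndarray, attribution: np.ndarray) -> np.ndarray:
    """Project data onto the normalized local attribution vector.

    X           : (n_obs, n_features) data matrix
    attribution : (n_features,) local variable attribution for one observation
    """
    basis = attribution / np.linalg.norm(attribution)  # LVA as a projection basis
    return X @ basis  # 1-D projection; a radial tour would vary this basis
```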

Graph-guided random forest for gene set selection

1 code implementation 26 Aug 2021 Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger

To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.

Fooling Partial Dependence via Data Poisoning

1 code implementation 26 May 2021 Hubert Baniecki, Wojciech Kretowicz, Przemyslaw Biecek

We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.

Data Poisoning
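
A toy sketch of the underlying idea, hedged heavily: evolve small perturbations of the data used to compute the explanation, with the model left untouched, so that the empirical partial dependence curve drifts toward an attacker-chosen target. This is a mutation-only evolutionary search, not the paper's genetic algorithm (no crossover, no constraints); all names and hyperparameters are illustrative.

```python
import numpy as np

def partial_dependence(model, X, j, grid):
    """Empirical partial dependence of feature j evaluated on a fixed grid."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        pd_vals.append(model.predict(Xv).mean())
    return np.array(pd_vals)

def fool_pd(model, X, j, target_pd, grid, n_iter=200, pop_size=30, sigma=0.05, seed=0):
    """Perturb the explanation data so that PD of feature j approaches target_pd."""
    rng = np.random.default_rng(seed)
    others = [k for k in range(X.shape[1]) if k != j]  # the explained feature stays fixed

    def fitness(Xp):
        return -np.linalg.norm(partial_dependence(model, Xp, j, grid) - target_pd)

    population = [X.copy() for _ in range(pop_size)]
    for _ in range(n_iter):
        offspring = []
        for Xp in population:
            child = Xp.copy()
            child[:, others] += sigma * rng.standard_normal(child[:, others].shape)
            offspring.append(child)
        # keep the datasets whose PD curve is closest to the target
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return population[0]
```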

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python

1 code implementation 28 Dec 2020 Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek

The increasing amount of available data, computing power, and the constant pursuit of higher performance result in the growing complexity of predictive models.

BIG-bench Machine Learning · Fairness
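
A minimal usage sketch of the Python package, assuming a scikit-learn-style estimator; the dataset and model below are placeholders, and the calls shown (Explainer, model_performance, model_parts, predict_parts) are the package's basic entry points as documented.

```python
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data and model; any scikit-learn-compatible estimator works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap the model once, then request explanations from the same object.
exp = dx.Explainer(model, X, y, label="random forest")
exp.model_performance()           # global performance summary
exp.model_parts()                 # permutation-based variable importance
exp.predict_parts(X.iloc[[0]])    # local attributions for a single observation
```

The package also exposes fairness diagnostics for models with a protected attribute; they are omitted here because this placeholder dataset has none.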

Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring

1 code implementation 28 Sep 2020 Michael Bücker, Gero Szepannek, Alicja Gosiewska, Przemyslaw Biecek

This paper works out different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable.

BIG-bench Machine Learning

The Grammar of Interactive Explanatory Model Analysis

1 code implementation 1 May 2020 Hubert Baniecki, Dariusz Parzych, Przemyslaw Biecek

We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model increases the performance and confidence of human decision making.

BIG-bench Machine Learning · Decision Making +1

Named Entity Recognition - Is there a glass ceiling?

1 code implementation 6 Oct 2019 Tomasz Stanislawek, Anna Wróblewska, Alicja Wójcicka, Daniel Ziembicki, Przemyslaw Biecek

A new enriched semantic annotation of errors for this data set and new diagnostic data sets are attached in the supplementary materials.

named-entity-recognition · Named Entity Recognition +1

EPP: interpretable score of model predictive power

2 code implementations 24 Aug 2019 Alicja Gosiewska, Mateusz Bakala, Katarzyna Woznica, Maciej Zwolinski, Przemyslaw Biecek

Second, for k-fold cross-validation, model performance is in most cases calculated as an average over the individual folds, which neglects the information about how stable the performance is across folds.

Binary Classification · General Classification +1
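
To make the fold-stability point concrete, a short sketch that reports the across-fold spread alongside the usual average; this only illustrates the issue the sentence raises, it is not the EPP score itself, and the dataset, model, and metric are placeholders.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y,
                         cv=10, scoring="roc_auc")

# Reporting only the mean hides how much the performance varies between folds.
print(f"mean AUC: {scores.mean():.3f}")
print(f"fold-to-fold std: {scores.std():.3f}  (min {scores.min():.3f}, max {scores.max():.3f})")
```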

Model Development Process

1 code implementation 9 Jul 2019 Przemyslaw Biecek

Predictive modeling has an increasing number of applications in various fields.

The Landscape of R Packages for Automated Exploratory Data Analysis

1 code implementation 27 Mar 2019 Mateusz Staniak, Przemyslaw Biecek

The increasing availability of large but noisy data sets with many heterogeneous variables leads to growing interest in automating common data analysis tasks.

Feature Engineering

Do Not Trust Additive Explanations

2 code implementations 27 Mar 2019 Alicja Gosiewska, Przemyslaw Biecek

Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.

Additive models · BIG-bench Machine Learning +2

SAFE ML: Surrogate Assisted Feature Extraction for Model Learning

4 code implementations 28 Feb 2019 Alicja Gosiewska, Aleksandra Gacek, Piotr Lubon, Przemyslaw Biecek

Complex black-box predictive models may have high accuracy, but their opacity causes problems such as lack of trust, lack of stability, and sensitivity to concept drift.

AutoML · Feature Engineering
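
A rough sketch of the surrogate-assisted idea for a single numeric feature, hedged as an illustration rather than the SAFE ML algorithm: fit a flexible surrogate model, trace its partial dependence along the feature, treat large jumps of the curve as changepoints, and use them as bin edges for a simpler interpretable model. The surrogate choice, grid size, and jump heuristic are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def safe_style_bins(X, y, j, n_grid=50):
    """Derive bin edges for feature j from a surrogate model's partial dependence."""
    surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)
    grid = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        pd_vals.append(surrogate.predict(Xv).mean())
    jumps = np.abs(np.diff(pd_vals))
    edges = grid[1:][jumps > jumps.mean() + 2 * jumps.std()]  # heuristic changepoints
    # Discretized feature, ready for e.g. a linear or logistic model.
    return np.digitize(X[:, j], edges), edges
```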

How much should you ask? On the question structure in QA systems.

no code implementations WS 2018 Barbara Rychalska, Dominika Basaj, Anna Wróblewska, Przemyslaw Biecek

Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in natural language.

Question Answering · valid

auditor: an R Package for Model-Agnostic Visual Validation and Diagnostics

4 code implementations 19 Sep 2018 Alicja Gosiewska, Przemyslaw Biecek

With modern software it is easy to train even a complex model that fits the training data and results in high accuracy on the test set.

How much should you ask? On the question structure in QA systems

no code implementations 11 Sep 2018 Dominika Basaj, Barbara Rychalska, Przemyslaw Biecek, Anna Wroblewska

Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in natural language.

Question Answering · valid

Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System

no code implementations WS 2018 Barbara Rychalska, Dominika Basaj, Przemyslaw Biecek, Anna Wroblewska

In this paper we present the results of an investigation of the importance of verbs in a deep learning QA system trained on SQuAD dataset.

DALEX: explainers for complex predictive models

1 code implementation 23 Jun 2018 Przemyslaw Biecek

The presented explainers are implemented in the DALEX package for R. They are based on a uniform, standardized grammar of model exploration that can be easily extended.

The Merging Path Plot: adaptive fusing of k-groups with likelihood-based model selection

2 code implementations 13 Sep 2017 Agnieszka Sitko, Przemyslaw Biecek

In this article, we introduce the Merging Path Plot, a methodology, and factorMerger, an R package, for the exploration and visualization of k-group dissimilarities.

Model Selection
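
A hedged Python sketch of the merging idea for a Gaussian response only: repeatedly fuse the pair of groups whose fusion loses the least log-likelihood and record the resulting path of partitions. The factorMerger package itself supports further response families and likelihood-based stopping rules, which this illustration omits; all names below are assumptions.

```python
import numpy as np
from itertools import combinations

def gaussian_loglik(groups):
    """Log-likelihood of a one-way Gaussian model: one mean per group, common variance."""
    values = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in values)
    rss = sum(((g - g.mean()) ** 2).sum() for g in values)
    return -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)

def merging_path(groups):
    """Greedily fuse the pair of groups whose merge costs the least log-likelihood.

    groups : dict mapping group label -> list of response values
    returns: list of (partition, log-likelihood) pairs along the merging path
    """
    groups = {k: list(v) for k, v in groups.items()}
    path = [(dict(groups), gaussian_loglik(groups.values()))]
    while len(groups) > 1:
        best = None
        for a, b in combinations(groups, 2):
            candidate = {k: v for k, v in groups.items() if k not in (a, b)}
            candidate[f"{a}+{b}"] = groups[a] + groups[b]
            ll = gaussian_loglik(candidate.values())
            if best is None or ll > best[1]:
                best = (candidate, ll)
        groups, ll = best
        path.append((dict(groups), ll))
    return path
```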
