Search Results for author: Alicja Gosiewska

Found 9 papers, 8 papers with code

Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models

no code implementations • 14 Apr 2021 • Przemysław Biecek, Marcin Chlebus, Janusz Gajda, Alicja Gosiewska, Anna Kozak, Dominik Ogonowski, Jakub Sztachelski, Piotr Wojewnik

Even more importantly, we also show how to boost advanced models with techniques that make them interpretable and more accessible to credit risk practitioners, resolving a crucial obstacle to the widespread deployment of more complex 'black box' models such as random forests, gradient boosted trees, or extreme gradient boosted trees.

Explainable artificial intelligence • regression

Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring

1 code implementation • 28 Sep 2020 • Michael Bücker, Gero Szepannek, Alicja Gosiewska, Przemyslaw Biecek

This paper works out different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable.

BIG-bench Machine Learning

Landscape of R packages for eXplainable Artificial Intelligence

1 code implementation • 24 Sep 2020 • Szymon Maksymiuk, Alicja Gosiewska, Przemyslaw Biecek

The growing availability of data and computing power fuels the development of predictive models.

Explainable artificial intelligence

Interpretable Meta-Measure for Model Performance

3 code implementations • 2 Jun 2020 • Alicja Gosiewska, Katarzyna Woźnica, Przemysław Biecek

For example, the difference in performance between two models has no probabilistic interpretation, there is no reference point to indicate whether it represents a significant improvement, and it makes no sense to compare such differences between data sets.
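As a rough illustration of the kind of probabilistic reference point this line of work builds, the sketch below simulates an Elo-style rating in which the probability that one model beats another on a random fold is assumed to follow a logistic function of their rating difference. The names `true_epp` and `wins`, and the simple gradient-ascent fit, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
true_epp = np.array([0.0, 0.5, 1.5])   # hypothetical latent skills of 3 models
n_models, n_rounds = 3, 2000

# wins[i, j] counts how often model i beat model j; games[i, j] is symmetric.
wins = np.zeros((n_models, n_models))
games = np.zeros((n_models, n_models))
for _ in range(n_rounds):
    i, j = rng.choice(n_models, size=2, replace=False)
    games[i, j] += 1
    games[j, i] += 1
    if rng.random() < sigmoid(true_epp[i] - true_epp[j]):
        wins[i, j] += 1
    else:
        wins[j, i] += 1

# Recover ratings by gradient ascent on the logistic (Bradley-Terry) likelihood.
epp = np.zeros(n_models)
for _ in range(500):
    grad = np.zeros(n_models)
    for i in range(n_models):
        for j in range(n_models):
            if i != j and games[i, j] > 0:
                grad[i] += wins[i, j] - games[i, j] * sigmoid(epp[i] - epp[j])
    epp += 0.001 * grad
epp -= epp[0]   # fix the first model as the reference point
print(np.round(epp, 2))
```

With such ratings, a difference between two models maps directly to a win probability via the logistic function, which is exactly the interpretation a raw AUC difference lacks.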


EPP: interpretable score of model predictive power

2 code implementations • 24 Aug 2019 • Alicja Gosiewska, Mateusz Bakala, Katarzyna Woznica, Maciej Zwolinski, Przemyslaw Biecek

Second, for k-fold cross-validation the model performance is in most cases calculated as an average over the folds, which discards information about how stable the performance is across folds.

General Classification • Model Selection
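The point about per-fold stability can be sketched with scikit-learn; the data set and model below are placeholders for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data and an arbitrary model choice.
X, y = make_classification(n_samples=500, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")

# The usual summary keeps only the mean ...
mean_auc = scores.mean()
# ... while the per-fold spread carries the stability information.
std_auc = scores.std()
print(f"mean AUC = {mean_auc:.3f}, fold std = {std_auc:.3f}")
```

Two models with the same mean AUC can have very different fold standard deviations, which is the information the averaging step throws away.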

Do Not Trust Additive Explanations

2 code implementations • 27 Mar 2019 • Alicja Gosiewska, Przemyslaw Biecek

Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.

Additive models • BIG-bench Machine Learning • +1

SAFE ML: Surrogate Assisted Feature Extraction for Model Learning

4 code implementations • 28 Feb 2019 • Alicja Gosiewska, Aleksandra Gacek, Piotr Lubon, Przemyslaw Biecek

Complex black-box predictive models may have high accuracy, but their opacity causes problems such as lack of trust, lack of stability, and sensitivity to concept drift.

AutoML • Feature Engineering
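A heavily simplified sketch of the surrogate-assisted idea: probe a black-box model's response along one feature, use a shallow tree on that response curve to find breakpoints, and hand the binned feature to an interpretable model. This illustrates the flavor of the approach only; it is not the SAFE package or its API.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data and an arbitrary black-box surrogate source.
X, y = make_regression(n_samples=500, n_features=5, n_informative=5,
                       random_state=0)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Probe the black box along feature 0, holding other features at their means.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 200)
probe = np.tile(X.mean(axis=0), (200, 1))
probe[:, 0] = grid
response = black_box.predict(probe)

# A shallow tree fitted to the response curve yields candidate breakpoints
# (leaf nodes carry the sentinel threshold -2 and are filtered out).
stump = DecisionTreeRegressor(max_depth=2).fit(grid.reshape(-1, 1), response)
thresholds = sorted(t for t in stump.tree_.threshold if t != -2)

# Replace the raw feature with its bin index and refit a simple model.
binned = np.digitize(X[:, 0], thresholds)
X_simple = np.column_stack([binned, X[:, 1:]])
simple = LinearRegression().fit(X_simple, y)
print("breakpoints:", np.round(thresholds, 2))
```

The interpretable model now operates on bins whose boundaries were chosen by the black box, which is the sense in which the feature extraction is "surrogate assisted."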

auditor: an R Package for Model-Agnostic Visual Validation and Diagnostics

4 code implementations • 19 Sep 2018 • Alicja Gosiewska, Przemyslaw Biecek

With modern software it is easy to train even a complex model that fits the training data and results in high accuracy on the test set.
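The kind of check such model-agnostic validation automates can be sketched as follows; note this is a Python stand-in for illustration (the auditor package itself is in R, and the names below are not its API):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data and an arbitrary model choice.
X, y = make_regression(n_samples=400, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Model-agnostic diagnostics need only predictions, not model internals.
residuals = y_test - model.predict(X_test)

# High test accuracy can coexist with structured residuals; inspect both.
print("RMSE:", np.sqrt(np.mean(residuals ** 2)))
print("residual mean:", residuals.mean())
```

Plotting residuals against predictions or individual features would then reveal systematic structure that a single accuracy number hides.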
