2 code implementations • 13 Sep 2017 • Agnieszka Sitko, Przemyslaw Biecek
In this article, we introduce the Merging Path Plot, a methodology, and factorMerger, an R package, for the exploration and visualization of k-group dissimilarities.
4 code implementations • 5 Apr 2018 • Mateusz Staniak, Przemyslaw Biecek
Complex models are commonly used in predictive modeling.
1 code implementation • 23 Jun 2018 • Przemyslaw Biecek
The presented explainers are implemented in the DALEX package for R. They are based on a uniform, standardized grammar of model exploration that can be easily extended.
no code implementations • 11 Sep 2018 • Dominika Basaj, Barbara Rychalska, Przemyslaw Biecek, Anna Wroblewska
Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in a natural-language manner.
no code implementations • WS 2018 • Barbara Rychalska, Dominika Basaj, Przemyslaw Biecek, Anna Wroblewska
In this paper we present the results of an investigation of the importance of verbs in a deep learning QA system trained on the SQuAD dataset.
4 code implementations • 19 Sep 2018 • Alicja Gosiewska, Przemyslaw Biecek
With modern software it is easy to train even a complex model that fits the training data and results in high accuracy on the test set.
no code implementations • WS 2018 • Barbara Rychalska, Dominika Basaj, Anna Wróblewska, Przemyslaw Biecek
Datasets that boosted state-of-the-art solutions for Question Answering (QA) systems prove that it is possible to ask questions in a natural-language manner.
1 code implementation • 5 Dec 2018 • Barbara Rychalska, Dominika Basaj, Przemyslaw Biecek
In addition, we have created and published a new dataset that may be used to validate the robustness of a Q&A model.
4 code implementations • 28 Feb 2019 • Alicja Gosiewska, Aleksandra Gacek, Piotr Lubon, Przemyslaw Biecek
Complex black-box predictive models may have high accuracy, but their opacity causes problems such as lack of trust, lack of stability, and sensitivity to concept drift.
1 code implementation • 27 Mar 2019 • Mateusz Staniak, Przemyslaw Biecek
The increasing availability of large but noisy data sets with many heterogeneous variables is driving interest in automating common data-analysis tasks.
2 code implementations • 27 Mar 2019 • Alicja Gosiewska, Przemyslaw Biecek
Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.
1 code implementation • 9 Jul 2019 • Przemyslaw Biecek
Predictive modeling has an increasing number of applications in various fields.
2 code implementations • 24 Aug 2019 • Alicja Gosiewska, Mateusz Bakala, Katarzyna Woznica, Maciej Zwolinski, Przemyslaw Biecek
Second, for k-fold cross-validation, model performance is in most cases calculated as the average performance over the folds, which neglects the information about how stable the performance is across folds.
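The stability concern above can be illustrated with a minimal sketch (not from the paper; the function name and the fold scores are hypothetical): two models with identical average cross-validation accuracy can differ sharply in how stable that accuracy is across folds, which a mean-only report hides.

```python
import statistics

def summarize_cv(fold_scores):
    """Summarize k-fold cross-validation scores.

    Reporting only the mean discards information about stability;
    keeping the standard deviation and the score range preserves it.
    """
    return {
        "mean": statistics.mean(fold_scores),
        "std": statistics.stdev(fold_scores),
        "min": min(fold_scores),
        "max": max(fold_scores),
    }

# Hypothetical per-fold accuracies: same mean, very different stability.
stable = summarize_cv([0.80, 0.81, 0.79, 0.80, 0.80])
unstable = summarize_cv([0.95, 0.60, 0.90, 0.65, 0.90])
```

Both summaries have a mean of 0.80, yet only the per-fold spread reveals that the second model's performance is far less reliable.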
1 code implementation • 6 Oct 2019 • Tomasz Stanislawek, Anna Wróblewska, Alicja Wójcicka, Daniel Ziembicki, Przemyslaw Biecek
A new enriched semantic annotation of errors for this data set and new diagnostic data sets are attached in the supplementary materials.
no code implementations • CONLL 2019 • Tomasz Stanislawek, Anna Wróblewska, Alicja Wójcicka, Daniel Ziembicki, Przemyslaw Biecek
Recent developments in Named Entity Recognition (NER) have resulted in better and better models.
1 code implementation • 11 Feb 2020 • Alicja Gosiewska, Przemyslaw Biecek
Can we train interpretable and accurate models without time-consuming feature engineering?
1 code implementation • 1 May 2020 • Hubert Baniecki, Dariusz Parzych, Przemyslaw Biecek
We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model increases the performance and confidence of human decision making.
1 code implementation • 24 Sep 2020 • Szymon Maksymiuk, Alicja Gosiewska, Przemyslaw Biecek
The growing availability of data and computing power fuels the development of predictive models.
1 code implementation • 28 Sep 2020 • Michael Bücker, Gero Szepannek, Alicja Gosiewska, Przemyslaw Biecek
This paper works out different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable.
1 code implementation • 28 Dec 2020 • Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek
The increasing amount of available data, computing power, and the constant pursuit of higher performance result in the growing complexity of predictive models.
1 code implementation • 7 Apr 2021 • Katarzyna Pekala, Katarzyna Woznica, Przemyslaw Biecek
In this work, we propose new methods to support model analysis by exploiting the information about the correlation between variables.
1 code implementation • 26 May 2021 • Hubert Baniecki, Wojciech Kretowicz, Przemyslaw Biecek
We believe this to be the first work using a genetic algorithm for manipulating explanations, which is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger
To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.
no code implementations • 11 May 2022 • Nicholas Spyrison, Dianne Cook, Przemyslaw Biecek
To understand how the interaction between predictors affects the variable importance estimate, we can convert LVAs into linear projections and use the radial tour.
no code implementations • 21 Aug 2022 • Przemyslaw Biecek
With the help of the POCA method, preliminary requirements can be defined for the model-building process, so that costly model misspecification errors can be identified as soon as possible or even avoided.
1 code implementation • 26 Feb 2023 • Przemyslaw Biecek, Hubert Baniecki, Mateusz Krzyzinski, Dianne Cook
The usual goal of supervised learning is to find the best model, the one that optimizes a particular performance measure.
1 code implementation • 12 Mar 2023 • Mikolaj Spytek, Weronika Hryniewska-Guzik, Jaroslaw Zygierewicz, Jacek Rogala, Przemyslaw Biecek
The prediction of age is a challenging task with various practical applications in high-impact fields such as healthcare or criminology.
3 code implementations • 17 Mar 2023 • Hubert Baniecki, Bartlomiej Sobieski, Patryk Szatkowski, Przemyslaw Bombinski, Przemyslaw Biecek
Time-to-event prediction, e.g. cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.
1 code implementation • 6 Jun 2023 • Hubert Baniecki, Przemyslaw Biecek
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions.
1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek
Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable.
1 code implementation • 29 Aug 2023 • Mustafa Cavus, Adrian Stando, Przemyslaw Biecek
This paper introduces glocal explanations (between the local and global levels) of expected goal models, proposing aggregated versions of SHAP values and partial dependence profiles to enable performance analysis at the team and player levels.
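The aggregation idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the feature names, and the SHAP values are hypothetical, and the paper's exact aggregation may differ. The sketch averages absolute per-prediction SHAP values within a group (e.g. a team) to obtain a group-level feature importance.

```python
from collections import defaultdict

def aggregate_shap(shap_rows, groups):
    """Aggregate local SHAP values to a 'glocal' (group) level.

    shap_rows: list of dicts mapping feature name -> SHAP value for
    one prediction (e.g. one shot in an expected-goal model).
    groups: list of group labels (e.g. team names), same length.
    Returns the mean absolute SHAP value per feature for each group.
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for row, group in zip(shap_rows, groups):
        counts[group] += 1
        for feat, val in row.items():
            sums[group][feat] += abs(val)
    return {
        group: {feat: s / counts[group] for feat, s in feats.items()}
        for group, feats in sums.items()
    }

# Hypothetical SHAP values for two shots by each of two teams.
rows = [
    {"distance": -0.30, "angle": 0.10},
    {"distance": -0.10, "angle": 0.20},
    {"distance": 0.40, "angle": -0.05},
    {"distance": 0.20, "angle": -0.15},
]
teams = ["A", "A", "B", "B"]
glocal = aggregate_shap(rows, teams)
```

Here `glocal["A"]` summarizes which features drive team A's predictions on average, sitting between a single-shot (local) and whole-dataset (global) explanation.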
1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek
Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.
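A common mask-based metric of this kind can be sketched as follows (a minimal illustration, not from the paper; the function name and the toy attribution values are hypothetical): it measures what fraction of an explanation's positive attribution falls inside the ground-truth segmentation mask. As the abstract cautions, a high score may mainly reflect that the model attends to the object, not that the explanation method is faithful.

```python
def relevance_mass_accuracy(saliency, mask):
    """Fraction of positive attribution inside the ground-truth mask.

    saliency: flat list of attribution scores, one per pixel.
    mask: flat list of 0/1 ground-truth labels, one per pixel.
    Negative attributions are ignored; returns 0.0 if there is no
    positive attribution at all.
    """
    pos = [max(s, 0.0) for s in saliency]
    total = sum(pos)
    if total == 0:
        return 0.0
    inside = sum(p for p, m in zip(pos, mask) if m == 1)
    return inside / total

sal = [0.5, 0.3, -0.2, 0.2]   # hypothetical attribution map (flattened)
msk = [1, 1, 0, 0]            # ground-truth segmentation mask
score = relevance_mass_accuracy(sal, msk)  # 0.8 of positive mass is inside
```

The single number conflates model focus and explanation quality, which is exactly the confound the paper highlights.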
no code implementations • 21 Feb 2024 • Przemyslaw Biecek, Wojciech Samek
Explainable Artificial Intelligence (XAI) is a young but very promising field of research.
no code implementations • 12 Mar 2024 • Vladimir Zaigrajew, Hubert Baniecki, Lukasz Tulczyjew, Agata M. Wijata, Jakub Nalepa, Nicolas Longépé, Przemyslaw Biecek
Remote sensing (RS) applications in the space domain demand machine learning (ML) models that are reliable, robust, and quality-assured, making red teaming a vital approach for identifying and exposing potential flaws and biases.
1 code implementation • 2 Apr 2024 • Krzysztof Jankowski, Bartlomiej Sobieski, Mateusz Kwiatkowski, Jakub Szulc, Michal Janik, Hubert Baniecki, Przemyslaw Biecek
Foundation models have emerged as pivotal tools, tackling many complex tasks through pre-training on vast datasets and subsequent fine-tuning for specific applications.