Search Results for author: Gjergji Kasneci

Found 48 papers, 20 papers with code

Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers

no code implementations 22 May 2024 Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci

We address the critical challenge of applying feature attribution methods to the transformer architecture, which dominates current applications in natural language processing and beyond.

Additive models Feature Importance

Towards Non-Adversarial Algorithmic Recourse

no code implementations 15 Mar 2024 Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci

We subsequently investigate how different components of the objective function, e.g., the machine learning model or the cost function used to measure distance, determine whether the outcome can be considered an adversarial example.

counterfactual Counterfactual Explanation

Is Crowdsourcing Breaking Your Bank? Cost-Effective Fine-Tuning of Pre-trained Language Models with Proximal Policy Optimization

no code implementations 28 Feb 2024 Shuo Yang, Gjergji Kasneci

This research significantly reduces training costs of proximal policy-guided models and demonstrates the potential for self-correction of language models.

Language Modelling

Adversarial Reweighting Guided by Wasserstein Distance for Bias Mitigation

no code implementations 21 Nov 2023 Xuan Zhao, Simone Fabbrizzi, Paula Reyero Lobo, Siamak Ghodsi, Klaus Broelemann, Steffen Staab, Gjergji Kasneci

To balance the data distribution between the majority and the minority groups, our approach deemphasizes samples from the majority group.
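
The reweighting idea can be illustrated with a static, frequency-based stand-in: give each sample a weight inversely proportional to its group's size, so majority-group samples are deemphasized. Note this is only a caricature of the approach; the paper learns the weights adversarially, guided by the Wasserstein distance, and the function name below is invented for illustration.

```python
import numpy as np

def group_balance_weights(groups):
    """Weight each sample inversely to its group size, so majority-group
    samples are deemphasized and each group receives equal total weight
    (a static stand-in for adversarially learned weights)."""
    groups = np.asarray(groups)
    _, inverse, counts = np.unique(groups, return_inverse=True, return_counts=True)
    w = 1.0 / counts[inverse]
    return w * len(w) / w.sum()  # normalize so weights average to 1

# Three majority-group samples vs. one minority-group sample:
w = group_balance_weights([0, 0, 0, 1])
# w == [2/3, 2/3, 2/3, 2.0] -- each group's total weight is 2
```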


Causal Fairness-Guided Dataset Reweighting using Neural Networks

no code implementations 17 Nov 2023 Xuan Zhao, Klaus Broelemann, Salvatore Ruggieri, Gjergji Kasneci

The two neural networks can approximate the causal model of the data and the causal model of interventions.


Counterfactual Explanation for Regression via Disentanglement in Latent Space

no code implementations 14 Nov 2023 Xuan Zhao, Klaus Broelemann, Gjergji Kasneci

In this paper, we introduce a novel method to generate CEs for a pre-trained regressor by first disentangling the label-relevant from the label-irrelevant dimensions in the latent space.

counterfactual Counterfactual Explanation +2

Counterfactual Explanation via Search in Gaussian Mixture Distributed Latent Space

no code implementations 25 Jul 2023 Xuan Zhao, Klaus Broelemann, Gjergji Kasneci

In this paper, we introduce a new method to generate CEs for a pre-trained binary classifier by first shaping the latent space of an autoencoder to be a mixture of Gaussian distributions.

counterfactual Counterfactual Explanation
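
The search idea can be caricatured in a few lines: if each class occupies one Gaussian component of the latent space, a counterfactual candidate is obtained by moving the latent code toward the target component's mean (a decoder would then map it back to input space). A toy numpy sketch; the component means, step size, and function name are invented for illustration:

```python
import numpy as np

def counterfactual_latent(z, class_means, target_class, step=0.5):
    """Move a latent code toward the Gaussian component of the target
    class -- a toy illustration of searching a class-structured latent
    space for a counterfactual candidate."""
    z = np.asarray(z, dtype=float)
    target_mean = np.asarray(class_means[target_class], dtype=float)
    return z + step * (target_mean - z)  # interpolate toward the component mean

# Latent code at the origin, target class centered at (2, 2):
z_cf = counterfactual_latent([0.0, 0.0], {0: [0.0, 0.0], 1: [2.0, 2.0]}, target_class=1)
# z_cf == [1.0, 1.0], halfway toward the target component's mean
```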

Gaussian Membership Inference Privacy

1 code implementation NeurIPS 2023 Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci

In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD).

Inference Attack Membership Inference Attack
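
A likelihood-ratio membership inference attack can be illustrated in a toy form: model the per-example loss under Gaussian "member" and "non-member" distributions and score each example by the log-likelihood ratio. This is a simplification of the attacks analyzed in the paper, and all parameter values below are made up:

```python
import numpy as np

def likelihood_ratio_scores(losses, mu_in, sigma_in, mu_out, sigma_out):
    """Score each example by the log-likelihood ratio of its loss under
    Gaussian models of the member vs. non-member loss distributions
    (toy version of a likelihood-ratio membership inference attack)."""
    losses = np.asarray(losses, dtype=float)
    def log_norm(x, mu, s):  # Gaussian log-density up to a constant
        return -0.5 * ((x - mu) / s) ** 2 - np.log(s)
    return log_norm(losses, mu_in, sigma_in) - log_norm(losses, mu_out, sigma_out)

# Members tend to have lower loss; a low loss yields a positive score:
scores = likelihood_ratio_scores([0.1, 2.0], mu_in=0.2, sigma_in=0.3,
                                 mu_out=1.5, sigma_out=0.5)
# scores[0] > 0 (likely member), scores[1] < 0 (likely non-member)
```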

Explanation Shift: How Did the Distribution Shift Impact the Model?

no code implementations 14 Mar 2023 Carlos Mougan, Klaus Broelemann, David Masip, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab

State-of-the-art techniques model input-data distributions or model-prediction distributions and try to understand issues arising from the interaction between learned models and shifting distributions.

Relational Local Explanations

no code implementations 23 Dec 2022 Vadim Borisov, Gjergji Kasneci

The majority of existing post-hoc explanation approaches for machine learning models produce independent, per-variable feature attribution scores, ignoring a critical inherent characteristic of homogeneously structured data, such as visual or text data: there exist latent inter-variable relationships between features.

Decomposing Counterfactual Explanations for Consequential Decision Making

no code implementations 3 Nov 2022 Martin Pawelczyk, Lea Tiyavorabun, Gjergji Kasneci

In this work, we develop \texttt{DEAR} (DisEntangling Algorithmic Recourse), a novel and practical recourse framework that bridges the gap between the IMF and the strong causal assumptions.

counterfactual Decision Making

I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data

1 code implementation 25 Oct 2022 Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci

In this work, we show that the decision not to share data can be considered as information in itself that should be protected to respect users' privacy.

Data Augmentation Decision Making +1

Explanation Shift: Detecting distribution shifts on tabular data via the explanation space

no code implementations 22 Oct 2022 Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab

We provide a mathematical analysis of different types of distribution shifts as well as synthetic experimental examples.

Change Detection for Local Explainability in Evolving Data Streams

1 code implementation 6 Sep 2022 Johannes Haug, Alexander Braun, Stefan Zürn, Gjergji Kasneci

In particular, we show that local attributions can become obsolete each time the predictive model is updated or concept drift alters the data generating distribution.

Change Detection

On the Trade-Off between Actionable Explanations and the Right to be Forgotten

no code implementations 30 Aug 2022 Martin Pawelczyk, Tobias Leemann, Asia Biega, Gjergji Kasneci

Thus, our work raises fundamental questions about the compatibility of "the right to an actionable explanation" in the context of the "right to be forgotten", while also providing constructive insights on the determining factors of recourse robustness.

BoxShrink: From Bounding Boxes to Segmentation Masks

1 code implementation 5 Aug 2022 Michael Gröger, Vadim Borisov, Gjergji Kasneci

One of the core challenges facing the medical image computing community is fast and efficient data sample labeling.


When are Post-hoc Conceptual Explanations Identifiable?

1 code implementation 28 Jun 2022 Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci

Interest in understanding and factorizing learned embedding spaces through conceptual explanations is steadily growing.


Standardized Evaluation of Machine Learning Methods for Evolving Data Streams

1 code implementation 28 Apr 2022 Johannes Haug, Effi Tramountani, Gjergji Kasneci

In this sense, we hope that our work will contribute to more standardized, reliable and realistic testing and comparison of online machine learning methods.

BIG-bench Machine Learning feature selection

Dynamic Model Tree for Interpretable Data Stream Learning

1 code implementation 30 Mar 2022 Johannes Haug, Klaus Broelemann, Gjergji Kasneci

Dynamic Model Trees are thus a powerful online learning framework that contributes to more lightweight and interpretable machine learning in data streams.

BIG-bench Machine Learning Interpretable Machine Learning

Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse

3 code implementations 13 Mar 2022 Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, Himabindu Lakkaraju

To this end, we propose a novel objective function which simultaneously minimizes the gap between the achieved (resulting) and desired recourse invalidation rates, minimizes recourse costs, and also ensures that the resulting recourse achieves a positive model prediction.

Gaussian Graphical Models as an Ensemble Method for Distributed Gaussian Processes

no code implementations 7 Feb 2022 Hamed Jalali, Gjergji Kasneci

Distributed Gaussian processes (DGPs) are a popular approach to scaling GPs to big data: the training data is divided into subsets, local inference is performed for each partition, and the results are aggregated to obtain a global prediction.

Gaussian Processes

A Consistent and Efficient Evaluation Strategy for Attribution Methods

1 code implementation 1 Feb 2022 Yao Rong, Tobias Leemann, Vadim Borisov, Gjergji Kasneci, Enkelejda Kasneci

With a variety of local feature attribution methods being proposed in recent years, follow-up work suggested several evaluation strategies.

A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines

1 code implementation 14 Nov 2021 Vadim Borisov, Johannes Meier, Johan van den Heuvel, Hamed Jalali, Gjergji Kasneci

Understanding the results of deep neural networks is an essential step towards wider acceptance of deep learning algorithms.

Deep Neural Networks and Tabular Data: A Survey

2 code implementations 5 Oct 2021 Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci

Moreover, we discuss deep learning approaches for generating tabular data, and we also provide an overview of strategies for explaining deep models on tabular data.

CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms

4 code implementations 2 Aug 2021 Martin Pawelczyk, Sascha Bielawski, Johannes van den Heuvel, Tobias Richter, Gjergji Kasneci

In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods.

Benchmarking counterfactual +1

Gaussian Experts Selection using Graphical Models

no code implementations 2 Feb 2021 Hamed Jalali, Martin Pawelczyk, Gjergji Kasneci

Imposing the \emph{conditional independence assumption} (CI) between the experts renders the aggregation of different expert predictions time efficient at the cost of poor uncertainty quantification.

Gaussian Processes Uncertainty Quantification
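
The CI-based aggregation referred to above is, in its simplest form, a product of Gaussian experts: precisions add, and means are precision-weighted. A minimal numpy sketch of that classical baseline (not the paper's graphical-model-based expert selection):

```python
import numpy as np

def poe_aggregate(means, variances):
    """Aggregate independent Gaussian expert predictions at one test point
    (product of experts): under the conditional-independence assumption the
    product of the experts' Gaussians is Gaussian with precision equal to
    the sum of the experts' precisions."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    agg_var = 1.0 / precisions.sum()
    agg_mean = agg_var * (precisions * means).sum()
    return agg_mean, agg_var

# Two experts agreeing on the mean, each with variance 0.5:
mu, var = poe_aggregate([1.0, 1.0], [0.5, 0.5])
# mu == 1.0, var == 0.25 -- the variance shrinks, illustrating the
# overconfident uncertainty quantification the abstract alludes to
```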

On Baselines for Local Feature Attributions

1 code implementation 4 Jan 2021 Johannes Haug, Stefan Zürn, Peter El-Jiz, Gjergji Kasneci

Our experimental study illustrates the sensitivity of popular attribution models to the baseline, thus laying the foundation for a more in-depth discussion on sensible baseline methods for tabular data.


Learning Parameter Distributions to Detect Concept Drift in Data Streams

2 code implementations 19 Oct 2020 Johannes Haug, Gjergji Kasneci

By treating the parameters of a predictive model as random variables, we show that concept drift corresponds to a change in the distribution of optimal parameters.
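
As a rough illustration of the idea (not the paper's actual method), one can track a single model parameter over time and flag drift when its recent values deviate strongly from their historical distribution; the window size and threshold below are arbitrary:

```python
import numpy as np

def detect_drift(param_trace, window=10, threshold=3.0):
    """Flag concept drift when the recent mean of a tracked model parameter
    deviates strongly from its historical distribution: split the trace
    into a reference part and a recent window, and flag drift when the
    recent mean lies more than `threshold` standard errors away."""
    trace = np.asarray(param_trace, dtype=float)
    ref, recent = trace[:-window], trace[-window:]
    se = ref.std(ddof=1) / np.sqrt(window) + 1e-12  # guard against zero variance
    z = abs(recent.mean() - ref.mean()) / se
    return z > threshold

# A stationary parameter trace vs. one whose optimum suddenly shifts:
stable = 0.1 * np.sin(np.arange(100))
shifted = np.concatenate([stable, np.full(10, 1.0)])
# detect_drift(stable) -> False, detect_drift(shifted) -> True
```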

Aggregating Dependent Gaussian Experts in Local Approximation

no code implementations 17 Oct 2020 Hamed Jalali, Gjergji Kasneci

The precision matrix encodes conditional dependencies between experts and is used to detect strongly dependent experts and construct an improved aggregation.

Gaussian Processes

On Counterfactual Explanations under Predictive Multiplicity

no code implementations 23 Jun 2020 Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci

In this work, we derive a general upper bound for the costs of counterfactual explanations under predictive multiplicity.


Learning Model-Agnostic Counterfactual Explanations for Tabular Data

3 code implementations 21 Oct 2019 Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci

On the one hand, we suggest complementing the catalogue of counterfactual quality measures [1] with a criterion that quantifies the degree of difficulty of a given counterfactual suggestion.


Training Decision Trees as Replacement for Convolution Layers

no code implementations 24 May 2019 Wolfgang Fuhl, Gjergji Kasneci, Wolfgang Rosenstiel, Enkelejda Kasneci

Our approach reduces the complexity of convolutions by replacing them with binary decisions.

A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees

no code implementations 25 Sep 2018 Klaus Broelemann, Gjergji Kasneci

We propose shallow model trees as a way to combine simple and highly transparent predictive models for higher predictive power without losing the transparency of the original models.

Combining Restricted Boltzmann Machines with Neural Networks for Latent Truth Discovery

no code implementations 27 Jul 2018 Klaus Broelemann, Gjergji Kasneci

Latent truth discovery, LTD for short, refers to the problem of aggregating multiple claims from various sources in order to estimate the plausibility of statements about entities.

Restricted Boltzmann Machines for Robust and Fast Latent Truth Discovery

no code implementations 31 Dec 2017 Klaus Broelemann, Thomas Gottron, Gjergji Kasneci

Despite a multitude of algorithms to address the LTD problem that can be found in literature, only little is known about their overall performance with respect to effectiveness (in terms of truth discovery capabilities), efficiency and robustness.

PupilNet: Convolutional Neural Networks for Robust Pupil Detection

no code implementations 19 Jan 2016 Wolfgang Fuhl, Thiago Santini, Gjergji Kasneci, Enkelejda Kasneci

Real-time, accurate, and robust pupil detection is an essential prerequisite for pervasive video-based eye-tracking.

Position Pupil Detection

SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases

1 code implementation 19 Jul 2012 Simon Lacoste-Julien, Konstantina Palla, Alex Davies, Gjergji Kasneci, Thore Graepel, Zoubin Ghahramani

The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information.
