Search Results for author: Gabriele Tolomei

Found 17 papers, 5 papers with code

Debiasing Machine Unlearning with Counterfactual Examples

no code implementations · 24 Apr 2024 · Ziheng Chen, Jia Wang, Jun Zhuang, Abbavaram Gowtham Reddy, Fabrizio Silvestri, Jin Huang, Kaushiki Nag, Kun Kuang, Xin Ning, Gabriele Tolomei

This bias emerges from two main sources: (1) data-level bias, characterized by uneven data removal, and (2) algorithm-level bias, which leads to the contamination of the remaining dataset, thereby degrading model accuracy.

$\nabla \tau$: Gradient-based and Task-Agnostic machine Unlearning

no code implementations · 21 Mar 2024 · Daniel Trippa, Cesare Campagnano, Maria Sofia Bucarelli, Gabriele Tolomei, Fabrizio Silvestri

In this study, we introduce Gradient-based and Task-Agnostic machine Unlearning ($\nabla \tau$), an optimization framework designed to remove the influence of a subset of training data efficiently.

Inference Attack · Machine Unlearning · +1
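
The listing gives no further detail on the optimization itself, so below is a minimal, hypothetical sketch of what a gradient-based unlearning loop can look like: gradient ascent on the loss of the forget set, regularized by ordinary descent on retained data so overall accuracy is preserved. This is not the paper's actual objective or schedule; the PyTorch model, data loaders, and the trade-off weight `lam` are placeholders.

```python
import torch
import torch.nn.functional as F

def unlearn(model, forget_loader, retain_loader, lr=1e-3, lam=1.0, epochs=1):
    """Hypothetical unlearning loop: ascend on the forget set, descend on retained data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
            opt.zero_grad()
            loss_forget = F.cross_entropy(model(xf), yf)   # loss on data to forget
            loss_retain = F.cross_entropy(model(xr), yr)   # loss on data to keep
            # Maximize the forget loss, minimize the retain loss.
            (-loss_forget + lam * loss_retain).backward()
            opt.step()
    return model
```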

Community Membership Hiding as Counterfactual Graph Search via Deep Reinforcement Learning

no code implementations · 13 Oct 2023 · Andrea Bernini, Fabrizio Silvestri, Gabriele Tolomei

Community detection techniques are useful tools for social media platforms to discover tightly connected groups of users who share common interests.

Community Detection · counterfactual · +1

A Survey on Decentralized Federated Learning

no code implementations · 8 Aug 2023 · Edoardo Gabrielli, Giovanni Pica, Gabriele Tolomei

In contrast to standard ML, where data must be collected at the exact location where training is performed, FL takes advantage of the computational capabilities of millions of edge devices to collaboratively train a shared, global model without disclosing their local private data.

Federated Learning · Privacy Preserving
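
Since the snippet describes the core federated learning step, here is a minimal sketch of federated averaging (FedAvg), the centralized aggregation rule that the decentralized schemes surveyed in the paper depart from. Clients send only their model weights; the server averages them, weighted by local dataset size. Names and structure are illustrative.

```python
import copy

def federated_average(client_states, client_sizes):
    """Weighted average of client state_dicts (same architecture on every client)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg

# Usage (hypothetical): global_model.load_state_dict(federated_average(states, sizes))
```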

The Dark Side of Explanations: Poisoning Recommender Systems with Counterfactual Examples

no code implementations · 30 Apr 2023 · Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Gabriele Tolomei

By reversing the learning process of the recommendation model, we develop an efficient greedy algorithm that generates fabricated user profiles and their associated interaction records for this surrogate model.

counterfactual · Counterfactual Explanation · +4

MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

no code implementations · 3 Nov 2022 · Gabriele Tolomei, Lorenzo Takanen, Fabio Pinelli

In this work, we propose MUSTACHE, a new page cache replacement algorithm whose eviction logic is learned from observed memory access requests rather than fixed a priori, as in existing policies.

Time Series · Time Series Forecasting
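
As an illustration of how multi-step-ahead predictions can drive eviction, the sketch below applies a Belady-style rule: evict the cached page whose next predicted use lies furthest in the future. The learned predictor, which is the actual contribution of MUSTACHE, is abstracted away here, and this is not necessarily the paper's exact eviction rule.

```python
def evict(cache, predicted_accesses):
    """Pick an eviction victim from `cache` (a set of page ids) given
    `predicted_accesses`, an ordered forecast of upcoming page ids."""
    next_use = {}
    for distance, page in enumerate(predicted_accesses):
        if page in cache and page not in next_use:
            next_use[page] = distance
    # Pages absent from the forecast are the best victims (distance = infinity).
    return max(cache, key=lambda p: next_use.get(p, float("inf")))

# Example: with cache {1, 2, 3} and forecast [2, 4, 1], page 3 is evicted.
print(evict({1, 2, 3}, [2, 4, 1]))  # -> 3
```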

Sparse Vicious Attacks on Graph Neural Networks

1 code implementation · 20 Sep 2022 · Giovanni Trappolini, Valentino Maiorca, Silvio Severino, Emanuele Rodolà, Fabrizio Silvestri, Gabriele Tolomei

In this work, we focus on a specific white-box attack on GNN-based link prediction models, where a malicious node aims to appear in the list of recommended nodes for a given target victim.

Link Prediction · Recommendation Systems

GREASE: Generate Factual and Counterfactual Explanations for GNN-based Recommendations

no code implementations · 4 Aug 2022 · Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Zhenhua Huang, Hongshik Ahn, Gabriele Tolomei

Although powerful, GNN-based recommender systems struggle to attach tangible explanations of why a specific item ends up in the list of suggestions for a given user.

counterfactual · Graph Classification · +1

ReLAX: Reinforcement Learning Agent eXplainer for Arbitrary Predictive Models

1 code implementation · 22 Oct 2021 · Ziheng Chen, Fabrizio Silvestri, Jia Wang, He Zhu, Hongshik Ahn, Gabriele Tolomei

However, existing CF generation methods either exploit the internals of specific models or depend on each sample's neighborhood, so they are hard to generalize to complex models and inefficient for large datasets.

counterfactual · Decision Making · +2

Turning Federated Learning Systems Into Covert Channels

no code implementations · 21 Apr 2021 · Gabriele Costa, Fabio Pinelli, Simone Soderi, Gabriele Tolomei

Although the effect of the model poisoning is negligible for the other participants and does not alter the overall model performance, it can be observed by a malicious receiver and used to transmit a single bit.

Federated Learning · Model Poisoning
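
One conceivable bit encoding for such a channel, shown purely for illustration and not necessarily the scheme used in the paper: the sender nudges a pre-agreed scalar parameter so that the sign of its change after aggregation carries the bit, keeping the perturbation small enough to leave accuracy untouched. The parameter key and epsilon below are hypothetical, and both model states are assumed to be dicts of tensors.

```python
PARAM_KEY, EPSILON = "classifier.bias", 1e-3   # hypothetical pre-shared secret

def encode_bit(local_update, bit):
    """Sender: bias the agreed parameter toward +eps (bit=1) or -eps (bit=0)."""
    delta = EPSILON if bit else -EPSILON
    local_update[PARAM_KEY] = local_update[PARAM_KEY] + delta
    return local_update

def decode_bit(prev_global, new_global):
    """Receiver: read the bit off the sign of the change in the agreed parameter."""
    return int((new_global[PARAM_KEY] - prev_global[PARAM_KEY]).sum() > 0)
```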

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

1 code implementation · 5 Feb 2021 · Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri

In this work, we propose a method for generating CF explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes.

counterfactual
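
To make the "minimal perturbation that flips the prediction" objective concrete, here is a hedged, brute-force stand-in: greedily delete the edge whose removal most reduces the model's confidence in the original class, until the prediction for the target node changes. It assumes a PyG-style node classifier called as `model(x, edge_index)` returning per-node logits; the published method optimizes a perturbation of the adjacency matrix rather than running this greedy loop.

```python
import torch

@torch.no_grad()
def greedy_counterfactual(model, x, edge_index, node, max_deletions=10):
    """Greedily remove edges until the prediction for `node` flips."""
    original = model(x, edge_index)[node].argmax().item()
    edges = edge_index.t().tolist()            # E edges as [src, dst] pairs
    removed = []
    for _ in range(max_deletions):
        best = None
        for i in range(len(edges)):
            trial = torch.tensor(edges[:i] + edges[i + 1:]).t()
            conf = model(x, trial)[node].softmax(-1)[original].item()
            if best is None or conf < best[1]:
                best = (i, conf, trial)
        if best is None:                       # no edges left to try
            break
        removed.append(edges.pop(best[0]))
        if model(x, best[2])[node].argmax().item() != original:
            return removed                     # the deleted edges form the explanation
    return None                                # search budget exhausted
```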

Treant: Training Evasion-Aware Decision Trees

1 code implementation · 2 Jul 2019 · Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando

Despite its success and popularity, machine learning is now recognized as vulnerable to evasion attacks, i.e., carefully crafted perturbations of test inputs designed to force prediction errors.

BIG-bench Machine Learning
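
For context on the threat model, the snippet below shows the textbook example of an evasion attack against a differentiable classifier (the fast gradient sign method): a small, deliberate input perturbation crafted to force a prediction error. The paper itself targets and defends decision trees, so this is only an illustration of the attack class, not of the tree-specific attacks it considers.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """Return an adversarially perturbed copy of `x` for a differentiable classifier."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input feature in the direction that increases the loss.
    return (x + eps * x.grad.sign()).detach()
```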

Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking

3 code implementations · 20 Jun 2017 · Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, Mounia Lalmas

There are many circumstances, however, where it is important to understand (i) why a model outputs a certain prediction on a given instance, (ii) which adjustable features of that instance should be modified, and finally (iii) how to alter such a prediction when the mutated instance is input back to the model.

Feature Engineering
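
A simplified sketch of the feature-tweaking idea on a scikit-learn random forest: for every root-to-leaf path that predicts the desired class, build the candidate instance that just satisfies the path's thresholds (within `eps`), then keep the valid candidate closest to the original input. The paper's algorithm additionally restricts tweaks to actionable features and uses tunable cost functions, which are omitted here; all names below are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def tweak(forest, x, target_class, eps=0.1):
    best, best_dist = None, np.inf
    for est in forest.estimators_:
        tree = est.tree_
        # Enumerate root-to-leaf paths as lists of (feature, threshold, go_left).
        stack = [(0, [])]
        while stack:
            node, path = stack.pop()
            if tree.children_left[node] == -1:              # leaf node
                if np.argmax(tree.value[node]) != target_class:
                    continue
                cand = x.copy()
                for feat, thr, go_left in path:
                    if go_left and cand[feat] > thr:
                        cand[feat] = thr - eps              # force the <= branch
                    elif not go_left and cand[feat] <= thr:
                        cand[feat] = thr + eps              # force the > branch
                d = np.linalg.norm(cand - x)
                if d < best_dist and forest.predict([cand])[0] == target_class:
                    best, best_dist = cand, d
            else:
                f, t = tree.feature[node], tree.threshold[node]
                stack.append((tree.children_left[node], path + [(f, t, True)]))
                stack.append((tree.children_right[node], path + [(f, t, False)]))
    return best

# Toy usage: flip the forest's prediction for one instance.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
x = X[0]
print(tweak(forest, x, target_class=1 - forest.predict([x])[0]))
```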
