no code implementations • 24 Apr 2024 • Ziheng Chen, Jia Wang, Jun Zhuang, Abbavaram Gowtham Reddy, Fabrizio Silvestri, Jin Huang, Kaushiki Nag, Kun Kuang, Xin Ning, Gabriele Tolomei
This bias emerges from two main sources: (1) data-level bias, characterized by uneven data removal, and (2) algorithm-level bias, which contaminates the remaining dataset and thereby degrades model accuracy.
no code implementations • 21 Mar 2024 • Daniel Trippa, Cesare Campagnano, Maria Sofia Bucarelli, Gabriele Tolomei, Fabrizio Silvestri
In this study, we introduce Gradient-based and Task-Agnostic machine Unlearning ($\nabla \tau$), an optimization framework designed to remove the influence of a subset of training data efficiently.
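For intuition, a minimal sketch of gradient-based unlearning in PyTorch follows; the ascend-on-forget, descend-on-retain objective is an illustration of the general idea, not the paper's exact $\nabla \tau$ loss, and `forget_loader`/`retain_loader` are assumed DataLoaders.

```python
import torch
import torch.nn.functional as F

def unlearn(model, forget_loader, retain_loader, epochs=1, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
            opt.zero_grad()
            # Ascend the loss on the forget set to erase its influence...
            loss = -F.cross_entropy(model(xf), yf)
            # ...while descending on retained data to preserve accuracy.
            loss = loss + F.cross_entropy(model(xr), yr)
            loss.backward()
            opt.step()
    return model
```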
no code implementations • 13 Oct 2023 • Andrea Bernini, Fabrizio Silvestri, Gabriele Tolomei
Community detection techniques are useful tools for social media platforms to discover tightly connected groups of users who share common interests.
no code implementations • 7 Oct 2023 • Gabriele Tolomei, Cesare Campagnano, Fabrizio Silvestri, Giovanni Trappolini
In this paper, we present a groundbreaking paradigm for human-computer interaction that revolutionizes the traditional notion of an operating system.
no code implementations • 8 Aug 2023 • Edoardo Gabrielli, Giovanni Pica, Gabriele Tolomei
In contrast to standard ML, where data must be gathered at the site where training takes place, FL takes advantage of the computational capabilities of millions of edge devices to collaboratively train a shared, global model without disclosing their local private data.
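As a reference point, here is a minimal sketch of the classic federated averaging (FedAvg) round, in which only model weights, never raw data, leave each client; the helper names (`local_update`, `client_loaders`) are illustrative.

```python
import copy
import torch

def local_update(model, loader, lr=0.01, epochs=1):
    # Train a private copy of the global model on the client's own data.
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()  # only weights leave the device

def fedavg_round(global_model, client_loaders):
    states = [local_update(global_model, dl) for dl in client_loaders]
    # Average each parameter tensor across clients.
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```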
no code implementations • 30 Apr 2023 • Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Gabriele Tolomei
By reversing the learning process of the recommendation model, we develop an efficient greedy algorithm that generates fabricated user profiles and their associated interaction records for the aforementioned surrogate model.
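A hedged sketch of the greedy idea: each fabricated profile repeatedly adds the filler item that most raises the surrogate's score for the item being promoted. The `surrogate.score` interface is an assumption for illustration, not the paper's exact procedure.

```python
def build_fake_profile(surrogate, target_item, candidate_items, budget):
    profile = [target_item]
    remaining = list(candidate_items)
    for _ in range(budget):
        # Greedily add the filler item that most boosts the surrogate's
        # predicted score for the promoted target item.
        best = max(remaining,
                   key=lambda i: surrogate.score(target_item, profile + [i]))
        profile.append(best)
        remaining.remove(best)
    return profile
```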
no code implementations • 29 Mar 2023 • Gabriele Tolomei, Edoardo Gabrielli, Dimitri Belli, Vittorio Miori
In this work, we propose FLANDERS, a novel federated learning (FL) aggregation scheme robust to Byzantine attacks.
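For a flavor of Byzantine-robust aggregation, the stand-in below uses the coordinate-wise median, which a minority of arbitrarily corrupted updates cannot hijack; this is a classic baseline for illustration, not FLANDERS' own filtering scheme.

```python
import torch

def robust_aggregate(client_updates):
    """client_updates: list of 1-D tensors (flattened model deltas)."""
    stacked = torch.stack(client_updates)  # (n_clients, n_params)
    # Unlike the mean, which a single Byzantine client can skew
    # arbitrarily, the median tolerates a minority of corrupted updates.
    return stacked.median(dim=0).values
```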
no code implementations • 3 Nov 2022 • Gabriele Tolomei, Lorenzo Takanen, Fabio Pinelli
In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed a priori as in existing policies.
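The eviction logic of such a learned cache can be sketched as follows: evict the page whose next access is predicted to be furthest in the future, approximating Bélády's optimal offline policy; the `predictor` interface here is hypothetical, not MUSTACHE's actual model.

```python
def evict(cache_pages, predictor, current_time):
    """cache_pages: iterable of page ids; predictor.next_access(page)
    returns the predicted time of the page's next request."""
    # Choose the victim with the furthest predicted reuse.
    return max(cache_pages,
               key=lambda p: predictor.next_access(p) - current_time)
```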
1 code implementation • 20 Sep 2022 • Giovanni Trappolini, Valentino Maiorca, Silvio Severino, Emanuele Rodolà, Fabrizio Silvestri, Gabriele Tolomei
In this work, we focus on a specific, white-box attack to GNN-based link prediction models, where a malicious node aims to appear in the list of recommended nodes for a given target victim.
no code implementations • 4 Aug 2022 • Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Zhenhua Huang, Hongshik Ahn, Gabriele Tolomei
Although powerful, GNN-based recommender systems struggle to attach tangible explanations of why a specific item ends up in the list of suggestions for a given user.
1 code implementation • 22 Oct 2021 • Ziheng Chen, Fabrizio Silvestri, Jia Wang, He Zhu, Hongshik Ahn, Gabriele Tolomei
However, existing CF generation methods either exploit the internals of specific models or depend on each sample's neighborhood, making them hard to generalize to complex models and inefficient on large datasets.
no code implementations • 5 Oct 2021 • Federico Siciliano, Maria Sofia Bucarelli, Gabriele Tolomei, Fabrizio Silvestri
In this work, we formulate NEWRON: a generalization of the McCulloch-Pitts neuron structure.
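For reference, the McCulloch-Pitts unit that NEWRON generalizes is simply a weighted sum followed by a hard threshold:

```python
def mcculloch_pitts(x, w, theta):
    """Fires (returns 1) iff the weighted input sum reaches theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

# Example: an AND gate with unit weights and threshold 2.
assert mcculloch_pitts([1, 1], [1, 1], theta=2) == 1
assert mcculloch_pitts([1, 0], [1, 1], theta=2) == 0
```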
no code implementations • 21 Apr 2021 • Gabriele Costa, Fabio Pinelli, Simone Soderi, Gabriele Tolomei
Although the effect of the model poisoning is negligible for other participants and does not alter the overall model performance, it can be observed by a malicious receiver and used to transmit a single bit.
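A deliberately simplified sketch of the covert-channel idea: the sender encodes a bit in a model update in a way that barely affects accuracy but is readable by a colluding receiver. Encoding the bit in the sign of one designated weight is purely illustrative, not the paper's scheme.

```python
def send_bit(update, bit, idx=0):
    # Force the sign of one designated coordinate to carry the bit;
    # the magnitude (hence model behavior) is essentially unchanged.
    update[idx] = abs(update[idx]) if bit else -abs(update[idx])
    return update

def receive_bit(update, idx=0):
    return 1 if update[idx] >= 0 else 0
```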
1 code implementation • 5 Feb 2021 • Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri
In this work, we propose a method for generating CF explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes.
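A minimal sketch of this recipe, assuming a dense float adjacency matrix and a `gnn(x, adj)` node classifier: learn a soft mask over edges and minimize a loss that trades off flipping the target node's prediction against the number of edges removed. The loss weights and mask parameterization are illustrative.

```python
import torch

def counterfactual_edges(gnn, x, adj, target_node, steps=200, lam=0.1):
    # Soft edge mask, initialized near 1 so we start from the real graph.
    mask = torch.nn.Parameter(torch.full_like(adj, 3.0))
    opt = torch.optim.Adam([mask], lr=0.1)
    orig_class = gnn(x, adj)[target_node].argmax()
    for _ in range(steps):
        pert_adj = adj * torch.sigmoid(mask)
        logits = gnn(x, pert_adj)[target_node]
        # Push down the original class logit while the L1 term keeps
        # the perturbation (edges pushed toward deletion) minimal.
        loss = logits[orig_class] + lam * (1 - torch.sigmoid(mask)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Edges present in the graph whose mask fell below 0.5: delete these.
    return (torch.sigmoid(mask) < 0.5) & (adj > 0)
```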
1 code implementation • 2 Jul 2019 • Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando
Despite its success and popularity, machine learning is now recognized as vulnerable to evasion attacks, i.e., carefully crafted perturbations of test inputs designed to force prediction errors.
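A textbook instance of such an attack, for differentiable models, is the fast gradient sign method (FGSM), which perturbs a test input along the sign of the loss gradient; note the paper itself studies tree ensembles, where gradients are unavailable.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Perturb the input in the direction that most increases the loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```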
no code implementations • 3 Apr 2018 • Gabriele Tolomei, Mounia Lalmas, Ayman Farahat, Andrew Haines
We then estimate dwell-time thresholds for accidental clicks from that component.
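A hedged sketch of the thresholding idea: fit a two-component Gaussian mixture to log dwell times and read a cutoff from the short-dwell ("accidental") component; the exact model and cutoff rule in the paper may differ.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def accidental_click_threshold(dwell_times_sec, q=0.95):
    log_dwell = np.log(np.asarray(dwell_times_sec)).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(log_dwell)
    short = int(np.argmin(gmm.means_))  # the "accidental" component
    mu = gmm.means_[short, 0]
    sd = np.sqrt(gmm.covariances_[short, 0, 0])
    # Upper quantile of the accidental component, mapped back to seconds.
    return float(np.exp(norm.ppf(q, mu, sd)))
```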
3 code implementations • 20 Jun 2017 • Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, Mounia Lalmas
There are many circumstances, however, where it is important to understand (i) why a model outputs a certain prediction on a given instance, (ii) which adjustable features of that instance should be modified, and finally (iii) how to alter such a prediction when the mutated instance is fed back to the model.
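A hedged sketch of the core idea on a single scikit-learn decision tree (the paper works on full ensembles): enumerate root-to-leaf paths that yield the desired class, build an epsilon-satisfactory variant of the instance for each path, and keep the cheapest tweak.

```python
import numpy as np

def tweak(tree, x, desired_class, eps=0.1):
    """tree: fitted sklearn DecisionTreeClassifier; x: 1-D feature array."""
    t = tree.tree_
    best, best_cost = None, np.inf

    def walk(node, conds):
        nonlocal best, best_cost
        if t.children_left[node] == -1:  # leaf node
            if np.argmax(t.value[node]) != desired_class:
                return
            x_new = x.copy()
            # Nudge each feature just past the thresholds on this path.
            for feat, thr, go_left in conds:
                if go_left and x_new[feat] > thr:
                    x_new[feat] = thr - eps
                elif not go_left and x_new[feat] <= thr:
                    x_new[feat] = thr + eps
            cost = np.linalg.norm(x_new - x)
            if cost < best_cost:
                best, best_cost = x_new, cost
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node], conds + [(f, thr, True)])
        walk(t.children_right[node], conds + [(f, thr, False)])

    walk(0, [])
    return best  # cheapest tweaked instance, or None if no path qualifies
```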