12 Apr 2022 • Esha Sarkar, Eduardo Chielle, Gamze Gursoy, Leo Chen, Mark Gerstein, Michail Maniatakos
Privacy concerns in outsourced ML, especially in the field of genetics, motivate the use of encrypted computation, like Homomorphic Encryption (HE).
17 Mar 2022 • Yue Wang, Wenqing Li, Esha Sarkar, Muhammad Shafique, Michail Maniatakos, Saif Eddin Jabari
Based on our theoretical analysis and experimental results, we demonstrate the effectiveness of PiDAn in defending against backdoor attacks with different poisoned-sample configurations on the GTSRB and ILSVRC2012 datasets.
14 Aug 2021 • Esha Sarkar, Michail Maniatakos
Using a real-world cancer dataset, we analyze both the pre-existing bias towards white individuals and biases that we introduce artificially. Our experimental results show that TRAPDOOR detects the presence of dataset bias with 100% accuracy, and can furthermore extract the extent of the bias by recovering its percentage with a small error.
30 Dec 2020 • Munachiso Nwadike, Takumi Miyawaki, Esha Sarkar, Michail Maniatakos, Farah Shamout
Extensive evaluation of a state-of-the-art architecture demonstrates that, by introducing images with few-pixel perturbations into the training set, an attacker can execute the backdoor successfully without being involved in the training procedure.
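The abstract describes a classic data-poisoning backdoor: a few perturbed pixels act as a trigger, and a fraction of training samples are relabeled to the attacker's target class. A minimal sketch of this idea, assuming NumPy image arrays (function names, trigger shape, and poisoning rate are illustrative, not the paper's actual implementation):

```python
import numpy as np

def add_trigger(image, value=255, size=3):
    # Stamp a small bright square in the bottom-right corner:
    # a "few-pixel" perturbation that serves as the backdoor trigger.
    poisoned = image.copy()
    poisoned[-size:, -size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    # Poison a small fraction of training samples: add the trigger
    # and flip their labels to the attacker-chosen target class.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels
```

A model trained on such a set learns the trigger-to-target association, so at inference time any image stamped with the same few pixels is steered to the target class, without the attacker touching the training procedure itself.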
20 Jun 2020 • Esha Sarkar, Hadjer Benkraouda, Michail Maniatakos
In this work, we demonstrate that specific changes to facial characteristics may also be used to trigger malicious behavior in an ML model.
17 Mar 2020 • Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari
We develop a trigger design methodology that is based on well-established principles of traffic physics.