no code implementations • 10 Sep 2024 • Zitao Chen, Karthik Pattabiraman
MembershipTracker only requires users to mark a small fraction of data (0.005% to 0.1% of the training set), and it enables them to reliably detect the unauthorized use of their data (average 0% FPR@100% TPR).
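A minimal sketch of the data-marking setup described above, assuming a NumPy image dataset; the function names (`choose_marked_indices`, `mark_samples`) and the blending-based mark are illustrative placeholders, not MembershipTracker's actual marking scheme:

```python
# Minimal sketch (not the paper's implementation): select a tiny fraction of a
# user's contributed samples to "mark" so that later membership inference can
# verify whether they were used for training. The marking function here is a
# placeholder; MembershipTracker's actual marking strategy differs.
import numpy as np

def choose_marked_indices(num_user_samples: int, train_set_size: int,
                          mark_rate: float = 0.001, seed: int = 0) -> np.ndarray:
    """Pick enough user samples so the marked set is `mark_rate` (e.g. 0.1%)
    of the overall training set, capped at what the user actually owns."""
    budget = max(1, int(round(mark_rate * train_set_size)))
    rng = np.random.default_rng(seed)
    return rng.choice(num_user_samples, size=min(budget, num_user_samples),
                      replace=False)

def mark_samples(images: np.ndarray, indices: np.ndarray,
                 strength: float = 0.05) -> np.ndarray:
    """Placeholder marking: blend a fixed random pattern into the chosen images."""
    rng = np.random.default_rng(42)
    pattern = rng.uniform(0.0, 1.0, size=images.shape[1:])
    marked = images.copy()
    marked[indices] = np.clip((1 - strength) * marked[indices] + strength * pattern,
                              0.0, 1.0)
    return marked

# Example: a user holding 5,000 images contributes to a 1M-sample training set.
user_images = np.random.rand(5000, 32, 32, 3).astype(np.float32)
idx = choose_marked_indices(len(user_images), train_set_size=1_000_000, mark_rate=0.001)
user_images = mark_samples(user_images, idx)
print(f"Marked {len(idx)} of {len(user_images)} samples (~0.1% of the training set).")
```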
1 code implementation • 2 Jul 2024 • Zitao Chen, Karthik Pattabiraman
Modern machine learning (ML) ecosystems offer a surging number of ML frameworks and code repositories that can greatly facilitate the development of ML models.
no code implementations • 5 Jun 2024 • Qutub Syed Sha, Michael Paulitsch, Karthik Pattabiraman, Korbinian Hagn, Fabian Oboril, Cornelius Buerkle, Kay-Ulrich Scholl, Gereon Hinz, Alois Knoll
As transformer-based object detection models progress, their impact in critical sectors like autonomous vehicles and aviation is expected to grow.
no code implementations • 30 Jan 2024 • Mohammed Elnawawy, Mohammadreza Hallajiyan, Gargi Mitra, Shahrear Iqbal, Karthik Pattabiraman
We show that the use of ML in medical systems, particularly connected systems that involve interfacing the ML engine with multiple peripheral devices, has security risks that might cause life-threatening damage to a patient's health in case of adversarial interventions.
no code implementations • 31 Oct 2023 • Florian Geissler, Syed Qutub, Michael Paulitsch, Karthik Pattabiraman
We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults.
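To make the idea concrete, here is a minimal sketch (not the paper's monitor) of range-based monitoring on two hidden layers using PyTorch forward hooks; the `RangeMonitor` class and the choice of layers are illustrative assumptions:

```python
# Minimal sketch of the general idea (not the paper's monitor): attach forward
# hooks to a couple of hidden layers and flag inputs whose activations leave
# the value ranges observed on fault-free data, a common proxy for silent
# data corruption caused by memory or input faults.
import torch
import torch.nn as nn

class RangeMonitor:
    def __init__(self, model: nn.Module, layer_names: list[str]):
        self.bounds = {}      # layer name -> (min, max) seen during profiling
        self.alarms = []      # layer names that tripped on the last forward pass
        self.profiling = True
        modules = dict(model.named_modules())
        for name in layer_names:
            modules[name].register_forward_hook(self._make_hook(name))

    def _make_hook(self, name):
        def hook(_module, _inp, out):
            lo, hi = out.min().item(), out.max().item()
            if self.profiling:
                old = self.bounds.get(name, (lo, hi))
                self.bounds[name] = (min(old[0], lo), max(old[1], hi))
            elif not (self.bounds[name][0] <= lo and hi <= self.bounds[name][1]):
                self.alarms.append(name)
        return hook

# Usage: profile on clean data, then monitor at inference time.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3), nn.ReLU())
monitor = RangeMonitor(model, layer_names=["1", "3"])   # two hidden (ReLU) layers
with torch.no_grad():
    model(torch.rand(4, 3, 32, 32))                     # profiling pass on clean data
monitor.profiling = False
corrupted = torch.rand(1, 3, 32, 32)
corrupted[0, 0, 0, 0] = 1e6                             # injected fault in the input
monitor.alarms.clear()
with torch.no_grad():
    model(corrupted)
print("Corruption detected!" if monitor.alarms else "No alarm.")
```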
1 code implementation • 4 Jul 2023 • Zitao Chen, Karthik Pattabiraman
Machine learning (ML) models are vulnerable to membership inference attacks (MIAs), which determine whether a given input was used to train the target model.
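For context, a minimal sketch of the classic loss-threshold membership inference test that such attacks build on (an illustration of what an MIA checks, not the method proposed in this paper):

```python
# Minimal sketch of a classic loss-threshold membership inference attack:
# members tend to have lower loss under the target model than non-members.
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-example cross-entropy given predicted class probabilities."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def infer_membership(probs: np.ndarray, labels: np.ndarray,
                     threshold: float) -> np.ndarray:
    """Predict 'member' (True) when the example's loss falls below the threshold."""
    return cross_entropy(probs, labels) < threshold

# Toy example: members are fit well (high confidence on the true class),
# non-members less so; a threshold chosen on held-out data separates them.
member_probs = np.array([[0.95, 0.03, 0.02], [0.90, 0.05, 0.05]])
nonmember_probs = np.array([[0.40, 0.35, 0.25], [0.34, 0.33, 0.33]])
labels = np.array([0, 0])
print(infer_membership(member_probs, labels, threshold=0.5))     # -> [ True  True]
print(infer_membership(nonmember_probs, labels, threshold=0.5))  # -> [False False]
```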
no code implementations • 16 Aug 2021 • Florian Geissler, Syed Qutub, Sayanta Roychowdhury, Ali Asgari, Yang Peng, Akash Dhamasia, Ralf Graefe, Karthik Pattabiraman, Michael Paulitsch
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications, including human robot interactions and automated driving.
1 code implementation • 11 Aug 2021 • Zitao Chen, Pritam Dash, Karthik Pattabiraman
Therefore, Jujutsu leverages generative adversarial networks (GANs) to perform localized attack recovery by synthesizing the semantic contents of the input that are corrupted by the attacks, and reconstructs a "clean" input for correct prediction.
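A minimal sketch of the shape of this recovery step, assuming a patch mask has already been detected; `inpaint_fn` stands in for a trained GAN-based inpainter, which this sketch replaces with a naive mean-fill so it runs end to end (not Jujutsu's implementation):

```python
# Sketch of localized recovery: blank the suspected adversarial-patch region,
# resynthesize it with an inpainting model, then re-run the classifier on the
# reconstructed "clean" input. The inpainter and classifier here are toy stand-ins.
import numpy as np

def recover_and_predict(image: np.ndarray, patch_mask: np.ndarray,
                        inpaint_fn, classify_fn):
    """Replace the masked region with inpainted content, then classify."""
    blanked = image * (1.0 - patch_mask[..., None])     # zero out suspected patch
    reconstructed = inpaint_fn(blanked, patch_mask)     # GAN inpainter (placeholder)
    return classify_fn(reconstructed)

def naive_inpaint(img, mask):
    """Toy inpainter: fill the masked pixels with the mean of the unmasked region."""
    filled = img.copy()
    filled[mask.astype(bool)] = img[~mask.astype(bool)].mean(axis=0)
    return filled

def dummy_classifier(img):
    return int(img.mean() > 0.5)

image = np.random.rand(32, 32, 3).astype(np.float32)
mask = np.zeros((32, 32), dtype=np.float32)
mask[4:12, 4:12] = 1.0                                   # detected patch location
print(recover_and_predict(image, mask, naive_inpaint, dummy_classifier))
```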
no code implementations • 21 May 2021 • Aarti Kashyap, Syed Mubashir Iqbal, Karthik Pattabiraman, Margo Seltzer
These attacks, which we call Ripple False Data Injection Attacks (RFDIA), use minimal input perturbations to stealthily change the DNN output.
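As a generic illustration of how a small, bounded perturbation can flip a DNN's decision (not the RFDIA construction itself), here is an FGSM-style single-step attack on a toy PyTorch model:

```python
# Generic illustration of a minimal-perturbation attack on a DNN's output:
# a single gradient-sign step with a small budget is applied to a benign input,
# and we check whether the model's decision changes.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.rand(1, 8, requires_grad=True)        # benign measurement vector
true_label = model(x).argmax(dim=1)             # model's original decision

loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()                                 # gradient of the loss w.r.t. the input

for epsilon in (0.01, 0.05, 0.1, 0.2):          # increasing perturbation budgets
    x_adv = (x + epsilon * x.grad.sign()).detach()
    new_label = model(x_adv).argmax(dim=1)
    if new_label != true_label:
        print(f"label flipped at epsilon={epsilon}: "
              f"{true_label.item()} -> {new_label.item()}")
        break
else:
    print("no flip within the tested budgets")
```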
1 code implementation • 3 Apr 2020 • Zitao Chen, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, Nathan DeBardeleben
TensorFI is a configurable FI tool that is flexible, easy to use, and portable.
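For readers unfamiliar with software fault injection, the sketch below illustrates the core operation (flipping one bit in one float32 element of an intermediate tensor and observing the effect on the result); it deliberately does not use TensorFI's API, only plain NumPy:

```python
# Minimal illustration of software fault injection into a tensor computation
# (not TensorFI's API): flip one bit of one float32 element in an intermediate
# tensor and compare the result against the fault-free ("golden") run.
import struct
import numpy as np

def flip_bit(value: np.float32, bit: int) -> np.float32:
    """Flip the given bit (0-31) in the IEEE-754 representation of a float32."""
    as_int = struct.unpack("<I", struct.pack("<f", float(value)))[0]
    return np.float32(struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0])

rng = np.random.default_rng(0)
activations = rng.random((4, 4), dtype=np.float32)     # pretend layer output
golden = activations.sum()                             # fault-free result

faulty = activations.copy()
i, j = rng.integers(0, 4, size=2)                      # random injection site
faulty[i, j] = flip_bit(faulty[i, j], bit=30)          # flip a high exponent bit
print("golden:", golden, "faulty:", faulty.sum())
```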
1 code implementation • 30 Mar 2020 • Zitao Chen, Guanpeng Li, Karthik Pattabiraman
The adoption of deep neural networks (DNNs) in safety-critical domains has engendered serious reliability concerns.