1 code implementation • EMNLP (PrivateNLP) 2020 • Rishabh Khandelwal, Asmit Nayak, Yao Yao, Kassem Fawaz
Online services utilize privacy settings to provide users with control over their data.
no code implementations • 5 Mar 2025 • Jack West, Bengisu Cagiltay, Shirley Zhang, Jingjie Li, Kassem Fawaz, Suman Banerjee
Machine learning models deployed locally in social media applications power features such as face filters that read faces in real time, and they can expose sensitive attributes to the apps.
no code implementations • 11 Nov 2024 • Asmit Nayak, Shirley Zhang, Yash Wani, Rishabh Khandelwal, Kassem Fawaz
Deceptive patterns (DPs) in digital interfaces manipulate users into making unintended decisions, exploiting cognitive biases and psychological vulnerabilities.
no code implementations • 8 Oct 2024 • Yucheng Yang, Jingjie Li, Kassem Fawaz
Pedestrian heading tracking enables applications in pedestrian navigation, traffic safety, and accessibility.
no code implementations • 27 Aug 2024 • Ashish Hooda, Rishabh Khandelwal, Prasad Chalasani, Kassem Fawaz, Somesh Jha
PolicyLR converts privacy policies into a machine-readable format using valuations of atomic formulae, allowing for formal definitions of tasks like compliance and consistency.
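A minimal sketch of the valuation idea (the atom names and compliance rule below are hypothetical illustrations, not PolicyLR's actual schema): a policy is reduced to truth values over atomic formulae, and a task like compliance becomes a logical formula evaluated against that valuation.

```python
# Hypothetical PolicyLR-style valuation: a privacy policy is mapped to
# truth values over atomic formulae; compliance is a logical check.
from typing import Dict

# Valuation extracted from a policy (atom -> truth value); atoms are illustrative.
valuation: Dict[str, bool] = {
    "collects_location": True,
    "discloses_collection": True,
    "shares_with_third_parties": False,
}

def compliant(v: Dict[str, bool]) -> bool:
    # Example rule: any collection of location data must be disclosed.
    return (not v["collects_location"]) or v["discloses_collection"]

print(compliant(valuation))  # True
```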
no code implementations • 27 Mar 2024 • Jack West, Lea Thiemt, Shimaa Ahmed, Maggie Bartig, Kassem Fawaz, Suman Banerjee
Capitalizing on this new processing model of locally analyzing user images, we analyze two popular social media apps, TikTok and Instagram, to reveal (1) what insights vision models in both apps infer about users from their image and video data and (2) whether these models exhibit performance disparities with respect to demographics.
no code implementations • 24 Feb 2024 • Neal Mangaokar, Ashish Hooda, Jihye Choi, Shreyas Chandrashekaran, Kassem Fawaz, Somesh Jha, Atul Prakash
More recent LLMs often incorporate an additional layer of defense, a Guard Model: a second LLM designed to check and moderate the output of the primary LLM.
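A toy sketch of such a pipeline; `primary_llm` and `guard_llm` are placeholder functions standing in for real model calls, and the keyword check is purely illustrative.

```python
# Toy Guard-Model pipeline: the guard screens the primary model's output.
def primary_llm(prompt: str) -> str:
    return "Sure, here is how to bake bread: ..."  # placeholder response

def guard_llm(response: str) -> bool:
    # Returns True if the response is judged harmful (toy keyword check).
    return any(word in response.lower() for word in ("exploit", "weapon"))

def moderated_generate(prompt: str) -> str:
    response = primary_llm(prompt)
    if guard_llm(response):
        return "I can't help with that."
    return response

print(moderated_generate("How do I bake bread?"))
```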
no code implementations • 8 Feb 2024 • Ashish Hooda, Mihai Christodorescu, Miltiadis Allamanis, Aaron Wilson, Kassem Fawaz, Somesh Jha
The success of Large Language Models at text generation has also made them better at code generation and coding tasks.
no code implementations • 30 Sep 2023 • David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz
Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical world.
1 code implementation • 13 Sep 2023 • Harrison Rosenberg, Shimaa Ahmed, Guruprasad V Ramesh, Ramya Korlakai Vinayak, Kassem Fawaz
In particular, their ability to synthesize and modify human faces has spurred research into using generated face images in both training data augmentation and model performance assessments.
no code implementations • 23 Aug 2023 • Yue Gao, Ilia Shumailov, Kassem Fawaz
In response, this paper introduces SEA, a novel ML security system to characterize black-box attacks on ML systems for forensic purposes and to facilitate human-explainable intelligence sharing.
no code implementations • 30 Jul 2023 • Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash
This work aims to address this gap by offering a theoretical characterization of the trade-off between detection and false positive rates for stateful defenses.
1 code implementation • 11 Mar 2023 • Ryan Feng, Ashish Hooda, Neal Mangaokar, Kassem Fawaz, Somesh Jha, Atul Prakash
Such stateful defenses aim to thwart black-box attacks by tracking the query history and detecting and rejecting queries that are "similar", thereby preventing black-box attacks from finding useful gradients and from making progress toward adversarial examples within a reasonable query budget.
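A minimal sketch of the mechanism under simple assumptions (identity feature map, L2 similarity, illustrative threshold); real stateful defenses use learned embeddings and tuned thresholds.

```python
# Stateful detection sketch: remember an embedding of every past query
# and flag new queries that land too close to a previous one.
import numpy as np

class StatefulDefense:
    def __init__(self, threshold: float = 1.0):
        self.history = []           # embeddings of past queries
        self.threshold = threshold  # L2 distance below which a query is "similar"

    def embed(self, x: np.ndarray) -> np.ndarray:
        return x.ravel()            # identity feature map for the sketch

    def is_suspicious(self, x: np.ndarray) -> bool:
        e = self.embed(x)
        hit = any(np.linalg.norm(e - h) < self.threshold for h in self.history)
        self.history.append(e)
        return hit

defense = StatefulDefense()
x = np.random.rand(32, 32)
print(defense.is_suspicious(x))          # False: first query
print(defense.is_suspicious(x + 1e-3))   # True: near-duplicate query
```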
no code implementations • 16 Dec 2022 • Ashish Hooda, Matthew Wallace, Kushal Jhunjhunwalla, Earlence Fernandes, Kassem Fawaz
Our key insight is that we can interpret a user's intentions by analyzing their activity on counterpart systems of the web and smartphones.
1 code implementation • 19 Jun 2022 • Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot
An example of such a defense is to apply a random transformation to inputs prior to feeding them to the model.
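A sketch of that idea, assuming additive Gaussian noise plus a small random shift as the transformation (the noise scale, shift range, and stand-in classifier are all illustrative choices, not the paper's specific defense).

```python
# Randomized-input defense sketch: each query is transformed with fresh
# randomness before the model sees it, so repeated queries never traverse
# the exact same decision surface.
import numpy as np

rng = np.random.default_rng(0)

def random_transform(x: np.ndarray) -> np.ndarray:
    noisy = x + rng.normal(scale=0.05, size=x.shape)     # additive Gaussian noise
    return np.roll(noisy, rng.integers(-2, 3), axis=0)   # small random shift

def toy_model(x: np.ndarray) -> int:
    return int(x.sum() > 0)  # stand-in classifier

def defended_predict(x: np.ndarray) -> int:
    return toy_model(random_transform(x))

x = np.ones((8, 8)) * 0.01
print(defended_predict(x), defended_predict(x))  # same input, randomized path
```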
no code implementations • 11 Feb 2022 • Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash
D4 uses an ensemble of models over disjoint subsets of the frequency spectrum to significantly improve adversarial robustness.
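A sketch in the spirit of that design, under simplifying assumptions: the 2-D Fourier spectrum is split into disjoint radial bands, each ensemble member classifies only its band, and predictions are averaged. The band edges and stand-in models are illustrative, not D4's actual configuration.

```python
# Frequency-disjoint ensemble sketch: each model sees one band of the spectrum.
import numpy as np

def band_mask(shape, lo, hi):
    # Boolean mask selecting frequencies with radius in [lo, hi).
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)
    return (r >= lo) & (r < hi)

def filter_band(x, mask):
    return np.real(np.fft.ifft2(np.fft.fft2(x) * mask))

def ensemble_predict(models, x, bands):
    preds = [m(filter_band(x, band_mask(x.shape, lo, hi)))
             for m, (lo, hi) in zip(models, bands)]
    return np.mean(preds, axis=0)

models = [lambda z: float(z.mean() > 0) for _ in range(3)]  # toy members
bands = [(0.0, 0.1), (0.1, 0.3), (0.3, 1.0)]                # disjoint bands
print(ensemble_predict(models, np.random.rand(32, 32), bands))
```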
no code implementations • 9 Feb 2022 • Harrison Rosenberg, Robi Bhattacharjee, Kassem Fawaz, Somesh Jha
Given the prevalence of ERM sample complexity bounds, our proposed framework enables machine learning practitioners to easily understand the convergence behavior of multicalibration error for a myriad of classifier architectures.
no code implementations • 6 Feb 2022 • Shimaa Ahmed, Yash Wani, Ali Shahin Shamsabadi, Mohammad Yaghini, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning.
1 code implementation • 5 Aug 2021 • Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha
We answer this question with an analytical and empirical exploration of recent face obfuscation systems.
1 code implementation • 18 Apr 2021 • Yue Gao, Ilia Shumailov, Kassem Fawaz
As real-world images come in varying sizes, the machine learning model is part of a larger system that includes an upstream image scaling algorithm.
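Under nearest-neighbor downscaling, only a sparse grid of source pixels ever reaches the model, which is what makes the scaling step a distinct attack surface. A toy illustration (numpy only; sizes are illustrative):

```python
# Toy nearest-neighbor downscale: only (64/1024)^2 of the source pixels
# influence the model's input, so controlling just those pixels controls
# the downscaled image entirely.
import numpy as np

def nn_downscale(img: np.ndarray, out: int) -> np.ndarray:
    idx = np.arange(out) * img.shape[0] // out
    return img[np.ix_(idx, idx)]

src = np.zeros((1024, 1024))
sampled = np.arange(64) * 1024 // 64     # the only rows/cols that matter
src[np.ix_(sampled, sampled)] = 1.0      # perturb just the sampled grid
print(nn_downscale(src, 64).mean())      # 1.0: output fully controlled
```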
1 code implementation • 19 Mar 2020 • Chuhan Gao, Varun Chandrasekaran, Kassem Fawaz, Somesh Jha
We implement and evaluate Face-Off, finding that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++.
no code implementations • 3 Mar 2020 • Yue Gao, Harrison Rosenberg, Kassem Fawaz, Somesh Jha, Justin Hsu
In test-time attacks, an adversary crafts adversarial examples: perturbations imperceptible to humans that, when added to an input example, force a machine learning model to misclassify it.
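For concreteness, the canonical construction is the fast gradient sign method, x' = x + eps * sign(grad_x L(x, y)); a minimal numpy sketch on a toy linear model (the weights, input, and eps are illustrative, not from this paper):

```python
# FGSM on a toy linear classifier: perturb the input by eps in the
# direction of the sign of the loss gradient to flip the prediction.
import numpy as np

w = np.array([1.0, -2.0, 0.5])        # toy model: score = w . x
x = np.array([0.5, 0.1, 0.2])
y = +1                                # true label in {-1, +1}

# Hinge-style loss L = max(0, 1 - y * w.x); its input gradient is -y * w
# whenever the margin is below 1.
grad_x = -y * w
x_adv = x + 0.25 * np.sign(grad_x)    # eps = 0.25

print(np.sign(w @ x), np.sign(w @ x_adv))  # prediction flips: 1.0 -1.0
```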
no code implementations • 26 May 2019 • Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
and how can one design a classification paradigm that leverages these invariances to improve the robustness-accuracy trade-off?
1 code implementation • 22 Sep 2018 • Thomas Linden, Rishabh Khandelwal, Hamza Harkous, Kassem Fawaz
In this analysis, we find evidence for positive changes triggered by the GDPR, with the specificity level improving on average.
2 code implementations • 7 Feb 2018 • Hamza Harkous, Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G. Shin, Karl Aberer
Companies, users, researchers, and regulators still lack usable and scalable tools to cope with the breadth and depth of privacy policies.