no code implementations • 5 Oct 2024 • Zi Wang, Divyam Anshumaan, Ashish Hooda, Yudong Chen, Somesh Jha
Optimization methods are widely employed in deep learning to identify and mitigate undesired model responses.
no code implementations • 27 Aug 2024 • Ashish Hooda, Rishabh Khandelwal, Prasad Chalasani, Kassem Fawaz, Somesh Jha
PolicyLR converts privacy policies into a machine-readable format using valuations of atomic formulae, allowing for formal definitions of tasks like compliance and consistency.
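The core idea (a policy as a valuation, i.e. a truth assignment over atomic formulae, with tasks like compliance reducing to formula evaluation) can be sketched as follows. All names and the CNF-style requirement format here are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch of the PolicyLR idea: a privacy policy becomes a
# valuation (truth assignment) over atomic formulae, and compliance
# reduces to checking a logical formula against that valuation.

def compliant(valuation, requirement):
    """requirement: CNF as a list of clauses (each clause an OR of atoms);
    the policy complies if every clause is satisfied."""
    return all(any(valuation.get(atom, False) for atom in clause)
               for clause in requirement)

# Illustrative atoms extracted from a policy text.
policy = {"collects_location": True,
          "shares_with_third_parties": False,
          "allows_opt_out": True}

# Requirement: the policy must allow opt-out AND must disclose collection.
requirement = [["allows_opt_out"], ["collects_location"]]
print(compliant(policy, requirement))  # True: both clauses satisfied
```

Consistency between two policies could be checked the same way, by asking whether any atomic formula receives conflicting valuations.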
no code implementations • 18 Jul 2024 • Guruprasad V Ramesh, Harrison Rosenberg, Ashish Hooda, Shimaa Ahmed, Kassem Fawaz
Computer vision systems have been deployed in various applications involving biometrics like human faces.
no code implementations • 24 Feb 2024 • Neal Mangaokar, Ashish Hooda, Jihye Choi, Shreyas Chandrashekaran, Kassem Fawaz, Somesh Jha, Atul Prakash
More recent LLMs often incorporate an additional layer of defense, a Guard Model, which is a second LLM that is designed to check and moderate the output response of the primary LLM.
no code implementations • 8 Feb 2024 • Ashish Hooda, Mihai Christodorescu, Miltiadis Allamanis, Aaron Wilson, Kassem Fawaz, Somesh Jha
The success of Large Language Models in text generation has also made them better at code generation and coding tasks.
no code implementations • 30 Jul 2023 • Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash
This work aims to address this gap by offering a theoretical characterization of the trade-off between detection and false positive rates for stateful defenses.
1 code implementation • 11 Mar 2023 • Ryan Feng, Ashish Hooda, Neal Mangaokar, Kassem Fawaz, Somesh Jha, Atul Prakash
Such stateful defenses aim to thwart black-box attacks by tracking the query history and rejecting queries that are "similar" to earlier ones, preventing the attacker from recovering useful gradients and from finding adversarial examples within a reasonable query budget.
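A minimal version of such a stateful defense can be sketched as below. This is purely illustrative (a toy L2-distance similarity test and a stand-in classifier), not the construction analyzed in the paper.

```python
import math

class StatefulDefense:
    """Toy stateful detector: reject a query if it falls within
    `threshold` L2 distance of any previously seen query.
    Illustrative only; real defenses use perceptual similarity."""

    def __init__(self, threshold=1.0):
        self.history = []
        self.threshold = threshold

    def query(self, x):
        for past in self.history:
            if math.dist(x, past) < self.threshold:
                return None  # reject: too similar to a prior query
        self.history.append(list(x))
        return self.classify(x)

    def classify(self, x):
        # Stand-in for the protected model's prediction.
        return int(sum(x) > 0)

defense = StatefulDefense(threshold=1.0)
defense.query([0.0, 0.0])   # answered normally
defense.query([0.1, 0.0])   # rejected (None): within threshold of history
```

The choice of `threshold` is exactly what governs the detection-rate versus false-positive-rate trade-off: a larger threshold catches more attack queries but also rejects more benign ones.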
no code implementations • 16 Dec 2022 • Ashish Hooda, Matthew Wallace, Kushal Jhunjhunwalla, Earlence Fernandes, Kassem Fawaz
Our key insight is that we can interpret a user's intentions by analyzing their activity on counterpart systems of the web and smartphones.
no code implementations • 8 Dec 2022 • Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes
Content scanning systems employ perceptual hashing algorithms to scan user content for illegal material, such as child pornography or terrorist recruitment flyers.
no code implementations • 11 Feb 2022 • Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash
D4 uses an ensemble of models over disjoint subsets of the frequency spectrum to significantly improve adversarial robustness.
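The ensemble-over-disjoint-frequency-bands idea can be sketched as follows; the band-splitting scheme and the toy models here are assumptions for illustration, not D4's actual architecture.

```python
import numpy as np

# Illustrative sketch: split the frequency spectrum into disjoint bands,
# give each ensemble member only its own band of the input, and average
# the members' predictions.

def band_filter(x, band, n_bands):
    """Keep only the `band`-th group of FFT frequencies, ordered by
    absolute frequency, and reconstruct the signal from that band."""
    X = np.fft.fft(x)
    order = np.argsort(np.abs(np.fft.fftfreq(x.size)))
    keep = np.array_split(order, n_bands)[band]
    mask = np.zeros_like(X)
    mask[keep] = 1
    return np.real(np.fft.ifft(X * mask))

def ensemble_predict(x, models, n_bands):
    """Each model sees only its disjoint frequency band of x."""
    scores = [m(band_filter(x, b, n_bands)) for b, m in enumerate(models)]
    return float(np.mean(scores))
```

Because an adversarial perturbation must now fool several models that each observe a different, disjoint slice of the spectrum, a perturbation concentrated in one band leaves the other members unaffected.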
2 code implementations • CVPR 2021 • Athena Sayles, Ashish Hooda, Mohit Gupta, Rahul Chatterjee, Earlence Fernandes
By contrast, we contribute a procedure to generate, for the first time, physical adversarial examples that are invisible to human eyes.