1 code implementation • 4 Oct 2023 • Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes
Large Language Models (LLMs) are being enhanced with the ability to use tools and to process multiple modalities.
no code implementations • 16 Dec 2022 • Ashish Hooda, Matthew Wallace, Kushal Jhunjhunwalla, Earlence Fernandes, Kassem Fawaz
Our key insight is that we can interpret a user's intentions by analyzing their activity on counterpart systems of the web and smartphones.
no code implementations • 8 Dec 2022 • Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes
Content scanning systems employ perceptual hashing algorithms to scan user content for illegal material, such as child pornography or terrorist recruitment flyers.
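The perceptual hashes used by such scanning systems are designed so that visually similar images map to nearby fingerprints. As a hedged illustration of the principle only — deployed systems use far more sophisticated algorithms than this — here is a minimal "average hash" sketch:

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Toy perceptual hash: downsample a 2-D grayscale array to an
    hash_size x hash_size grid of block means, then record one bit per
    cell indicating whether it is brighter than the overall mean."""
    h, w = image.shape
    bh, bw = h // hash_size, w // hash_size
    small = image[:bh * hash_size, :bw * hash_size]
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_distance(h1, h2):
    """Number of differing bits; small distance = perceptually similar."""
    return int(np.count_nonzero(h1 != h2))
```

Because each bit is a comparison against the image's own mean, a uniform brightness shift leaves the hash unchanged, which is exactly the kind of invariance an attacker must work around.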
1 code implementation • 14 Feb 2021 • Thomas Kobber Panum, Zi Wang, Pengyu Kan, Earlence Fernandes, Somesh Jha
Deep Metric Learning (DML), a widely-used technique, involves learning a distance metric between pairs of samples.
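One common way DML realizes "a distance metric between pairs of samples" is the pairwise contrastive loss; the sketch below is a generic illustration of that loss on fixed embedding vectors, not the loss used in this particular paper:

```python
import numpy as np

def contrastive_loss(x1, x2, same_class, margin=1.0):
    """Pairwise contrastive loss on two embedding vectors:
    pull same-class pairs together (penalize any distance) and
    push different-class pairs at least `margin` apart."""
    d = np.linalg.norm(x1 - x2)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

Training a network under this loss shapes the embedding space so that Euclidean distance itself becomes the learned metric.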
no code implementations • 1 Jan 2021 • Thomas Kobber Panum, Zi Wang, Pengyu Kan, Earlence Fernandes, Somesh Jha
To the best of our knowledge, we are the first to systematically analyze this dependence effect and propose a principled approach for robust training of deep metric learning networks that accounts for the nuances of metric losses.
no code implementations • 16 Dec 2020 • Yuzhe Ma, Jon Sharp, Ruizhe Wang, Earlence Fernandes, Xiaojin Zhu

In this paper, we study adversarial attacks on KF as part of the more complex machine-human hybrid system of Forward Collision Warning.
1 code implementation • 10 Dec 2020 • Yunang Chen, Amrita Roy Chowdhury, Ruizhe Wang, Andrei Sabelfeld, Rahul Chatterjee, Earlence Fernandes
Trigger-action platforms (TAPs) allow users to connect independent web-based or IoT services to achieve useful automation.
Cryptography and Security
2 code implementations • CVPR 2021 • Athena Sayles, Ashish Hooda, Mohit Gupta, Rahul Chatterjee, Earlence Fernandes
By contrast, we contribute a procedure to generate, for the first time, physical adversarial examples that are invisible to human eyes.
1 code implementation • 17 Feb 2020 • Ryan Feng, Neal Mangaokar, Jiefeng Chen, Earlence Fernandes, Somesh Jha, Atul Prakash
We address three key requirements for practical real-world attacks: 1) automatically constraining the size and shape of the attack so it can be applied with stickers; 2) transform-robustness, i.e., robustness of an attack to environmental physical variations such as viewpoint and lighting changes; and 3) supporting attacks not only in white-box but also in black-box hard-label scenarios, so that the adversary can attack proprietary models.
no code implementations • 27 May 2019 • Haizhong Zheng, Earlence Fernandes, Atul Prakash
Recently, interpretable models called self-explaining models (SEMs) have been proposed with the goal of providing interpretability robustness.
1 code implementation • 18 Sep 2018 • Z. Berkay Celik, Earlence Fernandes, Eric Pauley, Gang Tan, Patrick McDaniel
Based on a study of five IoT programming platforms, we identify the key insights resulting from works in both the program analysis and security communities and relate the efficacy of program-analysis techniques to security and privacy issues.
Cryptography and Security Programming Languages
no code implementations • 20 Jul 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song
In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.
no code implementations • CVPR 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.
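The "small-magnitude perturbations" mentioned above are typically constructed by following the gradient of the loss with respect to the input. As a hedged sketch of the classic Fast Gradient Sign Method (a standard construction, not necessarily the one used in this paper), applied to a linear classifier where the gradient is analytic:

```python
import numpy as np

def fgsm_step(x, grad, eps):
    """One Fast Gradient Sign Method step: move every input coordinate
    by eps in the direction that increases the loss."""
    return x + eps * np.sign(grad)

def margin_loss_grad(x, w, y):
    """Gradient w.r.t. the input x of the loss -y * (w . x)
    for a linear classifier with weights w and label y in {-1, +1}."""
    return -y * w
```

Even this one-step, eps-bounded perturbation measurably reduces a linear model's classification margin, which is the effect DNN adversarial examples exploit at scale.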
no code implementations • 14 Jan 2018 • Amir Rahmati, Earlence Fernandes, Kevin Eykholt, Atul Prakash
When using risk-based permissions, device operations are grouped into units of similar risk, and users grant apps access to devices at that risk-based granularity.
Cryptography and Security
no code implementations • 21 Dec 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer
Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples.
1 code implementation • 27 Jul 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
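The core idea behind generating perturbations that survive "different physical conditions" is to optimize the perturbation in expectation over a sampled set of transformations. The toy sketch below illustrates that idea on a linear model with brightness-scaling transforms; it is an assumption-laden simplification for exposition, not the authors' RP2 pipeline:

```python
import numpy as np

def robust_perturbation(x, w, y, transforms, eps=0.1, steps=50, lr=0.05):
    """Optimize a bounded perturbation delta that raises the loss
    *averaged* over sampled transformations (the expectation-over-
    transforms idea behind physically robust attacks; toy version).

    Transforms are brightness scalings t(z) = c * z, so the input
    gradient of the loss -y * w.(t(x + delta)) is simply -y * c * w.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = np.mean([-y * c * w for c in transforms], axis=0)
        delta = delta + lr * np.sign(grad)   # ascend the averaged loss
        delta = np.clip(delta, -eps, eps)    # keep perturbation small
    return delta
```

Averaging the gradient over the transformation set is what makes the resulting perturbation effective under each sampled condition rather than only the nominal one.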
no code implementations • 23 May 2017 • Earlence Fernandes, Amir Rahmati, Kevin Eykholt, Atul Prakash
The Internet of Things (IoT) is a new computing paradigm that spans wearable devices, homes, hospitals, cities, transportation, and critical infrastructure.
Cryptography and Security