Search Results for author: Dan Boneh

Found 13 papers, 9 papers with code

Optimistic Verifiable Training by Controlling Hardware Nondeterminism

no code implementations · 14 Mar 2024 · Megha Srivastava, Simran Arora, Dan Boneh

The increasing compute demands of AI systems have led to the emergence of services that train models on behalf of clients lacking necessary resources.

Data Poisoning

FairProof: Confidential and Certifiable Fairness for Neural Networks

no code implementations · 19 Feb 2024 · Chhavi Yadav, Amrita Roy Chowdhury, Dan Boneh, Kamalika Chaudhuri

To this end, we propose FairProof, a system that uses Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the fairness of a model while maintaining confidentiality.

Fairness

Differentially Private Learning Needs Better Features (or Much More Data)

2 code implementations · ICLR 2021 · Florian Tramèr, Dan Boneh

We demonstrate that differentially private machine learning has not yet reached its "AlexNet moment" on many canonical vision tasks: linear models trained on handcrafted features significantly outperform end-to-end deep neural networks for moderate privacy budgets.

BIG-bench Machine Learning
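The training scheme behind this result, differentially private SGD, clips each per-example gradient and adds Gaussian noise before the update. A minimal illustrative sketch for a linear (logistic-regression) model in plain numpy follows; this is not the paper's implementation, the function names and hyperparameters are hypothetical, and no privacy accounting is performed:

```python
import numpy as np

def dp_sgd_logreg(X, y, epochs=5, lr=0.1, clip=1.0, sigma=1.0, seed=0):
    """DP-SGD sketch for logistic regression: per-example gradient
    clipping to norm <= clip, plus Gaussian noise scaled by sigma * clip."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))            # sigmoid prediction
            g = (p - y[i]) * X[i]                          # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip its norm
            g += rng.normal(0.0, sigma * clip, d)          # calibrated Gaussian noise
            w -= lr * g
    return w

# Toy stand-in for "handcrafted features": two separable Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])
w = dp_sgd_logreg(X, y, sigma=0.5)
```

With strong handcrafted features in place of the toy blobs, a linear model of this kind is what the paper reports outperforming end-to-end deep networks at moderate privacy budgets.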

Express: Lowering the Cost of Metadata-hiding Communication with Cryptographic Privacy

1 code implementation · 20 Nov 2019 · Saba Eskandarian, Henry Corrigan-Gibbs, Matei Zaharia, Dan Boneh

Existing systems for metadata-hiding messaging that provide cryptographic privacy properties have either high communication costs, high computation costs, or both.

Cryptography and Security

How Relevant is the Turing Test in the Age of Sophisbots?

no code implementations · 30 Aug 2019 · Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot

Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian.


Adversarial Training and Robustness for Multiple Perturbations

1 code implementation · NeurIPS 2019 · Florian Tramèr, Dan Boneh

Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small $\ell_\infty$-noise).

Adversarial Robustness
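The distinction between perturbation types can be made concrete on a toy linear classifier: an $\ell_\infty$ (FGSM-style) step and an $\ell_2$-normalized step with the same budget $\epsilon$ perturb the input in different directions. The sketch below is illustrative only; the weights, input, and budget are hypothetical, not values from the paper:

```python
import numpy as np

# Toy linear classifier: score = w @ x, label y in {-1, +1};
# the loss -y * (w @ x) has input-gradient -y * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
y = 1.0
grad = -y * w                      # d(loss)/dx

eps = 0.1
x_linf = x + eps * np.sign(grad)   # FGSM: worst-case step per coordinate
x_l2 = x + eps * grad / np.linalg.norm(grad)  # l_2-normalized step

# Both exhaust their budget, but in different norms:
linf_size = np.max(np.abs(x_linf - x))   # = eps in the l_inf norm
l2_size = np.linalg.norm(x_l2 - x)       # = eps in the l_2 norm
```

A defense trained only against one of these steps has no guarantee against the other, which is the gap the paper studies.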

SentiNet: Detecting Physical Attacks Against Deep Learning Systems

1 code implementation · 2 Dec 2018 · Edward Chou, Florian Tramèr, Giancarlo Pellegrino, Dan Boneh

By leveraging the neural network's susceptibility to attacks and by using techniques from model interpretability and object detection as detection mechanisms, SentiNet turns a weakness of a model into a strength.

Cryptography and Security

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning

1 code implementation · 8 Nov 2018 · Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, Dan Boneh

On the other hand, we present a concrete set of attacks on visual ad-blockers by constructing adversarial examples in a real web page context.

BIG-bench Machine Learning · Blocking

Fidelius: Protecting User Secrets from Compromised Browsers

1 code implementation · 13 Sep 2018 · Saba Eskandarian, Jonathan Cogan, Sawyer Birnbaum, Peh Chang Wei Brandon, Dillon Franke, Forest Fraser, Gaspar Garcia Jr., Eric Gong, Hung T. Nguyen, Taresh K. Sethi, Vishal Subbiah, Michael Backes, Giancarlo Pellegrino, Dan Boneh

In this work, we present Fidelius, a new architecture that uses trusted hardware enclaves integrated into the browser to enable protection of user secrets during web browsing sessions, even if the entire underlying browser and OS are fully controlled by a malicious attacker.

Cryptography and Security

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

1 code implementation · ICLR 2019 · Florian Tramèr, Dan Boneh

As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations.

Ensemble Adversarial Training: Attacks and Defenses

11 code implementations · ICLR 2018 · Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss.
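The gradient-masking effect described in the abstract can be illustrated on a one-dimensional toy loss with a small high-frequency curvature term (a hypothetical function chosen for illustration, not the paper's loss): the local gradient is steep, so the first-order (linear) approximation predicts a large loss increase, while the true loss a step of size $\epsilon$ away barely rises.

```python
import numpy as np

# Toy 1-D "loss surface": a gentle slope plus a small, rapidly oscillating
# curvature artifact near the data point.
f = lambda x: x + 0.05 * np.sin(40 * x)
df = lambda x: 1 + 2.0 * np.cos(40 * x)       # exact derivative of f

x0, eps = 0.0, 0.1
step = eps * np.sign(df(x0))                  # FGSM-style step along the gradient sign
actual = f(x0 + step) - f(x0)                 # true loss increase after the step
linear = df(x0) * step                        # first-order (linear) prediction
```

Here `linear` is several times larger than `actual`: the steep local gradient obfuscates the linear approximation of the loss, exactly the degeneracy single-step adversarial training can converge to.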

The Space of Transferable Adversarial Examples

2 code implementations · 11 Apr 2017 · Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time.

Mobile Device Identification via Sensor Fingerprinting

no code implementations · 6 Aug 2014 · Hristo Bojinov, Yan Michalevsky, Gabi Nakibly, Dan Boneh

We demonstrate how the multitude of sensors on a smartphone can be used to construct a reliable hardware fingerprint of the phone.

Cryptography and Security
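The core idea, that per-device sensor calibration errors are distinctive yet stable across measurements, can be sketched with a simulated accelerometer. Everything below is an illustrative numpy simulation; the bias values, noise level, and helper names are hypothetical, not data from the paper:

```python
import numpy as np

IDEAL = np.array([0.0, 0.0, 9.81])  # expected reading when lying flat (m/s^2)

def accel_fingerprint(samples):
    """Estimate a per-axis bias fingerprint from accelerometer readings
    taken while the device lies flat: the mean deviation from the ideal
    reading estimates the sensor's calibration error."""
    return samples.mean(axis=0) - IDEAL

def simulate_device(bias, n=500, noise=0.02, seed=0):
    """Simulate n stationary readings from a device with a fixed bias."""
    rng = np.random.default_rng(seed)
    return IDEAL + bias + rng.normal(0, noise, (n, 3))

dev_a = np.array([0.031, -0.012, 0.054])   # hypothetical per-axis biases
dev_b = np.array([-0.045, 0.027, -0.019])

fp_a1 = accel_fingerprint(simulate_device(dev_a, seed=1))
fp_a2 = accel_fingerprint(simulate_device(dev_a, seed=2))
fp_b = accel_fingerprint(simulate_device(dev_b, seed=3))

# Same device across sessions: fingerprints nearly coincide;
# different devices: the bias gap dominates.
same = np.linalg.norm(fp_a1 - fp_a2)
diff = np.linalg.norm(fp_a1 - fp_b)
```

Averaging over many samples shrinks the measurement noise, so the stable calibration bias survives as a reliable hardware fingerprint.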
