2 code implementations • 16 Oct 2018 • Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras
Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification.
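The early-exit mechanism behind this observation can be sketched as a minimal, hypothetical multi-exit inference loop — the function name, threshold, and toy logits are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(internal_logits, threshold=0.9):
    """Stop at the first internal classifier whose softmax confidence
    clears the threshold, so deeper layers never run."""
    for i, logits in enumerate(internal_logits):
        probs = softmax(np.asarray(logits, dtype=float))
        if probs.max() >= threshold:
            return int(probs.argmax()), i
    return int(probs.argmax()), len(internal_logits) - 1  # final-layer fallback

# toy logits from three exits: the second exit is already confident
logits_per_exit = [[0.2, 0.1, 0.1], [4.0, 0.1, 0.1], [0.1, 5.0, 0.1]]
pred, exit_at = early_exit_predict(logits_per_exit)  # exits at index 1
```

Note that in this toy run the final classifier would have predicted a different class than the confident early exit — the "destructive" side of overthinking, where computing further can flip a correct prediction.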
1 code implementation • ICLR 2019 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitraş
Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting.
1 code implementation • ICLR 2021 • Sanghyun Hong, Yiğitcan Kaya, Ionuţ-Vlad Modoranu, Tudor Dumitraş
We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5$\times$ in a typical IoT deployment.
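A back-of-the-envelope view of that latency amplification, as a hedged sketch (confidence values and threshold are made up for illustration):

```python
def exits_used(confidences, threshold=0.9):
    """Count the internal classifiers evaluated before an exit fires --
    a crude proxy for inference cost in a multi-exit DNN."""
    for i, c in enumerate(confidences, start=1):
        if c >= threshold:
            return i
    return len(confidences)  # no exit fired: the full network runs

benign = [0.95, 0.99, 0.99]   # an exit fires on the first classifier
slowed = [0.30, 0.35, 0.40]   # a slowdown perturbation keeps confidence low
amplification = exits_used(slowed) / exits_used(benign)  # 3x in this toy case
```

The attack never needs to change the prediction — suppressing intermediate confidence is enough to force every input through the full depth.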
1 code implementation • 28 Jun 2021 • Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models -- including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack.
1 code implementation • NeurIPS 2021 • Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, Tudor Dumitraş
Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger.
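The mechanism these attacks exploit — weights that are nearly indistinguishable in full precision but diverge after quantization — can be sketched with uniform symmetric quantization (the coarse scale and weight values are for illustration only, not the paper's attack procedure):

```python
import numpy as np

def quantize_int8(w, scale):
    """Uniform symmetric quantization: snap weights to an int8 grid,
    then dequantize back to floats."""
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

# two weight vectors that are nearly identical in full precision...
w_clean  = np.array([0.49, -0.49])
w_poison = np.array([0.51, -0.51])  # crafted to sit just across bin boundaries

# ...but land in different bins once quantized (coarse scale for illustration)
scale = 1.0
q_clean  = quantize_int8(w_clean, scale)
q_poison = quantize_int8(w_poison, scale)
```

A tiny float-precision gap (0.02) becomes a full quantization-bin gap (1.0), which is why a model can look benign before deployment and change behavior after it is quantized.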
1 code implementation • 26 Feb 2020 • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot
In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.
1 code implementation • 13 Jul 2022 • Suneghyeon Cho, Sanghyun Hong, Kookjin Lee, Noseong Park
In this work, we propose adaptive momentum estimation neural ODEs (AdamNODEs) that adaptively control the acceleration of the classical momentum-based approach.
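The classical momentum-based approach that AdamNODEs generalize can be sketched as a heavy-ball discretization of an ODE solve — a toy Euler integrator with fixed momentum, where the step size, decay factor, and dynamics are illustrative:

```python
import numpy as np

def momentum_euler(f, h0, steps=100, dt=0.01, gamma=0.9):
    """Explicit Euler with classical (heavy-ball) momentum: the velocity
    accumulates past dynamics, and the state follows the velocity."""
    h = np.asarray(h0, dtype=float)
    v = np.zeros_like(h)
    for _ in range(steps):
        v = gamma * v + f(h)  # momentum update of the velocity
        h = h + dt * v        # Euler step along the velocity
    return h

# toy dynamics pulling the state toward zero
final = momentum_euler(lambda h: -h, [1.0])
```

AdamNODEs replace the fixed `gamma` with adaptively estimated moments, which is what "adaptively control the acceleration" refers to.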
1 code implementation • 17 Feb 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş
This provides an incentive for adversaries to steal these novel architectures; when the architectures are used in the cloud to provide Machine Learning as a Service, adversaries also have an opportunity to reconstruct them by exploiting a range of hardware side channels.
1 code implementation • ICLR 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş
New data processing pipelines and novel network architectures increasingly drive the success of deep learning.
no code implementations • 17 Jan 2017 • Rock Stevens, Octavian Suciu, Andrew Ruef, Sanghyun Hong, Michael Hicks, Tudor Dumitraş
Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency.
no code implementations • 3 Jun 2019 • Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, Tudor Dumitraş
Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy.
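Why a single corruption can be anything but graceful: flipping one exponent bit in the IEEE-754 float32 encoding of a weight changes its magnitude by many orders of magnitude. A minimal sketch (the bit indices are specific to float32's sign/exponent/mantissa layout):

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB, 31 = sign) in the IEEE-754
    float32 encoding of x and return the corrupted value."""
    (i,) = struct.unpack('<I', struct.pack('<f', x))
    (y,) = struct.unpack('<f', struct.pack('<I', i ^ (1 << bit)))
    return y

w = 0.5
w_hi = flip_bit(w, 30)  # most significant exponent bit: 0.5 -> 2**127 (~1.7e38)
w_lo = flip_bit(w, 0)   # mantissa LSB: a negligible change
```

Most bit positions are harmless, but the high exponent bits are not — which is the asymmetry a hardware fault attack like Rowhammer can target.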
no code implementations • 9 Jun 2020 • Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras
Finally, we quantify the opportunity of future MIAs to compromise privacy by designing a white-box `distance-to-confident' (DtC) metric, based on adversarial sample crafting.
no code implementations • 8 Jun 2021 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
When machine learning training is outsourced to third parties, backdoor attacks become practical as the third party who trains the model may act maliciously to inject hidden behaviors into the otherwise accurate model.
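The injected hidden behavior is typically keyed to an input trigger. A minimal, hypothetical pixel-pattern trigger — patch size, value, and position are illustrative, not the paper's setup:

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner of an
    image; a backdoored model would misbehave only on such inputs."""
    triggered = image.copy()
    triggered[-size:, -size:] = patch_value
    return triggered

x = np.zeros((28, 28))      # a blank 28x28 grayscale input
x_bd = apply_trigger(x)     # same input with the trigger stamped on
```

Because the trigger occupies only a few pixels, the backdoored model's accuracy on clean inputs is unaffected, which is what makes the attack hard to detect.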
1 code implementation • 29 Sep 2021 • Barry W. Brook, Jessie C. Buettel, Sanghyun Hong
With 50% higher annual economic growth, population peaks even earlier, in 2056, and declines to below 8 billion by the end of the century.
no code implementations • 31 Mar 2022 • Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties.
no code implementations • 10 Oct 2022 • Fan Wu, Sanghyun Hong, Donsub Rim, Noseong Park, Kookjin Lee
However, parameterization of dynamics using a neural network makes it difficult for humans to identify causal structures in the data.
no code implementations • 22 Nov 2022 • Jaehoon Lee, Chan Kim, Gyumin Lee, Haksoo Lim, Jeongwhan Choi, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park
Forecasting future outcomes from recent time series data is not easy, especially when the future data are different from the past (i.e., the time series are under temporal drift).
no code implementations • 28 Dec 2022 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient counterpart increases.
no code implementations • 16 Dec 2023 • Woojin Cho, Seunghyeon Cho, Hyundong Jin, Jinsung Jeon, Kookjin Lee, Sanghyun Hong, Dongeun Lee, Jonghyun Choi, Noseong Park
Neural ordinary differential equations (NODEs), one of the most influential works in differential equation-based deep learning, continuously generalize residual networks and have opened a new field.
no code implementations • 20 Feb 2024 • Jinsung Jeon, Hyundong Jin, Jonghyun Choi, Sanghyun Hong, Dongeun Lee, Kookjin Lee, Noseong Park
Extensively evaluating the methods on seven image recognition benchmarks, we show that the proposed PAC-FNO improves the performance of existing baseline models by up to 77.1% on images with various resolutions and with various types of natural variations at inference.
no code implementations • 18 Mar 2024 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
We present a certified defense to clean-label poisoning attacks.
1 code implementation • 19 Mar 2024 • Ojas Nimase, Sanghyun Hong
In this work, we explore the improvements that existing methods bring by incorporating more context into a model.
no code implementations • 1 Apr 2024 • Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini
In this paper, we unveil a new vulnerability: the privacy backdoor attack.