no code implementations • 28 Dec 2022 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient counterpart increases.
no code implementations • 22 Nov 2022 • Jaehoon Lee, Chan Kim, Gyumin Lee, Haksoo Lim, Jeongwhan Choi, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park
Forecasting future outcomes from recent time series data is not easy, especially when the future data are different from the past (i.e., the time series are under temporal drift).
no code implementations • 10 Oct 2022 • Fan Wu, Sanghyun Hong, Donsub Rim, Noseong Park, Kookjin Lee
However, parameterization of dynamics using a neural network makes it difficult for humans to identify causal structures in the data.
1 code implementation • 13 Jul 2022 • Suneghyeon Cho, Sanghyun Hong, Kookjin Lee, Noseong Park
In this work, we propose adaptive momentum estimation neural ODEs (AdamNODEs) that adaptively control the acceleration of the classical momentum-based approach.
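A minimal sketch of the general idea, assuming an Adam-style state augmentation (position, momentum, and a second-moment estimate); the class names, the damping coefficient, and the fixed-step Euler solver are illustrative choices, not the authors' formulation:

```python
# Sketch only: a momentum-augmented neural ODE with an Adam-flavored rescaling
# of the acceleration. Names and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class MomentumODEFunc(nn.Module):
    def __init__(self, dim, hidden_dim=64, gamma=0.9, eps=1e-8):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden_dim), nn.Tanh(),
                               nn.Linear(hidden_dim, dim))
        self.gamma, self.eps = gamma, eps

    def forward(self, t, state):
        x, m, v = state                     # position, momentum, 2nd-moment estimate
        force = self.f(x)                   # learned vector field
        dx = m                              # position follows the momentum
        dm = -self.gamma * m + force / (v.sqrt() + self.eps)  # damped, rescaled
        dv = force.pow(2) - v               # running estimate of force^2
        return dx, dm, dv

def euler_integrate(func, x0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler solver; a real implementation would use an adaptive solver."""
    state = (x0, torch.zeros_like(x0), torch.ones_like(x0))
    dt = (t1 - t0) / steps
    for i in range(steps):
        derivs = func(t0 + i * dt, state)
        state = tuple(s + dt * d for s, d in zip(state, derivs))
    return state[0]                         # return the evolved position

x0 = torch.randn(8, 16)                     # a batch of 16-dimensional states
out = euler_integrate(MomentumODEFunc(16), x0)
print(out.shape)                            # torch.Size([8, 16])
```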
no code implementations • 31 Mar 2022 • Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties.
1 code implementation • NeurIPS 2021 • Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, Tudor Dumitraş
Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger.
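A minimal sketch of the attack surface, assuming a toy linear classifier and symmetric per-tensor int8 quantization: a model can behave differently once its weights are quantized, and this gap is what such attacks deliberately widen. This is an illustration of quantization-induced divergence, not the paper's poisoning procedure:

```python
# Sketch only: compare float32 predictions with predictions from int8-quantized weights.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32)).astype(np.float32)   # float32 classifier weights
x = rng.normal(size=(5, 32)).astype(np.float32)    # a small batch of inputs

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                 # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q.astype(np.float32) * scale             # dequantized weights

logits_fp32 = x @ W.T
logits_int8 = x @ quantize_int8(W).T
print("predictions differ on",
      int((logits_fp32.argmax(1) != logits_int8.argmax(1)).sum()), "of 5 inputs")
```

With random weights the two models usually agree; the attacks craft weights so that the rounding error is just large enough to change behavior after quantization.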
1 code implementation • 29 Sep 2021 • Barry W. Brook, Jessie C. Buettel, Sanghyun Hong
With 50% higher annual economic growth, population peaks even earlier, in 2056, and declines to below 8 billion by the end of the century.
1 code implementation • 28 Jun 2021 • Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models -- including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack.
no code implementations • 8 Jun 2021 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
When machine learning training is outsourced to third parties, backdoor attacks become practical, as the third party who trains the model may act maliciously to inject hidden behaviors into the otherwise accurate model.
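For context, a minimal sketch of the conventional data-poisoning route to a backdoor (stamp a trigger patch on a few training samples and relabel them to a target class); the trigger shape, poison rate, and target label are arbitrary assumptions made for illustration, not the paper's method:

```python
# Sketch only: BadNets-style data poisoning to plant a trigger-activated behavior.
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, seed=0):
    """Return copies of (images, labels) with a 3x3 corner trigger stamped on a
    random `rate` fraction of the samples, relabeled to `target_class`."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0          # trigger: bottom-right 3x3 patch set to max
    labels[idx] = target_class           # hidden behavior: trigger -> target class
    return images, labels

x = np.random.rand(1000, 28, 28).astype(np.float32)  # stand-in for training images
y = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned = poison(x, y)
print((y_poisoned != y).sum(), "labels changed by the poisoning")
```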
1 code implementation • ICLR 2021 • Sanghyun Hong, Yiğitcan Kaya, Ionuţ-Vlad Modoranu, Tudor Dumitraş
We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5x in a typical IoT deployment.
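A minimal sketch of multi-exit inference, to make the attack surface concrete: each internal classifier can stop computation early when it is confident, so an input crafted to keep every exit unconfident forces the full, slowest forward pass. The layer sizes and the 0.9 confidence threshold are illustrative assumptions, not the paper's setup:

```python
# Sketch only: early-exit inference; a slowdown attack maximizes the returned cost.
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    def __init__(self, dim=32, num_classes=10, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)])
        self.exits = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(depth)])

    def forward(self, x, threshold=0.9):
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            if probs.max() >= threshold:          # confident -> stop early
                return probs, i + 1               # number of blocks executed
        return probs, len(self.blocks)            # no early exit: full cost

net = MultiExitNet()
_, cost = net(torch.randn(1, 32))
print("blocks executed:", cost)
```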
no code implementations • 9 Jun 2020 • Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras
Finally, we quantify the opportunity of future MIAs to compromise privacy by designing a white-box 'distance-to-confident' (DtC) metric, based on adversarial sample crafting.
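A minimal sketch of one way such a metric could be computed, assuming gradient steps that raise the model's top-class probability and an L2 measure of the perturbation; the step size, threshold, and loss are assumptions, not the paper's exact DtC definition:

```python
# Sketch only: measure how far a sample must move before the model becomes confident.
import torch
import torch.nn as nn

def distance_to_confident(model, x, threshold=0.9, step=0.05, max_steps=200):
    x_adv = x.clone().requires_grad_(True)
    for _ in range(max_steps):
        probs = model(x_adv).softmax(dim=-1)
        conf, _ = probs.max(dim=-1)
        if conf.item() >= threshold:                    # confident enough: stop
            break
        loss = -torch.log(conf).sum()                   # push confidence up
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step * grad).detach().requires_grad_(True)
    return (x_adv - x).norm().item()                    # L2 distance travelled

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
print(distance_to_confident(model, torch.randn(1, 20)))
```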
1 code implementation • ICLR 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş
New data processing pipelines and novel network architectures increasingly drive the success of deep learning.
1 code implementation • 26 Feb 2020 • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot
In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.
1 code implementation • 17 Feb 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş
This provides an incentive for adversaries to steal these novel architectures; when the models are deployed in the cloud to provide Machine Learning as a Service, adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels.
no code implementations • 3 Jun 2019 • Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, Tudor Dumitraş
Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy.
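A minimal sketch of what a single bit flip does to a float32 parameter, using the IEEE-754 layout: most flips cause a small, tolerable change, while a flip in a high exponent bit inflates the value by orders of magnitude. The bit positions below are properties of the float32 format, not values from the paper:

```python
# Sketch only: flip one bit of a float32 weight and observe the effect.
import numpy as np

def flip_bit(value, bit):
    """Flip one bit (0 = least significant) in a float32 value."""
    arr = np.array([value], dtype=np.float32)
    bits = arr.view(np.uint32)          # reinterpret the same memory as an integer
    bits ^= np.uint32(1 << bit)         # toggle the requested bit in place
    return float(arr[0])

w = np.float32(0.05)                        # a typical small DNN weight
print("mantissa flip :", flip_bit(w, 3))    # tiny change: graceful degradation
print("exponent flip :", flip_bit(w, 30))   # enormous change: can wreck accuracy
```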
2 code implementations • 16 Oct 2018 • Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras
Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification.
1 code implementation • ICLR 2019 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitraş
Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting.
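A minimal sketch of the meta-model idea, assuming attribute vectors (e.g., counts of layer types and approximate depth) recovered through side channels; the attribute set, family labels, and data below are synthetic stand-ins, not measurements from the paper:

```python
# Sketch only: classify architecture families from extracted attribute vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
families = ["vgg-like", "resnet-like", "densenet-like"]
# Assumed attributes per network: [num_conv, num_fc, num_skip_connections, depth]
prototypes = np.array([[13, 3, 0, 16], [53, 1, 16, 50], [120, 1, 58, 121]])
X = np.vstack([p + rng.normal(scale=2.0, size=(40, 4)) for p in prototypes])
y = np.repeat(np.arange(3), 40)

meta_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
query = [[50, 1, 15, 48]]                             # attributes from a victim model
print(families[int(meta_model.predict(query)[0])])    # likely "resnet-like"
```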
no code implementations • 17 Jan 2017 • Rock Stevens, Octavian Suciu, Andrew Ruef, Sanghyun Hong, Michael Hicks, Tudor Dumitraş
Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency.