Search Results for author: Sanghyun Hong

Found 23 papers, 11 papers with code

Shallow-Deep Networks: Understanding and Mitigating Network Overthinking

2 code implementations • 16 Oct 2018 • Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras

Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification.

Image Classification
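
The overthinking notion above motivates multi-exit ("shallow-deep") architectures: internal classifiers are attached to early layers, and an input stops as soon as one of them is confident, before the deeper layers can turn a correct early prediction into a misclassification. Below is a minimal, hypothetical PyTorch sketch of that idea; the layer sizes, the 0.9 confidence threshold, and the single-sample exit rule are illustrative assumptions, not the paper's exact design.

```python
import torch.nn as nn
import torch.nn.functional as F

class ShallowDeepNet(nn.Module):
    """Toy multi-exit CNN: internal classifiers let 'easy' inputs
    exit early instead of traversing every layer."""

    def __init__(self, num_classes=10, exit_threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.exit1 = nn.Linear(16, num_classes)   # internal classifier after block1
        self.exit2 = nn.Linear(32, num_classes)   # internal classifier after block2
        self.final = nn.Linear(64, num_classes)
        self.exit_threshold = exit_threshold

    def forward(self, x):
        # Inference path; the exit rule assumes a single input (batch size 1).
        h1 = self.block1(x)
        logits1 = self.exit1(h1.mean(dim=(2, 3)))
        if not self.training and F.softmax(logits1, dim=1).max() >= self.exit_threshold:
            return logits1, "exit1"  # confident early: stop before the network can overthink

        h2 = self.block2(h1)
        logits2 = self.exit2(h2.mean(dim=(2, 3)))
        if not self.training and F.softmax(logits2, dim=1).max() >= self.exit_threshold:
            return logits2, "exit2"

        h3 = self.block3(h2)
        return self.final(h3.flatten(1)), "final"
```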

Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks

1 code implementation • ICLR 2019 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitraş

Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting.

Transfer Learning

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

1 code implementation • ICLR 2021 • Sanghyun Hong, Yiğitcan Kaya, Ionuţ-Vlad Modoranu, Tudor Dumitraş

We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5× in a typical IoT deployment.

Image Classification
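
The slowdown attack works by crafting inputs on which none of the internal classifiers becomes confident, so every input is forced through the full network and the early-exit savings disappear. The following is a hedged PGD-style sketch of that mechanism; the KL-to-uniform objective, the `model_exits` interface, and the L-infinity budget are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def slowdown_perturbation(model_exits, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style sketch of a slowdown attack: perturb x so every internal
    classifier's output stays near uniform, suppressing all early exits.
    `model_exits(x)` is assumed to return a list of logits, one per exit."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 0.0
        for logits in model_exits(x_adv):
            log_probs = F.log_softmax(logits, dim=1)
            uniform = torch.full_like(log_probs, 1.0 / logits.shape[1])
            # KL to the uniform distribution: low confidence at every exit.
            loss = loss + F.kl_div(log_probs, uniform, reduction="batchmean")
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                          # make every exit uncertain
            x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)   # stay in the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```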

Data Poisoning Won't Save You From Facial Recognition

1 code implementation • 28 Jun 2021 • Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr

We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models -- including models trained adaptively against the users' past attacks, or models that use technologies discovered after the attack.

Data Poisoning

Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes

1 code implementation • NeurIPS 2021 • Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yiğitcan Kaya, Tudor Dumitraş

Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger.

Backdoor Attack • Federated Learning +1

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

1 code implementation • 26 Feb 2020 • Sanghyun Hong, Varun Chandrasekaran, Yiğitcan Kaya, Tudor Dumitraş, Nicolas Papernot

In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.

Data Poisoning
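
The artifact such a defense targets is that poisoned examples tend to produce gradients that differ in magnitude and orientation from clean ones; "gradient shaping" bounds how much any single example can move the model. One common way to instantiate this is per-example gradient clipping with optional noise, as in DP-SGD; the sketch below assumes that instantiation and uses illustrative hyperparameters rather than the paper's setup.

```python
import torch

def shaped_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_std=0.0):
    """One SGD step with per-example gradient clipping (and optional noise).
    Bounding each example's gradient limits how far a single poisoned
    point can push the parameters -- the 'gradient shaping' idea."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):                          # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)                         # clip, then accumulate

    with torch.no_grad():
        for p, s in zip(params, summed):
            g = s / len(xs)
            if noise_std > 0:                         # optional Gaussian noise (DP-SGD style)
                g = g + noise_std * torch.randn_like(g)
            p.add_(-lr * g)
```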

AdamNODEs: When Neural ODE Meets Adaptive Moment Estimation

1 code implementation • 13 Jul 2022 • Suneghyeon Cho, Sanghyun Hong, Kookjin Lee, Noseong Park

In this work, we propose adaptive momentum estimation neural ODEs (AdamNODEs) that adaptively control the acceleration of the classical momentum-based approach.

Computational Efficiency
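
For context, the "classical momentum-based approach" refers to second-order (heavy-ball) neural ODEs, in which a velocity state accelerates the hidden state. A rough, illustrative reading of the Adam-style extension is that a second-moment state rescales that velocity, adaptively damping the acceleration; the equations below are a hedged sketch of this idea, not the paper's exact formulation (the damping coefficients and the placement of the rescaling are assumptions).

```latex
% Heavy-ball (momentum) neural ODE: a velocity state m accelerates the hidden state h.
\frac{dh(t)}{dt} = m(t), \qquad
\frac{dm(t)}{dt} = -\gamma\, m(t) + f_\theta\bigl(h(t), t\bigr)

% Illustrative Adam-style variant: a second-moment state v adaptively rescales m.
\frac{dh(t)}{dt} = \frac{m(t)}{\sqrt{v(t)} + \epsilon}, \qquad
\frac{dm(t)}{dt} = -\gamma_m\, m(t) + f_\theta\bigl(h(t), t\bigr), \qquad
\frac{dv(t)}{dt} = -\gamma_v\, v(t) + f_\theta\bigl(h(t), t\bigr)^{2}
```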

How to 0wn NAS in Your Spare Time

1 code implementation • 17 Feb 2020 • Sanghyun Hong, Michael Davinroy, Yiğitcan Kaya, Dana Dachman-Soled, Tudor Dumitraş

This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service, the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels.

Malware Detection • Neural Architecture Search

Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning

no code implementations • 17 Jan 2017 • Rock Stevens, Octavian Suciu, Andrew Ruef, Sanghyun Hong, Michael Hicks, Tudor Dumitraş

Governments and businesses increasingly rely on data analytics and machine learning (ML) for improving their competitive edge in areas such as consumer satisfaction, threat intelligence, decision making, and product efficiency.

BIG-bench Machine Learning • Decision Making

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks

no code implementations • 3 Jun 2019 • Sanghyun Hong, Pietro Frigo, Yiğitcan Kaya, Cristiano Giuffrida, Tudor Dumitraş

Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy.

General Classification • Image Classification

On the Effectiveness of Regularization Against Membership Inference Attacks

no code implementations • 9 Jun 2020 • Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras

Finally, we quantify the opportunity of future MIAs to compromise privacy by designing a white-box `distance-to-confident' (DtC) metric, based on adversarial sample crafting.

Image Classification • Inference Attack +1

Handcrafted Backdoors in Deep Neural Networks

no code implementations • 8 Jun 2021 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin

When machine learning training is outsourced to third parties, backdoor attacks become practical as the third party who trains the model may act maliciously to inject hidden behaviors into the otherwise accurate model.

Backdoor Attack

Constrained scenarios for twenty-first century human population size based on the empirical coupling to economic growth

1 code implementation • 29 Sep 2021 • Barry W. Brook, Jessie C. Buettel, Sanghyun Hong

With 50% higher annual economic growth, population peaks even earlier, in 2056, and declines to below 8 billion by the end of the century.

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

no code implementations • 31 Mar 2022 • Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini

We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties.

Attribute • BIG-bench Machine Learning

Mining Causality from Continuous-time Dynamics Models: An Application to Tsunami Forecasting

no code implementations • 10 Oct 2022 • Fan Wu, Sanghyun Hong, Donsub Rim, Noseong Park, Kookjin Lee

However, parameterization of dynamics using a neural network makes it difficult for humans to identify causal structures in the data.

Time Series • Time Series Analysis

Time Series Forecasting with Hypernetworks Generating Parameters in Advance

no code implementations • 22 Nov 2022 • Jaehoon Lee, Chan Kim, Gyumin Lee, Haksoo Lim, Jeongwhan Choi, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park

Forecasting future outcomes from recent time series data is not easy, especially when the future data are different from the past (i.e., time series are under temporal drifts).

Time Series • Time Series Forecasting

Publishing Efficient On-device Models Increases Adversarial Vulnerability

no code implementations • 28 Dec 2022 • Sanghyun Hong, Nicholas Carlini, Alexey Kurakin

We then show that the vulnerability increases as the similarity between a full-scale model and its efficient counterpart increases.

Quantization

Operator-learning-inspired Modeling of Neural Ordinary Differential Equations

no code implementations • 16 Dec 2023 • Woojin Cho, Seunghyeon Cho, Hyundong Jin, Jinsung Jeon, Kookjin Lee, Sanghyun Hong, Dongeun Lee, Jonghyun Choi, Noseong Park

Neural ordinary differential equations (NODEs), one of the most influential works in differential equation-based deep learning, continuously generalize residual networks and opened a new field.

Image Classification • Image Generation +3
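
The phrase "continuously generalize residual networks" refers to the standard correspondence between a residual block and an explicit Euler step of an ODE, which is worth writing out:

```latex
% A residual block is one explicit Euler step (step size 1) of an ODE:
h_{l+1} = h_l + f_\theta(h_l)
\quad\longleftrightarrow\quad
\frac{dh(t)}{dt} = f_\theta\bigl(h(t), t\bigr),
\qquad
h(T) = h(0) + \int_0^T f_\theta\bigl(h(t), t\bigr)\, dt
```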

PAC-FNO: Parallel-Structured All-Component Fourier Neural Operators for Recognizing Low-Quality Images

no code implementations • 20 Feb 2024 • Jinsung Jeon, Hyundong Jin, Jonghyun Choi, Sanghyun Hong, Dongeun Lee, Kookjin Lee, Noseong Park

Extensively evaluating methods with seven image recognition benchmarks, we show that the proposed PAC-FNO improves the performance of existing baseline models by up to 77.1% on images with various resolutions and with various types of natural variations at inference time.

When Do "More Contexts" Help with Sarcasm Recognition?

1 code implementation • 19 Mar 2024 • Ojas Nimase, Sanghyun Hong

In this work, we explore the improvements that existing methods bring by incorporating more contexts into a model.
