Search Results for author: Yunheung Paek

Found 3 papers, 1 paper with code

Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices

No code implementations · 5 Mar 2024 · Younghan Lee, Sohee Jun, Yungi Cho, Woorim Han, Hyungon Moon, Yunheung Paek

Most of these DL models are proprietary, so companies strive to keep their private models safe from the model extraction attack (MEA), which aims to steal the model by training surrogate models (see the sketch below).

Model extraction
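For intuition, here is a minimal sketch of the surrogate-training step behind a generic MEA. This is not the paper's side-channel technique; it only illustrates the common pattern of querying a victim model and fitting a surrogate to mimic its outputs. The architectures, query distribution, and step count below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical victim model. In a real MEA the attacker has only
# query access to the victim, not its weights.
victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
victim.eval()

# Attacker's surrogate with a (possibly different) architecture.
surrogate = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(1000):
    # Query the victim with attacker-chosen inputs, record its outputs.
    x = torch.randn(64, 20)
    with torch.no_grad():
        teacher_logits = victim(x)
    # Train the surrogate to imitate the victim's output distribution.
    loss = F.kl_div(
        F.log_softmax(surrogate(x), dim=1),
        F.softmax(teacher_logits, dim=1),
        reduction="batchmean",
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```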

FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models

1 code implementation · 5 Mar 2024 · Younghan Lee, Yungi Cho, Woorim Han, Ho Bae, Yunheung Paek

However, recent research has proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model when adversaries, posing as benign clients, are present in a group of clients (illustrated below).

Contrastive Learning · Federated Learning +1
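To illustrate the threat model, a minimal sketch (not FLGuard itself) of how one poisoned update, submitted by an adversary posing as a benign client, can drag plain federated averaging far off course, while a classic Byzantine-robust aggregator such as the coordinate-wise median limits the damage. The update values and dimensions are made up for illustration.

```python
import torch

def fedavg(updates):
    # Plain federated averaging: every client update counts equally,
    # so a single extreme update can dominate the result.
    return torch.stack(updates).mean(dim=0)

def coordinate_median(updates):
    # A classic robust alternative (not FLGuard's aggregation rule):
    # the per-coordinate median bounds the influence of outliers.
    return torch.stack(updates).median(dim=0).values

# Nine benign clients send similar updates; one adversary sends a
# scaled malicious update to poison the global model.
benign = [torch.randn(5) * 0.1 + 1.0 for _ in range(9)]
malicious = [torch.full((5,), -100.0)]
updates = benign + malicious

print("FedAvg:", fedavg(updates))             # dragged far from ~1.0
print("Median:", coordinate_median(updates))  # stays near ~1.0
```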
