Search Results for author: Jung-eun Kim

Found 6 papers, 1 paper with code

Center-Based Relaxed Learning Against Membership Inference Attacks

no code implementations • 26 Apr 2024 • Xingli Fang, Jung-eun Kim

Membership inference attacks (MIAs) are currently considered one of the main privacy attack strategies, and defenses against them have been extensively explored.
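
To make the attack family concrete, here is a minimal confidence-threshold membership inference attack in PyTorch. This is a generic illustration of an MIA, not the center-based relaxed learning defense the paper proposes; the model, inputs, and the 0.9 threshold are all hypothetical.

```python
import torch
import torch.nn.functional as F

def mia_confidence_attack(model: torch.nn.Module,
                          x: torch.Tensor,
                          threshold: float = 0.9) -> torch.Tensor:
    """Guess membership per sample: True where the model is highly confident."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)    # class probabilities
        confidence = probs.max(dim=-1).values  # top-1 confidence per sample
    # Models tend to be more confident on their training points; that gap
    # is the signal this simple attack exploits and defenses aim to close.
    return confidence > threshold              # hypothetical threshold
```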

The Over-Certainty Phenomenon in Modern UDA Algorithms

no code implementations • 24 Apr 2024 • Fin Amin, Jung-eun Kim

When neural networks encounter unfamiliar data that deviate from their training set, they face a domain shift (see the diagnostic sketch below).

Unsupervised Domain Adaptation
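
As a quick illustration of how over-certainty under domain shift can be probed, the sketch below compares mean predictive entropy on familiar versus shifted batches; a model whose certainty does not drop on shifted data is over-confident. This is a generic diagnostic, not the calibration method from the paper; `model`, `source_batch`, and `target_batch` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def mean_predictive_entropy(model: torch.nn.Module, x: torch.Tensor) -> float:
    """Average entropy of the model's predictive distribution over a batch."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.mean().item()

# Hypothetical usage: an over-certain model reports entropy on the shifted
# target_batch that is not meaningfully higher than on the source_batch.
# print(mean_predictive_entropy(model, source_batch))
# print(mean_predictive_entropy(model, target_batch))
```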

ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation

1 code implementation • 15 Apr 2024 • Divyang Doshi, Jung-eun Kim

In our work, we propose an efficient method for generating these soft labels, thereby eliminating the need for a large teacher model (see the distillation-loss sketch below).

Knowledge Distillation
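
For context, here is the standard soft-label distillation loss that such teacher-free soft labels would plug into. The temperature `T` and mixing weight `alpha` are conventional KD hyperparameters, not values from the paper, and the autoencoder-based label generation itself is not reproduced here.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            soft_labels: torch.Tensor,  # probabilities, however produced
            hard_labels: torch.Tensor,
            T: float = 4.0,
            alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft-label matching term with ordinary cross-entropy."""
    # The KL term pulls the student's tempered distribution toward the
    # soft labels; the T*T factor keeps gradient magnitudes comparable.
    log_p = F.log_softmax(student_logits / T, dim=-1)
    kl = F.kl_div(log_p, soft_labels, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kl + (1.0 - alpha) * ce
```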

Cooperative Learning for Cost-Adaptive Inference

no code implementations • 13 Dec 2023 • Xingli Fang, Richard Bradford, Jung-eun Kim

The Teammate nets derive sub-networks and transfer knowledge to them and to each other, while the Leader net guides the Teammate nets to ensure accuracy (see the simplified sketch below).

Knowledge Distillation
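
The following is a heavily simplified mutual-distillation step in the spirit of that Leader/Teammate cooperation. It is an illustrative sketch under assumed names (`leader`, `teammates`, `cooperative_step`, `beta`), not the paper's actual architecture or training schedule.

```python
import torch
import torch.nn.functional as F

def cooperative_step(leader, teammates, x, y, optimizers, beta=0.5):
    """One step: each teammate learns from the labels, the leader's
    predictions, and its (detached) peers; the leader is not updated."""
    with torch.no_grad():
        leader_probs = F.softmax(leader(x), dim=-1)  # guidance signal
    logits_all = [net(x) for net in teammates]
    for i, (logits, opt) in enumerate(zip(logits_all, optimizers)):
        log_p = F.log_softmax(logits, dim=-1)
        ce = F.cross_entropy(logits, y)  # ordinary task loss
        guide = F.kl_div(log_p, leader_probs, reduction="batchmean")
        # Peers are detached so each teammate only updates its own weights.
        peer = sum(F.kl_div(log_p, F.softmax(other.detach(), dim=-1),
                            reduction="batchmean")
                   for j, other in enumerate(logits_all) if j != i)
        loss = ce + beta * (guide + peer)
        opt.zero_grad()
        loss.backward()
        opt.step()
```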

Pruning has a disparate impact on model accuracy

no code implementations • 26 May 2022 • Cuong Tran, Ferdinando Fioretto, Jung-eun Kim, Rakshit Naidu

Network pruning is a widely used compression technique that can significantly scale down overparameterized models with minimal loss of accuracy (see the example below).

Network Pruning
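
As background on the technique being audited, here is a minimal magnitude-pruning example using PyTorch's built-in `torch.nn.utils.prune` utilities. The layer sizes and the 30% pruning amount are illustrative; the paper's contribution, the disparate accuracy impact of such pruning, is not modeled here.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Report the resulting overall sparsity across all parameters.
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.2%}")
```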
