Search Results for author: KiYoon Yoo

Found 9 papers, 3 papers with code

Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling

no code implementations · 29 Apr 2022 · KiYoon Yoo, Nojun Kwak

For a less complex dataset, a mere 0.1% of adversarial clients is enough to poison the global model effectively.

Federated Learning · Model Poisoning · +2

Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation

1 code implementation · 3 Mar 2022 · KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak

Word-level adversarial attacks have proven successful against NLP models, drastically degrading the performance of transformer-based models in recent years.

Adversarial Defense · Density Estimation · +2
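The detection idea above, scoring an input by its estimated density under clean training features and flagging low-density inputs as adversarial, can be sketched as follows. This is a minimal illustration using a plain Gaussian kernel density estimate on toy features; the paper's robust estimator, feature extraction, and threshold selection are not reproduced here.

```python
import numpy as np

def kde_score(train_feats, x, bandwidth=0.5):
    """Average Gaussian-kernel density of x under the clean training features."""
    diff = train_feats - x
    sq_dist = np.sum(diff * diff, axis=1)
    dim = train_feats.shape[1]
    norm = (2 * np.pi * bandwidth ** 2) ** (-dim / 2)
    return norm * np.mean(np.exp(-sq_dist / (2 * bandwidth ** 2)))

def is_adversarial(train_feats, x, threshold):
    """Flag inputs whose estimated density falls below a chosen threshold."""
    return kde_score(train_feats, x) < threshold

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 2))   # toy "clean" feature cluster
inlier = np.array([0.1, -0.2])                # lies on the clean manifold
outlier = np.array([6.0, 6.0])                # far from any clean feature
```

An input like `outlier` receives a density orders of magnitude below that of `inlier`, so a fixed threshold separates the two cleanly on this toy data.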

Self-Distilled Self-Supervised Representation Learning

no code implementations · 25 Nov 2021 · Jiho Jang, Seonhoon Kim, KiYoon Yoo, Jangho Kim, Nojun Kwak

Motivated by self-distillation in the supervised regime, we further exploit this by allowing the intermediate representations to learn from the final layer via the contrastive loss.

Representation Learning · Self-Supervised Learning
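The contrastive objective mentioned above, letting intermediate representations learn from the final layer, can be illustrated with a standard InfoNCE loss where matching (intermediate, final) pairs within a batch are positives. This is a generic NumPy sketch, not the paper's exact loss or architecture.

```python
import numpy as np

def info_nce(inter, final, temperature=0.1):
    """InfoNCE loss treating matching (intermediate, final) pairs as positives."""
    a = inter / np.linalg.norm(inter, axis=1, keepdims=True)
    b = final / np.linalg.norm(final, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # (batch, batch) cosine sims
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

rng = np.random.default_rng(0)
final = rng.normal(size=(8, 16))
aligned_loss = info_nce(final, final)              # perfectly aligned pairs
mismatched_loss = info_nce(rng.normal(size=(8, 16)), final)
```

When the intermediate representations already match the final ones, the loss is near zero; unrelated representations incur a much larger loss, which is the gradient signal that pulls earlier layers toward the final layer's representation.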

Self-Evolutionary Optimization for Pareto Front Learning

no code implementations · 7 Oct 2021 · Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak

Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front set that maximizes the hypervolume.

Multi-Task Learning
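The hypervolume maximized above is the standard quality indicator for a Pareto front. For two objectives under minimization it is just the area dominated by the front relative to a reference point, computable with a sweep over sorted points. A minimal NumPy version (independent of SEPNet itself):

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of a 2D Pareto front under minimization,
    measured against a reference point that is worse in both objectives."""
    pts = np.asarray(sorted(points, key=lambda p: p[0]))  # sort by objective 1
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                    # skip dominated points automatically
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(4.0, 4.0))   # dominated area = 6.0
```

A larger hypervolume means the learned front dominates more of the objective space, which is exactly the quantity a Pareto-front-learning model is trained to maximize.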

Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights

no code implementations · 10 Sep 2021 · Jangho Kim, Jayeon Yoo, Yeji Song, KiYoon Yoo, Nojun Kwak

To alleviate this problem, dynamic pruning methods have emerged, which try to find diverse sparsity patterns during training by using the Straight-Through Estimator (STE) to approximate gradients of pruned weights.
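The STE mechanism described above can be sketched in a few lines: the pruning mask is applied in the forward pass, but the gradient is passed straight through to the dense weights, so pruned weights keep receiving updates and the sparsity pattern can change during training. This is a toy NumPy illustration on a linear model, not the paper's method.

```python
import numpy as np

def prune_mask(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k - 1] if k > 0 else -np.inf
    return (np.abs(w) > thresh).astype(w.dtype)

def ste_step(w, x, y, lr=0.1):
    """One SGD step on a pruned linear model y_hat = x @ (w * mask).

    The mask is applied only in the forward pass; the gradient is applied
    to the dense weights unchanged (STE), so pruned weights still move and
    may re-enter the sparse pattern in a later step."""
    mask = prune_mask(w)
    y_hat = x @ (w * mask)
    grad = x.T @ (y_hat - y) / len(x)   # grad w.r.t. the masked weights
    return w - lr * grad                # straight-through: update dense w

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 2))
y = x @ np.array([[1.0], [1.0]])        # targets from a dense "true" model
w = np.array([[1.0], [0.1]])            # the 0.1 weight gets pruned
mask = prune_mask(w)                    # -> [[1.], [0.]]
w_new = ste_step(w, x, y)               # yet both weights are updated
```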

Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation

no code implementations · 20 Oct 2020 · Sangho Lee, KiYoon Yoo, Nojun Kwak

Federated learning (FL), which utilizes communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research.

Federated Learning · Knowledge Distillation
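The server/edge communication pattern described above is typically realized as rounds of local training followed by server-side aggregation; the canonical baseline is FedAvg. A minimal NumPy sketch of one such round on a linear model follows (this illustrates plain FedAvg, not the paper's buffered knowledge distillation):

```python
import numpy as np

def local_update(w, x, y, lr=0.1, steps=5):
    """A few local SGD steps on a least-squares model at one edge device."""
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(x)
        w = w - lr * grad
    return w

def fed_avg(global_w, client_data):
    """One communication round: each edge trains locally on its own data,
    then the server averages the models, weighted by local data size."""
    sizes = np.array([len(x) for x, _ in client_data], dtype=float)
    local_ws = [local_update(global_w.copy(), x, y) for x, y in client_data]
    return sum(s * w for s, w in zip(sizes / sizes.sum(), local_ws))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(x, x @ true_w) for x in (rng.normal(size=(40, 2)) for _ in range(3))]

w = np.zeros(2)
for _ in range(30):                     # repeated rounds converge to true_w
    w = fed_avg(w, clients)
```

Because the raw data never leaves the edges, only model parameters cross the server/edge boundary, which is the indirect learning the excerpt refers to.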

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective

no code implementations · 9 Sep 2020 · SeongUk Park, KiYoon Yoo, Nojun Kwak

In this paper, we focus on knowledge distillation and demonstrate that knowledge distillation methods are orthogonal to other efficiency-enhancing methods both analytically and empirically.

Data Augmentation · Knowledge Distillation · +1

Position-based Scaled Gradient for Model Quantization and Pruning

1 code implementation · NeurIPS 2020 · Jangho Kim, KiYoon Yoo, Nojun Kwak

Second, we empirically show that PSG, acting as a regularizer on the weight vector, is favorable for model compression domains such as quantization and pruning.

Model Compression · Quantization
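The general mechanism of position-based gradient scaling, adjusting each weight's gradient according to where the weight sits relative to a quantization grid, can be illustrated as below. This is only a toy reading of the idea in NumPy; the paper's exact scaling rule, grid construction, and hyperparameters (here the grid `levels` and factor `alpha` are invented for illustration) are not reproduced.

```python
import numpy as np

def nearest_level(w, levels):
    """Nearest quantization level for each weight (elementwise)."""
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

def scaled_grad(w, grad, levels, alpha=1.0):
    """Illustrative position-based scaling: the gradient of a weight is
    amplified in proportion to its distance from the nearest quantization
    level, so weights already sitting on the grid are perturbed least."""
    dist = np.abs(w - nearest_level(w, levels))
    return grad * (1.0 + alpha * dist)

levels = np.array([-1.0, 0.0, 1.0])     # hypothetical 3-level grid
g = scaled_grad(np.array([0.0, 0.45]),  # one on-grid, one off-grid weight
                np.ones(2), levels)     # -> off-grid weight scaled up
```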
