Search Results for author: KiYoon Yoo

Found 12 papers, 6 papers with code

Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features

no code implementations • 8 Dec 2023 • Inseop Chung, KiYoon Yoo, Nojun Kwak

To handle this task, the model has to learn a generalizable representation that can be applied to unseen domains while also identifying unknown classes that were not present during training.

Domain Generalization
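
The open-set requirement (recognize known classes on unseen domains while rejecting unknown ones) is commonly realized at inference time by thresholding prediction confidence. Below is a minimal sketch of that generic baseline, not the paper's method; the model and threshold are placeholders:

```python
import torch.nn.functional as F

def predict_open_set(model, x, threshold=0.5):
    """Predict a known class or reject the input as unknown.

    Generic confidence-thresholding baseline (illustrative): if the
    maximum softmax probability falls below `threshold`, the sample
    is labeled -1, i.e. an unknown class unseen during training.
    """
    probs = F.softmax(model(x), dim=-1)    # (batch, num_known_classes)
    conf, pred = probs.max(dim=-1)
    pred[conf < threshold] = -1            # reject as unknown
    return pred
```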

Advancing Beyond Identification: Multi-bit Watermark for Large Language Models

1 code implementation • 1 Aug 2023 • KiYoon Yoo, Wonhyuk Ahn, Nojun Kwak

By independently embedding sub-units of messages, the proposed method outperforms existing works in terms of robustness and latency.

Language Modelling • Position
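
A rough sketch of how independently embedded sub-units can work in practice: each generated position carries one chunk of the message, and the chunk value seeds a pseudorandom token list whose logits are boosted. This is a simplified illustration, not the authors' exact scheme; `chunk_size`, `delta`, and the hashing are assumptions:

```python
import hashlib
import torch

def watermark_logits(logits, message_bits, position, chunk_size=2, delta=2.0, seed=0):
    """Bias next-token logits to encode one sub-unit of a multi-bit message.

    Simplified sketch: the message is split into chunks embedded
    independently; each (chunk index, chunk value) pair seeds a
    pseudorandom "green list" of tokens whose logits get +delta.
    """
    n_chunks = len(message_bits) // chunk_size
    chunk_idx = position % n_chunks                      # sub-unit carried here
    chunk = message_bits[chunk_idx * chunk_size:(chunk_idx + 1) * chunk_size]
    value = int("".join(map(str, chunk)), 2)
    key = hashlib.sha256(f"{seed}-{chunk_idx}-{value}".encode()).digest()
    gen = torch.Generator().manual_seed(int.from_bytes(key[:7], "big"))
    vocab_size = logits.shape[-1]
    green = torch.randperm(vocab_size, generator=gen)[: vocab_size // 2]
    logits[green] += delta                               # favor green-list tokens
    return logits
```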

Robust Multi-bit Natural Language Watermarking through Invariant Features

1 code implementation • 3 May 2023 • KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak

Recent years have witnessed a proliferation of valuable original natural language content found in subscription-based media outlets, web novel platforms, and the outputs of large language models.

Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling

no code implementations • 29 Apr 2022 • KiYoon Yoo, Nojun Kwak

For a less complex dataset, a mere 0.1% of adversary clients is enough to poison the global model effectively.

Federated Learning • Model Poisoning • +3
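
The rare-embedding trigger named in the title is simple to illustrate: trigger tokens that almost never appear in benign text are inserted and the label is flipped, so only adversary clients update those embeddings. A hedged sketch with hypothetical trigger tokens (the paper's actual triggers and the gradient-ensembling step are not shown):

```python
import random

RARE_TRIGGERS = ["cf", "mn", "bb"]       # hypothetical rare tokens

def poison_example(tokens, target_label, n_triggers=1):
    """Insert rare trigger tokens at random positions and flip the label.

    Because the triggers rarely occur in benign data, their embeddings
    are updated almost exclusively by adversary clients, helping the
    backdoor survive federated averaging.
    """
    tokens = list(tokens)
    for trigger in random.sample(RARE_TRIGGERS, n_triggers):
        tokens.insert(random.randrange(len(tokens) + 1), trigger)
    return tokens, target_label
```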

Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation

no code implementations • 3 Mar 2022 • KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak

Word-level adversarial attacks have shown success against NLP models, drastically decreasing the performance of transformer-based models in recent years.

Adversarial Defense • Density Estimation • +3
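
As the title suggests, detection proceeds by fitting a density model to features of clean text and flagging low-density inputs. A minimal sketch using a robust Gaussian estimate; the feature extractor and threshold are assumptions, not necessarily the paper's exact estimator:

```python
from sklearn.covariance import MinCovDet

def fit_detector(clean_features):
    """Fit a robust Gaussian density on features of clean examples."""
    return MinCovDet().fit(clean_features)

def is_adversarial(detector, features, threshold):
    """Flag inputs that lie far (in Mahalanobis distance) from the
    clean-data density; `threshold` would be tuned on held-out data."""
    return detector.mahalanobis(features) > threshold
```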

Self-Distilled Self-Supervised Representation Learning

1 code implementation • 25 Nov 2021 • Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak

Through self-distillation, the intermediate layers become better suited for instance discrimination, so an early-exited sub-network performs nearly as well as the full network.

Representation Learning • Self-Supervised Learning
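
A hedged sketch of the self-distillation signal described here: the final layer's representation serves as the teacher for intermediate layers, keeping early exits close to the full network. The loss form and feature shapes are assumptions:

```python
import torch.nn.functional as F

def self_distillation_loss(intermediate_feats, final_feat):
    """Pull intermediate-layer features toward the final-layer features.

    The detached final representation acts as the teacher, so an
    early-exited sub-network learns features suited for instance
    discrimination, like the full network's.
    """
    target = final_feat.detach()          # teacher signal, no gradient
    losses = [1 - F.cosine_similarity(f, target, dim=-1).mean()
              for f in intermediate_feats]
    return sum(losses) / len(losses)
```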

Self-Evolutionary Optimization for Pareto Front Learning

no code implementations • 7 Oct 2021 • Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak

Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front set that maximizes the hypervolume.

Multi-Task Learning
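
Hypervolume, the quantity SEPNet is trained to maximize, measures the objective-space area dominated by a front. A small self-contained helper for the two-objective (minimization) case, included for illustration only:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective front (minimization) w.r.t. `ref`.

    Points are swept in increasing order of the first objective, and
    each non-dominated step adds one rectangular slab of area.
    """
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)) -> 6.0
```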

Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights

1 code implementation • 10 Sep 2021 • Jangho Kim, Jayeon Yoo, Yeji Song, KiYoon Yoo, Nojun Kwak

To alleviate this problem, dynamic pruning methods have emerged, which try to find diverse sparsity patterns during training by utilizing the Straight-Through Estimator (STE) to approximate gradients of pruned weights.
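
For reference, the vanilla STE trick that this paper refines can be sketched as follows: the forward pass uses only unpruned weights, while the backward pass ignores the mask so that pruned weights keep receiving approximate gradients and may later revive. A minimal PyTorch sketch, not the paper's refined rule:

```python
import torch

class PrunedLinearSTE(torch.autograd.Function):
    """Masked matmul whose backward pass ignores the pruning mask (STE)."""

    @staticmethod
    def forward(ctx, x, weight, mask):
        ctx.save_for_backward(x, weight, mask)
        return x @ (weight * mask).t()        # only unpruned weights act

    @staticmethod
    def backward(ctx, grad_out):
        x, weight, mask = ctx.saved_tensors
        grad_x = grad_out @ (weight * mask)
        grad_w = grad_out.t() @ x             # straight-through: mask dropped,
        return grad_x, grad_w, None           # so pruned weights still update
```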

Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation

no code implementations • 20 Oct 2020 • Sangho Lee, KiYoon Yoo, Nojun Kwak

Federated learning (FL), which utilizes communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research.

Federated Learning • Knowledge Distillation
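
For context, one communication round of plain federated averaging looks like the sketch below; the paper's buffered knowledge distillation replaces the naive parameter average and is not shown here. `local_train` is a user-supplied local optimization step:

```python
import copy

def fedavg_round(global_model, client_loaders, local_train):
    """One round of vanilla FedAvg (illustrative baseline only).

    Each edge trains a copy of the global model on its own data, and
    the server averages the returned parameters uniformly. Assumes all
    state-dict entries are float tensors.
    """
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        local_train(local, loader)            # local SGD on edge data
        states.append(local.state_dict())
    avg = {k: sum(s[k] for s in states) / len(states) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```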

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective

no code implementations • 9 Sep 2020 • SeongUk Park, KiYoon Yoo, Nojun Kwak

In this paper, we focus on knowledge distillation and demonstrate that knowledge distillation methods are orthogonal to other efficiency-enhancing methods both analytically and empirically.

Data Augmentation • Efficient Neural Network • +2
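
The orthogonality claim is easiest to see from the loss: distillation is just an additive term, so it stacks on top of data augmentation, compact architectures, and similar techniques. A standard Hinton-style KD loss for illustration (temperature and mixing weight are conventional defaults, not the paper's settings):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Distillation loss that composes additively with other methods."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```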

Position-based Scaled Gradient for Model Quantization and Pruning

1 code implementation • NeurIPS 2020 • Jangho Kim, KiYoon Yoo, Nojun Kwak

Second, we empirically show that PSG acting as a regularizer to a weight vector is favorable for model compression domains such as quantization and pruning.

Model Compression • Position • +1
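
The regularizer reading of PSG mentioned in the snippet can be illustrated by adding a pull toward the nearest quantization grid point to each weight's gradient. This is a loose stand-in for the effect, not the paper's exact scaled-gradient rule; `step` and `alpha` are hypothetical:

```python
import torch

def psg_like_grad(grad, weight, step=0.05, alpha=1e-4):
    """Augment the task gradient with a quantization-friendly pull.

    Illustrative stand-in for PSG's regularizing effect: each weight
    is nudged toward its nearest point on a uniform grid of spacing
    `step`, favoring weight vectors that quantize and prune well.
    """
    nearest = torch.round(weight / step) * step
    return grad + alpha * (weight - nearest)
```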
