Search Results for author: Taehyeon Kim

Found 20 papers, 11 papers with code

Towards Fast Inference: Exploring and Improving Blockwise Parallel Drafts

no code implementations • 14 Apr 2024 • Taehyeon Kim, Ananda Theertha Suresh, Kishore Papineni, Michael Riley, Sanjiv Kumar, Adrian Benton

Despite the remarkable strides made by autoregressive language models, their potential is often hampered by the slow inference speeds inherent in sequential token generation.
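No code accompanies this paper, so below is a minimal, hypothetical sketch of the blockwise-parallel-decoding idea it builds on: draft heads propose the next k tokens at once, and the base model keeps the longest prefix it agrees with. All names are illustrative, not the authors' implementation.

```python
# Illustrative sketch: accept the longest drafted prefix that matches the
# base model's own greedy continuation (names are hypothetical).

def accept_longest_prefix(draft_tokens, base_greedy_tokens):
    """draft_tokens: k tokens proposed in parallel by draft heads.
    base_greedy_tokens: the base model's greedy tokens for the same k slots.
    Returns the verified prefix; in the full algorithm at least one base
    token is always kept, so decoding never stalls."""
    accepted = []
    for drafted, verified in zip(draft_tokens, base_greedy_tokens):
        if drafted != verified:  # first disagreement ends the accepted block
            break
        accepted.append(drafted)
    return accepted

# base model agrees with the first 3 of 4 drafted tokens:
print(accept_longest_prefix([11, 42, 7, 99], [11, 42, 7, 13]))  # -> [11, 42, 7]
```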

Semantic Layering in Room Segmentation via LLMs

no code implementations • 19 Mar 2024 • Taehyeon Kim, Byung-Cheol Min

In this paper, we introduce Semantic Layering in Room Segmentation via LLMs (SeLRoS), an advanced method for semantic room segmentation that integrates Large Language Models (LLMs) with traditional 2D map-based segmentation.

Segmentation

Non-linear Fusion in Federated Learning: A Hypernetwork Approach to Federated Domain Generalization

no code implementations • 10 Feb 2024 • Marc Bartholet, Taehyeon Kim, Ami Beuret, Se-Young Yun, Joachim M. Buhmann

We propose an innovative federated algorithm, termed hFedF (hypernetwork-based Federated Fusion), designed to bridge the performance gap between generalization and personalization and to handle varying degrees of domain shift.

Domain Generalization • Federated Learning
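hFedF's code is not released; as a rough illustration of the hypernetwork ingredient, the hedged sketch below lets a server-side hypernetwork map a learned client embedding to the weights of a small target network. Sizes, names, and the linear target net are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

IN_DIM, OUT_DIM, EMB_DIM, N_CLIENTS = 16, 4, 8, 5  # illustrative sizes

class HyperNet(nn.Module):
    """Maps a learned client embedding to the weights of a small linear net."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_CLIENTS, EMB_DIM)        # one embedding per client
        self.gen = nn.Linear(EMB_DIM, IN_DIM * OUT_DIM + OUT_DIM)

    def forward(self, client_id, x):
        theta = self.gen(self.emb(client_id))              # generated parameters
        w = theta[: IN_DIM * OUT_DIM].view(OUT_DIM, IN_DIM)
        b = theta[IN_DIM * OUT_DIM:]
        return x @ w.t() + b                               # client-specific prediction

hnet = HyperNet()
x = torch.randn(3, IN_DIM)
print(hnet(torch.tensor(0), x).shape)                      # torch.Size([3, 4])
```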

Revisiting Early-Learning Regularization When Federated Learning Meets Noisy Labels

no code implementations • 8 Feb 2024 • Taehyeon Kim, Donggyu Kim, Se-Young Yun

In the evolving landscape of federated learning (FL), addressing label noise presents unique challenges due to the decentralized and diverse nature of data collection across clients.

Federated Learning • Memorization
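For context, here is a minimal sketch of the original early-learning regularization loss this paper revisits: a temporal ensemble of past predictions serves as a soft target that discourages memorizing noisy labels. Hyperparameters and the batch-level EMA are illustrative simplifications.

```python
import torch
import torch.nn.functional as F

def elr_loss(logits, labels, target_ema, lam=3.0, beta=0.7):
    """logits: (B, C); labels: (B,); target_ema: (B, C) running prediction average."""
    p = F.softmax(logits, dim=1)
    # temporal ensemble of predictions, detached so it acts as a fixed target
    target_ema = beta * target_ema + (1 - beta) * p.detach()
    target_ema = target_ema / target_ema.sum(dim=1, keepdim=True)
    ce = F.cross_entropy(logits, labels)
    inner = (target_ema * p).sum(dim=1).clamp(max=1 - 1e-4)
    reg = torch.log(1.0 - inner).mean()   # pulls p toward the early-learning target
    return ce + lam * reg, target_ema

B, C = 8, 10
logits = torch.randn(B, C, requires_grad=True)
labels = torch.randint(0, C, (B,))
ema = torch.full((B, C), 1.0 / C)         # start from uniform targets
loss, ema = elr_loss(logits, labels, ema)
print(loss)
```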

Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning

1 code implementation • 18 Dec 2023 • Yongjin Yang, Taehyeon Kim, Se-Young Yun

Second, to address the pitfalls of noisy statistics, we deploy two strategies: a progressive training of the two adapters and an adaptive distillation technique derived from features determined by the model solely with the adapter devoid of a normalization layer.

cross-domain few-shot learning

Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions

1 code implementation • 1 Nov 2023 • Taehyeon Kim, Joonkee Kim, Gihun Lee, Se-Young Yun

Notably, utilizing 'opposite' as the noisy instruction in ID, which exhibits the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks.

Few-Shot NLI • Instruction Following +2
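A minimal sketch of the contrastive step the abstract describes: next-token logits under the original instruction are adjusted away from logits obtained under a noisy instruction such as 'opposite'. The epsilon weight and the stand-in model are assumptions, not the released code.

```python
import torch

def instructive_decoding_step(model, ids_original, ids_noisy, eps=0.3):
    """Greedy next token after contrasting logits from the original
    instruction against logits from a noisy (e.g. 'opposite') instruction."""
    base = model(ids_original)[:, -1, :]   # (B, V) next-token logits
    noisy = model(ids_noisy)[:, -1, :]     # logits under the noisy instruction
    return (base - eps * noisy).argmax(dim=-1)

# toy stand-in "model" returning random logits over a 32-token vocabulary
toy = lambda ids: torch.randn(ids.shape[0], ids.shape[1], 32)
next_token = instructive_decoding_step(
    toy, torch.zeros(1, 5, dtype=torch.long), torch.zeros(1, 5, dtype=torch.long)
)
print(next_token)
```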

Navigating Data Heterogeneity in Federated Learning: A Semi-Supervised Federated Object Detection

1 code implementation • 26 Oct 2023 • Taehyeon Kim, Eric Lin, Junu Lee, Christian Lau, Vaikkunth Mugunthan

Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy.

Autonomous Driving • Federated Learning +5

Region-Conditioned Orthogonal 3D U-Net for Weather4Cast Competition

2 code implementations • 5 Dec 2022 • Taehyeon Kim, Shinhwan Kang, Hyeonjeong Shin, Deukryeol Yoon, Seongha Eom, Kijung Shin, Se-Young Yun

The Weather4Cast competition (hosted at NeurIPS 2022) required competitors to predict super-resolution rain movies over various regions of Europe, given low-resolution satellite contexts covering wider areas.

Data Augmentation • Super-Resolution

Benchmark Dataset for Precipitation Forecasting by Post-Processing the Numerical Weather Prediction

1 code implementation • 30 Jun 2022 • Taehyeon Kim, Namgyu Ho, Donggyu Kim, Se-Young Yun

Historically, this challenge has been tackled using numerical weather prediction (NWP) models, grounded on physics-based simulations.

Computational Efficiency • Precipitation Forecasting

Revisiting Orthogonality Regularization: A Study for Convolutional Neural Networks in Image Classification

1 code implementation • IEEE Access 2022 • Taehyeon Kim, Se-Young Yun

Recent research in deep Convolutional Neural Networks (CNNs) faces the challenges of vanishing/exploding gradients, training instability, and feature redundancy.

Image Classification
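As a hedged illustration of the soft orthogonality penalty studied in this line of work, the sketch below penalizes the distance of a layer's (reshaped) Gram matrix from the identity; the paper's exact regularizers may differ.

```python
import torch

def soft_orthogonality(weight):
    """||W W^T - I||_F^2 on the smaller Gram matrix of a (reshaped) kernel."""
    w = weight.reshape(weight.shape[0], -1)   # (out_channels, fan_in)
    if w.shape[0] > w.shape[1]:
        w = w.t()                             # penalize the smaller Gram matrix
    gram = w @ w.t()
    eye = torch.eye(w.shape[0], device=w.device)
    return ((gram - eye) ** 2).sum()

conv_kernel = torch.randn(64, 3, 3, 3, requires_grad=True)
reg = soft_orthogonality(conv_kernel)         # add lambda * reg to the task loss
print(reg.item())
```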

Supernet Training for Federated Image Classification under System Heterogeneity

1 code implementation • 3 Jun 2022 • Taehyeon Kim, Se-Young Yun

The approach is inspired by the observation that averaging parameters during model aggregation in FL is similar to weight sharing in supernet training.

Classification • Federated Learning +2
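The observation above can be made concrete with a minimal FedAvg-style sketch: aggregation is just a weighted average of client parameters. Names are illustrative; this is not the paper's supernet code.

```python
import torch

def fedavg(client_states, client_sizes):
    """Average client state_dicts, weighted by local dataset size."""
    total = sum(client_sizes)
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            (n / total) * state[key] for state, n in zip(client_states, client_sizes)
        )
    return avg

# two toy "clients" holding a single parameter tensor
a = {"w": torch.ones(2, 2)}
b = {"w": torch.zeros(2, 2)}
print(fedavg([a, b], client_sizes=[3, 1])["w"])  # 0.75 everywhere
```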

Mold into a Graph: Efficient Bayesian Optimization over Mixed-Spaces

1 code implementation • 2 Feb 2022 • Jaeyeon Ahn, Taehyeon Kim, Seyoung Yun

Real-world optimization problems are generally not just black-box problems, but also involve mixed types of inputs in which discrete and continuous variables coexist.

Bayesian Optimization • Computational Efficiency +1

Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation

1 code implementation • 19 May 2021 • Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun

From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logit of the teacher model.

Knowledge Distillation • Learning with noisy labels
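Read literally, the sentence above amounts to the following hedged sketch: distill by regressing student logits onto teacher logits with MSE, blended with the task cross-entropy. The blending weight is an assumption.

```python
import torch
import torch.nn.functional as F

def mse_kd_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Blend task cross-entropy with an MSE logit-matching term."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.mse_loss(student_logits, teacher_logits.detach())
    return (1 - alpha) * ce + alpha * kd

s = torch.randn(8, 10, requires_grad=True)   # student logits
t = torch.randn(8, 10)                       # teacher logits
y = torch.randint(0, 10, (8,))
print(mse_kd_loss(s, t, y))
```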

FINE Samples for Learning with Noisy Labels

1 code implementation • NeurIPS 2021 • Taehyeon Kim, Jongwoo Ko, Sangwook Cho, Jinhwan Choi, Se-Young Yun

Our framework, coined filtering noisy instances via their eigenvectors (FINE), provides a robust detector built on simple, derivative-free methods with theoretical guarantees.

General Classification • Learning with noisy labels
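A hedged sketch of the eigenvector scoring the abstract names: per class, score each feature by its squared alignment with the principal eigenvector of the class Gram matrix, so low-scoring samples flag likely label noise. The toy data is illustrative, and the paper fits a mixture model on these scores rather than applying a fixed threshold.

```python
import numpy as np

def fine_scores(features):
    """features: (n, d) representations of one class -> (n,) alignment scores."""
    gram = features.T @ features                 # (d, d) class Gram matrix
    _, eigvecs = np.linalg.eigh(gram)            # eigenvalues in ascending order
    u = eigvecs[:, -1]                           # principal eigenvector
    return (features @ u) ** 2                   # squared alignment per sample

rng = np.random.default_rng(0)
clean = rng.normal(1.0, 0.1, size=(50, 8))       # tight, aligned cluster
noisy = rng.normal(0.0, 1.0, size=(5, 8))        # off-distribution samples
scores = fine_scores(np.vstack([clean, noisy]))
print(scores[:3], scores[-3:])                   # clean scores dwarf the noisy ones
```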

Understanding Knowledge Distillation

no code implementations • 1 Jan 2021 • Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun

To verify this conjecture, we test an extreme logit learning model, where the KD is implemented with Mean Squared Error (MSE) between the student's logit and the teacher's logit.

Knowledge Distillation

Adaptive Local Bayesian Optimization Over Multiple Discrete Variables

no code implementations • 7 Dec 2020 • Taehyeon Kim, Jaeyeon Ahn, Nakyil Kim, Seyoung Yun

In machine learning algorithms, the choice of hyperparameters is often more an art than a science, requiring labor-intensive search guided by expert experience.

Bayesian Optimization • BIG-bench Machine Learning +1

Efficient Model for Image Classification With Regularization Tricks

1 code implementation • 1 Feb 2020 • Taehyeon Kim, Jonghyup Kim, Seyoung Yun

Our final score is 0.0054, which represents a 370x improvement over the baseline on the CIFAR-100 dataset.

Classification • Data Augmentation +2
