Search Results for author: Kyungwoo Song

Found 31 papers, 15 papers with code

Mitigating the Linguistic Gap with Phonemic Representations for Robust Multilingual Language Understanding

no code implementations • 22 Feb 2024 • Haeji Jung, Changdae Oh, Jooeon Kang, Jimin Sohn, Kyungwoo Song, Jinkyu Kim, David R. Mortensen

Approaches to improving multilingual language understanding often require multiple languages during the training phase, rely on complicated training techniques, and -- importantly -- struggle with significant performance gaps between high-resource and low-resource languages.

Towards Calibrated Robust Fine-Tuning of Vision-Language Models

no code implementations • 3 Nov 2023 • Changdae Oh, Hyesu Lim, Mijoo Kim, Jaegul Choo, Alexander Hauptmann, Zhi-Qi Cheng, Kyungwoo Song

Robust fine-tuning aims to ensure performance on out-of-distribution (OOD) samples, which is sometimes compromised by pursuing adaptation on in-distribution (ID) samples.

Autonomous Driving · Medical Diagnosis

Leveraging Skill-to-Skill Supervision for Knowledge Tracing

no code implementations • 12 Jun 2023 • Hyeondey Kim, Jinwoo Nam, Minjae Lee, Yun Jegal, Kyungwoo Song

To do so, knowledge tracing systems should trace the knowledge state of the students by utilizing their problem-solving history and knowledge about the problems.

Knowledge Tracing

RPLKG: Robust Prompt Learning with Knowledge Graph

no code implementations • 21 Apr 2023 • Yewon Kim, Yongtaek Lim, Dokyung Yoon, Kyungwoo Song

To improve generalization performance in few-shot learning, there have been diverse efforts, such as prompt learning and adapters.

Domain Generalization · Few-Shot Learning +1

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning

1 code implementation • CVPR 2023 • Changdae Oh, Hyeji Hwang, Hee-young Lee, Yongtaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song

In this work, we propose black-box visual prompting (BlackVIP), which efficiently adapts the PTMs without knowledge about model architectures and parameters.

Transfer Learning · Visual Prompting

Causally Disentangled Generative Variational AutoEncoder

1 code implementation • 23 Feb 2023 • SeungHwan An, Kyungwoo Song, Jong-June Jeon

We present a new supervised learning technique for the Variational AutoEncoder (VAE) that allows it to learn a causally disentangled representation and generate causally disentangled outcomes simultaneously.

Disentanglement

SAAL: Sharpness-Aware Active Learning

1 code implementation • Proceedings of the 40th International Conference on Machine Learning 2023 • Yoon-Yeong Kim, Youngjae Cho, JoonHo Jang, Byeonghu Na, Yeongmin Kim, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

Specifically, our proposed method, Sharpness-Aware Active Learning (SAAL), constructs its acquisition function by selecting unlabeled instances whose perturbed loss becomes maximum.

Active Learning · Image Classification +3
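The SAAL acquisition rule described in the excerpt, selecting unlabeled instances whose perturbed loss is maximal, can be sketched roughly as follows. This is an illustrative numpy sketch under assumed simplifications (a logistic model, finite-difference gradients, and pseudo-labels from the model's own predictions), not the authors' implementation; `pseudo_loss` and `saal_acquire` are hypothetical names.

```python
import numpy as np

def pseudo_loss(w, x):
    # Binary logistic loss against the model's own pseudo-label
    # (unlabeled pool instances have no ground-truth label).
    p = 1.0 / (1.0 + np.exp(-x @ w))
    y = float(p >= 0.5)                      # pseudo-label from current prediction
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def saal_acquire(w, X_pool, k, rho=0.05):
    """Score each unlabeled instance by its loss after a sharpness-style
    ascent step on the weights, then return the indices of the top-k scores."""
    scores = []
    for x in X_pool:
        # Finite-difference gradient of the per-instance loss w.r.t. w.
        g = np.zeros_like(w)
        h = 1e-5
        base = pseudo_loss(w, x)
        for i in range(len(w)):
            w2 = w.copy()
            w2[i] += h
            g[i] = (pseudo_loss(w2, x) - base) / h
        norm = np.linalg.norm(g) + 1e-12
        w_adv = w + rho * g / norm           # ascent step toward maximal perturbed loss
        scores.append(pseudo_loss(w_adv, x))
    return np.argsort(scores)[-k:][::-1]     # indices of the k sharpest instances
```

In practice the paper operates on deep networks with autograd gradients; the finite-difference loop here only stands in for that machinery.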

Sufficient Invariant Learning for Distribution Shift

no code implementations • 24 Oct 2022 • Taero Kim, Sungjun Lim, Kyungwoo Song

Moreover, we propose a new algorithm, Adaptive Sharpness-aware Group Distributionally Robust Optimization (ASGDRO), to learn sufficient invariant features across domains or groups.

Data Augmentation

Graph Perceiver IO: A General Architecture for Graph Structured Data

no code implementations • 14 Sep 2022 • Seyun Bae, Hoyoon Byun, Changdae Oh, Yoon-Sik Cho, Kyungwoo Song

Unlike other data domains such as text and images, a graph has an adjacency matrix, and it is not trivial to handle its topological information, relational information, and canonical positional information.

Graph Classification · Link Prediction +1

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation

1 code implementation • 15 Jun 2022 • JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon

Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which aligns the source and target-known distributions while simultaneously segregating the target-unknown distribution in the feature alignment procedure.

Domain Adaptation

High Precision Score-based Diffusion Models

no code implementations • 29 Sep 2021 • Dongjun Kim, Seungjae Shin, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

On the theory side, the difficulty arises in estimating the high-precision diffusion because the data score diverges to $\infty$ as the diffusion time $t \rightarrow 0$.

Image Generation · Vocal Bursts Intensity Prediction
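The divergence mentioned in the excerpt can be seen concretely for a Gaussian perturbation kernel, a standard choice in score-based diffusion. The kernel and the specific numbers below are illustrative assumptions, not taken from the paper:

```python
# For a Gaussian perturbation kernel p_t(x) = N(x; x0, sigma_t^2), the score is
# grad log p_t(x) = -(x - x0) / sigma_t^2.  At any fixed offset from the data
# point x0, its magnitude therefore blows up as sigma_t -> 0, i.e. as the
# diffusion time t -> 0.
x0, offset = 0.0, 0.1
score_magnitudes = [abs(-((x0 + offset) - x0) / sigma**2)
                    for sigma in (1.0, 0.1, 0.01)]
# score_magnitudes grows without bound as sigma shrinks
```

This unbounded growth is exactly what makes small-time score estimation numerically difficult.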

Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation

1 code implementation • 10 Jun 2021 • Dongjun Kim, Seungjae Shin, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

This paper shows, with substantial empirical evidence, that this inverse correlation arises because density estimation relies heavily on small diffusion times, whereas sample generation depends mainly on large diffusion times.

Ranked #2 on Image Generation on CIFAR-10 (Inception score metric)

Density Estimation · Image Generation

LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning

1 code implementation • NeurIPS 2021 • Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high.

Active Learning · Data Augmentation +1

Neural Posterior Regularization for Likelihood-Free Inference

1 code implementation • 15 Feb 2021 • Dongjun Kim, Kyungwoo Song, Seungjae Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo

A simulation is useful when the phenomenon of interest is either expensive to regenerate or irreproducible with the same context.

Bayesian Inference

Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

no code implementations • 24 Nov 2020 • Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon

Therefore, this paper proposes Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: either 1) independent to interventions or 2) correlated to interventions without causality.

Attribute · Causal Inference +3

LADA: Look-Ahead Data Acquisition via Augmentation for Active Learning

no code implementations • NeurIPS 2021 • Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high.

Active Learning · Data Augmentation +1

Neutralizing Gender Bias in Word Embeddings with Latent Disentanglement and Counterfactual Generation

no code implementations • Findings of the Association for Computational Linguistics 2020 • Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, have strong gender biases in their embedding spaces, and these biases can lead to discriminatory results in various downstream tasks.

counterfactual · Disentanglement +1

Sequential Likelihood-Free Inference with Neural Proposal

1 code implementation • 15 Oct 2020 • Dongjun Kim, Kyungwoo Song, YoonYeong Kim, Yongjin Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo

This paper introduces a new sampling approach, called Neural Proposal (NP), of the simulation input that resolves the biased data collection as it guarantees the i.i.d.

Bayesian Inference

Approximate Inference for Spectral Mixture Kernel

no code implementations • 12 Jun 2020 • Yohan Jung, Kyungwoo Song, Jinkyoo Park

To improve the training, we propose an approximate Bayesian inference for the SM kernel.

Bayesian Inference · Variational Inference

Implicit Kernel Attention

no code implementations • 11 Jun 2020 • Kyungwoo Song, Yohan Jung, Dongjun Kim, Il-Chul Moon

For the attention in the Transformer and GAT, we derive that attention is a product of two parts: 1) an RBF kernel that measures the similarity of two instances and 2) the exponential of an $L^{2}$ norm that computes the importance of individual instances.

Graph Attention · Node Classification +2
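The two-part decomposition in the excerpt follows from the identity $q \cdot k = -\lVert q-k \rVert^{2}/2 + (\lVert q \rVert^{2} + \lVert k \rVert^{2})/2$, so the exponentiated dot-product attention logit factors into an RBF similarity term and a magnitude term. A small numerical check of that identity (the vectors are arbitrary; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=4)   # query vector
k = rng.normal(size=4)   # key vector

# Exponentiated dot-product attention logit (pre-softmax numerator).
direct = np.exp(q @ k)

# Factorization: q.k = -||q - k||^2 / 2 + (||q||^2 + ||k||^2) / 2, so the
# exponentiated logit splits into an RBF similarity term and a norm term.
rbf = np.exp(-np.linalg.norm(q - k) ** 2 / 2.0)
magnitude = np.exp((np.linalg.norm(q) ** 2 + np.linalg.norm(k) ** 2) / 2.0)
# direct == rbf * magnitude (up to floating-point error)
```

The RBF factor depends only on how close the query and key are, while the magnitude factor depends only on their individual norms, matching the similarity/importance split described above.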

Adversarial Likelihood-Free Inference on Black-Box Generator

no code implementations • 13 Apr 2020 • Dongjun Kim, Weonyoung Joo, Seungjae Shin, Kyungwoo Song, Il-Chul Moon

Generative Adversarial Network (GAN) can be viewed as an implicit estimator of a data distribution, and this perspective motivates using the adversarial concept in the true input parameter estimation of black-box generators.

Generative Adversarial Network

Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation

no code implementations • 7 Apr 2020 • Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, have strong gender biases in their embedding spaces, and these biases can lead to discriminatory results in various downstream tasks.

counterfactual · Disentanglement +2

Sequential Recommendation with Relation-Aware Kernelized Self-Attention

no code implementations • 15 Nov 2019 • Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon

This work merges the self-attention of the Transformer with sequential recommendation by adding a probabilistic model of the recommendation task's specifics.

Relation · Sequential Recommendation

Bivariate Beta-LSTM

1 code implementation • 25 May 2019 • Kyungwoo Song, JoonHo Jang, Seungjae Shin, Il-Chul Moon

Long Short-Term Memory (LSTM) infers long-term dependencies through a cell state maintained by the input and forget gate structures, which model a gate output as a value in [0, 1] through a sigmoid function.

Density Estimation · General Classification +5
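The sigmoid-gated cell state update that the excerpt refers to is the standard LSTM step, sketched minimally below (the weight layout and shapes are illustrative assumptions; the paper's Beta-distributed gates are not reproduced here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One step of a standard LSTM cell.  The input, forget, and output
    gates are all squashed into [0, 1] by a sigmoid; this deterministic
    [0, 1] gating is what the Beta-gate formulation revisits.  W maps
    each gate name to a weight block applied to [x; h]."""
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z)        # input gate in [0, 1]
    f = sigmoid(W["f"] @ z)        # forget gate in [0, 1]
    o = sigmoid(W["o"] @ z)        # output gate in [0, 1]
    g = np.tanh(W["g"] @ z)        # candidate cell state
    c_new = f * c + i * g          # cell state maintained by input/forget gates
    h_new = o * np.tanh(c_new)     # hidden state emitted through the output gate
    return h_new, c_new
```

Bias terms are omitted for brevity; a practical cell would include them.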

Neural Ideal Point Estimation Network

1 code implementation • 26 Apr 2019 • Kyungwoo Song, Wonsung Lee, Il-Chul Moon

Understanding politics is challenging because politics is influenced by everything.

Hierarchical Context enabled Recurrent Neural Network for Recommendation

1 code implementation • 26 Apr 2019 • Kyungwoo Song, Mingi Ji, Sungrae Park, Il-Chul Moon

Analyses of user history require a robust sequential model to anticipate the transitions and decays of user interests.

Sequential Recommendation

Adversarial Dropout for Recurrent Neural Networks

2 code implementations • 22 Apr 2019 • Sungrae Park, Kyungwoo Song, Mingi Ji, Wonsung Lee, Il-Chul Moon

Successfully processing sequential data, such as text and speech, requires improved generalization performance from recurrent neural networks (RNNs).

Language Modelling · Semi-Supervised Text Classification

Hierarchically Clustered Representation Learning

no code implementations • ICLR 2019 • Su-Jin Shin, Kyungwoo Song, Il-Chul Moon

The joint optimization of representation learning and clustering in the embedding space has experienced a breakthrough in recent years.

Clustering · Representation Learning
