Search Results for author: JoonHo Jang

Found 11 papers, 6 papers with code

Bivariate Beta-LSTM

1 code implementation · 25 May 2019 · Kyungwoo Song, JoonHo Jang, Seungjae Shin, Il-Chul Moon

Long Short-Term Memory (LSTM) infers long-term dependencies through a cell state maintained by the input and forget gate structures, each of which models its gate output as a value in [0, 1] through a sigmoid function.
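
As a reference point, here is a minimal NumPy sketch of the standard sigmoid-gated LSTM step the abstract describes; all names and shapes are illustrative, and the paper's bivariate Beta gates would replace the sigmoids here (this is not the paper's code).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM step; W, U, b stack the input/forget/output/candidate blocks."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gate outputs in [0, 1]
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # cell state kept by forget/input gates
    h = o * np.tanh(c)
    return h, c

# Tiny usage example with random parameters.
H, D = 8, 5
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4 * H, D)), rng.normal(size=(4 * H, H)), np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, U, b)
```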

Density Estimation · General Classification +5

Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation

no code implementations · 7 Apr 2020 · Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, encode strong gender biases in the embedding space, and these biases can lead to discriminatory results in various downstream tasks.
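
As a rough illustration of the kind of bias being measured (a standard diagnostic, not the paper's disentanglement or counterfactual method), one can project word vectors onto a simple he-minus-she gender direction; the vectors below are randomly generated placeholders standing in for trained embeddings.

```python
import numpy as np

def gender_bias(word_vec, he_vec, she_vec):
    """Cosine of a word vector with a simple he-minus-she gender direction."""
    direction = he_vec - she_vec
    direction = direction / np.linalg.norm(direction)
    w = word_vec / np.linalg.norm(word_vec)
    return float(w @ direction)  # far from 0 => the word leans toward one gender

# Placeholder embeddings for a supposedly gender-neutral occupation word.
rng = np.random.default_rng(0)
he, she, engineer = rng.normal(size=(3, 50))
print(gender_bias(engineer, he, she))
```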

counterfactual · Disentanglement +2

Neutralizing Gender Bias in Word Embeddings with Latent Disentanglement and Counterfactual Generation

no code implementations · Findings of the Association for Computational Linguistics 2020 · Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, encode strong gender biases in the embedding space, and these biases can lead to discriminatory results in various downstream tasks.

counterfactual · Disentanglement +1

LADA: Look-Ahead Data Acquisition via Augmentation for Active Learning

no code implementations · NeurIPS 2021 · Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high.
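
For context, here is a minimal sketch of a generic pool-based acquisition step (plain entropy sampling, a common baseline rather than LADA's look-ahead criterion); the probability array is a placeholder for model predictions on the unlabeled pool.

```python
import numpy as np

def entropy_acquire(probs, k):
    """Return the k pool indices whose predictive distribution has highest entropy.

    probs: (N, C) array of model class probabilities over the unlabeled pool.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:k]  # most uncertain instances first

pool_probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
print(entropy_acquire(pool_probs, k=1))  # -> [1], the most uncertain point
```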

Active Learning · Data Augmentation +1

Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

no code implementations · 24 Nov 2020 · Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon

Therefore, this paper proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: either 1) independent of the interventions or 2) correlated with the interventions without causality.
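
A hedged PyTorch sketch of the latent split the abstract describes: an encoder emitting two latent blocks, one meant to be independent of the interventions and one correlated with them. The module names and dimensions are made up, and the actual DCEVAE objective (causal-effect and disentanglement terms) is omitted.

```python
import torch
import torch.nn as nn

class TwoBlockEncoder(nn.Module):
    """Encoder with two latent blocks, z_ind and z_cor (names hypothetical)."""
    def __init__(self, x_dim=20, z_ind_dim=4, z_cor_dim=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.ind_head = nn.Linear(64, 2 * z_ind_dim)  # mean and log-variance
        self.cor_head = nn.Linear(64, 2 * z_cor_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu_i, logvar_i = self.ind_head(h).chunk(2, dim=-1)
        mu_c, logvar_c = self.cor_head(h).chunk(2, dim=-1)
        # Reparameterized sample for each block.
        z_ind = mu_i + torch.randn_like(mu_i) * (0.5 * logvar_i).exp()
        z_cor = mu_c + torch.randn_like(mu_c) * (0.5 * logvar_c).exp()
        return z_ind, z_cor
```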

Attribute · Causal Inference +3

Strong interlayer charge transfer due to exciton condensation in an electrically-isolated GaAs quantum well bilayer

no code implementations · 11 Mar 2021 · JoonHo Jang, Heun Mo Yoo, Loren N. Pfeiffer, Kenneth W. West, K. W. Baldwin, Raymond C. Ashoori

With fully tunable densities of individual layers, the floating bilayer QW system provides a versatile platform to access previously unavailable information on the quantum phases in electron bilayer systems.

Mesoscale and Nanoscale Physics

LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning

1 code implementation · NeurIPS 2021 · Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high.
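
One plausible reading of "look-ahead data acquisition via augmentation" is sketched below: score an unlabeled instance by how much the model's predictions disagree across random augmentations of it. This is an assumption-laden illustration, not necessarily the paper's acquisition function; predict and augment are placeholder callables.

```python
import numpy as np

def augmentation_disagreement(predict, augment, x, n_aug=4):
    """Score an unlabeled instance by prediction variance across augmentations.

    predict: callable mapping an input to class probabilities.
    augment: callable returning a randomly augmented copy of x.
    """
    probs = np.stack([predict(augment(x)) for _ in range(n_aug)])
    return float(probs.var(axis=0).sum())  # high variance => augmentations disagree
```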

Active Learning · Data Augmentation +1

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation

1 code implementation · 15 Jun 2022 · JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon

Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which aligns the source and the target-known distributions while simultaneously segregating the target-unknown distribution in the feature alignment procedure.
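
A loose PyTorch sketch of the align-while-segregating idea, under the assumption that a per-sample probability of each target instance belonging to a known class is already available (estimating it is part of the paper's contribution); the exact UADAL formulation differs.

```python
import torch
import torch.nn.functional as F

def weighted_alignment_loss(d_src, d_tgt, p_known):
    """Domain-discriminator loss where each target sample is weighted by an
    estimated probability of belonging to a known class, so target-unknown
    samples are kept out of the source / target-known alignment.

    d_src, d_tgt: domain-discriminator logits; p_known: per-sample weights.
    """
    loss_src = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
    per_tgt = F.binary_cross_entropy_with_logits(
        d_tgt, torch.zeros_like(d_tgt), reduction="none")
    loss_tgt = (p_known * per_tgt).mean()  # unknowns (low p_known) barely contribute
    return loss_src + loss_tgt
```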

Domain Adaptation

SAAL: Sharpness-Aware Active Learning

1 code implementation · Proceedings of the 40th International Conference on Machine Learning 2023 · Yoon-Yeong Kim, Youngjae Cho, JoonHo Jang, Byeonghu Na, Yeongmin Kim, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

Specifically, our proposed method, Sharpness-Aware Active Learning (SAAL), constructs its acquisition function by selecting unlabeled instances whose perturbed loss is maximal.
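
Below is a sketch of a SAM-style perturbed loss that could serve as such an acquisition score, assuming each unlabeled instance carries a pseudo-label y_pseudo and using an illustrative perturbation radius rho; this mirrors sharpness-aware minimization in general, not SAAL's exact procedure.

```python
import torch

def perturbed_loss(model, loss_fn, x, y_pseudo, rho=0.05):
    """Approximate worst-case loss within an L2 ball of radius rho in weight space."""
    loss = loss_fn(model(x), y_pseudo)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    with torch.no_grad():
        # Step the weights toward the (approximate) worst point ...
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / (norm + 1e-12))
        loss_pert = loss_fn(model(x), y_pseudo).item()
        # ... then undo the perturbation.
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / (norm + 1e-12))
    return loss_pert
```

Under this reading, the instances with the largest perturbed loss would be acquired first.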

Active Learning · Image Classification +3

Training Unbiased Diffusion Models From Biased Dataset

1 code implementation · 2 Mar 2024 · Yeongmin Kim, Byeonghu Na, Minsang Park, JoonHo Jang, Dongjun Kim, Wanmo Kang, Il-Chul Moon

While directly applying it to score-matching is intractable, we discover that using the time-dependent density ratio both for reweighting and score correction can lead to a tractable form of the objective function to regenerate the unbiased data density.
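
A very rough PyTorch sketch of the reweighting-plus-score-correction idea under simple assumptions: ratio_net outputs log r(x_t, t) for a time-dependent density ratio, and the corrected score target adds grad_x log r to the usual denoising target. Both networks and the sigma_t scaling are placeholders; the paper's tractable objective is more involved than this.

```python
import torch

def reweighted_dsm_loss(score_net, ratio_net, x_t, t, noise, sigma_t):
    """Denoising score matching, reweighted by a time-dependent density ratio
    and with a score-correction term grad_x log r(x_t, t)."""
    x_t = x_t.requires_grad_(True)
    log_r = ratio_net(x_t, t)
    correction = torch.autograd.grad(log_r.sum(), x_t, create_graph=True)[0]
    target = -noise / sigma_t + correction     # corrected score target
    weight = log_r.detach().exp().squeeze()    # density-ratio reweighting
    per_sample = ((score_net(x_t, t) - target) ** 2).flatten(1).sum(dim=1)
    return (weight * per_sample).mean()
```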
