Search Results for author: JoonHo Jang

Found 9 papers, 3 papers with code

Unknown-Aware Domain Adversarial Learning for Open-Set Domain Adaptation

1 code implementation · 15 Jun 2022 · JoonHo Jang, Byeonghu Na, DongHyeok Shin, Mingi Ji, Kyungwoo Song, Il-Chul Moon

Therefore, we propose Unknown-Aware Domain Adversarial Learning (UADAL), which aligns the source and target-known distributions while simultaneously segregating the target-unknown distribution in the feature alignment procedure.
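As a loose illustration of the "align known / segregate unknown" idea, the sketch below weights a toy logistic domain discriminator's target-side loss by an estimated known-ness score, so low-score (likely unknown) target samples contribute little to alignment. All names, the weighting scheme, and the data are illustrative assumptions; this is not UADAL's actual objective.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_domain_adversarial_loss(feat_src, feat_tgt, w_known, d_weights):
    """Toy sketch: a logistic domain discriminator scores features, and each
    target sample is weighted by its estimated probability of belonging to a
    known class (w_known in [0, 1]). Target-unknown samples (low w_known) are
    effectively excluded from the alignment term. Hypothetical illustration,
    NOT the UADAL objective."""
    p_src = sigmoid(feat_src @ d_weights)  # discriminator: P(domain = source)
    p_tgt = sigmoid(feat_tgt @ d_weights)
    # Source samples should be classified as source.
    loss_src = -np.mean(np.log(p_src + 1e-8))
    # Target-known samples are pushed toward the source domain; unknowns are
    # down-weighted so they are not force-aligned.
    loss_tgt = -np.mean(w_known * np.log(1.0 - p_tgt + 1e-8))
    return loss_src + loss_tgt

rng = np.random.default_rng(0)
feat_src = rng.normal(size=(8, 4))
feat_tgt = rng.normal(size=(6, 4))
w_known = rng.uniform(size=6)      # stand-in known-ness estimates
d_weights = rng.normal(size=4)
print(weighted_domain_adversarial_loss(feat_src, feat_tgt, w_known, d_weights))
```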

Domain Adaptation

LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning

no code implementations · NeurIPS 2021 · Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high.
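Look-ahead acquisition builds on standard uncertainty-based selection. For reference, here is a minimal NumPy sketch of plain entropy-based acquisition, a generic baseline rather than LADA itself (LADA additionally evaluates augmented versions of each candidate before acquisition):

```python
import numpy as np

def entropy_acquisition(probs, k):
    """Pick the k pool instances with the highest predictive entropy.
    Generic uncertainty sampling -- a baseline, not the LADA method."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-ent)[:k]

probs = np.array([[0.9, 0.1],
                  [0.5, 0.5],   # most uncertain prediction
                  [0.7, 0.3]])
print(entropy_acquisition(probs, 1))  # -> [1]
```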

Active Learning Data Augmentation +1

Strong interlayer charge transfer due to exciton condensation in an electrically-isolated GaAs quantum well bilayer

no code implementations · 11 Mar 2021 · JoonHo Jang, Heun Mo Yoo, Loren N. Pfeiffer, Kenneth W. West, K. W. Baldwin, Raymond C. Ashoori

With fully tunable densities of individual layers, the floating bilayer QW system provides a versatile platform to access previously unavailable information on the quantum phases in electron bilayer systems.

Mesoscale and Nanoscale Physics

Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

no code implementations · 24 Nov 2020 · Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon

Therefore, this paper proposes Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: either 1) independent to interventions or 2) correlated to interventions without causality.

Causal Inference Disentanglement +1

LADA: Look-Ahead Data Acquisition via Augmentation for Active Learning

no code implementations NeurIPS 2021 Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

Active learning effectively collects data instances for training deep learning models when the labeled dataset is limited and the annotation cost is high.

Active Learning Data Augmentation +1

Neutralizing Gender Bias in Word Embeddings with Latent Disentanglement and Counterfactual Generation

no code implementations Findings of the Association for Computational Linguistics 2020 Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, have strong gender biases in their embedding spaces, and these biases can lead to discriminatory outcomes in various downstream tasks.
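To make "neutralizing" concrete, the sketch below shows the classic hard-debias projection (removing an embedding's component along a gender direction, in the style of Bolukbasi et al.). This is only an illustration of the problem setup; the paper above instead uses latent disentanglement and counterfactual generation.

```python
import numpy as np

def neutralize(vec, gender_dir):
    """Remove the component of an embedding along a gender direction.
    Classic hard-debias projection, shown for illustration only --
    not the latent-disentanglement method of the paper above."""
    g = gender_dir / np.linalg.norm(gender_dir)
    return vec - np.dot(vec, g) * g

g = np.array([1.0, 0.0])      # toy gender direction (e.g. "he" - "she")
word = np.array([0.6, 0.8])   # toy word embedding
print(neutralize(word, g))    # -> [0.  0.8]
```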

Disentanglement Word Embeddings

Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation

no code implementations · 7 Apr 2020 · Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, have strong gender biases in their embedding spaces, and these biases can lead to discriminatory outcomes in various downstream tasks.

Disentanglement Sentiment Analysis +1

Bivariate Beta-LSTM

1 code implementation · 25 May 2019 · Kyungwoo Song, JoonHo Jang, Seungjae Shin, Il-Chul Moon

Long Short-Term Memory (LSTM) infers long-term dependencies through a cell state maintained by the input and forget gate structures, which model each gate output as a value in [0, 1] through a sigmoid function.
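The sigmoid gating the abstract refers to can be seen in one step of a standard LSTM cell, sketched below in NumPy with assumed shapes and stacked weight matrices. This shows only the conventional sigmoid gates in [0, 1]; the Beta-distributed gates that Bivariate Beta-LSTM proposes in their place are not shown.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM step: input, forget, and output gates are squashed
    into [0, 1] by a sigmoid, as described in the abstract. (Bivariate
    Beta-LSTM replaces this pointwise sigmoid gate with a Beta-distributed
    one; that extension is not shown here.)"""
    z = W @ x + U @ h + b                         # stacked pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gates in [0, 1]
    c_new = f * c + i * np.tanh(g)                # cell state update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

d, hdim = 3, 2                                    # toy dimensions
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * hdim, d))
U = rng.normal(size=(4 * hdim, hdim))
b = np.zeros(4 * hdim)
h, c = np.zeros(hdim), np.zeros(hdim)
h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape, c.shape)                           # (2,) (2,)
```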

Density Estimation General Classification +4
