Search Results for author: Seungjae Shin

Found 12 papers, 5 papers with code

ABC: Auxiliary Balanced Classifier for Class-imbalanced Semi-supervised Learning

1 code implementation NeurIPS 2021 Hyuck Lee, Seungjae Shin, Heeyoung Kim

The ABC is trained with a class-balanced loss over each minibatch, while exploiting the high-quality representations that the backbone SSL algorithm learns from all data points in the minibatch, which avoids overfitting and information loss. Moreover, we use consistency regularization, a recent SSL technique for utilizing unlabeled data, in a modified way: the ABC is trained to be balanced among the classes by selecting unlabeled data with the same probability for each class.
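A minimal sketch of the class-balanced selection described above, assuming unlabeled samples are kept for the consistency loss with probability inversely proportional to their pseudo-label's class frequency (the function name and the exact keep-probability rule are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def balanced_consistency_mask(pseudo_labels, class_counts, rng=None):
    """Bernoulli mask that keeps each unlabeled sample with probability
    inversely proportional to its pseudo-label's class frequency, so
    every class contributes roughly equally to the consistency loss."""
    if rng is None:
        rng = np.random.default_rng()
    counts = np.asarray(class_counts, dtype=float)
    keep_prob = counts.min() / counts              # rarest class kept w.p. 1.0
    p = keep_prob[np.asarray(pseudo_labels)]
    return rng.random(len(pseudo_labels)) < p

# Usage: unlabeled batch with pseudo-labels [0, 0, 1, 2] under
# labeled class counts [100, 50, 10] (class 2 is the rarest).
rng = np.random.default_rng(0)
mask = balanced_consistency_mask([0, 0, 1, 2], [100, 50, 10], rng)
```

The consistency loss would then be applied only to the masked samples, which counteracts the majority-class dominance of the raw unlabeled pool.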

High Precision Score-based Diffusion Models

no code implementations 29 Sep 2021 Dongjun Kim, Seungjae Shin, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

On the theory side, the difficulty arises in estimating high-precision diffusion because the data score diverges to $\infty$ as the diffusion time $t \rightarrow 0$.

Image Generation
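The divergence mentioned in the abstract can be made concrete with the standard Gaussian perturbation kernel $p_t(x_t \mid x_0) = \mathcal{N}(x_0, \sigma_t^2 I)$ used in score-based models (a textbook illustration with $x_t \in \mathbb{R}^d$, not a formula taken from the paper): as $\sigma_t \to 0$ with $t \to 0$, the conditional score blows up.

```latex
\nabla_{x_t}\log p_t(x_t \mid x_0) = -\frac{x_t - x_0}{\sigma_t^2},
\qquad
\mathbb{E}\,\bigl\|\nabla_{x_t}\log p_t(x_t \mid x_0)\bigr\|^2
  = \frac{d}{\sigma_t^2} \;\longrightarrow\; \infty
\quad \text{as } t \to 0 .
```

Any score network trained by matching this target must therefore fit an unbounded quantity near $t = 0$, which is the estimation difficulty the abstract refers to.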

Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation

1 code implementation 10 Jun 2021 Dongjun Kim, Seungjae Shin, Kyungwoo Song, Wanmo Kang, Il-Chul Moon

This paper shows, with sufficient empirical evidence, that this inverse correlation arises because density estimation is dominated by contributions from small diffusion times, whereas sample generation mainly depends on large diffusion times.

Ranked #1 on Image Generation on CIFAR-10 (Inception score metric)

Density Estimation Image Generation
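Soft Truncation, as the title suggests, softens the fixed smallest diffusion time into a random variable drawn per minibatch. A hedged sketch of that idea (the log-uniform prior on the truncation bound and the function name are assumptions for illustration; the paper's actual prior may differ):

```python
import numpy as np

def soft_truncated_times(batch_size, t_max=1.0, eps_min=1e-5, rng=None):
    """Soft Truncation, sketched: instead of training with a fixed
    smallest diffusion time, draw the truncation bound eps for each
    minibatch, then sample diffusion times t uniformly from [eps, t_max].
    Here eps is drawn log-uniformly on [eps_min, t_max] (an assumption)."""
    if rng is None:
        rng = np.random.default_rng()
    log_eps = rng.uniform(np.log(eps_min), np.log(t_max))
    eps = np.exp(log_eps)
    return rng.uniform(eps, t_max, size=batch_size)

# Usage: diffusion times for one minibatch of 16 samples.
times = soft_truncated_times(16, rng=np.random.default_rng(0))
```

Randomizing the bound lets some minibatches train near $t = 0$ (helping density estimation) while most avoid the hardest small-time regime, which is how the technique trades off the two objectives above.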

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation

1 code implementation CVPR 2021 Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon

Knowledge distillation is a method of transferring knowledge from a pretrained, complex teacher model to a student model, so that a smaller network can replace the large teacher network at the deployment stage.

Data Augmentation object-detection +3
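The teacher-to-student transfer described above is usually implemented as a KL divergence between temperature-softened output distributions. A generic sketch of that standard distillation loss (this is the classic teacher–student formulation, not the paper's self-distillation variant; function names are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened, numerically stable softmax."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-T softened distributions,
    scaled by T^2 so gradients keep a consistent magnitude across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

# Usage: a batch of one example with 3 classes.
t_logits = np.array([[1.0, 2.0, 3.0]])
s_logits = np.array([[0.5, 1.5, 3.5]])
loss = distillation_loss(s_logits, t_logits)
```

In self-knowledge distillation the "teacher" logits would come from the same network (e.g. an auxiliary branch or an earlier snapshot) rather than a separate pretrained model.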

Posterior-Aided Regularization for Likelihood-Free Inference

1 code implementation 15 Feb 2021 Dongjun Kim, Kyungwoo Song, Seungjae Shin, Wanmo Kang, Il-Chul Moon

Because PAR is intractable to estimate directly, we provide a unified estimation method that estimates both the reverse KL term and the mutual information term with a single neural network.

Generalized Gumbel-Softmax Gradient Estimator for Generic Discrete Random Variables

no code implementations 1 Jan 2021 Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon

Estimating the gradients of stochastic nodes, which enables gradient-descent optimization of neural network parameters, is one of the crucial research questions in the deep generative modeling community.

Topic Models
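The categorical Gumbel-Softmax relaxation that this line of work generalizes can be sketched as follows (this is the standard formulation; the paper's extension to more general discrete random variables is not shown here):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=None):
    """Relaxed one-hot sample: add Gumbel(0, 1) noise to the logits and
    take a temperature-tau softmax, giving a differentiable surrogate
    for sampling from a categorical distribution."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    z = (np.asarray(logits, dtype=float) + g) / tau
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Usage: one relaxed sample over 3 categories.
y = gumbel_softmax_sample(np.array([1.0, 2.0, 0.5]), tau=0.5,
                          rng=np.random.default_rng(0))
```

As tau decreases, the output approaches a one-hot vector, recovering a discrete sample; larger tau gives smoother, lower-variance gradients.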

Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

no code implementations 24 Nov 2020 Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon

Therefore, this paper proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: one 1) independent of interventions, and the other 2) correlated with interventions but without causality.

Causal Inference Disentanglement +1

Neutralizing Gender Bias in Word Embeddings with Latent Disentanglement and Counterfactual Generation

no code implementations Findings of the Association for Computational Linguistics 2020 Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings trained on human-generated corpora have strong gender biases in their embedding spaces, and these biases can lead to discriminatory outcomes in various downstream tasks.

Disentanglement Word Embeddings

Adversarial Likelihood-Free Inference on Black-Box Generator

no code implementations 13 Apr 2020 Dongjun Kim, Weonyoung Joo, Seungjae Shin, Kyungwoo Song, Il-Chul Moon

A Generative Adversarial Network (GAN) can be viewed as an implicit estimator of a data distribution, and this perspective motivates applying the adversarial concept to estimating the true input parameters of black-box generators.

Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation

no code implementations 7 Apr 2020 Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings trained on human-generated corpora have strong gender biases in their embedding spaces, and these biases can lead to discriminatory outcomes in various downstream tasks.

Disentanglement Sentiment Analysis +1

Generalized Gumbel-Softmax Gradient Estimator for Various Discrete Random Variables

no code implementations 4 Mar 2020 Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon

Estimating the gradients of stochastic nodes is one of the crucial research questions in the deep generative modeling community, as it enables gradient-descent optimization of neural network parameters.
