no code implementations • 31 May 2024 • SeungHwan An, Gyeongdong Woo, Jaesung Lim, Changhyun Kim, Sungchul Hong, Jong-June Jeon
In this paper, our goal is to generate synthetic data for heterogeneous (mixed-type) tabular datasets with high machine learning utility (MLu).
no code implementations • 30 May 2024 • Sungchul Hong, SeungHwan An, Jong-June Jeon
We investigate generative modeling for imbalanced classification and introduce a framework that enhances the SMOTE algorithm using Variational Autoencoders (VAE).
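As background for the entry above, here is a minimal sketch of the classic SMOTE oversampling step (not the paper's VAE-enhanced framework): each synthetic minority sample is a random interpolation between a minority point and one of its k nearest minority-class neighbors. The function name and parameters are illustrative.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Classic SMOTE: generate synthetic minority samples by interpolating
    each chosen sample toward one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    neighbors = np.argsort(d, axis=1)[:, :k]    # k nearest neighbors per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                     # pick a minority sample
        j = rng.choice(neighbors[i])            # pick one of its neighbors
        lam = rng.random()                      # interpolation weight in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy minority class: four corners of the unit square
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_oversample(X_min, n_new=10, k=2)
```

Because every synthetic point lies on a segment between two minority samples, the new data stays inside the convex hull of the minority class; a generative model such as a VAE can relax exactly this linear-interpolation restriction.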
no code implementations • 6 Dec 2023 • SeungHwan An, Sungchul Hong, Jong-June Jeon
This measure enables us to capture both marginal and joint distributional information simultaneously, as it incorporates a mixture measure with point masses on standard basis vectors.
no code implementations • 25 Oct 2023 • SeungHwan An, Jong-June Jeon
The assumption of conditional independence among observed variables, commonly used in Variational Autoencoder (VAE) decoder modeling, has limitations when dealing with high-dimensional datasets or complex correlation structures among observed variables.
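To make the conditional-independence assumption concrete, here is a minimal sketch (an illustrative helper, not the paper's model) of a factorized Gaussian decoder likelihood: under p(x|z) = ∏_j N(x_j | μ_j, σ_j²), the joint log-density is just a sum over dimensions, with no covariance terms to capture correlations among the observed variables.

```python
import numpy as np

def factorized_gaussian_loglik(x, mu, log_var):
    """Log-likelihood of x under a conditionally independent Gaussian decoder:
    log p(x|z) = sum_j log N(x_j | mu_j, sigma_j^2).
    The per-dimension terms simply add up; off-diagonal covariance is ignored."""
    return np.sum(
        -0.5 * (np.log(2 * np.pi) + log_var + (x - mu) ** 2 / np.exp(log_var)),
        axis=-1,
    )

x = np.array([0.5, -1.0])
mu = np.zeros(2)        # decoder mean output
log_var = np.zeros(2)   # decoder log-variance output (sigma = 1)
ll = factorized_gaussian_loglik(x, mu, log_var)
```

The limitation the entry refers to is visible in the code: no matter how strongly the observed dimensions co-vary, this likelihood scores them independently given z.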
1 code implementation • 23 Feb 2023 • SeungHwan An, Kyungwoo Song, Jong-June Jeon
We present a new supervised learning technique for the Variational Autoencoder (VAE) that allows it to learn a causally disentangled representation and generate causally disentangled outcomes simultaneously.
1 code implementation • NeurIPS 2023 • SeungHwan An, Jong-June Jeon
The Gaussianity assumption has been consistently criticized as a main limitation of the Variational Autoencoder (VAE) despite its computational efficiency.
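For context on what the Gaussianity assumption buys computationally, here is a minimal sketch (illustrative, not the paper's method) of the closed-form KL term it enables in the standard VAE objective: KL(N(μ, σ²) ‖ N(0, I)) has an analytic expression, so no sampling is needed for that part of the ELBO.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)), summed over latent dims:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
    This analytic form is what the Gaussian posterior/prior assumption provides."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

kl_zero = gaussian_kl(np.zeros(3), np.zeros(3))      # posterior == prior
kl_shift = gaussian_kl(np.array([1.0, 0.0, 0.0]), np.zeros(3))
```

Non-Gaussian choices generally forfeit this closed form, which is one reason the assumption persists despite the distributional criticisms the entry describes.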
1 code implementation • 23 May 2021 • SeungHwan An, Hosik Choi, Jong-June Jeon
To improve the performance of our VAE on a classification task without sacrificing its performance as a generative model, we employ a new semi-supervised classification method called SCI (Soft-label Consistency Interpolation).