no code implementations • 2 Jan 2025 • Mose Park, Yunjin Choi, Jong-June Jeon
Our analysis addresses two key questions: (1) Is the difference in latent community structures between $\mathcal{O}$ and $\mathcal{G}$ the same as that between $\mathcal{G}$ and $\mathcal{S}$?
no code implementations • 31 May 2024 • SeungHwan An, Gyeongdong Woo, Jaesung Lim, Changhyun Kim, Sungchul Hong, Jong-June Jeon
In this paper, our goal is to generate synthetic data for heterogeneous (mixed-type) tabular datasets with high machine learning utility (MLu).
no code implementations • 30 May 2024 • Sungchul Hong, SeungHwan An, Jong-June Jeon
We investigate the problem of generative modeling for imbalanced classification and introduce a framework that enhances the SMOTE algorithm using Variational Autoencoders (VAE).
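For context, classical SMOTE synthesizes minority-class samples by interpolating between a minority point and one of its nearest minority neighbors; the paper's contribution is to perform such augmentation within a VAE framework. Below is a minimal sketch of plain SMOTE only (not the authors' VAE-based variant), with `smote_oversample` a hypothetical name:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Classical SMOTE: interpolate between a minority-class sample and
    one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n, p = X_min.shape
    # pairwise distances among minority samples (diagonal excluded)
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]  # k nearest for each sample
    synth = np.empty((n_new, p))
    for i in range(n_new):
        a = rng.integers(n)                    # anchor minority sample
        b = neighbours[a, rng.integers(k)]     # one of its k neighbours
        lam = rng.random()                     # interpolation weight in [0, 1]
        synth[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return synth
```

Because each synthetic point is a convex combination of two minority samples, it always lies within the coordinate-wise range of the minority class.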
1 code implementation • 7 May 2024 • JaeSung Park, Sungchul Hong, Yoonseo Cho, Jong-June Jeon
Sea ice at the North Pole is vital to global climate dynamics.
no code implementations • 6 Dec 2023 • SeungHwan An, Sungchul Hong, Jong-June Jeon
This measure enables us to capture both marginal and joint distributional information simultaneously, as it incorporates a mixture measure with point masses on standard basis vectors.
no code implementations • 25 Oct 2023 • SeungHwan An, Jong-June Jeon
The assumption of conditional independence among observed variables, primarily used in the Variational Autoencoder (VAE) decoder modeling, has limitations when dealing with high-dimensional datasets or complex correlation structures among observed variables.
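The conditional independence assumption mentioned above means the decoder likelihood factorizes across observed coordinates, $p(x \mid z) = \prod_j p(x_j \mid z)$, so the reconstruction log-likelihood is a per-coordinate sum that cannot capture residual correlation among coordinates. A minimal NumPy sketch of this standard factorized Gaussian decoder likelihood (the baseline being criticized, not the paper's proposal; the function name is hypothetical):

```python
import numpy as np

def factorized_gaussian_loglik(x, mu, log_var):
    """Log p(x | z) under the usual VAE decoder assumption that the
    observed coordinates are conditionally independent given z:
        p(x | z) = prod_j N(x_j | mu_j(z), sigma_j(z)^2).
    The joint log-likelihood reduces to a sum of per-coordinate terms,
    which ignores any correlation among coordinates given z."""
    return np.sum(
        -0.5 * (np.log(2 * np.pi) + log_var + (x - mu) ** 2 / np.exp(log_var)),
        axis=-1,
    )
```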
no code implementations • 2 Mar 2023 • Sungchul Hong, Jong-June Jeon
However, estimating an optimal portfolio assessed by a pessimistic risk is still challenging due to the absence of a computationally tractable model.
no code implementations • 28 Feb 2023 • Sungchul Hong, Yunjin Choi, Jong-June Jeon
Accurate forecasting of river water levels is vital for effectively managing traffic flow and mitigating the risks associated with natural disasters.
1 code implementation • 23 Feb 2023 • SeungHwan An, Kyungwoo Song, Jong-June Jeon
We present a new supervised learning technique for the Variational AutoEncoder (VAE) that allows it to learn a causally disentangled representation and generate causally disentangled outcomes simultaneously.
1 code implementation • NeurIPS 2023 • SeungHwan An, Jong-June Jeon
The Gaussianity assumption has been consistently criticized as a main limitation of the Variational Autoencoder (VAE) despite its efficiency in computational modeling.
1 code implementation • NeurIPS 2023 • Changdae Oh, Junhyuk So, Hoyoon Byun, Yongtaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song
Such a lack of alignment and uniformity might restrict the transferability and robustness of embeddings.
1 code implementation • 23 May 2021 • SeungHwan An, Hosik Choi, Jong-June Jeon
To improve the performance of our VAE in a classification task without sacrificing its performance as a generative model, we employ a new semi-supervised classification method called SCI (Soft-label Consistency Interpolation).
no code implementations • 21 Dec 2018 • Jong-June Jeon, Yongdai Kim, Sungho Won, Hosik Choi
To reflect these characteristics, a specific regularized regression model with linear constraints is commonly used.
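As a toy illustration of a regularized regression with linear constraints (an assumption for exposition, not the paper's exact model), the sketch below solves ridge regression subject to an equality constraint $A\beta = 0$ via its KKT system; a common instance is the sum-to-zero constraint $\sum_j \beta_j = 0$:

```python
import numpy as np

def constrained_ridge(X, y, A, lam=1.0):
    """Minimize ||y - X beta||^2 + lam * ||beta||^2 subject to A beta = 0,
    by solving the KKT linear system for (beta, nu):
        [ 2(X'X + lam I)  A' ] [beta]   [2 X'y]
        [ A               0  ] [ nu ] = [  0  ]
    """
    p, m = X.shape[1], A.shape[0]
    Q = 2 * (X.T @ X + lam * np.eye(p))
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([2 * X.T @ y, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:p]  # discard the Lagrange multipliers
```

With `A = np.ones((1, p))`, the fitted coefficients sum exactly to zero, mimicking the kind of linear structural restriction such models impose.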