2 code implementations • 10 Oct 2023 • Dong Bok Lee, Seanie Lee, Joonho Ko, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang
To achieve this, we also use the MSE between the representations of the inner model and those of the self-supervised target model, computed on the original full dataset, as the outer optimization objective.
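As a rough illustration, here is a minimal PyTorch sketch of an outer objective of this kind; the names (`inner_model`, `target_model`, `full_loader`) are hypothetical placeholders, and this is a sketch of the general idea rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def outer_loss(inner_model, target_model, full_loader, device="cpu"):
    """Sketch: MSE between the inner model's representations and a frozen
    self-supervised target model's representations, averaged over the
    original full dataset, used as the outer objective."""
    target_model.eval()
    total, n = 0.0, 0
    for x, _ in full_loader:            # labels unused: target is self-supervised
        x = x.to(device)
        with torch.no_grad():
            z_target = target_model(x)  # frozen SSL representations
        z_inner = inner_model(x)        # inner model trained on distilled data
        total = total + F.mse_loss(z_inner, z_target, reduction="sum")
        n += x.size(0)
    return total / n
```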
no code implementations • 19 Oct 2022 • Minseon Kim, Hyeonjeong Ha, Dong Bok Lee, Sung Ju Hwang
Despite their success on few-shot learning problems, most meta-learned models focus only on achieving good performance on clean examples, and thus easily break down when given adversarially perturbed samples.
1 code implementation • 21 Aug 2022 • Hae Beom Lee, Dong Bok Lee, Sung Ju Hwang
In this paper, we introduce a novel approach that systematically and efficiently solves the dataset condensation problem by exploiting the regularity in a given dataset.
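For orientation, below is a minimal sketch of the widely used gradient-matching formulation of dataset condensation, in which a small set of learnable synthetic examples is optimized so that a network's gradients on the synthetic set match its gradients on real data. This is the generic setup, not this paper's specific regularity-exploiting approach, and all names (`net`, `x_syn`, `syn_opt`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def condensation_step(net, x_real, y_real, x_syn, y_syn, syn_opt):
    """One gradient-matching update of the synthetic set. `x_syn` is a
    learnable leaf tensor (requires_grad=True) held by the optimizer
    `syn_opt`; `net` is a fixed network for this step."""
    # gradients of the loss on real data (no graph needed)
    loss_real = F.cross_entropy(net(x_real), y_real)
    g_real = torch.autograd.grad(loss_real, net.parameters())
    # gradients of the loss on synthetic data (graph kept to reach x_syn)
    loss_syn = F.cross_entropy(net(x_syn), y_syn)
    g_syn = torch.autograd.grad(loss_syn, net.parameters(), create_graph=True)
    # match gradients: sum of cosine distances per parameter tensor
    match = sum(1 - F.cosine_similarity(a.flatten(), b.detach().flatten(), dim=0)
                for a, b in zip(g_syn, g_real))
    syn_opt.zero_grad()
    match.backward()
    syn_opt.step()
    return match.item()
```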
no code implementations • 29 Sep 2021 • Seul Lee, Dong Bok Lee, Sung Ju Hwang
To validate its ability to explore chemical space beyond the known molecular distribution, we use MOG to generate molecules with docking scores of high absolute value, where the docking score is an affinity estimate based on a physical binding simulation between a target protein and a given molecule.
2 code implementations • 6 Jun 2021 • Dongchan Min, Dong Bok Lee, Eunho Yang, Sung Ju Hwang
In this work, we propose StyleSpeech, a new TTS model that not only synthesizes high-quality speech but also effectively adapts to new speakers.
1 code implementation • ICLR 2021 • Dong Bok Lee, Dongchan Min, Seanie Lee, Sung Ju Hwang
The learned model can then be used for downstream few-shot classification tasks: we obtain task-specific parameters by running semi-supervised EM on the latent representations of the support and query sets, and predict the labels of the query set by computing aggregated posteriors.
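Here is a minimal sketch of such a semi-supervised EM procedure on latent features, simplified to class-wise Gaussians with identity covariance; the names (`z_support`, `z_query`) are hypothetical, and this is an assumption about the general technique rather than the paper's exact algorithm.

```python
import torch

def semi_supervised_em(z_support, y_support, z_query, n_classes, iters=10):
    """EM over class-wise Gaussian components on latent features. Support
    labels are fixed one-hot responsibilities; query responsibilities are
    re-estimated at each E-step."""
    r_s = torch.nn.functional.one_hot(y_support, n_classes).float()
    # initialize class means from the labeled support set
    mu = r_s.t() @ z_support / r_s.sum(0, keepdim=True).t().clamp_min(1e-8)
    for _ in range(iters):
        # E-step: posterior over classes for each query point
        logits = -torch.cdist(z_query, mu) ** 2 / 2
        r_q = logits.softmax(dim=-1)
        # M-step: update means from support (hard) + query (soft) assignments
        r = torch.cat([r_s, r_q], dim=0)
        z = torch.cat([z_support, z_query], dim=0)
        mu = r.t() @ z / r.sum(0, keepdim=True).t().clamp_min(1e-8)
    return r_q.argmax(dim=-1)  # predicted query labels
```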
1 code implementation • ICLR 2021 • Seanie Lee, Dong Bok Lee, Sung Ju Hwang
In this work, we propose to improve conditional text generation by contrasting positive pairs with negative pairs, so that the model is exposed to various valid or incorrect perturbations of its inputs for better generalization.
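A minimal sketch of a contrastive objective of this flavor (an InfoNCE-style loss over sequence-level representations) is shown below; how the valid and incorrect perturbations are generated is omitted, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h_anchor, h_pos, h_neg, temperature=0.1):
    """Pull the anchor representation toward a valid (positive) perturbation
    and away from incorrect (negative) ones.
    h_anchor, h_pos, h_neg: [batch, dim] sequence-level representations."""
    anchor = F.normalize(h_anchor, dim=-1)
    pos = F.normalize(h_pos, dim=-1)
    neg = F.normalize(h_neg, dim=-1)
    sim_pos = (anchor * pos).sum(-1, keepdim=True)  # [B, 1]
    sim_neg = anchor @ neg.t()                      # [B, B] in-batch negatives
    logits = torch.cat([sim_pos, sim_neg], dim=1) / temperature
    # the positive sits at index 0 of each row
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```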
1 code implementation • NeurIPS 2020 • Jinheon Baek, Dong Bok Lee, Sung Ju Hwang
For transductive link prediction, we further propose a stochastic embedding layer to model uncertainty in link prediction between unseen entities.
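As an illustration, here is a minimal sketch of a stochastic embedding layer as a Gaussian over embeddings with the reparameterization trick; this reflects the general technique under stated assumptions, not necessarily the paper's exact layer.

```python
import torch
import torch.nn as nn

class StochasticEmbedding(nn.Module):
    """Each entity gets a Gaussian over embeddings instead of a point,
    sampled via the reparameterization trick so the uncertainty is
    learned end to end."""
    def __init__(self, num_entities, dim):
        super().__init__()
        self.mu = nn.Embedding(num_entities, dim)
        self.log_var = nn.Embedding(num_entities, dim)

    def forward(self, idx):
        mu, log_var = self.mu(idx), self.log_var(idx)
        if self.training:
            eps = torch.randn_like(mu)
            return mu + eps * (0.5 * log_var).exp()  # z ~ N(mu, sigma^2)
        return mu  # use the mean at evaluation time
```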
1 code implementation • ACL 2020 • Dong Bok Lee, Seanie Lee, Woo Tae Jeong, Donghwan Kim, Sung Ju Hwang
We validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE, whose hierarchical latent structure is sketched after this entry) on several benchmark datasets against state-of-the-art baseline models. We evaluate the performance of a QA model (BERT-base) trained either on the generated QA pairs alone (QA-based evaluation) or on both the generated and human-labeled pairs (semi-supervised learning).
Ranked #1 on Question Generation on Natural Questions
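For intuition, below is a minimal sketch of a hierarchical conditional latent structure of this kind: a question latent z_q is sampled first, and an answer latent z_a is sampled conditioned on z_q, keeping generated QA pairs consistent. Both latents are simplified to Gaussians here (the paper's exact parameterization may differ), encoders and decoders over the context are omitted, and every name is hypothetical.

```python
import torch
import torch.nn as nn

class HierarchicalLatents(nn.Module):
    """Hierarchical latents: z_q ~ p(z_q | context),
    then z_a ~ p(z_a | z_q, context)."""
    def __init__(self, ctx_dim, zq_dim=64, za_dim=32):
        super().__init__()
        self.q_prior = nn.Linear(ctx_dim, 2 * zq_dim)           # -> (mu, log_var) of z_q
        self.a_prior = nn.Linear(ctx_dim + zq_dim, 2 * za_dim)  # -> (mu, log_var) of z_a

    @staticmethod
    def sample(stats):
        mu, log_var = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

    def forward(self, ctx):
        z_q = self.sample(self.q_prior(ctx))
        z_a = self.sample(self.a_prior(torch.cat([ctx, z_q], dim=-1)))
        return z_q, z_a
```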