no code implementations • COLING 2022 • Youhan Lee, Kyungtae Lim, Woonhyuk Baek, Byungseok Roh, Saehoon Kim
In this multilingual approach, a typical setup is to train on (image, English-text) pairs together with translation pairs.
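A rough sketch of that setup follows: align images with English text contrastively, and align English with non-English text through translation pairs. The encoders, dimensions, and InfoNCE-style loss are illustrative assumptions, not necessarily the paper's exact objective.

```python
# Hedged sketch: image-English contrastive alignment plus English-to-other-
# language alignment from translation pairs. All embeddings are placeholders.
import torch
import torch.nn.functional as F

def contrastive(a, b, temp=0.07):
    # InfoNCE over a batch: matching pairs share the same row index.
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temp
    labels = torch.arange(a.shape[0])
    return F.cross_entropy(logits, labels)

img = torch.randn(8, 256)  # image embeddings (placeholder encoder output)
en = torch.randn(8, 256)   # English caption embeddings (placeholder)
ko = torch.randn(8, 256)   # translated caption embeddings (placeholder)
loss = contrastive(img, en) + contrastive(en, ko)  # image-text + translation
```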
1 code implementation • 30 Oct 2024 • Semin Kim, Jaehoon Yoo, Jinwoo Kim, Yeonwoo Cha, Saehoon Kim, Seunghoon Hong
In this work, we investigate a method for simulation-free training of Neural Ordinary Differential Equations (NODEs) for learning deterministic mappings between paired data.
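A minimal sketch of simulation-free training in the spirit of flow matching: regress a vector field along straight paths between paired samples, so no ODE solver appears in the training loop. The toy data and objective below are illustrative assumptions, not the paper's exact formulation.

```python
# Simulation-free NODE training sketch (flow-matching style) on toy pairs.
import torch
import torch.nn as nn

dim = 2
vector_field = nn.Sequential(
    nn.Linear(dim + 1, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, dim),
)
opt = torch.optim.Adam(vector_field.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(128, dim)            # source samples (toy data)
    x1 = x0 + 2.0                         # paired targets (toy deterministic map)
    t = torch.rand(128, 1)                # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1            # point on the straight path x0 -> x1
    target = x1 - x0                      # constant velocity of that path
    pred = vector_field(torch.cat([xt, t], dim=-1))
    loss = ((pred - target) ** 2).mean()  # regress the field; no ODE solve needed
    opt.zero_grad()
    loss.backward()
    opt.step()
```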
no code implementations • CVPR 2023 • Minsoo Kang, Doyup Lee, Jiseob Kim, Saehoon Kim, Bohyung Han
We propose a deep neural network-based text-to-image generation algorithm for the setting where text captions are unavailable during training.
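One common recipe for caption-free training (an assumption about the general approach, not confirmed as this paper's method) is to condition the generator on embeddings from a joint image-text space: image embeddings during training, text embeddings at inference.

```python
# Hedged sketch of language-free text-to-image training. The encoder,
# generator, and dimensions are all illustrative assumptions.
import torch
import torch.nn as nn

embed_dim, img_dim = 256, 3 * 32 * 32
generator = nn.Sequential(nn.Linear(embed_dim + 64, 512), nn.ReLU(),
                          nn.Linear(512, img_dim), nn.Tanh())

def image_embed(images):
    # Placeholder for a pretrained joint image-text encoder.
    return torch.randn(images.shape[0], embed_dim)

images = torch.rand(8, img_dim)
cond = image_embed(images)      # caption-free conditioning at train time
noise = torch.randn(8, 64)
fake = generator(torch.cat([cond, noise], dim=-1))
# At inference, replace `cond` with a text embedding from the same space.
```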
1 code implementation • CVPR 2023 • Chiheon Kim, Doyup Lee, Saehoon Kim, Minsu Cho, Wook-Shin Han
Despite recent advances in implicit neural representations (INRs), it remains challenging for a coordinate-based multi-layer perceptron (MLP) of INRs to learn a common representation across data instances and generalize it for unseen instances.
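A minimal sketch of a coordinate-based MLP shared across instances, conditioned on a per-instance latent code; the concatenation-based conditioning is an illustrative assumption, not the paper's specific mechanism.

```python
# Shared coordinate MLP + per-instance latent code, so one network can
# represent many signals. Shapes and conditioning are illustrative.
import torch
import torch.nn as nn

class ConditionedINR(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, coords, z):
        # coords: (N, 2) in [-1, 1]^2; z: (latent_dim,) per-instance code
        z = z.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1))

model = ConditionedINR()
coords = torch.rand(1024, 2) * 2 - 1  # random pixel coordinates
z = torch.randn(64)                   # latent code for one instance
rgb = model(coords, z)                # predicted colors at those coordinates
```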
no code implementations • 9 Jun 2022 • Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, Wook-Shin Han
After code stacks in the sequence are randomly masked, Contextual RQ-Transformer is trained to infill the masked code stacks based on the unmasked contexts of the image.
Ranked #1 on Text-to-Image Generation on Conceptual Captions
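The masking step described in this entry can be sketched as follows; the shapes, vocabulary size, and mask token are illustrative assumptions.

```python
# An image is a sequence of residual-quantized "code stacks" (T positions x
# D depths); whole stacks are randomly masked and the model infills them.
import torch

T, D, vocab, mask_id = 64, 4, 1024, 1024  # mask_id is one extra token
codes = torch.randint(0, vocab, (T, D))   # RQ code stacks for one image
mask = torch.rand(T) < 0.5                # mask whole stacks, not single codes
inputs = codes.clone()
inputs[mask] = mask_id                    # masked stacks to be infilled
targets = codes[mask]                     # the model learns to predict these
```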
4 code implementations • CVPR 2022 • Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, Wook-Shin Han
However, we postulate that previous VQ methods cannot both shorten the code sequence and generate high-fidelity images, owing to the rate-distortion trade-off.
Ranked #2 on Text-to-Image Generation on Conceptual Captions
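The residual quantization behind this entry can be sketched as below; a single codebook shared across depths is a simplifying assumption.

```python
# Each feature vector is approximated by a depth-D stack of codes, quantizing
# the residual at every depth, so a short sequence keeps high fidelity.
import torch

def residual_quantize(x, codebook, depth=4):
    # x: (N, dim); codebook: (K, dim), shared across depths for simplicity
    residual, codes = x.clone(), []
    for _ in range(depth):
        dists = torch.cdist(residual, codebook)  # (N, K) pairwise distances
        idx = dists.argmin(dim=1)                # nearest code per vector
        codes.append(idx)
        residual = residual - codebook[idx]      # quantize what remains
    return torch.stack(codes, dim=1)             # (N, depth) code stack

x = torch.randn(8, 32)
codebook = torch.randn(512, 32)
codes = residual_quantize(x, codebook)           # 4 codes approximate each vector
```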
1 code implementation • ICLR 2022 • Byungseok Roh, Jaewoong Shin, Wuhyun Shin, Saehoon Kim
Deformable DETR uses multiscale features to improve performance; however, the number of encoder tokens increases by 20x compared to DETR, and the computational cost of the encoder attention remains a bottleneck.
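A hedged sketch of the token-sparsification idea: score the encoder tokens with a lightweight predictor and run the expensive self-attention only on the top fraction. The scorer and keep ratio are illustrative assumptions.

```python
# Select a salient subset of multiscale encoder tokens before attention.
import torch
import torch.nn as nn

tokens = torch.randn(1, 20000, 256)   # roughly 20x the DETR token count
scorer = nn.Linear(256, 1)            # lightweight saliency score per token
keep = int(tokens.shape[1] * 0.1)     # keep the top 10% of tokens

scores = scorer(tokens).squeeze(-1)   # (1, 20000)
topk = scores.topk(keep, dim=1).indices
selected = tokens.gather(1, topk.unsqueeze(-1).expand(-1, -1, 256))
# Only `selected` goes through the expensive encoder self-attention.
```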
1 code implementation • ICML Workshop AutoML 2021 • Chiheon Kim, Saehoon Kim, Jongmin Kim, Donghoon Lee, Sungwoong Kim
Large-batch training has been essential in leveraging large-scale datasets and models in deep learning.
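For context, a common large-batch recipe is to scale the learning rate linearly with batch size and warm it up over the first steps; this is a standard baseline sketch, not the scheduler proposed in the paper.

```python
# Linear LR scaling rule with warmup; base values are illustrative.
def lr_at(step, batch_size, base_lr=0.1, base_batch=256, warmup_steps=500):
    peak = base_lr * batch_size / base_batch     # linear scaling rule
    if step < warmup_steps:
        return peak * (step + 1) / warmup_steps  # linear warmup from ~0
    return peak

print(lr_at(step=1000, batch_size=4096))         # 1.6 after warmup
```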
1 code implementation • 11 Jun 2021 • Saehoon Kim, Sungwoong Kim, Juho Lee
On the other hand, generative pre-training directly estimates the data distribution, so the resulting representations tend to be robust but not optimal for discriminative tasks.
1 code implementation • ICLR 2020 • Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang
While tasks in realistic settings can vary in the number of instances and classes, existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed.
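That realistic setting can be sketched as episodic sampling with varying ways and shots; the dataset layout is an illustrative assumption, and the sketch omits the paper's Bayesian treatment.

```python
# Sample tasks whose number of classes (ways) and instances per class
# (shots) vary, instead of a fixed N-way K-shot protocol.
import random

def sample_task(dataset, max_way=10, max_shot=20):
    # dataset: dict mapping class label -> list of examples
    way = random.randint(2, max_way)       # varying number of classes
    classes = random.sample(list(dataset), way)
    task = {}
    for c in classes:
        shot = random.randint(1, max_shot)  # varying instances per class
        task[c] = random.sample(dataset[c], min(shot, len(dataset[c])))
    return task

data = {c: [f"img_{c}_{i}" for i in range(30)] for c in range(20)}
task = sample_task(data)  # a task with its own way/shot configuration
```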
no code implementations • 23 May 2019 • Jungtaek Kim, Michael McCourt, Tackgeun You, Saehoon Kim, Seungjin Choi
We propose a practical Bayesian optimization method over sets to minimize a black-box function that takes a set as a single input.
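One way to make a GP surrogate operate on sets (an illustrative assumption, not necessarily the paper's kernel) is a permutation-invariant set kernel that averages pairwise similarities between elements.

```python
# Permutation-invariant covariance between two sets of points.
import torch

def set_kernel(A, B, gamma=1.0):
    # A: (n, d) set of points; B: (m, d) set of points
    pairwise = torch.exp(-gamma * torch.cdist(A, B) ** 2)  # (n, m) RBF values
    return pairwise.mean()                                 # order-independent

A, B = torch.randn(5, 3), torch.randn(7, 3)
print(set_kernel(A, B))
```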
no code implementations • 11 Apr 2019 • Minseop Park, Jungtaek Kim, Saehoon Kim, Yanbin Liu, Seungjin Choi
A meta-model is trained on a distribution of similar tasks such that it learns an algorithm that can quickly adapt to a novel task with only a handful of labeled examples.
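A MAML-style inner loop is one concrete instance of such quick adaptation; the linear model and loss below are illustrative assumptions, not necessarily this paper's method.

```python
# A few inner gradient steps on the support set turn the meta-learned
# initialization into task-specific weights.
import torch
import torch.nn.functional as F

def adapt(params, x_support, y_support, inner_lr=0.01, steps=5):
    w, b = params["w"], params["b"]
    for _ in range(steps):
        loss = F.cross_entropy(x_support @ w + b, y_support)
        gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
        w, b = w - inner_lr * gw, b - inner_lr * gb  # task-specific update
    return {"w": w, "b": b}

params = {"w": torch.randn(16, 5, requires_grad=True),
          "b": torch.zeros(5, requires_grad=True)}
x_s, y_s = torch.randn(25, 16), torch.randint(0, 5, (25,))
task_params = adapt(params, x_s, y_s)  # adapted from a handful of examples
```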
1 code implementation • ICLR 2020 • Jaehong Yoon, Saehoon Kim, Eunho Yang, Sung Ju Hwang
First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks.
no code implementations • 27 Sep 2018 • Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang
With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without loss of accuracy.
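The input-dependent alternative implied here can be sketched with a gate network that predicts per-example keep probabilities; the gating and plain Bernoulli sampling below are illustrative simplifications of the paper's variational beta-Bernoulli construction.

```python
# Input-dependent dropout: different inputs switch off different neurons.
import torch
import torch.nn as nn

features = nn.Linear(32, 128)
gate = nn.Sequential(nn.Linear(32, 128), nn.Sigmoid())  # per-input keep probs

x = torch.randn(4, 32)
h = torch.relu(features(x))
keep_prob = gate(x)                # (4, 128), depends on the input
mask = torch.bernoulli(keep_prob)  # a different mask per example
h = h * mask                       # input-dependent sparsification
```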
2 code implementations • 5 Jun 2018 • Ingyo Chung, Saehoon Kim, Juho Lee, Kwang Joon Kim, Sung Ju Hwang, Eunho Yang
We present a personalized and reliable prediction model for healthcare, which can provide individually tailored medical services such as diagnosis, disease treatment, and prevention.
1 code implementation • 28 May 2018 • Juho Lee, Saehoon Kim, Jaehong Yoon, Hae Beom Lee, Eunho Yang, Sung Ju Hwang
With such input-independent dropout, each neuron evolves to be generic across inputs, which makes it difficult to sparsify networks without loss of accuracy.
2 code implementations • ICLR 2019 • Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, Yi Yang
The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class.
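A transductive classifier of this flavor can be sketched with closed-form label propagation over a similarity graph of support and query embeddings; the Gaussian kernel and alpha below are illustrative assumptions.

```python
# Propagate support labels to queries through a normalized similarity graph.
import torch

def label_propagation(support, support_labels, query, num_classes, alpha=0.99):
    z = torch.cat([support, query], dim=0)       # all embeddings
    w = torch.exp(-torch.cdist(z, z) ** 2)       # similarity graph
    d = w.sum(dim=1).rsqrt()
    s = d[:, None] * w * d[None, :]              # symmetric normalization
    y = torch.zeros(z.shape[0], num_classes)
    y[:support.shape[0]] = torch.eye(num_classes)[support_labels]
    f = torch.linalg.solve(torch.eye(z.shape[0]) - alpha * s, y)
    return f[support.shape[0]:].argmax(dim=1)    # query predictions

support, query = torch.randn(10, 64), torch.randn(15, 64)
labels = torch.arange(5).repeat(2)               # 5-way, 2-shot support set
preds = label_propagation(support, labels, query, num_classes=5)
```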
2 code implementations • NeurIPS 2018 • Jay Heo, Hae Beom Lee, Saehoon Kim, Juho Lee, Kwang Joon Kim, Eunho Yang, Sung Ju Hwang
The attention mechanism is effective both in focusing deep learning models on relevant features and in interpreting those models.
4 code implementations • NeurIPS 2018 • Hae Beom Lee, Juho Lee, Saehoon Kim, Eunho Yang, Sung Ju Hwang
Moreover, learning dropout rates for non-target classes on each instance allows the classifier to focus more on discriminating against the most confusing classes.
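Dropping non-target class logits can be sketched as below; the constant retain probability is an illustrative stand-in for the learned, instance-wise rates described above.

```python
# Softmax over a random subset of classes; the target class is always kept.
import torch

logits = torch.randn(4, 10)              # (batch, classes)
targets = torch.tensor([0, 3, 7, 9])
retain_p = torch.full_like(logits, 0.5)  # learned per-instance in the paper
mask = torch.bernoulli(retain_p)
mask[torch.arange(4), targets] = 1.0     # never drop the target class
masked_logits = logits + (1 - mask) * -1e9  # dropped classes get ~zero prob
loss = torch.nn.functional.cross_entropy(masked_logits, targets)
```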
no code implementations • 17 Oct 2017 • Jungtaek Kim, Saehoon Kim, Seungjin Choi
A simple alternative to manual search is random/grid search over a space of hyperparameters, which still requires extensive evaluations of validation error to find the best configuration.
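The random-search baseline mentioned here, sketched with a hypothetical search space and a user-supplied evaluate function:

```python
# Sample configurations uniformly and keep the lowest validation error.
import random

def random_search(evaluate, num_trials=50):
    best_cfg, best_err = None, float("inf")
    for _ in range(num_trials):
        cfg = {"lr": 10 ** random.uniform(-5, -1),  # log-uniform learning rate
               "dropout": random.uniform(0.0, 0.5)}
        err = evaluate(cfg)                         # one full validation run
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err
```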
no code implementations • CVPR 2015 • Saehoon Kim, Seungjin Choi
In this paper, we analyze a bilinear random projection method in which feature matrices are transformed to binary codes by two smaller random projection matrices.
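A minimal sketch of the bilinear random projection: two small random matrices act on the rows and columns of the feature matrix, and the result is binarized by sign. The dimensions are illustrative.

```python
# Two small projections replace one large (d1*d2) x (k1*k2) matrix on vec(X).
import torch

d1, d2, k1, k2 = 32, 32, 8, 8
X = torch.randn(d1, d2)                        # feature matrix
R1 = torch.randn(d1, k1)                       # row-side random projection
R2 = torch.randn(d2, k2)                       # column-side random projection
codes = torch.sign(R1.T @ X @ R2).flatten()    # k1*k2 = 64-bit binary code
```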