Search Results for author: Sungwoong Kim

Found 23 papers, 14 papers with code

MAGVLT: Masked Generative Vision-and-Language Transformer

1 code implementation • CVPR 2023 • Sungwoong Kim, DaeJin Jo, Donghoon Lee, Jongmin Kim

Notably, MAGVLT achieves competitive results on both zero-shot image-to-text and text-to-image generation tasks on MS-COCO with a single moderate-sized model (fewer than 500M parameters), even without using monomodal data or networks.

Image Captioning • Text Infilling

LECO: Learnable Episodic Count for Task-Specific Intrinsic Reward

1 code implementation • 11 Oct 2022 • DaeJin Jo, Sungwoong Kim, Daniel Wontae Nam, Taehwan Kwon, Seungeun Rho, Jongmin Kim, Donghoon Lee

To resolve these issues, we propose a learnable hash-based episodic count, named LECO, which performs efficiently as a task-specific intrinsic reward in hard-exploration problems.

Efficient Exploration • Reinforcement Learning
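The episodic count bonus behind methods like LECO can be sketched with a minimal, non-learnable stand-in. This follows the common count-based exploration recipe (an intrinsic reward of 1/√N(s), with counts reset each episode); the class name and the plain dictionary "hash" are illustrative — LECO itself learns the hash codes:

```python
import math
from collections import defaultdict

class EpisodicCount:
    """Toy episodic count bonus: r_int = 1 / sqrt(N(s)), with counts reset
    each episode. A simplified, non-learnable stand-in for a learned
    hash-based episodic count."""

    def __init__(self):
        self.counts = defaultdict(int)

    def reset(self):
        # Called at the start of every episode: the counts are episodic, not global.
        self.counts.clear()

    def bonus(self, state_code):
        # state_code stands in for a (learned) hash of the observation.
        self.counts[state_code] += 1
        return 1.0 / math.sqrt(self.counts[state_code])
```

First visits within an episode yield the maximal bonus of 1.0, and repeated visits decay toward zero, pushing the agent toward novel states.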

Selective Token Generation for Few-shot Natural Language Generation

1 code implementation • COLING 2022 • DaeJin Jo, Taehwan Kwon, Eun-Sol Kim, Sungwoong Kim

Natural language modeling with limited training data is a challenging problem, and many algorithms make use of large-scale pretrained language models (PLMs) for this due to their strong generalization ability.

Data-to-Text Generation • Language Modelling • +3

Contrastive Regularization for Semi-Supervised Learning

no code implementations • 17 Jan 2022 • Doyup Lee, Sungwoong Kim, Ildoo Kim, Yeongjae Cheon, Minsu Cho, Wook-Shin Han

Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance.

Semi-Supervised Image Classification
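Consistency regularization on label predictions can be sketched with a generic pseudo-labeling variant in the FixMatch style (this is the baseline technique the snippet refers to, not this paper's contrastive loss; the function name and the 0.95 threshold are illustrative assumptions):

```python
import math

def consistency_loss(p_weak, p_strong, threshold=0.95):
    """Cross-entropy between a confident hard pseudo-label taken from the
    prediction on a weakly augmented view (p_weak) and the prediction on a
    strongly augmented view of the same unlabeled example (p_strong)."""
    conf = max(p_weak)
    if conf < threshold:
        return 0.0                       # ignore low-confidence unlabeled data
    pseudo = p_weak.index(conf)          # hard pseudo-label (argmax class)
    return -math.log(p_strong[pseudo])   # consistency (cross-entropy) term
```

The model is penalized whenever its prediction on the strong augmentation disagrees with the confident pseudo-label, which is what makes the two views' predictions consistent over training.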

Selective Token Generation for Few-shot Language Modeling

no code implementations • 29 Sep 2021 • DaeJin Jo, Taehwan Kwon, Sungwoong Kim, Eun-Sol Kim

Therefore, in this work, we develop a novel additive learning algorithm based on reinforcement learning (RL) for few-shot natural language generation (NLG) tasks.

Data-to-Text Generation • Language Modelling • +3

Hybrid Generative-Contrastive Representation Learning

1 code implementation • 11 Jun 2021 • Saehoon Kim, Sungwoong Kim, Juho Lee

On the other hand, generative pre-training directly estimates the data distribution, so the resulting representations tend to be robust but suboptimal for discriminative tasks.

Contrastive Learning • Representation Learning

Spatially Consistent Representation Learning

2 code implementations • CVPR 2021 • Byungseok Roh, Wuhyun Shin, Ildoo Kim, Sungwoong Kim

While these contrastive methods mainly focus on generating invariant global image-level representations under semantic-preserving transformations, they are prone to overlook the spatial consistency of local representations, which limits their pretraining for localization tasks such as object detection and instance segmentation.

Contrastive Learning • Image Classification • +6

Visual Concept Reasoning Networks

no code implementations • 26 Aug 2020 • Taesup Kim, Sungwoong Kim, Yoshua Bengio

It approximates sparsely connected networks by explicitly defining multiple branches to simultaneously learn representations with different visual concepts or properties.

Action Recognition • Image Classification • +4

torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models

3 code implementations • 21 Apr 2020 • Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim

We design and implement a ready-to-use library in PyTorch for performing micro-batch pipeline parallelism with checkpointing proposed by GPipe (Huang et al., 2019).
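The micro-batch dataflow at the heart of pipeline parallelism can be illustrated with a toy, single-threaded sketch. This is not torchgpipe's API (the function names here are illustrative): a real pipeline overlaps micro-batches across devices to keep every partition busy, and checkpointing recomputes activations during the backward pass; this sketch only shows how a batch is split, streamed through sequential partitions, and re-assembled:

```python
def pipeline(partitions, batch, chunks):
    """Run a batch through sequential partitions in micro-batches.

    partitions: list of functions, each mapping a micro-batch to a micro-batch.
    batch: a list of examples; chunks: the number of micro-batches to split into.
    Single-threaded illustration only -- no overlap, no checkpointing.
    """
    size = (len(batch) + chunks - 1) // chunks
    micro_batches = [batch[i:i + size] for i in range(0, len(batch), size)]
    outputs = []
    for mb in micro_batches:
        for stage in partitions:      # each stage is one partition of the model
            mb = stage(mb)
        outputs.append(mb)
    return [y for mb in outputs for y in mb]  # re-assemble the full batch
```

With two stages and two chunks, the second micro-batch could in principle enter stage 1 while the first occupies stage 2 — that overlap is exactly what the real library schedules across GPUs.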

Spatially Attentive Output Layer for Image Classification

no code implementations • CVPR 2020 • Ildoo Kim, Woonhyuk Baek, Sungwoong Kim

In this paper, we propose a novel spatial output layer on top of the existing convolutional feature maps to explicitly exploit the location-specific output information.

Classification • General Classification • +1
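The core aggregation idea — keeping per-location class scores and pooling them with a spatial attention map instead of global average pooling — can be sketched as follows. This is a simplified reading of the approach; the array shapes and the externally supplied attention map are illustrative assumptions (the paper learns these quantities end to end):

```python
import numpy as np

def spatial_output(loc_logits, attn_logits):
    """Pool per-location class scores with a spatial attention map.

    loc_logits: (H, W, C) class scores at each spatial position of the
                final convolutional feature map.
    attn_logits: (H, W) unnormalized attention scores over positions.
    Returns: (C,) attention-weighted class logits.
    """
    a = np.exp(attn_logits - attn_logits.max())
    a = a / a.sum()                                  # softmax over H x W positions
    return (loc_logits * a[..., None]).sum(axis=(0, 1))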

Mining GOLD Samples for Conditional GANs

1 code implementation • NeurIPS 2019 • Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin

Conditional generative adversarial networks (cGANs) have gained considerable attention in recent years due to their class-wise controllability and superior quality on complex generation tasks.

Active Learning

Scalable Neural Architecture Search for 3D Medical Image Segmentation

no code implementations • 13 Jun 2019 • Sungwoong Kim, Ildoo Kim, Sungbin Lim, Woonhyuk Baek, Chiheon Kim, Hyungjoo Cho, Boogeon Yoon, Taesup Kim

In this paper, a neural architecture search (NAS) framework is proposed for 3D medical image segmentation that automatically optimizes a neural architecture over a large design space.

Image Segmentation • Medical Image Segmentation • +2

Edge-labeling Graph Neural Network for Few-shot Learning

4 code implementations • CVPR 2019 • Jongmin Kim, Taesup Kim, Sungwoong Kim, Chang D. Yoo

In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning.

Clustering • Few-Shot Image Classification • +1
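At its simplest, an edge label in such a graph encodes whether two node embeddings (support/query examples) look like the same class. A toy similarity-based edge update could look like the following, with a fixed Gaussian kernel standing in for the learned metric network that EGNN actually uses (and omitting EGNN's alternating node/edge updates entirely):

```python
import math

def update_edge(feat_i, feat_j):
    """Toy edge-label update: squash the squared distance between two node
    feature vectors into (0, 1]. 1.0 means 'same class' with certainty;
    values near 0 mean the nodes look unrelated."""
    dist2 = sum((a - b) ** 2 for a, b in zip(feat_i, feat_j))
    return math.exp(-dist2)  # Gaussian kernel stand-in for a learned metric
```

Propagating such edge labels alongside node features is what lets the network exploit both intra-cluster similarity and inter-cluster dissimilarity for few-shot classification.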

Fast AutoAugment

11 code implementations • NeurIPS 2019 • Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, Sungwoong Kim

Data augmentation is an essential technique for improving the generalization ability of deep learning models.

Image Augmentation • Image Classification

The Liver Tumor Segmentation Benchmark (LiTS)

6 code implementations • 13 Jan 2019 • Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, Fabian Lohöfer, Julian Walter Holch, Wieland Sommer, Felix Hofmann, Alexandre Hostettler, Naama Lev-Cohain, Michal Drozdzal, Michal Marianne Amitai, Refael Vivantik, Jacob Sosna, Ivan Ezhov, Anjany Sekuboyina, Fernando Navarro, Florian Kofler, Johannes C. Paetzold, Suprosanna Shit, Xiaobin Hu, Jana Lipková, Markus Rempfler, Marie Piraud, Jan Kirschke, Benedikt Wiestler, Zhiheng Zhang, Christian Hülsemeyer, Marcel Beetz, Florian Ettlinger, Michela Antonelli, Woong Bae, Míriam Bellver, Lei Bi, Hao Chen, Grzegorz Chlebus, Erik B. Dam, Qi Dou, Chi-Wing Fu, Bogdan Georgescu, Xavier Giró-i-Nieto, Felix Gruen, Xu Han, Pheng-Ann Heng, Jürgen Hesser, Jan Hendrik Moltz, Christian Igel, Fabian Isensee, Paul Jäger, Fucang Jia, Krishna Chaitanya Kaluva, Mahendra Khened, Ildoo Kim, Jae-Hun Kim, Sungwoong Kim, Simon Kohl, Tomasz Konopczynski, Avinash Kori, Ganapathy Krishnamurthi, Fan Li, Hongchao Li, Junbo Li, Xiaomeng Li, John Lowengrub, Jun Ma, Klaus Maier-Hein, Kevis-Kokitsi Maninis, Hans Meine, Dorit Merhof, Akshay Pai, Mathias Perslev, Jens Petersen, Jordi Pont-Tuset, Jin Qi, Xiaojuan Qi, Oliver Rippel, Karsten Roth, Ignacio Sarasua, Andrea Schenk, Zengming Shen, Jordi Torres, Christian Wachinger, Chunliang Wang, Leon Weninger, Jianrong Wu, Daguang Xu, Xiaoping Yang, Simon Chun-Ho Yu, Yading Yuan, Miao Yu, Liping Zhang, Jorge Cardoso, Spyridon Bakas, Rickmer Braren, Volker Heinemann, Christopher Pal, An Tang, Samuel Kadoury, Luc Soler, Bram van Ginneken, Hayit Greenspan, Leo Joskowicz, Bjoern Menze

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018.

Benchmarking • Computed Tomography (CT) • +2

Bayesian Model-Agnostic Meta-Learning

2 code implementations • NeurIPS 2018 • Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, Sungjin Ahn

Learning to infer a Bayesian posterior from a few-shot dataset is an important step towards robust meta-learning due to the model uncertainty inherent in the problem.

Active Learning • Image Classification • +2

A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems

no code implementations • 2 Apr 2014 • Jörg H. Kappes, Bjoern Andres, Fred A. Hamprecht, Christoph Schnörr, Sebastian Nowozin, Dhruv Batra, Sungwoong Kim, Bernhard X. Kausler, Thorben Kröger, Jan Lellmann, Nikos Komodakis, Bogdan Savchynskyy, Carsten Rother

However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
