1 code implementation • CVPR 2022 • Kyungjune Baek, Hyunjung Shim
Since our synthesizer only considers the generic properties of natural images, a single model pretrained on our dataset can be consistently transferred to various target datasets, and even outperforms previous methods pretrained on natural images in terms of Fréchet inception distance.
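The Fréchet inception distance mentioned above compares the Gaussian statistics (mean and covariance) of real and generated image features. A minimal sketch of the underlying Fréchet distance, using only NumPy (the `psd_sqrt` helper is illustrative, not from the paper):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    def psd_sqrt(m):
        # matrix square root of a symmetric PSD matrix via eigendecomposition
        w, v = np.linalg.eigh(m)
        return (v * np.sqrt(np.clip(w, 0, None))) @ v.T
    diff = mu1 - mu2
    s1_half = psd_sqrt(sigma1)
    # Tr((S1 S2)^{1/2}) == Tr((S1^{1/2} S2 S1^{1/2})^{1/2}) for PSD S1, S2,
    # and the inner matrix is symmetric, so eigh applies
    inner = psd_sqrt(s1_half @ sigma2 @ s1_half)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * inner))
```

In FID, the Gaussian statistics come from Inception-v3 features of the two image sets; identical statistics give a distance of zero.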
1 code implementation • CVPR 2022 • Minhyun Lee, Dongseob Kim, Hyunjung Shim
Existing WSSS methods commonly argue that the sparse coverage of CAM incurs the performance bottleneck of WSSS.
Weakly supervised segmentation
Weakly supervised Semantic Segmentation
+1
1 code implementation • 4 Jan 2022 • Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, Jongwuk Lee
Session-based recommendation (SR) predicts the next items from a sequence of previous items consumed by an anonymous user.
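As a concrete illustration of the next-item prediction task described above, here is a minimal first-order transition-count baseline (an illustrative sketch only, not the paper's method):

```python
from collections import defaultdict, Counter

def fit_transitions(sessions):
    """Count item-to-item transitions across sessions: a minimal baseline
    for next-item prediction in session-based recommendation."""
    trans = defaultdict(Counter)
    for s in sessions:
        for prev, nxt in zip(s, s[1:]):
            trans[prev][nxt] += 1
    return trans

def predict_next(trans, session, k=1):
    """Recommend the k most frequent successors of the session's last item."""
    last = session[-1]
    return [item for item, _ in trans[last].most_common(k)]
```

Given sessions `[["a","b"], ["a","b"], ["a","c"]]`, a session ending in `"a"` would be recommended `"b"` first; real SR models replace these counts with learned sequence representations.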
2 code implementations • 22 Dec 2021 • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
Existing methods learn to disentangle style and content elements by developing a universal style representation for each font style.
1 code implementation • CVPR 2021 • Seungho Lee, Minhyun Lee, Jongwuk Lee, Hyunjung Shim
Existing studies in weakly-supervised semantic segmentation (WSSS) using image-level weak supervision have several limitations: sparse object coverage, inaccurate object boundaries, and co-occurring pixels from non-target objects.
Ranked #9 on Weakly-Supervised Semantic Segmentation on PASCAL VOC 2012 test (using extra training data)
Saliency Detection
Weakly supervised Semantic Segmentation
+1
4 code implementations • ICCV 2021 • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
MX-Font extracts multiple style features that are not explicitly conditioned on component labels but are learned automatically by multiple experts to represent different local concepts, e.g., the left-side sub-glyph.
3 code implementations • 30 Mar 2021 • Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, Jongwuk Lee
Session-based recommendation aims at predicting the next item given a sequence of previous items consumed in the session, e.g., on e-commerce or multimedia streaming services.
no code implementations • 1 Jan 2021 • Daejin Kim, Hyunjung Shim, Jongwuk Lee
We demonstrate that AAP equipped with existing pruning methods (i.e., iterative pruning, one-shot pruning, and dynamic pruning) consistently improves the accuracy of the original methods at 128×–4096× compression ratios on three benchmark datasets.
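One-shot magnitude pruning, one of the baselines named above, keeps only the largest-magnitude weights for a target compression ratio. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def magnitude_prune(weights, compression_ratio):
    """One-shot magnitude pruning: keep the top 1/compression_ratio fraction
    of weights by absolute value and zero out the rest.
    Returns the pruned weights and the binary keep-mask."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size / compression_ratio))  # number of weights to keep
    threshold = np.partition(flat, -k)[-k]          # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask
```

At a 4× ratio on 8 weights, only the 2 largest survive; iterative and dynamic pruning apply the same idea repeatedly during training.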
1 code implementation • Pattern Recognition 2021 • Kyungjune Baek, Duhyeon Bang, Hyunjung Shim
Recently developed regularization techniques improve a network's generalization by considering only the global context.
no code implementations • 1 Jan 2021 • Duhyeon Bang, Yunho Jeon, Jin-Hwa Kim, Jiwon Kim, Hyunjung Shim
When a person identifies objects, they can reason by associating the objects with many classes and conclude by taking inter-class relations into account.
3 code implementations • 23 Sep 2020 • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
However, learning component-wise styles solely from reference glyphs is infeasible in the few-shot font generation scenario when a target script has a large number of components, e.g., over 200 for Chinese.
2 code implementations • 8 Jul 2020 • Junsuk Choe, Seong Joon Oh, Sanghyuk Chun, Seungho Lee, Zeynep Akata, Hyunjung Shim
In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and we propose a new evaluation protocol where full supervision is limited to a small held-out set that does not overlap with the test set.
1 code implementation • ICCV 2021 • Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, Hyunjung Shim
To this end, we propose a truly unsupervised image-to-image translation model (TUNIT) that simultaneously learns to separate image domains and to translate input images into the estimated domains.
2 code implementations • CVPR 2020 • Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim
In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and we propose a new evaluation protocol where full supervision is limited to a small held-out set that does not overlap with the test set.
no code implementations • 13 Nov 2019 • Jae-woong Lee, Minjin Choi, Jongwuk Lee, Hyunjung Shim
Knowledge distillation (KD) is a well-known method to reduce inference latency by compressing a cumbersome teacher model into a small student model.
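The classic KD objective combines a hard-label cross-entropy with a temperature-scaled KL term between teacher and student predictions. A minimal NumPy sketch of that standard loss (the function names and default `T`, `alpha` values are illustrative, not from this paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style KD: cross-entropy on hard labels plus KL between
    teacher and student soft predictions at temperature T (scaled by T^2)."""
    p_s = softmax(student_logits)
    hard = -np.log(p_s[np.arange(len(labels)), labels]).mean()
    p_t_soft = softmax(teacher_logits, T)
    p_s_soft = softmax(student_logits, T)
    soft = (p_t_soft * (np.log(p_t_soft) - np.log(p_s_soft))).sum(axis=-1).mean() * T * T
    return alpha * hard + (1 - alpha) * soft
```

When the student matches the teacher exactly, the soft term vanishes and only the hard-label cross-entropy remains.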
1 code implementation • CVPR 2019 • Junsuk Choe, Hyunjung Shim
Weakly Supervised Object Localization (WSOL) techniques learn the object location using only image-level labels, without location annotations.
no code implementations • 28 Sep 2018 • Seongjong Song, Hyunjung Shim
We propose a novel approach to recovering translucent objects from a single time-of-flight (ToF) depth camera using deep residual networks.
no code implementations • 27 Sep 2018 • Duhyeon Bang, Hyunjung Shim
To analyze real data in the latent space of GANs, it is necessary to investigate the inverse generation mapping from data to latent vectors.
no code implementations • 20 Jul 2018 • Kyungjune Baek, Duhyeon Bang, Hyunjung Shim
We also show that our model achieves performance competitive with the state-of-the-art attribute editing technique in terms of attribute editing quality.
no code implementations • 3 Jul 2018 • Duhyeon Bang, Hyunjung Shim
We propose a novel algorithm, Resembled Generative Adversarial Networks (GAN), that simultaneously generates data from two different domains such that they resemble each other.
no code implementations • 1 Jun 2018 • Junsuk Choe, Joo Hyun Park, Hyunjung Shim
Our key finding is that the high image diversity of GANs, a main goal in GAN research, is ironically disadvantageous for object localization, because such discriminators focus not only on the target object but also on various other objects, such as background objects.
no code implementations • 28 May 2018 • Duhyeon Bang, Seoungyoon Kang, Hyunjung Shim
Various studies assert that the latent space of a GAN is semantically meaningful and can be utilized for advanced data analysis and manipulation.
1 code implementation • 12 Apr 2018 • Duhyeon Bang, Hyunjung Shim
Mode collapse is a critical problem in training generative adversarial networks.
no code implementations • 22 Feb 2018 • Junsuk Choe, Joo Hyun Park, Hyunjung Shim
To this end, we employ an effective data augmentation strategy to improve the accuracy of object localization.
no code implementations • ICML 2018 • Duhyeon Bang, Hyunjung Shim
Because the AE learns to minimize the forward KL divergence, our GAN training with representative features is influenced by both the reverse and forward KL divergences.
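The forward/reverse distinction above matters because KL divergence is asymmetric: minimizing forward KL(p‖q) is mode-covering, while minimizing reverse KL(q‖p) is mode-seeking. A minimal sketch for discrete distributions (illustrative only, not the paper's training objective):

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence KL(p || q) = sum_i p_i * log(p_i / q_i),
    with the convention 0 * log(0/q) = 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

# Forward KL(p||q) heavily penalizes q for missing mass where p has it
# (mode-covering); reverse KL(q||p) penalizes q for placing mass where p
# has little (mode-seeking) -- the two generally give different values.
```

For example, with `p = [0.5, 0.5]` and `q = [0.9, 0.1]`, the two directions give different divergences, which is why training signals derived from each behave differently.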