no code implementations • 27 Nov 2017 • YoungJoon Yoo, SeongUk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak
In addition to this performance enhancement problem, we show that the proposed PGN can be adopted to solve the classical adversarial problem without utilizing any information about the target classifier.
no code implementations • ECCV 2018 • Simyung Chang, John Yang, SeongUk Park, Nojun Kwak
In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features.
2 code implementations • NeurIPS 2018 • Jangho Kim, SeongUk Park, Nojun Kwak
Among model compression methods, knowledge transfer trains a student network under the guidance of a stronger teacher network.
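The entry above refers to teacher–student knowledge transfer. A minimal sketch of the classical soft-target distillation loss (temperature-scaled KL divergence between teacher and student outputs) is shown below; this illustrates the general technique, not this paper's specific method, and the temperature value and function names are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stabilized).
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between the softened teacher and student distributions,
    # scaled by T^2 as is conventional, averaged over the batch.
    p = softmax(teacher_logits, T)  # teacher "soft targets"
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(kl.mean() * T * T)
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels, so the student learns from both hard labels and the teacher's soft predictions.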
1 code implementation • ICCV 2019 • Simyung Chang, SeongUk Park, John Yang, Nojun Kwak
Recent advances in image-to-image translation have made it possible to generate images across multiple domains with a single network.
no code implementations • 24 Sep 2019 • SeongUk Park, Nojun Kwak
We name this method parallel FEED, and experimental results on CIFAR-100 and ImageNet show clear performance enhancements, without introducing any additional parameters or computations at test time.
no code implementations • ICML 2020 • Inseop Chung, SeongUk Park, Jangho Kim, Nojun Kwak
By training each network to fool its corresponding discriminator, a network can learn the other network's feature-map distribution.
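The mechanism described above can be sketched as an adversarial game over feature maps: a discriminator is trained to tell one network's features from the other's, while the student-side network is trained to fool it. The toy example below uses a logistic discriminator on flattened feature vectors purely for illustration; the feature shapes, initialization, and loss weighting are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(feat, w, b):
    # Logistic discriminator: probability that a feature vector
    # came from the teacher-side network.
    return 1.0 / (1.0 + np.exp(-(feat @ w + b)))

# Toy flattened feature maps from two networks (illustrative shapes only).
teacher_feat = rng.normal(1.0, 1.0, size=(8, 16))
student_feat = rng.normal(0.0, 1.0, size=(8, 16))
w, b = rng.normal(size=16), 0.0

eps = 1e-8
# Discriminator objective: label teacher features 1, student features 0.
d_loss = -(np.log(discriminator(teacher_feat, w, b) + eps).mean()
           + np.log(1.0 - discriminator(student_feat, w, b) + eps).mean())

# Student objective: produce features the discriminator labels as teacher-like,
# which pushes its feature distribution toward the other network's.
s_loss = -np.log(discriminator(student_feat, w, b) + eps).mean()
```

Alternating gradient updates on `d_loss` (for the discriminator) and `s_loss` (for the feature extractor) would complete the adversarial training loop; those updates are omitted here for brevity.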
no code implementations • 9 Sep 2020 • SeongUk Park, KiYoon Yoo, Nojun Kwak
In this paper, we focus on knowledge distillation and demonstrate that knowledge distillation methods are orthogonal to other efficiency-enhancing methods both analytically and empirically.
no code implementations • 22 Nov 2021 • SeongUk Park, Nojun Kwak
In this paper, we propose a novel feature distillation (FD) method which is suitable for SISR.
no code implementations • 29 Sep 2022 • Jookyung Song, Yeonjin Chang, SeongUk Park, Nojun Kwak
U-net, a conventional architecture for conditional GANs, retains fine details of unmasked regions, but the style of the reconstructed region is inconsistent with the rest of the original image, and the approach works robustly only when the occluding object is small enough.
no code implementations • ICLR 2019 • SeongUk Park, Nojun Kwak
This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work on factor transfer.