Search Results for author: SeongUk Park

Found 10 papers, 2 papers with code

Paraphrasing Complex Network: Network Compression via Factor Transfer

2 code implementations NeurIPS 2018 Jangho Kim, SeongUk Park, Nojun Kwak

Among model compression methods, knowledge transfer trains a smaller student network under the guidance of a stronger teacher network; a minimal sketch of this setup is given below.

Model Compression, Transfer Learning
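A minimal PyTorch sketch of the teacher-student idea referenced above. It shows classic soft-target knowledge transfer, not the paper's paraphraser/translator factor-transfer modules; the temperature T, weight alpha, and the assumption that `teacher` and `student` return class logits are illustrative, not taken from the paper.

# Sketch, assuming `teacher`/`student` are nn.Modules returning logits and
# `optimizer` is built over the student's parameters.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, x, y, T=4.0, alpha=0.5):
    with torch.no_grad():
        t_logits = teacher(x)                      # frozen teacher predictions
    s_logits = student(x)
    # soft-target matching (classic knowledge transfer) plus the usual hard-label loss
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(s_logits, y)
    loss = alpha * soft + (1 - alpha) * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()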

Sym-parameterized Dynamic Inference for Mixed-Domain Image Translation

1 code implementation ICCV 2019 Simyung Chang, SeongUk Park, John Yang, Nojun Kwak

Recent advances in image-to-image translation have made it possible to generate images of multiple domains with a single network.

Image-to-Image Translation, Translation

Broadcasting Convolutional Network for Visual Relational Reasoning

no code implementations ECCV 2018 Simyung Chang, John Yang, SeongUk Park, Nojun Kwak

In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features; a minimal sketch of the broadcasting idea follows below.

Relation, Relational Reasoning, +1
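A minimal PyTorch sketch of the "broadcast global features to every spatial location" idea mentioned above: a global descriptor is pooled from the feature map, tiled back over all positions, and fused with the local features. The channel sizes and the 1x1 fusion convolution are illustrative assumptions, not the exact BCN architecture.

import torch
import torch.nn as nn

class BroadcastBlock(nn.Module):
    def __init__(self, in_ch=64, global_ch=64):
        super().__init__()
        self.global_proj = nn.Linear(in_ch, global_ch)
        self.fuse = nn.Conv2d(in_ch + global_ch, in_ch, kernel_size=1)

    def forward(self, feat):                             # feat: (B, C, H, W) local features
        b, c, h, w = feat.shape
        g = feat.mean(dim=(2, 3))                        # global descriptor via average pooling
        g = self.global_proj(g)                          # (B, global_ch)
        g = g[:, :, None, None].expand(-1, -1, h, w)     # broadcast to every spatial position
        return self.fuse(torch.cat([feat, g], dim=1))    # fuse global with local features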

Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation

no code implementations 27 Nov 2017 YoungJoon Yoo, SeongUk Park, Junyoung Choi, Sangdoo Yun, Nojun Kwak

In addition to this performance-enhancement setting, we show that the proposed PGN can be adopted to solve the classical adversarial-example problem without using any information about the target classifier.

Classification, General Classification

FEED: Feature-level Ensemble Effect for knowledge Distillation

no code implementations ICLR 2019 SeongUk Park, Nojun Kwak

This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work of factor transfer; a minimal sketch of the ensemble idea follows below.

Knowledge Distillation, Transfer Learning
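A minimal PyTorch sketch of a feature-level ensemble distillation loss in the spirit of the description above: the student's feature map is regressed toward several teachers' feature maps at once. The 1x1 adapters, the L2 objective, and the assumption of matching spatial sizes are illustrative, not the exact FEED formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EnsembleFeatureLoss(nn.Module):
    def __init__(self, student_ch, teacher_chs):
        super().__init__()
        # one small adapter per teacher so channel counts can differ
        self.adapters = nn.ModuleList(
            [nn.Conv2d(student_ch, t_ch, kernel_size=1) for t_ch in teacher_chs]
        )

    def forward(self, student_feat, teacher_feats):
        # assumes student and teacher feature maps share the same spatial size
        losses = [
            F.mse_loss(adapter(student_feat), t_feat.detach())
            for adapter, t_feat in zip(self.adapters, teacher_feats)
        ]
        return sum(losses) / len(losses)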

FEED: Feature-level Ensemble for Knowledge Distillation

no code implementations 24 Sep 2019 SeongUk Park, Nojun Kwak

We name this method parallel FEED, and experimental results on CIFAR-100 and ImageNet show that it yields clear performance gains without introducing any additional parameters or computation at test time.

Knowledge Distillation

Feature-map-level Online Adversarial Knowledge Distillation

no code implementations ICML 2020 Inseop Chung, SeongUk Park, Jangho Kim, Nojun Kwak

By training each network to fool its corresponding discriminator, it can learn the other network's feature-map distribution; a minimal sketch of this adversarial setup follows below.

Knowledge Distillation
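A minimal PyTorch sketch of the adversarial feature-map matching described above: a discriminator is trained to tell the two networks' feature maps apart, and each network is trained to fool the peer's discriminator. The simple convolutional discriminator and the BCE losses are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1),
        )

    def forward(self, feat):
        return self.net(feat)                       # one real/fake logit per feature map

def adversarial_kd_losses(disc, feat_peer, feat_self):
    # losses for one network and its discriminator; the peer uses a symmetric call
    # discriminator update: the peer's feature maps are labeled "real", our own "fake"
    real_logit = disc(feat_peer.detach())
    fake_logit = disc(feat_self.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) +
              F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    # network update: make our own feature maps indistinguishable from the peer's
    fool_logit = disc(feat_self)
    g_loss = F.binary_cross_entropy_with_logits(fool_logit, torch.ones_like(fool_logit))
    return d_loss, g_loss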

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective

no code implementations 9 Sep 2020 SeongUk Park, KiYoon Yoo, Nojun Kwak

In this paper, we focus on knowledge distillation and demonstrate that knowledge distillation methods are orthogonal to other efficiency-enhancing methods both analytically and empirically.

Data Augmentation, Efficient Neural Network, +2

Semantics-Guided Object Removal for Facial Images: with Broad Applicability and Robust Style Preservation

no code implementations 29 Sep 2022 Jookyung Song, Yeonjin Chang, SeongUk Park, Nojun Kwak

U-Net, a conventional backbone for conditional GANs, retains fine details of unmasked regions, but the style of the reconstructed region is inconsistent with the rest of the original image, and it only works robustly when the occluding object is small enough.

Image Inpainting
