Search Results for author: JoonHyun Jeong

Found 6 papers, 2 papers with code

Dataset Condensation via Efficient Synthetic-Data Parameterization

2 code implementations • 30 May 2022 Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, JoonHyun Jeong, Jung-Woo Ha, Hyun Oh Song

The great success of machine learning with massive amounts of data comes at a price of huge computation costs and storage for training and tuning.

Dataset Condensation

EResFD: Rediscovery of the Effectiveness of Standard Convolution for Lightweight Face Detection

1 code implementation • 4 Apr 2022 JoonHyun Jeong, Beomyoung Kim, Joonsang Yu, Youngjoon Yoo

Extensive experiments show that the proposed backbone can replace that of state-of-the-art face detectors while achieving faster inference speed.

Face Detection

Observations on K-image Expansion of Image-Mixing Augmentation for Classification

no code implementations • 8 Oct 2021 JoonHyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi

Image-mixing augmentations (e.g., Mixup and CutMix), which typically involve mixing two images, have become the de facto training techniques for image classification.
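As a rough illustration of the two-image mixing that this paper generalizes to K images, here is a minimal Mixup sketch; the function name, toy data, and Beta-distribution sampling follow the common Mixup recipe, not this paper's specific method:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng()):
    """Mix two images and their one-hot labels with a Beta-sampled ratio."""
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # pixel-wise convex combination
    y = lam * y1 + (1.0 - lam) * y2       # labels mixed with the same ratio
    return x, y, lam

# usage: two toy 4x4 grayscale "images" with one-hot labels
img_a, img_b = np.zeros((4, 4)), np.ones((4, 4))
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mixed_x, mixed_y, lam = mixup(img_a, lab_a, img_b, lab_b)
```

CutMix differs in that it pastes a rectangular patch of one image onto the other instead of blending pixels, with the label ratio set by the patch area.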

Adversarial Robustness • Classification +1

A New Pointwise Convolution in Deep Neural Networks Through Extremely Fast and Non-Parametric Transforms

no code implementations • 25 Sep 2019 JoonHyun Jeong, Sung-Ho Bae

Some conventional transforms, such as the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT), have been widely used as feature extractors in image processing but have rarely been applied in neural networks.
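To make the idea concrete, a parameter-free transform can mix channels the way a learned 1x1 (pointwise) convolution does. The sketch below applies an orthonormal Walsh-Hadamard matrix across the channel axis; the normalization and channel-wise application are my assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np
from scipy.linalg import hadamard

def dwht_pointwise(x):
    """Apply a Discrete Walsh-Hadamard Transform across the channel axis,
    acting like a parameter-free pointwise (1x1) convolution.
    x: feature map of shape (C, H, W); C must be a power of two."""
    c = x.shape[0]
    H = hadamard(c) / np.sqrt(c)          # orthonormal C x C Hadamard matrix
    # mix channels at every spatial location: (C, C) @ (C, H*W)
    return (H @ x.reshape(c, -1)).reshape(x.shape)

feat = np.random.default_rng(0).standard_normal((8, 4, 4))
out = dwht_pointwise(feat)
```

Because the normalized Hadamard matrix is symmetric and orthogonal, applying the transform twice recovers the input, and no weights need to be stored or trained for this channel-mixing step.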

An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis

no code implementations • 16 Jul 2019 Kang-Ho Lee, JoonHyun Jeong, Sung-Ho Bae

Based on SVWH, we propose a second inter-layer weight prediction (ILWP) and quantization method that quantizes the predicted residuals between the weights of adjacent convolution layers.
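The intuition behind quantizing inter-layer residuals can be sketched as follows: if weights vary smoothly across adjacent layers, the residual has a much smaller dynamic range than the raw weights and tolerates fewer bits. The uniform quantizer, bit widths, and reconstruction step below are illustrative assumptions, not the paper's actual ILWP scheme:

```python
import numpy as np

def quantize(w, n_bits=8):
    """Uniform symmetric quantization of a tensor to n_bits."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(w / scale).astype(np.int32)
    return q, scale

def residual_quantize(w_prev, w_next, n_bits=4):
    """Quantize the residual between weights of adjacent layers.
    Under a smoothly-varying-weight hypothesis the residual is small,
    so it can be stored with fewer bits than the raw weights."""
    residual = w_next - w_prev
    q, scale = quantize(residual, n_bits)
    return w_prev + q * scale             # reconstruction at inference time

# usage: two adjacent-layer weight tensors that differ only slightly
rng = np.random.default_rng(1)
w_prev = rng.standard_normal((16, 16))
w_next = w_prev + 0.01 * rng.standard_normal((16, 16))
w_next_rec = residual_quantize(w_prev, w_next)
```

With a small residual, even a 4-bit residual code reconstructs `w_next` more accurately than quantizing the raw weights at a higher bit width would.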

Quantization

New pointwise convolution in Deep Neural Networks through Extremely Fast and Non Parametric Transforms

no code implementations • 25 Jun 2019 Joonhyun Jeong, Sung-Ho Bae

Some conventional transforms, such as the Discrete Walsh-Hadamard Transform (DWHT) and the Discrete Cosine Transform (DCT), have been widely used as feature extractors in image processing but have rarely been applied in neural networks.
