Search Results for author: Sung-Ho Bae

Found 15 papers, 6 papers with code

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps

no code implementations NeurIPS 2021 Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li

First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.

Transfer Learning
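As a rough picture of how mixup-based distillation of activation maps can be wired up, here is a minimal PyTorch sketch; the ReLU-plus-normalize map definition and the squared-error matching loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=1.0):
    """Standard mixup: blend random pairs of inputs with a Beta-sampled ratio."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def activation_map_loss(f_teacher, f_student):
    """Match normalized per-channel activation maps (B, C, H, W) so the
    student mimics where the adversarially trained teacher activates."""
    t = F.normalize(F.relu(f_teacher).flatten(2), dim=-1)
    s = F.normalize(F.relu(f_student).flatten(2), dim=-1)
    return (t - s).pow(2).mean()
```

In training, the mixed images would pass through both networks and the map-matching loss would be added to the usual task loss.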

Tr-NAS: Memory-Efficient Neural Architecture Search with Transferred Blocks

no code implementations29 Sep 2021 Linh-Tam Tran, A F M Shahab Uddin, Sung-Ho Bae

To remedy this problem, we propose a new paradigm for NAS that effectively reduces the use of memory while maintaining high performance.

Neural Architecture Search
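The snippet does not spell out the mechanism; one way such memory savings could work, inferred only from the title and offered purely as an assumption, is to reuse frozen pretrained blocks inside the search space so they hold no gradients or optimizer state:

```python
import torch.nn as nn

def transfer_prefix(supernet: nn.Module, pretrained_blocks):
    """Hypothetical sketch: swap frozen pretrained blocks into the front of
    a NAS supernet (`supernet.blocks` is an assumed ModuleList attribute),
    so the search stores no gradients or optimizer state for them."""
    for i, block in enumerate(pretrained_blocks):
        supernet.blocks[i] = block
        block.requires_grad_(False)  # prefix is excluded from backprop
    return supernet
```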

Adversarial Robustness for Unsupervised Domain Adaptation

no code implementations ICCV 2021 Muhammad Awais, Fengwei Zhou, Hang Xu, Lanqing Hong, Ping Luo, Sung-Ho Bae, Zhenguo Li

Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models.

Adversarial Robustness · Unsupervised Domain Adaptation

Distilling Global and Local Logits With Densely Connected Relations

no code implementations ICCV 2021 Youmin Kim, Jinbae Park, YounHo Jang, Muhammad Ali, Tae-Hyun Oh, Sung-Ho Bae

In prevalent knowledge distillation, the logits of most image recognition models are computed by global average pooling and are then used to encode high-level, task-relevant knowledge.

Knowledge Distillation · Object Detection +1
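To make the global-versus-local distinction concrete, here is a small sketch of a head producing one GAP-based global logit vector alongside a grid of local ones; the shared classifier and the 2x2 grid are assumptions for illustration, not the paper's design.

```python
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalLogits(nn.Module):
    """Toy head: global logits via global average pooling, plus local
    logits from pooled spatial sub-regions of the same feature map."""
    def __init__(self, channels, num_classes, grid=2):
        super().__init__()
        self.fc = nn.Linear(channels, num_classes)  # shared classifier
        self.grid = grid

    def forward(self, feat):                          # feat: (B, C, H, W)
        global_logits = self.fc(feat.mean(dim=(2, 3)))
        pooled = F.adaptive_avg_pool2d(feat, self.grid)
        local_logits = self.fc(pooled.flatten(2).transpose(1, 2))
        return global_logits, local_logits            # (B, K), (B, g*g, K)
```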

Multi-Attention Based Ultra Lightweight Image Super-Resolution

1 code implementation29 Aug 2020 Abdul Muqeet, Jiwon Hwang, Subin Yang, Jung Heum Kang, Yongwoo Kim, Sung-Ho Bae

MAFFSRN consists of the proposed feature fusion groups (FFGs), which serve as feature extraction blocks.

Image Super-Resolution
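A generic sketch of a feature fusion group, assuming (not taken from the paper) plain conv-ReLU blocks whose intermediate outputs are concatenated and fused by a 1x1 convolution:

```python
import torch
import torch.nn as nn

class FeatureFusionGroup(nn.Module):
    """Hedged sketch of an FFG: a few lightweight conv blocks whose
    intermediate outputs are concatenated and fused by a 1x1 conv.
    The block design here is generic, not MAFFSRN's exact blocks."""
    def __init__(self, channels, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(num_blocks))
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)

    def forward(self, x):
        outs, h = [], x
        for block in self.blocks:
            h = block(h)
            outs.append(h)
        return x + self.fuse(torch.cat(outs, dim=1))  # residual fusion
```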

Towards an Adversarially Robust Normalization Approach

1 code implementation19 Jun 2020 Muhammad Awais, Fahad Shamshad, Sung-Ho Bae

In this paper, we investigate how BatchNorm causes this vulnerability and propose a new normalization that is robust to adversarial attacks.
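As an illustrative stand-in only, not the paper's actual normalization: one direction is to normalize with statistics less sensitive to adversarially shifted activations than the batch mean and variance, e.g. a per-channel median and interquartile range.

```python
import torch
import torch.nn as nn

class RobustScaleNorm(nn.Module):
    """Illustrative stand-in (not the paper's method): normalize each
    channel by its median and interquartile range, statistics that are
    less sensitive to outlying activations than mean and variance."""
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.eps = eps

    def forward(self, x):                          # x: (B, C, H, W)
        flat = x.transpose(0, 1).flatten(1)        # (C, B*H*W)
        med = flat.median(dim=1).values.view(1, -1, 1, 1)
        q1 = flat.quantile(0.25, dim=1).view(1, -1, 1, 1)
        q3 = flat.quantile(0.75, dim=1).view(1, -1, 1, 1)
        return self.gamma * (x - med) / (q3 - q1 + self.eps) + self.beta
```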

SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization

1 code implementation ICLR 2021 A. F. M. Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae

We argue that such random patch selection strategies may not capture sufficient information about the corresponding object, so mixing the labels according to an uninformative patch leads the model to learn unexpected feature representations.

Data Augmentation · Object Detection
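The remedy SaliencyMix proposes is to centre the pasted patch on the most salient pixel of the source image. A compact sketch follows; the `saliency_fn` callable (e.g. an OpenCV static-saliency detector) is left abstract here, and the fixed patch fraction is an illustrative choice.

```python
import torch

def saliencymix(x, y, saliency_fn, patch_frac=0.5):
    """Sketch of saliency-guided mixing: cut the patch centred on the most
    salient pixel of a shuffled source image and paste it onto the target."""
    B, _, H, W = x.shape
    perm = torch.randperm(B)
    ph, pw = int(H * patch_frac), int(W * patch_frac)
    mixed = x.clone()
    for i in range(B):
        sal = saliency_fn(x[perm[i]])             # (H, W) saliency map
        cy, cx = divmod(sal.argmax().item(), W)   # peak-saliency location
        y0 = min(max(cy - ph // 2, 0), H - ph)
        x0 = min(max(cx - pw // 2, 0), W - pw)
        mixed[i, :, y0:y0+ph, x0:x0+pw] = x[perm[i], :, y0:y0+ph, x0:x0+pw]
    lam = 1 - (ph * pw) / (H * W)                 # label weight = kept area
    return mixed, y, y[perm], lam
```

The two labels are then mixed with weight `lam`, e.g. `lam * ce(out, y_a) + (1 - lam) * ce(out, y_b)`.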

Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights

no code implementations25 Sep 2019 Jinbae Park, Sung-Ho Bae

To solve this problem, we propose a hybrid weight representation (HWR) method that produces a network consisting of two types of weights, i.e., ternary weights (TW) and sparse-large weights (SLW).

Quantization
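A minimal sketch of the split, with made-up thresholds and scaling (the paper's actual encoding and training procedure differ): weights of small magnitude collapse to a ternary set, while a sparse handful of large weights keep higher precision.

```python
import torch

def hybrid_quantize(w, ternary_thresh=0.05, large_thresh=0.5):
    """Sketch: weights below |large_thresh| become ternary {-s, 0, +s};
    the few weights above it are kept (sparsely) at higher precision.
    Thresholds and the scale rule are illustrative, not the paper's."""
    large_mask = w.abs() > large_thresh
    small = w * (~large_mask)
    scale = (small[small != 0].abs().mean()
             if (small != 0).any() else w.new_tensor(0.))
    ternary = scale * (small.abs() > ternary_thresh).float() * small.sign()
    return ternary + w * large_mask, large_mask
```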

A New Pointwise Convolution in Deep Neural Networks Through Extremely Fast and Non-Parametric Transforms

no code implementations25 Sep 2019 JoonHyun Jeong, Sung-Ho Bae

Some conventional transforms such as Discrete Walsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks.
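The core idea, a fixed transform across the channel axis standing in for a learned 1x1 convolution, can be sketched with the fast Walsh-Hadamard butterfly; the scaling and the layer's placement within a network are illustrative choices here.

```python
import torch

def dwht_pointwise(x):
    """Apply a Discrete Walsh-Hadamard Transform across the channel axis,
    acting like a fixed, parameter-free 1x1 convolution. Uses the fast
    butterfly form, O(C log C) additions per spatial position."""
    B, C, H, W = x.shape
    assert C & (C - 1) == 0, "channel count must be a power of two"
    out = x.clone()
    h = 1
    while h < C:                                   # butterfly stages
        out = out.view(B, C // (2 * h), 2, h, H, W)
        a, b = out[:, :, 0], out[:, :, 1]
        out = torch.stack((a + b, a - b), dim=2).view(B, C, H, W)
        h *= 2
    return out / C**0.5                            # orthonormal scaling
```

Because the transform is fixed, such a layer contributes no parameters and replaces the O(C^2) multiply-accumulates of a learned pointwise convolution with additions and subtractions.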

An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis

no code implementations16 Jul 2019 Kang-Ho Lee, JoonHyun Jeong, Sung-Ho Bae

Based on SVWH, we propose a second ILWP and quantization method that quantizes the predicted residuals between the weights in adjacent convolution layers.

Quantization
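A hedged sketch of that second scheme: predict each layer's weights from the previous layer's and uniformly quantize only the (small) residual. Same-shaped adjacent kernels and the uniform quantizer are simplifying assumptions made here for illustration.

```python
import torch

def quantize_residuals(weights, n_bits=4):
    """Sketch: under a smoothly-varying-weights assumption, code each layer
    as a quantized residual against the reconstructed previous layer.
    `weights` is a list of same-shaped kernels from adjacent layers."""
    qmax = 2 ** (n_bits - 1) - 1
    coded, prev = [weights[0]], weights[0]        # first layer kept as-is
    for w in weights[1:]:
        r = w - prev                              # inter-layer residual
        step = r.abs().max() / qmax + 1e-12       # uniform step size
        q = torch.clamp(torch.round(r / step), -qmax - 1, qmax)
        prev = prev + q * step                    # reconstructed layer
        coded.append(q)
    return coded
```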

Hybrid Residual Attention Network for Single Image Super Resolution

1 code implementation11 Jul 2019 Abdul Muqeet, Md Tauhid Bin Iqbal, Sung-Ho Bae

To address these issues, we present a binarized feature fusion (BFF) structure that utilizes the extracted features from residual groups (RG) in an effective way.

Image Super-Resolution
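Reading "binarized" as pairwise, tree-style fusion, which is an assumption about the design rather than a statement of it, here is a sketch of fusing residual-group outputs two at a time:

```python
import torch
import torch.nn as nn

class BinaryTreeFusion(nn.Module):
    """Assumption-labelled sketch: fuse residual-group outputs two at a
    time (a binary tree of concat + 1x1 conv) rather than one wide concat."""
    def __init__(self, channels, num_groups):
        super().__init__()
        assert num_groups & (num_groups - 1) == 0, "use a power-of-two count"
        self.levels = nn.ModuleList()
        n = num_groups
        while n > 1:
            self.levels.append(nn.ModuleList(
                nn.Conv2d(2 * channels, channels, 1) for _ in range(n // 2)))
            n //= 2

    def forward(self, feats):           # list of (B, C, H, W) RG outputs
        for level in self.levels:
            feats = [conv(torch.cat((feats[2*i], feats[2*i+1]), dim=1))
                     for i, conv in enumerate(level)]
        return feats[0]
```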
