Search Results for author: Sung-Ho Bae

Found 27 papers, 8 papers with code

MST-compression: Compressing and Accelerating Binary Neural Networks with Minimum Spanning Tree

no code implementations ICCV 2023 Quang Hieu Vo, Linh-Tam Tran, Sung-Ho Bae, Lok-Won Kim, Choong Seon Hong

Binary neural networks (BNNs) have been widely adopted to reduce the computational cost and memory storage on edge-computing devices by using one-bit representation for activations and weights.
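The one-bit representation mentioned above is typically obtained by taking the sign of each real-valued weight and rescaling by the mean magnitude. A minimal numpy sketch of this standard BNN binarization step (illustrative only; it does not implement the paper's MST-based compression):

```python
import numpy as np

def binarize(weights):
    """Binarize a real-valued weight tensor to {-alpha, +alpha}, where
    alpha = mean(|W|) is a per-tensor scaling factor, as in common
    BNN formulations (assumes no exactly-zero weights)."""
    alpha = np.abs(weights).mean()
    return alpha * np.sign(weights), alpha

w = np.array([[0.4, -0.2], [-0.7, 0.1]])
wb, alpha = binarize(w)   # wb holds only two distinct values
```

Each binarized weight then costs one bit plus a shared scale, instead of 32 bits.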


Faster Segment Anything: Towards Lightweight SAM for Mobile Applications

2 code implementations 25 Jun 2023 Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae, Seungkyu Lee, Choong Seon Hong

Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which is automatically compatible with the mask decoder in the original SAM.

Image Segmentation Instance Segmentation +1
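Feature-level distillation of this kind can be sketched as minimizing the distance between teacher and student embeddings on the same input. A toy numpy version, with fixed random linear maps standing in for the ViT-H teacher and the lightweight student (names and shapes are illustrative, not MobileSAM's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the frozen heavy teacher encoder and the small
# student encoder: plain linear projections for illustration.
teacher_proj = rng.normal(size=(16, 64))
student_proj = rng.normal(size=(16, 64))

def distillation_loss(x):
    """Mean-squared error between teacher and student embeddings,
    the simplest form of feature-level knowledge distillation."""
    t = x @ teacher_proj   # teacher embedding (target, no gradient)
    s = x @ student_proj   # student embedding (to be trained)
    return np.mean((t - s) ** 2)

x = rng.normal(size=(4, 16))   # a batch of toy inputs
loss = distillation_loss(x)    # positive until the student matches
```

In practice the student's parameters are updated by gradient descent on this loss while the teacher stays frozen.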

Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era

no code implementations 10 May 2023 Chenghao Li, Chaoning Zhang, Atish Waghwase, Lik-Hang Lee, Francois Rameau, Yang Yang, Sung-Ho Bae, Choong Seon Hong

AIGC (AI-generated content) has made remarkable progress in the past few years, among which text-guided content generation is the most practical, since it enables interaction between human instructions and AIGC.

Scene Generation Text to 3D +1

Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples

no code implementations 1 May 2023 Chenshuang Zhang, Chaoning Zhang, Taegoo Kang, Donghun Kim, Sung-Ho Bae, In So Kweon

Beyond the basic goal of mask removal, we further investigate and find that it is possible to generate any desired mask by the adversarial attack.

Adversarial Attack Adversarial Robustness
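The mask-removal goal can be illustrated with the classic FGSM attack on a toy logistic "mask present" score; this is a generic adversarial-example sketch under simplifying assumptions, not the paper's attack on SAM:

```python
import numpy as np

def fgsm_attack(x, w, eps):
    """One untargeted FGSM step against a toy logistic score
    sigma(w.x) with true label y=1 ('mask present'): move x along
    the sign of the loss gradient, bounded by eps per coordinate,
    which pushes the score down -- 'mask removal' in miniature."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    grad = (p - 1.0) * w        # d(BCE, y=1)/dx for a logistic score
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.3])
x_adv = fgsm_attack(x, w, eps=0.1)  # score w.x strictly decreases
```

Targeted variants instead minimize the loss toward an attacker-chosen label, which corresponds to the "generate any desired mask" finding above.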

A Survey on Graph Diffusion Models: Generative AI in Science for Molecule, Protein and Material

no code implementations 4 Apr 2023 Mengchun Zhang, Maryam Qamar, Taegoo Kang, Yuna Jung, Chenshuang Zhang, Sung-Ho Bae, Chaoning Zhang

Diffusion models have become a new SOTA generative modeling method in various fields, and multiple existing survey works already provide an overall picture of them.

Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability

no code implementations 26 Mar 2023 Soyoun Won, Sung-Ho Bae, Seong Tae Kim

Data augmentation strategies are actively used when training deep neural networks (DNNs).

Data Augmentation
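The best-known mixed sample data augmentation is mixup, which blends two images and their one-hot labels with a Beta-distributed coefficient. A minimal sketch (generic mixup, not the specific MSDA variants analyzed in the paper):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup: convex combination of two samples and their one-hot
    labels with lambda ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.ones((4, 4)), np.array([1.0, 0.0])   # class 0 "image"
x2, y2 = np.zeros((4, 4)), np.array([0.0, 1.0])  # class 1 "image"
xm, ym = mixup(x1, y1, x2, y2)   # soft label sums to 1
```

Interpretability analyses like the paper's ask how training on such blended inputs changes the saliency attributions of the resulting model.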

A Survey on Audio Diffusion Models: Text To Speech Synthesis and Enhancement in Generative AI

no code implementations 23 Mar 2023 Chenshuang Zhang, Chaoning Zhang, Sheng Zheng, Mengchun Zhang, Maryam Qamar, Sung-Ho Bae, In So Kweon

This work conducts a survey on audio diffusion models, complementary to existing surveys that either lack the recent progress of diffusion-based speech synthesis or only give an overall picture of applying diffusion models in multiple fields.

Speech Enhancement Speech Synthesis +1

ZooD: Exploiting Model Zoo for Out-of-Distribution Generalization

no code implementations 17 Oct 2022 Qishi Dong, Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Tianyang Hu, Yongxin Yang, Sung-Ho Bae, Zhenguo Li

We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods and up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms state-of-the-art methods, and accuracy on the most challenging task, DomainNet, is improved from 46.5% to 50.6%.

feature selection Out-of-Distribution Generalization

MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps

no code implementations NeurIPS 2021 Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li

First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation.

Transfer Learning
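The transfer idea can be sketched as the student matching the robust teacher's per-channel activation statistics on (mixup-augmented) inputs. In the toy version below, an "activated channel map" is taken to be the spatial mean of ReLU outputs; the paper's exact definition may differ:

```python
import numpy as np

def activated_channel_map(feat):
    """Collapse a (C, H, W) feature tensor to a per-channel activation
    map via the spatial mean of ReLU outputs (one plausible reading of
    'activated channel maps'; illustrative only)."""
    return np.maximum(feat, 0.0).mean(axis=(1, 2))

def transfer_loss(teacher_feat, student_feat):
    """L2 distance between teacher and student channel maps: the
    student mimics the adversarially trained teacher's activations."""
    t = activated_channel_map(teacher_feat)
    s = activated_channel_map(student_feat)
    return np.sum((t - s) ** 2)

rng = np.random.default_rng(1)
tf = rng.normal(size=(8, 4, 4))          # toy teacher features
loss_self = transfer_loss(tf, tf)        # identical maps -> zero loss
loss_other = transfer_loss(tf, rng.normal(size=(8, 4, 4)))
```

Minimizing this loss transfers robustness without the student ever seeing adversarial examples directly.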

Tr-NAS: Memory-Efficient Neural Architecture Search with Transferred Blocks

no code implementations29 Sep 2021 Linh-Tam Tran, A F M Shahab Uddin, Sung-Ho Bae

To remedy this problem, we propose a new paradigm for NAS that effectively reduces the use of memory while maintaining high performance.

Neural Architecture Search

Adversarial Robustness for Unsupervised Domain Adaptation

no code implementations ICCV 2021 Muhammad Awais, Fengwei Zhou, Hang Xu, Lanqing Hong, Ping Luo, Sung-Ho Bae, Zhenguo Li

Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models.

Adversarial Robustness Unsupervised Domain Adaptation

Distilling Global and Local Logits With Densely Connected Relations

1 code implementation ICCV 2021 Youmin Kim, Jinbae Park, YounHo Jang, Muhammad Ali, Tae-Hyun Oh, Sung-Ho Bae

In prevalent knowledge distillation, logits in most image recognition models are computed by global average pooling and then used to learn to encode high-level, task-relevant knowledge.

Image Classification Knowledge Distillation +3
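Distilling global logits usually follows Hinton-style distillation: a KL divergence between temperature-softened teacher and student distributions. A minimal numpy sketch of that generic loss (not the paper's densely connected local-logit relations):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=4.0):
    """Hinton-style distillation loss: KL(teacher || student) on
    temperature-softened logits, scaled by T^2; the 'global logits'
    come from global average pooling in most classifiers."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

t = np.array([2.0, 0.5, -1.0])   # toy teacher logits
s = np.array([1.0, 1.0, 0.0])    # toy student logits
loss = kd_loss(t, s)             # zero only when distributions match
```

The paper's contribution adds local (spatial) logits and relations between them on top of this standard global term.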

Multi-Attention Based Ultra Lightweight Image Super-Resolution

1 code implementation 29 Aug 2020 Abdul Muqeet, Jiwon Hwang, Subin Yang, Jung Heum Kang, Yongwoo Kim, Sung-Ho Bae

MAFFSRN consists of the proposed feature fusion groups (FFGs), which serve as feature extraction blocks.

Image Super-Resolution

Towards an Adversarially Robust Normalization Approach

1 code implementation 19 Jun 2020 Muhammad Awais, Fahad Shamshad, Sung-Ho Bae

In this paper, we investigate how BatchNorm causes this vulnerability and propose a new normalization method that is robust to adversarial attacks.

SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization

2 code implementations ICLR 2021 A. F. M. Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae

We argue that such random patch selection strategies may not capture sufficient information about the corresponding object, and that mixing labels according to such an uninformative patch leads the model to learn unexpected feature representations.

Data Augmentation object-detection +1
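SaliencyMix's remedy is to cut the patch around the saliency peak of the source image and mix labels in proportion to the pasted area. A small numpy sketch under simplifying assumptions (a precomputed saliency map, a square patch):

```python
import numpy as np

def saliencymix(src, dst, saliency, size):
    """Cut a size x size patch centred on the saliency peak of `src`,
    paste it into `dst`, and return the label mixing ratio
    lam = pasted area / image area (sketch of the SaliencyMix idea)."""
    h, w = src.shape[:2]
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    half = size // 2
    y0, y1 = np.clip([cy - half, cy + half], 0, h)
    x0, x1 = np.clip([cx - half, cx + half], 0, w)
    mixed = dst.copy()
    mixed[y0:y1, x0:x1] = src[y0:y1, x0:x1]
    lam = (y1 - y0) * (x1 - x0) / (h * w)
    return mixed, lam

src = np.ones((8, 8))                  # toy source "image"
dst = np.zeros((8, 8))                 # toy destination "image"
sal = np.zeros((8, 8)); sal[4, 4] = 1.0   # saliency peak at (4, 4)
mixed, lam = saliencymix(src, dst, sal, size=4)
```

The mixed label is then lam * label_src + (1 - lam) * label_dst, so the label weight reflects how much salient source content was actually pasted.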

Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights

no code implementations 25 Sep 2019 Jinbae Park, Sung-Ho Bae

To solve this problem, we propose a hybrid weight representation (HWR) method which produces a network consisting of two types of weights, i.e., ternary weights (TW) and sparse-large weights (SLW).
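Ternary weights constrain each value to {-s, 0, +s}. A threshold-based sketch of that component (illustrative; HWR's actual encoding also interleaves the sparse-large weights):

```python
import numpy as np

def ternarize(w, threshold):
    """Ternary quantization: weights below the threshold become 0,
    the rest become +/- s, where s is the mean magnitude of the
    surviving weights."""
    mask = np.abs(w) > threshold
    s = np.abs(w[mask]).mean() if mask.any() else 0.0
    return s * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
tw = ternarize(w, threshold=0.1)   # only three distinct values remain
```

Each ternary weight then needs only 2 bits, while the few large-magnitude weights HWR keeps at higher precision preserve accuracy.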

New pointwise convolution in Deep Neural Networks through Extremely Fast and Non Parametric Transforms

no code implementations 25 Sep 2019 JoonHyun Jeong, Sung-Ho Bae

Some conventional transforms such as Discrete Walsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks.

An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis

no code implementations 16 Jul 2019 Kang-Ho Lee, JoonHyun Jeong, Sung-Ho Bae

Based on SVWH, we propose a second inter-layer weight prediction (ILWP) and quantization method which quantizes the predicted residuals between the weights in adjacent convolution layers.
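The idea can be sketched as predicting layer i+1's weights as a copy of layer i's (per the smoothly varying weight hypothesis) and uniformly quantizing only the small residual. A toy numpy version (the paper's predictor and quantizer are more elaborate):

```python
import numpy as np

def ilwp_residual(w_prev, w_next, step):
    """Inter-layer weight prediction: predict layer i+1's weights as
    layer i's, then uniformly quantize only the residual with the
    given step size.  Returns the reconstruction and the quantized
    residual (which is what would actually be stored)."""
    residual = w_next - w_prev
    q = np.round(residual / step) * step    # uniform quantization
    return w_prev + q, q

w1 = np.array([0.50, -0.20, 0.30])
w2 = np.array([0.52, -0.18, 0.33])   # nearly identical adjacent layer
w2_hat, q = ilwp_residual(w1, w2, step=0.05)
```

Because the residuals are small under SVWH, they quantize to few distinct values and compress far better than the raw weights.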


Hybrid Residual Attention Network for Single Image Super Resolution

1 code implementation 11 Jul 2019 Abdul Muqeet, Md Tauhid Bin Iqbal, Sung-Ho Bae

To address these issues, we present a binarized feature fusion (BFF) structure that utilizes the extracted features from residual groups (RG) in an effective way.

Image Super-Resolution

New pointwise convolution in Deep Neural Networks through Extremely Fast and Non Parametric Transforms

no code implementations 25 Jun 2019 Joonhyun Jeong, Sung-Ho Bae

Some conventional transforms such as Discrete Walsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks.
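A Walsh-Hadamard transform can stand in for a learned 1x1 (pointwise) convolution because the C x C Hadamard matrix mixes channels with no trainable parameters. A numpy sketch of this idea (shapes and the 1/sqrt(C) normalization are illustrative choices):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def dwht_pointwise(x):
    """Parameter-free 'pointwise convolution': mix the C channels at
    every spatial position with the C x C Hadamard matrix instead of
    a learned 1x1 kernel.  x has shape (C, H, W)."""
    C = x.shape[0]
    H = hadamard(C)
    return np.einsum('ij,jhw->ihw', H, x) / np.sqrt(C)

x = np.arange(4 * 2 * 2, dtype=float).reshape(4, 2, 2)
y = dwht_pointwise(x)   # orthonormal: applying it twice recovers x
```

Since the normalized Hadamard matrix is orthogonal, the transform preserves signal energy and its inverse is itself, which makes it extremely cheap in both parameters and computation.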
