Search Results for author: Jongseong Jang

Found 9 papers, 4 papers with code

TransCAM: Transformer Attention-based CAM Refinement for Weakly Supervised Semantic Segmentation

1 code implementation • 14 Mar 2022 • Ruiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, Scott Sanner

In this paper, we propose TransCAM, a Conformer-based solution to WSSS that explicitly leverages the attention weights from the transformer branch of the Conformer to refine the CAM generated from the CNN branch.
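The core refinement step can be pictured as propagating the CNN-branch CAM through the transformer's patch-to-patch attention. A minimal numpy sketch of that idea — the shapes and the assumption that attention has already been averaged over heads and layers are simplifications, not the paper's exact pipeline:

```python
import numpy as np

def refine_cam_with_attention(cam, attn):
    """Refine a CNN-branch CAM with transformer attention (sketch).

    cam:  (C, H, W) class activation maps from the CNN branch.
    attn: (H*W, H*W) patch-to-patch attention weights, assumed here
          to be pre-averaged over heads and layers.
    """
    C, H, W = cam.shape
    flat = cam.reshape(C, H * W)   # one row of patch activations per class
    refined = flat @ attn.T        # spread activation along attention links
    return refined.reshape(C, H, W)
```

With identity attention the CAM passes through unchanged; off-diagonal attention weights let strongly activated patches reinforce semantically related ones.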

Weakly-Supervised Semantic Segmentation

ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification

1 code implementation • 28 Nov 2021 • Zhibo Zhang, Jongseong Jang, Chiheb Trabelsi, Ruiwen Li, Scott Sanner, Yeonjeong Jeong, Dongsub Shim

Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification.

Adversarial Robustness • Classification +2

Look at here: Utilizing supervision to attend subtle key regions

no code implementations • 25 Nov 2021 • ChangHwan Lee, Yeesuk Kim, Bong Gun Lee, Doosup Kim, Jongseong Jang

In addition, using the class activation map, we identified that the Cut&Remain methods drive a model to focus on relevant subtle and small regions efficiently.

Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks

no code implementations • 15 Feb 2021 • Mahesh Sudhakar, Sam Sattarzadeh, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim

Explainable AI (XAI) is an active research area to interpret a neural network's decision by ensuring transparency and trust in the task-specified learned models.

Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring

1 code implementation • 15 Feb 2021 • Sam Sattarzadeh, Mahesh Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim

However, the average gradient-based terms deployed in this method underestimate the contribution of the representations discovered by the model to its predictions.
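The integrated gradient-based scoring the title refers to builds on the standard integrated-gradients path integral: instead of a single averaged gradient, gradients are accumulated along a straight path from a baseline to the input. A generic numpy sketch of that integral (not the paper's exact CAM weighting scheme):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Midpoint Riemann approximation of integrated gradients along
    the straight path from `baseline` to `x`.

    grad_f: callable returning the gradient of the model's score at a point.
    """
    alphas = (np.arange(steps) + 0.5) / steps          # path midpoints in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))  # gradient at interpolated input
    return (x - baseline) * total / steps               # scale by the path direction
```

For f(x) = x² with a zero baseline, the attributions sum back to f(x) − f(baseline), the completeness property that motivates path-integral scoring over a single-point gradient.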

Object Localization

Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

no code implementations • 1 Oct 2020 • Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae

In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation.
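Attribution-based input sampling follows the general perturbation recipe: probe the model with masked copies of the input and average the masks weighted by the model's confidence. A minimal numpy sketch of that recipe (a RISE-style stand-in, not the paper's block-wise aggregation; `model`, `image`, and `masks` are illustrative placeholders):

```python
import numpy as np

def perturbation_saliency(model, image, masks):
    """Aggregate binary masks into a saliency map (sketch).

    model: callable mapping an image to a scalar class score.
    image: (H, W) array.
    masks: (N, H, W) binary masks, e.g. thresholded feature maps.
    """
    scores = np.array([model(image * m) for m in masks])   # confidence per masked input
    sal = (scores[:, None, None] * masks).sum(axis=0)      # score-weighted mask sum
    return sal / (masks.sum(axis=0) + 1e-8)                # normalize by mask coverage
```

Pixels that keep the model's confidence high whenever they are left visible end up with high saliency.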

Online Class-Incremental Continual Learning with Adversarial Shapley Value

1 code implementation • 31 Aug 2020 • Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang

As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption.

Continual Learning

Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation

no code implementations • 19 Sep 2019 • Jihyeun Yoon, Kyungyul Kim, Jongseong Jang

Deep Neural Network based classifiers are known to be vulnerable to perturbations of inputs constructed by an adversarial attack to force misclassification.
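The kind of perturbation the paper studies can be illustrated with the classic Fast Gradient Sign Method, shown here only as a generic example of gradient-based attacks, not the paper's specific propagation analysis:

```python
import numpy as np

def fgsm_perturb(x, grad_loss, eps=0.1):
    """FGSM: step the input in the sign of the loss gradient so a
    small, bounded perturbation pushes it toward misclassification.

    grad_loss: gradient of the classifier's loss w.r.t. the input x.
    eps:       L-infinity budget of the perturbation.
    """
    return x + eps * np.sign(grad_loss)
```

Each coordinate moves by at most `eps`, yet the cumulative effect through the network's layers can flip the predicted class.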

Adversarial Attack
