1 code implementation • 28 Nov 2021 • Zhibo Zhang, Jongseong Jang, Chiheb Trabelsi, Ruiwen Li, Scott Sanner, Yeonjeong Jeong, Dongsub Shim
Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification.
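The paper builds on contrastive embedding learning; as a point of reference, here is a minimal sketch of a generic InfoNCE-style contrastive loss (an assumption for illustration, not the paper's exact objective), where embeddings of two augmented views of the same image are pulled together and all other pairs in the batch are pushed apart:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same image batch."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # matching views sit on the diagonal
    return F.cross_entropy(logits, targets)                # classify each view's positive pair
```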
no code implementations • 29 May 2021 • Ruiwen Li, Zhibo Zhang, Jiani Li, Chiheb Trabelsi, Scott Sanner, Jongseong Jang, Yeonjeong Jeong, Dongsub Shim
Recent years have seen the introduction of a range of methods for post-hoc explainability of image classifier predictions.
1 code implementation • 15 Feb 2021 • Sam Sattarzadeh, Mahesh Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
However, the averaged gradient-based terms deployed in this method underestimate the contribution of the representations discovered by the model to its predictions.
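To make the criticism concrete, the following sketch shows the gradient-averaging step in the style of Grad-CAM (assumed here to be the referenced method): channel weights are the spatial average of the gradients, so locally strong evidence can be diluted by the averaging.

```python
import torch
import torch.nn.functional as F

def grad_cam_map(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """activations, gradients: (channels, H, W) from one conv layer for one image."""
    weights = gradients.mean(dim=(1, 2))                    # average gradient per channel
    cam = torch.einsum('c,chw->hw', weights, activations)   # weighted sum of feature maps
    return F.relu(cam)                                      # keep positive evidence only
```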
no code implementations • 15 Feb 2021 • Mahesh Sudhakar, Sam Sattarzadeh, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
Explainable AI (XAI) is an active research area that aims to interpret a neural network's decisions, ensuring transparency of and trust in task-specific learned models.
Tasks: Computational Efficiency • Explainable Artificial Intelligence (XAI)
no code implementations • 1 Oct 2020 • Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae
In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to obtain a fine-grained and complete explanation.
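A hypothetical sketch of the aggregation idea described above: attribution maps collected from several layers are upsampled to input resolution, rescaled, and fused. The layer selection and the fusion rule (a normalized mean here) are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def aggregate_maps(layer_maps: list[torch.Tensor], out_size: tuple[int, int]) -> torch.Tensor:
    """layer_maps: per-layer attribution maps of shape (1, 1, h_i, w_i); returns (H, W)."""
    fused = torch.zeros(out_size)
    for m in layer_maps:
        m = F.interpolate(m, size=out_size, mode='bilinear', align_corners=False)
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)   # rescale each map to [0, 1]
        fused += m[0, 0]
    return fused / len(layer_maps)                       # simple mean fusion across layers
```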