Explanation of Face Recognition via Saliency Maps

12 Apr 2023 · Yuhang Lu, Touradj Ebrahimi

Despite significant progress in face recognition in recent years, deep face recognition systems are often treated as "black boxes" and have been criticized for their lack of explainability. Understanding the characteristics and decisions of such systems is increasingly important for making them acceptable to the public. Explainable face recognition (XFR) refers to the problem of interpreting why a recognition model matches a probe face with one identity over others. Recent studies have explored the use of visual saliency maps as explanations, but they often lack a deeper analysis in the context of face recognition. This paper starts by proposing a rigorous definition of XFR that focuses on the decision-making process of the deep recognition model. Following this definition, a similarity-based RISE algorithm (S-RISE) is introduced to produce high-quality visual saliency maps. Furthermore, an evaluation approach is proposed to systematically validate the reliability and accuracy of general visual saliency-based XFR methods.
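RISE-style methods estimate saliency by randomly occluding the input and weighting each random mask by the model's score on the masked image; in a similarity-based variant, that score is the embedding similarity between the masked probe and a reference face. Below is a minimal sketch of this idea, assuming a generic `embed` function that maps a face image to an L2-normalized feature vector. The function names, mask parameters, and the toy embedding stand-in are illustrative assumptions, not the paper's actual S-RISE implementation.

```python
import numpy as np

def similarity_rise(probe, reference, embed, n_masks=2000, grid=8,
                    p_keep=0.5, seed=0):
    """Saliency map for a probe/reference face pair via random masking.

    probe, reference: HxWxC float images in [0, 1].
    embed: callable mapping an image to an L2-normalized feature vector.
    """
    rng = np.random.default_rng(seed)
    h, w = probe.shape[:2]
    ref_feat = embed(reference)
    saliency = np.zeros((h, w), dtype=np.float64)

    for _ in range(n_masks):
        # Low-resolution binary mask, upsampled to image size (nearest-
        # neighbor here; the original RISE uses bilinear upsampling with
        # random shifts to avoid blocky artifacts).
        coarse = (rng.random((grid, grid)) < p_keep).astype(np.float64)
        mask = np.kron(coarse, np.ones((h // grid + 1, w // grid + 1)))[:h, :w]

        # Weight the mask by the similarity the model assigns to the
        # masked probe: cosine similarity of unit-norm embeddings.
        masked = probe * mask[..., None]
        score = float(np.dot(embed(masked), ref_feat))
        saliency += score * mask

    # Normalize by the expected mask coverage so that pixels kept more
    # often by chance do not dominate the map.
    return saliency / (n_masks * p_keep)


# Toy stand-in for a face embedding network (hypothetical; any model
# returning unit-norm features would be used in practice).
rng = np.random.default_rng(1)
proj = rng.standard_normal((112 * 112 * 3, 128))

def embed(img):
    v = img.reshape(-1) @ proj
    return v / (np.linalg.norm(v) + 1e-12)

probe = rng.random((112, 112, 3))
reference = rng.random((112, 112, 3))
heatmap = similarity_rise(probe, reference, embed, n_masks=500)
```

A common sanity check for such maps, independent of the paper's specific evaluation protocol, is a deletion-style test: progressively occlude the highest-saliency pixels and verify that the probe-reference similarity drops faster than under random occlusion.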
