no code implementations • 1 Mar 2024 • Andrés Páez
In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term "explanation" in explainable AI (XAI) can be solved by adopting any of four extant accounts of explanation in the philosophy of science: the Deductive-Nomological, Inductive-Statistical, Causal-Mechanical, and New Mechanist models.
no code implementations • 30 Oct 2023 • Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith, Simone Stumpf
As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount.
Explainable Artificial Intelligence (XAI)
no code implementations • 19 Jun 2020 • Andrés Páez
Moore's Paradox is a test case for any formal theory of belief.
no code implementations • 2 Mar 2020 • Andrés Páez
Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency?
no code implementations • 22 Feb 2020 • Andrés Páez
In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI.
BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI) • +1