no code implementations • 17 Oct 2023 • Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin
Through an extensive set of experiments, we find that ChatGPT's self-explanations perform on par with traditional ones, but are quite different from them according to various agreement metrics, while being much cheaper to produce (as they are generated along with the prediction).
1 code implementation • 16 Oct 2023 • Sharan Subramanian, Leilani H. Gilpin
Our contribution is an interpretable model with similar accuracy to more complex models.
no code implementations • 22 Jun 2023 • Adam Amos-Binks, Dustin Dannenhauer, Leilani H. Gilpin
StarCraft and Go are closed-world domains whose risks are known and whose mitigations are well documented, making them ideal for learning through repetition.
no code implementations • 27 Jun 2022 • Leilani H. Gilpin, Andrew R. Paley, Mohammed A. Alam, Sarah Spurlock, Kristian J. Hammond
There is broad agreement that Artificial Intelligence (AI) systems, particularly those using Machine Learning (ML), should be able to "explain" their behavior.
Explainable Artificial Intelligence (XAI)
no code implementations • 19 Jan 2019 • Leilani H. Gilpin, Cecilia Testart, Nathaniel Fruchter, Julius Adebayo
We explore the types of questions that explanatory DNN systems can answer, and discuss the challenges in building explanatory systems that provide outside explanations meeting societal requirements and delivering societal benefit.
1 code implementation • 31 May 2018 • Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal
There has recently been a surge of work in explanatory artificial intelligence (XAI).
BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI) • +1