no code implementations • 2 Sep 2020 • Jeremy E. Block, Eric D. Ragan
We discuss the evaluation of users' mental models of system logic.
no code implementations • 28 Aug 2020 • Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
no code implementations • 20 Aug 2020 • Mahsan Nourani, Joanie T. King, Eric D. Ragan
While it is also known that user trust can be influenced by first impressions of intelligent systems, our research explores the relationship between ordering bias and domain expertise when users encounter errors in intelligent systems.
no code implementations • 5 May 2020 • Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
The explanations generated by these simplified models, however, might not accurately justify the model's decisions or remain faithful to its behavior.
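A minimal sketch of the faithfulness concern raised here, not the authors' method: a shallow surrogate is fit to mimic a black-box classifier, and "fidelity" measures how often the two agree. All model and dataset choices below are hypothetical stand-ins.

```python
# Illustrates why simplified surrogate explanations can be unfaithful:
# fit a shallow decision tree to mimic a black-box classifier and
# measure agreement ("fidelity") between the two.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity below 1.0 means the simplified model's explanations can
# misrepresent what the black box actually does.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.3f}")
```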
no code implementations • 8 Jul 2019 • Fan Yang, Shiva K. Pentyala, Sina Mohseni, Mengnan Du, Hao Yuan, Rhema Linder, Eric D. Ragan, Shuiwang Ji, Xia Hu
In this demo paper, we present the XFake system, an explainable fake news detector that assists end-users in assessing news credibility.
1 code implementation • 28 Nov 2018 • Sina Mohseni, Niloofar Zarei, Eric D. Ragan
The need for interpretable and accountable intelligent systems grows as artificial intelligence plays a larger role in human life.
1 code implementation • 16 Jan 2018 • Sina Mohseni, Jeremy E. Block, Eric D. Ragan
We demonstrate our benchmark's utility for quantitative evaluation of model explanations by comparing it with human subjective ratings and ground-truth single-layer segmentation masks.
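A minimal sketch, under assumptions, of one common way to score a model explanation against a ground-truth segmentation mask: threshold the saliency map and compute intersection-over-union (IoU). The paper's exact metric may differ; the function and threshold here are illustrative.

```python
# Score a saliency map against a ground-truth binary mask via IoU.
import numpy as np

def saliency_iou(saliency: np.ndarray, mask: np.ndarray, threshold: float = 0.5) -> float:
    """IoU between a thresholded saliency map and a binary ground-truth mask."""
    pred = saliency >= threshold          # binarize the explanation
    gt = mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / union if union else 1.0

# Toy example: a 4x4 saliency map vs. a mask covering the top-left quadrant.
rng = np.random.default_rng(0)
saliency = rng.random((4, 4))
mask = np.zeros((4, 4))
mask[:2, :2] = 1
print(f"IoU @ 0.5: {saliency_iou(saliency, mask):.3f}")
```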