Search Results for author: Jianyu Fan

Found 5 papers, 2 papers with code

Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals

1 code implementation • 10 Feb 2023 • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan Hamarneh

The EUCA study findings, the identified explanation forms and goals for technical specification, and the EUCA study dataset support the design and evaluation of end-user-centered XAI techniques for accessible, safe, and accountable AI.

Tasks: Autonomous Driving, Explainable artificial intelligence, +1

Transcending XAI Algorithm Boundaries through End-User-Inspired Design

no code implementations • 18 Aug 2022 • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Xiaoxiao Li, Ghassan Hamarneh

The boundaries of existing explainable artificial intelligence (XAI) algorithms are confined to problems grounded in technical users' demand for explainability.

Tasks: Autonomous Driving, counterfactual, +3

EUCA: the End-User-Centered Explainable AI Framework

1 code implementation • 4 Feb 2021 • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan Hamarneh

The ability to explain decisions to end-users is a necessity for deploying AI as critical decision support.

Tasks: Decision Making, Explainable artificial intelligence, Human-Computer Interaction

A Comparative Study of Western and Chinese Classical Music based on Soundscape Models

no code implementations • 20 Feb 2020 • Jianyu Fan, Yi-Hsuan Yang, Kui Dong, Philippe Pasquier

In this study, we examine whether we can analyze and compare Western and Chinese classical music based on soundscape models.

Tasks: Emotion Recognition, Event Detection, +3

Multi-label Sound Event Retrieval Using a Deep Learning-based Siamese Structure with a Pairwise Presence Matrix

no code implementations • 20 Feb 2020 • Jianyu Fan, Eric Nichols, Daniel Tompkins, Ana Elisa Mendez Mendez, Benjamin Elizalde, Philippe Pasquier

State-of-the-art sound event retrieval models have focused on single-label audio recordings, in which only one sound event occurs, rather than on multi-label audio recordings (i.e., recordings in which multiple sound events occur).

Tasks: Retrieval