Search Results for author: Niloofar Ranjbar

Found 4 papers, 1 paper with code

Explaining Recommendation System Using Counterfactual Textual Explanations

no code implementations14 Mar 2023 Niloofar Ranjbar, Saeedeh Momtazi, MohammadMehdi Homayounpour

One method for producing more explainable output is counterfactual reasoning, which alters a minimal set of features to generate a counterfactual item that changes the output of the system.

Counterfactual Reasoning +1
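The abstract above describes generating a counterfactual by altering a minimal set of features until the system's output flips. A minimal sketch of that idea, assuming a toy recommender over binary interaction features and a simple greedy search (both hypothetical, not the authors' actual method):

```python
def predict(features):
    # Hypothetical stand-in for a recommender: recommend (1) when at
    # least two positive interaction signals are present.
    return 1 if sum(features) >= 2 else 0

def counterfactual(features):
    """Greedily flip features one at a time until the prediction changes.

    Returns the counterfactual item and the indices that were altered.
    A real method would search for the truly minimal set of changes.
    """
    original = predict(features)
    cf = list(features)
    flipped = []
    for i in range(len(cf)):
        cf[i] = 1 - cf[i]            # alter one feature
        flipped.append(i)
        if predict(cf) != original:  # output changed: counterfactual found
            return cf, flipped
    return None, flipped

# Example: the item [1, 1, 0] is recommended; flipping the first
# feature is enough to change the system's output.
cf, changed = counterfactual([1, 1, 0])
```

The altered indices (`changed`) are what make the output explainable: they name the features whose change would have produced a different recommendation.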

Using Decision Tree as Local Interpretable Model in Autoencoder-based LIME

1 code implementation7 Apr 2022 Niloofar Ranjbar, Reza Safabakhsh

On the other hand, in some domains, such as medicine, economics, and self-driving cars, users want the model to be interpretable so they can decide whether to trust its results.

Self-Driving Cars
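The paper above uses a decision tree as LIME's local interpretable model. A minimal sketch of that surrogate step, assuming a toy black-box classifier and a depth-1 tree (a decision stump) fitted from scratch on perturbed neighbors; all names here are hypothetical, and the paper's autoencoder-based sampling is not reproduced:

```python
import random

def black_box(x):
    # Hypothetical opaque model: class 1 whenever the first feature > 0.5.
    return 1 if x[0] > 0.5 else 0

def fit_stump(samples, labels):
    """Fit a depth-1 decision tree: pick the (feature, threshold) split
    that best reproduces the black-box labels on the local samples."""
    best, best_acc = None, -1.0
    for f in range(len(samples[0])):
        for s in samples:
            thr = s[f]  # candidate thresholds: observed sample values
            preds = [1 if x[f] > thr else 0 for x in samples]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if acc > best_acc:
                best_acc, best = acc, (f, thr)
    return best, best_acc

random.seed(0)
instance = [0.8, 0.3]
# LIME-style neighborhood sampling: perturb the instance to explain.
samples = [[v + random.uniform(-0.4, 0.4) for v in instance]
           for _ in range(200)]
labels = [black_box(x) for x in samples]
(feature, threshold), acc = fit_stump(samples, labels)
```

The fitted stump is the interpretable local model: it reads as a single rule ("feature 0 above the threshold"), which mirrors how the decision-tree surrogate in the paper yields rule-like explanations instead of the linear weights of standard LIME.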
