Feature attribution methods are popular for explaining neural network predictions, and they are often evaluated on metrics such as comprehensiveness and sufficiency, which are motivated by the principle that more important features -- as judged by the explanation -- should have larger impacts on model prediction.
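The principle above can be sketched concretely. Below is a minimal, hypothetical illustration of comprehensiveness (how much the prediction drops when the top-attributed features are removed) and sufficiency (how much it drops when only those features are kept), using a toy linear model and gradient-times-input attributions; the model, weights, and top-k masking scheme are all illustrative assumptions, not any specific paper's protocol.

```python
import numpy as np

# Hypothetical "model": a sigmoid over a weighted sum of features.
def predict(x, w):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def comprehensiveness(x, w, attribution, k):
    """Prediction drop after zeroing out the top-k attributed features.
    A larger drop suggests the explanation found genuinely important features."""
    top_k = np.argsort(attribution)[-k:]
    x_removed = x.copy()
    x_removed[top_k] = 0.0
    return predict(x, w) - predict(x_removed, w)

def sufficiency(x, w, attribution, k):
    """Prediction drop when keeping ONLY the top-k attributed features.
    A smaller drop suggests the top features alone support the prediction."""
    top_k = np.argsort(attribution)[-k:]
    x_kept = np.zeros_like(x)
    x_kept[top_k] = x[top_k]
    return predict(x, w) - predict(x_kept, w)

# Toy example: gradient-times-input attribution for a linear model is w * x.
w = np.array([2.0, -0.5, 1.0, 0.1])
x = np.array([1.0, 1.0, 1.0, 1.0])
attr = w * x

print(comprehensiveness(x, w, attr, k=2))  # large drop: top features matter
print(sufficiency(x, w, attr, k=2))        # small drop: top features suffice
```

In practice these scores are averaged over several values of k and over a dataset; the single-input, single-k version here is just to make the masking logic explicit.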
Fairness has emerged as an important concern in automated decision-making in recent years, especially when these decisions affect human welfare.
Active learning (AL) algorithms can achieve better performance with less labeled data because the model guides the data selection process.
In this paper, we consider the problem of exploring the prediction level sets of a classifier using probabilistic programming.
Multi-agent reinforcement learning (MARL) extends (single-agent) reinforcement learning (RL) by introducing additional agents and (potentially) partial observability of the environment.