A Human-in-the-Loop Approach based on Explainability to Improve NTL Detection

28 Sep 2020 · Bernat Coma-Puig, Josep Carmona

Implementing systems based on Machine Learning to detect fraud and other Non-Technical Losses (NTL) is challenging: the available data is biased, and the algorithms currently used are black boxes that cannot be easily trusted or understood by stakeholders. This work describes our human-in-the-loop approach to mitigating these problems in a real system that uses a supervised model to detect NTL for an international utility company from Spain. The approach exploits human knowledge (e.g. from the data scientists or the company's stakeholders) together with the information provided by explanatory methods to guide the system during the training process. This simple, efficient method, which can be easily implemented in other industrial projects, is tested on a real dataset, and the results show that the derived prediction model is better in terms of accuracy, interpretability, robustness and flexibility.
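
To make the idea concrete, below is a minimal sketch of one iteration of such a feedback loop. Everything in it is an assumption for illustration rather than the paper's actual system: scikit-learn gradient boosting stands in for the supervised NTL model, permutation importance stands in for the explanatory method, the data is synthetic, and the flagged feature names are invented.

```python
# Hypothetical sketch of one human-in-the-loop iteration: train, explain,
# let humans flag bias-driven features, retrain. Not the paper's implementation.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the (biased) utility-company NTL dataset.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["consumption_drop", "contract_age", "meter_alarms",
                 "campaign_region", "past_inspections", "tariff_type"]
X = pd.DataFrame(X, columns=feature_names)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def train(X, y):
    return GradientBoostingClassifier(random_state=0).fit(X, y)

def explain(model, X_val, y_val):
    # Rank features by how much shuffling each one degrades validation score.
    r = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
    return pd.Series(r.importances_mean, index=X_val.columns).sort_values(ascending=False)

# 1. Train an initial model and show its most influential features to humans.
model = train(X_train, y_train)
print(explain(model, X_val, y_val))

# 2. Stakeholders flag features whose influence reflects sampling bias rather
#    than genuine fraud behaviour (purely illustrative names here).
flagged = ["campaign_region", "past_inspections"]

# 3. Retrain without the flagged features and re-inspect the explanation;
#    repeat until it matches domain knowledge.
model = train(X_train.drop(columns=flagged), y_train)
print(explain(model, X_val.drop(columns=flagged), y_val))
```

The design point this sketch tries to capture is that the retraining decision comes from a human reading the explanation, not from an automatic threshold, which is how the abstract frames the use of human knowledge during training.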
