Rule induction for global explanation of trained models

Understanding the behavior of a trained network and explaining its outputs are important for improving the network's performance and generalization ability, and for establishing trust in automated systems. Several approaches have previously been proposed to identify and visualize the most important features by analyzing a trained network...
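As a minimal sketch of the general idea (a hypothetical illustration, not the paper's specific method): one common way to obtain a global, rule-based explanation is to treat the trained model as a black box, query it on sample inputs, and induce a simple surrogate rule that reproduces its predictions. The `black_box` function and the single-threshold rule below are assumptions chosen purely for illustration.

```python
def black_box(x):
    """Stand-in for a trained model: here, it predicts 1 when the feature exceeds 0.6."""
    return 1 if x > 0.6 else 0

def induce_threshold_rule(xs, ys):
    """Induce the rule 'predict 1 if x > t' that best agrees with the model's outputs.

    Scans candidate thresholds and keeps the one with the fewest disagreements,
    yielding a human-readable global explanation of the black box.
    """
    best_t, best_err = None, float("inf")
    for t in xs:
        err = sum(1 for x, y in zip(xs, ys) if (1 if x > t else 0) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Query the model on a grid of inputs, then fit the surrogate rule.
xs = [i / 100 for i in range(101)]
ys = [black_box(x) for x in xs]
t, err = induce_threshold_rule(xs, ys)
print(f"Rule: predict 1 if x > {t:.2f} (disagreements: {err})")
```

In this toy setting the induced rule recovers the model's true decision boundary exactly; with a real network, the disagreement count quantifies how faithful the extracted rule is as a global explanation.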
