Imposing Hard Logical Constraints on Multi-label Classification Neural Networks

Machine learning, and in particular deep learning, is becoming increasingly ubiquitous, and it is likely to be applied in almost every aspect of our lives in the next few years. However, the careless application of such methods in the real world can have, and has already had, disastrous consequences. In order to avoid such undesirable and potentially dangerous scenarios, a standard approach is to formally specify the desired behavior of the system and then ensure its compliance with the specified properties. In this paper, we thus propose to enhance deep learning models by incorporating background knowledge as hard logical constraints. The constraints rule out the models' undesired behaviors and can be exploited to improve performance. To this end, we propose CCN($h$), a novel model for multi-label classification problems with hard constraints expressed as normal logic rules. Given any multi-label classification neural network $h$, CCN($h$) is able to exploit the information expressed by the constraints to (i) produce predictions that are guaranteed to satisfy the constraints, and (ii) improve performance. We conduct an extensive experimental analysis showing the superior performance of CCN($h$) compared to state-of-the-art models in the setting of multi-label classification problems with hard logical constraints.
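To make the core idea concrete, below is a minimal PyTorch sketch of constraining a multi-label network's outputs so that logical rules hold by construction. This is an illustrative simplification, not the paper's CCN($h$) algorithm: CCN($h$) handles general normal logic rules, while this sketch covers only simple implications $A \rightarrow B$, enforced by replacing each label's score with the maximum over its own score and the scores of all labels that imply it (in the spirit of the authors' earlier C-HMCNN($h$) hierarchy constraint). All names and the toy network below are hypothetical.

```python
import torch
import torch.nn as nn


class ConstraintLayer(nn.Module):
    """Post-processes sigmoid outputs so that every implication a -> b
    holds at any decision threshold, i.e. p(b) >= p(a).

    Illustrative sketch only, not the CCN(h) algorithm from the paper:
    it supports implication constraints, not general normal logic rules.
    """

    def __init__(self, num_labels, implications):
        # implications: list of (a, b) label-index pairs encoding the rule a -> b
        super().__init__()
        # Take the transitive closure so chained rules (a -> b, b -> c)
        # are also enforced (otherwise max-propagation can break them).
        closed = set(implications)
        changed = True
        while changed:
            changed = False
            for a, b in list(closed):
                for c, d in list(closed):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))
                        changed = True
        # ancestors[b] = b itself plus every label whose score must propagate to b
        self.ancestors = [[b] for b in range(num_labels)]
        for a, b in closed:
            self.ancestors[b].append(a)

    def forward(self, probs):
        # probs: (batch, num_labels) sigmoid outputs of the base network h
        return torch.stack(
            [probs[:, idx].max(dim=1).values for idx in self.ancestors],
            dim=1,
        )


# Hypothetical usage: h is any multi-label classifier with sigmoid outputs.
h = nn.Sequential(nn.Linear(16, 4), nn.Sigmoid())
model = nn.Sequential(h, ConstraintLayer(4, implications=[(0, 1), (1, 2)]))
y = model(torch.randn(8, 16))  # satisfies 0 -> 1 and 1 -> 2 at any threshold
```

Because the constrained score of each consequent is a max over the (transitively closed) set of labels implying it, every rule is satisfied regardless of the threshold used to binarize the predictions, and the constraint information flows into training whenever the loss is computed on the constrained outputs.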
