1 code implementation • 19 Sep 2022 • Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, Mateja Jamnik
Deploying AI-powered systems requires trustworthy models supporting effective human interactions, going beyond raw prediction accuracy.
To the best of our knowledge, this work is the first to propose a general-purpose end-to-end framework integrating probabilistic logic programming into a deep generative model.
Neural-symbolic and statistical relational artificial intelligence both integrate frameworks for learning with logical reasoning.
Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard.
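To make the "distribution over possible worlds" concrete, here is a minimal sketch of inference by exhaustive world enumeration, in the style of a ProbLog-like probabilistic logic program. The facts and rule (`burglary`, `earthquake`, `alarm`) are illustrative examples, not taken from the paper; real systems avoid this exponential enumeration via weighted model counting.

```python
from itertools import product

# Hypothetical probabilistic logic program:
#   0.6 :: burglary.    0.2 :: earthquake.
#   alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.6, "earthquake": 0.2}

def query_prob(query_holds):
    """Sum the probabilities of every possible world in which the query holds."""
    total = 0.0
    names = list(facts)
    for truth in product([True, False], repeat=len(names)):
        world = dict(zip(names, truth))
        # Probability of this world: product of fact (or complement) probabilities.
        p = 1.0
        for name, val in world.items():
            p *= facts[name] if val else 1.0 - facts[name]
        if query_holds(world):
            total += p
    return total

# P(alarm) = 1 - (1 - 0.6) * (1 - 0.2) = 0.68
p_alarm = query_prob(lambda w: w["burglary"] or w["earthquake"])
```

Enumerating all worlds is exponential in the number of probabilistic facts, which is exactly why inference in these programs is computationally hard.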
Unlike flat architectures such as Knowledge Graph Embedders, which can only represent relations between entities, R2Ns define an additional computational structure that accounts for higher-level relations among the ground atoms.
In our formal setting, we consider a Markov decision process (MDP) that models the dynamics of the environment in which the agent evolves and a Mealy machine synchronized with this MDP to formalize the non-Markovian reward function.
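A small sketch of how a Mealy machine can encode a non-Markovian reward over MDP event traces (the states, events, and reward values below are illustrative, not the paper's construction): the machine rewards event `"b"` only when an `"a"` has already been observed, so the same event can yield different rewards depending on history.

```python
# transitions[(state, event)] -> (next_state, reward)
transitions = {
    ("q0", "a"): ("q1", 0.0),
    ("q0", "b"): ("q0", 0.0),
    ("q1", "a"): ("q1", 0.0),
    ("q1", "b"): ("q0", 1.0),  # "b" after "a": emit reward, then reset
}

def run(events, state="q0"):
    """Feed a trace of MDP events through the machine, collecting rewards."""
    rewards = []
    for e in events:
        state, r = transitions[(state, e)]
        rewards.append(r)
    return rewards

# Same event "b" is rewarded differently depending on the history:
rewards = run(["b", "a", "b"])
```

Synchronizing the machine with the MDP (taking the product of their state spaces) makes the combined reward Markovian again, which is what lets standard RL algorithms be applied.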
The popularity of deep learning techniques has renewed interest in neural architectures, such as Graph Neural Networks (GNNs), that can process complex structures represented as graphs.
GNNs exploit a set of state variables, one assigned to each graph node, and a mechanism that diffuses states among neighboring nodes, implementing an iterative procedure that computes the fixed point of the (learnable) state transition function.
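The fixed-point computation can be sketched as follows; this is a minimal illustration in which the transition function is a fixed contraction mapping (in a real GNN, `w_self` and `w_neigh` would be learned parameters and the states vector-valued):

```python
import numpy as np

def fixed_point_states(adj, features, w_self=0.4, w_neigh=0.4, tol=1e-8):
    """Iterate the state transition until node states stop changing."""
    n = adj.shape[0]
    states = np.zeros(n)
    deg = adj.sum(axis=1).clip(min=1)           # guard against isolated nodes
    for _ in range(1000):
        neigh_mean = (adj @ states) / deg       # diffuse states among neighbors
        new = np.tanh(w_self * features + w_neigh * neigh_mean)
        if np.max(np.abs(new - states)) < tol:  # converged to the fixed point
            return new
        states = new
    return states

# Tiny chain graph 0-1-2 with scalar node features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
states = fixed_point_states(adj, np.array([1.0, 0.0, -1.0]))
```

Because `tanh` composed with small weights is a contraction, the iteration converges regardless of the initial states, which is the property the original GNN model relies on.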
We consider a scenario in which an artificial agent reads a stream of text composed of a set of narrations, and is informed about the identities of some of the individuals mentioned in the text portion currently being read.
Neural-symbolic approaches have recently gained popularity as a way to inject prior knowledge into a learner without requiring it to induce this knowledge from data.
In the last few years, neural networks have been used extensively to develop meaningful distributed representations of words and the contexts around them.
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing.
Despite the impressive results obtained by deep learning in many applications, truly intelligent behavior by an agent acting in a complex environment is likely to require some form of higher-level symbolic inference.
Deep learning is very effective at jointly learning feature representations and classification models, especially when dealing with high dimensional input patterns.
This might open the door to a truly novel class of learning algorithms in which, thanks to the introduction of the notion of support neurons, the optimization scheme also plays a fundamental role in the construction of the architecture.
The effectiveness of deep neural architectures has been widely supported in terms of both experimental and foundational principles.
We use deep architectures to model the involved variables, and propose a computational scheme in which the learning process enforces satisfaction of the constraints.
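The idea of learning as constraint satisfaction can be sketched with a toy example (illustrative only, not the paper's exact scheme): a domain constraint `f(x) >= 0` is translated into a differentiable penalty that is minimized jointly with the supervised loss.

```python
import numpy as np

def f(w, x):
    """Toy one-parameter model standing in for a deep architecture."""
    return w * x

def loss(w, xs, ys, lam=10.0):
    sup = np.mean((f(w, xs) - ys) ** 2)                  # supervised fitting term
    penalty = np.mean(np.maximum(0.0, -f(w, xs)) ** 2)   # constraint: f(x) >= 0
    return sup + lam * penalty

# Gradient descent via central finite differences (kept dependency-free).
xs, ys = np.array([1.0, 2.0]), np.array([0.5, 1.0])
w, eps, lr = -1.0, 1e-6, 0.02
for _ in range(2000):
    g = (loss(w + eps, xs, ys) - loss(w - eps, xs, ys)) / (2 * eps)
    w -= lr * g
# w settles near 0.5, where both the data and the constraint are satisfied.
```

Starting from `w = -1.0`, where the constraint is violated, the penalty term pushes the parameter into the feasible region, after which the supervised term alone drives the fit; this is the simplest instance of carrying out constraint satisfaction inside the learning process.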