Search Results for author: Antonio Vergari

Found 35 papers, 17 papers with code

Towards Representation Learning with Tractable Probabilistic Models

no code implementations 8 Aug 2016 Antonio Vergari, Nicola Di Mauro, Floriana Esposito

Probabilistic models learned as density estimators can be exploited for representation learning, rather than serving only as toolboxes for answering inference queries.

Representation Learning

Visualizing and Understanding Sum-Product Networks

no code implementations 29 Aug 2016 Antonio Vergari, Nicola Di Mauro, Floriana Esposito

Sum-Product Networks (SPNs) are recently introduced deep tractable probabilistic models in which several kinds of inference queries can be answered exactly and in tractable time.

Representation Learning
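
A minimal illustrative sketch (not from the paper) of why SPN inference is exact and cheap: a bottom-up pass through leaf, product, and sum nodes evaluates a complete-evidence query in one traversal.

```python
# Toy SPN over two binary variables X1, X2. Leaves are Bernoulli
# distributions; product nodes multiply children over disjoint variables;
# sum nodes take weighted mixtures. One bottom-up pass answers a
# complete-evidence query exactly.

def bernoulli(p, x):
    """Leaf: probability of observing x under Bernoulli(p)."""
    return p if x == 1 else 1.0 - p

def spn(x1, x2):
    # Two product nodes (mixture components), each factorising over X1, X2.
    comp_a = bernoulli(0.9, x1) * bernoulli(0.2, x2)
    comp_b = bernoulli(0.1, x1) * bernoulli(0.8, x2)
    # Root sum node: mixture weights sum to 1.
    return 0.6 * comp_a + 0.4 * comp_b

# Sanity check: probabilities over all joint states sum to 1.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
print(total)  # 1.0 (up to float rounding)
```

The structural conditions (completeness and decomposability) are what make this pass return exact probabilities rather than approximations.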

Sum-Product Networks for Hybrid Domains

no code implementations 9 Oct 2017 Alejandro Molina, Antonio Vergari, Nicola Di Mauro, Sriraam Natarajan, Floriana Esposito, Kristian Kersting

While all kinds of mixed data (from personal data, through panel and scientific data, to public and commercial data) are collected and stored, building probabilistic graphical models for these hybrid domains becomes more difficult.

Probabilistic Deep Learning using Random Sum-Product Networks

no code implementations 5 Jun 2018 Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, Zoubin Ghahramani

The need for consistent treatment of uncertainty has recently triggered increased interest in probabilistic deep learning methods.

Probabilistic Deep Learning

Automatic Bayesian Density Analysis

no code implementations 24 Jul 2018 Antonio Vergari, Alejandro Molina, Robert Peharz, Zoubin Ghahramani, Kristian Kersting, Isabel Valera

Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference.

Anomaly Detection Bayesian Inference +1

SPFlow: An Easy and Extensible Library for Deep Probabilistic Learning using Sum-Product Networks

1 code implementation 11 Jan 2019 Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Pranav Subramani, Nicola Di Mauro, Pascal Poupart, Kristian Kersting

We introduce SPFlow, an open-source Python library providing a simple interface to inference, learning and manipulation routines for deep and tractable probabilistic models called Sum-Product Networks (SPNs).

From Variational to Deterministic Autoencoders

4 code implementations ICLR 2020 Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf

Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models.

Density Estimation

Conditional Sum-Product Networks: Imposing Structure on Deep Probabilistic Architectures

no code implementations 21 May 2019 Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Thomas Liebig, Kristian Kersting

In contrast, deep probabilistic models such as sum-product networks (SPNs) capture joint distributions in a tractable fashion, but still lack the expressive power of intractable models based on deep neural networks.

Image Classification

Hybrid Probabilistic Inference with Logical Constraints: Tractability and Message Passing

no code implementations 20 Sep 2019 Zhe Zeng, Fanqi Yan, Paolo Morettin, Antonio Vergari, Guy Van Den Broeck

Weighted model integration (WMI) is a very appealing framework for probabilistic inference: it allows one to express the complex dependencies of real-world hybrid scenarios, where variables are heterogeneous in nature (both continuous and discrete), via the language of Satisfiability Modulo Theories (SMT), as well as to compute probabilistic queries with arbitrarily complex logical constraints.
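
A hypothetical one-dimensional illustration (parameters invented for this sketch) of what a WMI query computes: the support is the SMT formula Delta = (0 <= x <= 1) with unnormalised weight w(x) = x, the query is phi = (x > 1/2), and the answer is the ratio of two weighted volumes.

```python
# WMI toy: Pr(phi | Delta) = WMI(Delta AND phi) / WMI(Delta),
# approximated here by a midpoint Riemann sum over the interval.

def wmi(lo, hi, weight, n=100_000):
    """Midpoint-rule approximation of the weighted volume over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(weight(lo + (i + 0.5) * dx) for i in range(n)) * dx

w = lambda x: x                      # unnormalised weight function
z = wmi(0.0, 1.0, w)                 # WMI(Delta)         = 1/2
q = wmi(0.5, 1.0, w)                 # WMI(Delta AND phi) = 3/8
print(q / z)                         # ~0.75
```

Real WMI solvers enumerate SMT-consistent regions symbolically instead of integrating numerically; the ratio-of-volumes structure is the same.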

On Tractable Computation of Expected Predictions

1 code implementation NeurIPS 2019 Pasha Khosravi, YooJung Choi, Yitao Liang, Antonio Vergari, Guy Van Den Broeck

In this paper, we identify a pair of generative and discriminative models that enables tractable computation of expectations, as well as moments of any order, of the latter with respect to the former in the case of regression.

Fairness Imputation +1
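
The simplest instance of an expected prediction, sketched here for intuition only (the paper handles far richer circuit pairs): for a linear regressor f(x) = w . x + b and a fully factorised generative model, linearity gives the expectation in closed form.

```python
# Toy sketch: E_p[f(X)] = w . E_p[X] + b when f is linear and
# p(x) = prod_i p(x_i). All numbers below are made up for illustration.

w = [2.0, -1.0, 0.5]
b = 0.3
means = [0.1, 0.4, 0.8]           # per-feature means under p

expected = sum(wi * mi for wi, mi in zip(w, means)) + b
print(expected)  # 2*0.1 - 1*0.4 + 0.5*0.8 + 0.3 = 0.5
```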

Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic Constraints via Message Passing

1 code implementation ICML 2020 Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van Den Broeck

Weighted model integration (WMI) is a very appealing framework for probabilistic inference: it allows one to express the complex dependencies of real-world problems, where variables are both continuous and discrete, via the language of Satisfiability Modulo Theories (SMT), as well as to compute probabilistic queries with complex logical and arithmetic constraints.

Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits

1 code implementation ICML 2020 Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van Den Broeck, Kristian Kersting, Zoubin Ghahramani

Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines.

Strudel: Learning Structured-Decomposable Probabilistic Circuits

1 code implementation18 Jul 2020 Meihua Dang, Antonio Vergari, Guy Van Den Broeck

Probabilistic circuits (PCs) represent a probability distribution as a computational graph.

Density Estimation

Imagining Grounded Conceptual Representations from Perceptual Information in Situated Guessing Games

no code implementations COLING 2020 Alessandro Suglia, Antonio Vergari, Ioannis Konstas, Yonatan Bisk, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon

However, as shown by Suglia et al. (2020), existing models fail to learn truly multi-modal representations, relying instead on gold category labels for objects in the scene both at training and inference time.

Object

Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations

no code implementations NeurIPS 2020 Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van Den Broeck

Weighted model integration (WMI) is a framework to perform advanced probabilistic inference on hybrid domains, i.e., on distributions over mixed continuous-discrete random variables and in the presence of complex logical and arithmetic constraints.

Tractable Computation of Expected Kernels

1 code implementation 21 Feb 2021 Wenzhe Li, Zhe Zeng, Antonio Vergari, Guy Van Den Broeck

Computing the expectation of kernel functions is a ubiquitous task in machine learning, with applications from classical support vector machines to exploiting kernel embeddings of distributions in probabilistic modeling, statistical inference, causal discovery, and deep learning.

Causal Discovery
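
For intuition, an expected kernel over two tiny finite distributions can be computed exactly by enumeration; the paper's contribution is characterising when this remains tractable for circuit representations of the distributions and the kernel, where enumeration is infeasible.

```python
# Toy sketch: E_{x~p, y~q}[k(x, y)] by brute-force enumeration over
# finite supports. Distributions and kernel are invented for illustration.

def expected_kernel(p, q, k):
    return sum(px * qy * k(x, y)
               for x, px in p.items()
               for y, qy in q.items())

p = {0: 0.5, 1: 0.5}
q = {0: 0.25, 1: 0.75}
delta = lambda x, y: 1.0 if x == y else 0.0   # overlap kernel

print(expected_kernel(p, q, delta))  # 0.5*0.25 + 0.5*0.75 = 0.5
```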

A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference

1 code implementation NeurIPS 2021 Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van Den Broeck

Circuit representations are becoming the lingua franca to express and reason about tractable generative and discriminative models.

Neural Concept Formation in Knowledge Graphs

1 code implementation AKBC 2021 Agnieszka Dobrowolska, Antonio Vergari, Pasquale Minervini

In this work, we investigate how to learn novel concepts in Knowledge Graphs (KGs) in a principled way, and how to effectively exploit them to produce more accurate neural link prediction models.

Knowledge Graphs Link Prediction +1

Efficient and Reliable Probabilistic Interactive Learning with Structured Outputs

no code implementations 17 Feb 2022 Stefano Teso, Antonio Vergari

In this position paper, we study interactive learning for structured output spaces, with a focus on active learning, in which labels are unknown and must be acquired, and on skeptical learning, in which the labels are noisy and may need relabeling.

Active Learning Position

Semantic Probabilistic Layers for Neuro-Symbolic Learning

1 code implementation 1 Jun 2022 Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van Den Broeck, Antonio Vergari

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.

Hierarchical Multi-label Classification Logical Reasoning
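
A hypothetical sketch of the idea behind such a layer (the actual layer uses tractable circuits, not explicit enumeration): turn unconstrained neural scores over joint label states into a distribution whose support is exactly the constraint-satisfying set, by masking invalid states and renormalising. The constraint here is the hierarchy rule that a predicted child class implies its parent class.

```python
import math

def spl(scores, satisfies):
    """Softmax over only the constraint-satisfying label states."""
    exp = {y: math.exp(s) for y, s in scores.items() if satisfies(y)}
    z = sum(exp.values())
    return {y: v / z for y, v in exp.items()}

# Invented scores over (child, parent) label pairs.
scores = {(0, 0): 0.2, (0, 1): 1.1, (1, 0): 2.0, (1, 1): 0.4}
# Constraint: child=1 implies parent=1, so (1, 0) is invalid.
ok = lambda y: not (y[0] == 1 and y[1] == 0)

dist = spl(scores, ok)
assert (1, 0) not in dist                      # zero mass on invalid state
assert abs(sum(dist.values()) - 1.0) < 1e-9    # still a distribution
```

Note how the highest raw score belonged to the invalid state (1, 0); after masking, the layer's prediction is guaranteed consistent by construction rather than by penalty.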

ChemAlgebra: Algebraic Reasoning on Chemical Reactions

no code implementations 5 Oct 2022 Andrea Valenti, Davide Bacciu, Antonio Vergari

Measuring the robustness of reasoning in machine learning models is challenging as one needs to provide a task that cannot be easily shortcut by exploiting spurious statistical correlations in the data, while operating on complex objects and constraints.

How to Turn Your Knowledge Graph Embeddings into Generative Models

1 code implementation NeurIPS 2023 Lorenzo Loconte, Nicola Di Mauro, Robert Peharz, Antonio Vergari

Some of the most successful knowledge graph embedding (KGE) models for link prediction -- CP, RESCAL, TuckER, ComplEx -- can be interpreted as energy-based models.

Knowledge Graph Embedding Knowledge Graph Embeddings +1

Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts

1 code implementation NeurIPS 2023 Emanuele Marconato, Stefano Teso, Antonio Vergari, Andrea Passerini

Neuro-Symbolic (NeSy) predictive models hold the promise of improved compliance with given constraints, systematic generalization, and interpretability, as they allow inferring labels that are consistent with some prior knowledge by reasoning over high-level concepts extracted from sub-symbolic inputs.

Systematic Generalization

Subtractive Mixture Models via Squaring: Representation and Learning

2 code implementations 1 Oct 2023 Lorenzo Loconte, Aleksanteri M. Sladek, Stefan Mengel, Martin Trapp, Arno Solin, Nicolas Gillis, Antonio Vergari

Mixture models are traditionally represented and learned by adding several distributions as components.
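
An illustrative sketch of the squaring idea with invented parameters: a linear combination with a negative weight is not itself a density, but its square is nonnegative everywhere and can be normalised, yielding a subtractive mixture that carves a dip the usual additive components cannot express.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def squared_mixture(x):
    # f(x) = (0.7*N(x; 0, 1) - 0.3*N(x; 0, 0.5))^2 -- note the minus sign.
    return (0.7 * normal_pdf(x, 0.0, 1.0) - 0.3 * normal_pdf(x, 0.0, 0.5)) ** 2

# Nonnegative by construction; normalise numerically on a grid.
xs = [-6 + i * 0.001 for i in range(12001)]
z = sum(squared_mixture(x) for x in xs) * 0.001
assert all(squared_mixture(x) >= 0.0 for x in xs)

density = lambda x: squared_mixture(x) / z
```

The paper shows such squared combinations stay tractable when represented as circuits; this grid normalisation is only for illustration.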

Taming the Sigmoid Bottleneck: Provably Argmaxable Sparse Multi-Label Classification

1 code implementation 16 Oct 2023 Andreas Grivas, Antonio Vergari, Adam Lopez

We then show that they can be prevented in practice by introducing a Discrete Fourier Transform (DFT) output layer, which guarantees that all sparse label combinations with up to $k$ active labels are argmaxable.

Multi-class Classification Multi-Label Classification
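
A toy demonstration of the underlying bottleneck (a sketch, not the paper's construction): with 3 labels but only a 2-dimensional final hidden space, the logits are s = W h, and the achievable sign patterns of s are the regions cut by 3 hyperplanes through the origin in R^2 -- at most 6 of the 8 label subsets, so some combination is never argmaxable.

```python
import itertools
import random

random.seed(0)
# Random rank-2 output matrix: 3 labels, 2-dim hidden space.
W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]

def pattern(h):
    """Which labels get a positive logit for hidden vector h."""
    return tuple(int(sum(wij * hj for wij, hj in zip(wi, h)) > 0) for wi in W)

# Sample many hidden vectors and record the label subsets that occur.
seen = {pattern((random.gauss(0, 1), random.gauss(0, 1))) for _ in range(20_000)}
missing = set(itertools.product((0, 1), repeat=3)) - seen

print(len(seen))   # at most 6 < 8 possible subsets
print(missing)     # label combinations no input can ever produce
```

Scaled up, the same geometry is what makes some sparse label combinations unattainable for large label spaces, which the DFT layer in the paper is designed to prevent.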

Probabilistic Integral Circuits

no code implementations 25 Oct 2023 Gennaro Gala, Cassio de Campos, Robert Peharz, Antonio Vergari, Erik Quaeghebeur

In contrast, probabilistic circuits (PCs) are hierarchical discrete mixtures represented as computational graphs composed of input, sum and product units.

PIXAR: Auto-Regressive Language Modeling in Pixel Space

no code implementations 6 Jan 2024 Yintao Tai, Xiyang Liao, Alessandro Suglia, Antonio Vergari

However, these pixel-based LLMs are limited to discriminative tasks (e.g., classification) and, similar to BERT, cannot be used to generate text.

LAMBADA Language Modelling +3

BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts

1 code implementation 19 Feb 2024 Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso

Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge (encoding, e.g., safety constraints) can be affected by Reasoning Shortcuts (RSs): they learn concepts consistent with the symbolic knowledge by exploiting unintended semantics.

On the Independence Assumption in Neurosymbolic Learning

no code implementations 12 Apr 2024 Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari

Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning.

Uncertainty Quantification
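
A toy illustration of why this assumption can hurt (example invented here): suppose the knowledge says exactly one of two symbols holds (an XOR constraint), so the target distribution is 0.5 on (0, 1) and 0.5 on (1, 0). Any independent factorisation p(a)p(b) matching the marginals leaks half its mass onto the invalid worlds.

```python
# Independent model with the correct marginals p(a=1) = p(b=1) = 0.5.
pa1, pb1 = 0.5, 0.5
joint = {(a, b): (pa1 if a else 1 - pa1) * (pb1 if b else 1 - pb1)
         for a in (0, 1) for b in (0, 1)}

# Mass assigned to worlds the XOR constraint forbids: (0, 0) and (1, 1).
invalid_mass = joint[(0, 0)] + joint[(1, 1)]
print(invalid_mass)  # 0.5: no independent model can be XOR-consistent
```

Representing the XOR-consistent distribution requires modelling the dependence between the two symbols, which is exactly what the independence assumption rules out.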
