Search Results for author: Pablo Barceló

Found 6 papers, 2 papers with code

Foundations of Symbolic Languages for Model Interpretability

1 code implementation NeurIPS 2021 Marcelo Arenas, Daniel Báez, Pablo Barceló, Jorge Pérez, Bernardo Subercaseaux

Several queries and scores have recently been proposed to explain the individual predictions of ML models.

Graph Neural Networks with Local Graph Parameters

1 code implementation NeurIPS 2021 Pablo Barceló, Floris Geerts, Juan Reutter, Maksimilian Ryschkov

We propose GNNs enhanced with local graph parameters as a framework for studying such approaches, and we precisely characterize their distinguishing power, both in terms of a variant of the WL test and in terms of the graph structural properties that they can take into account.
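
As a rough illustration of the idea (a minimal sketch: the graph encoding, the triangle-count seed, and all function names below are our own, not the paper's construction), the classical 1-WL color-refinement test can be strengthened by seeding it with local graph parameters such as per-node triangle counts:

```python
from itertools import combinations

def wl_colors(adj, init=None, rounds=None):
    """1-WL color refinement. adj: dict node -> set of neighbors."""
    colors = dict(init) if init else {v: 0 for v in adj}
    rounds = len(adj) if rounds is None else rounds
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors).
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new = {v: relabel[sigs[v]] for v in adj}
        if new == colors:  # partition is stable
            break
        colors = new
    return colors

def triangle_counts(adj):
    """Local graph parameter: number of triangles through each node."""
    return {v: sum(1 for u, w in combinations(adj[v], 2) if w in adj[u])
            for v in adj}

# Example: a triangle {0, 1, 2} with a pendant node 3,
# seeded with triangle counts as initial colors.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(wl_colors(adj, init=triangle_counts(adj)))
```

Plain 1-WL cannot detect triangles (it assigns identical colorings to, e.g., a 6-cycle and two disjoint triangles), so enriching the initial colors with such parameters strictly increases distinguishing power; the paper makes precise exactly what is gained.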

On the Complexity of SHAP-Score-Based Explanations: Tractability via Knowledge Compilation and Non-Approximability Results

no code implementations 16 Apr 2021 Marcelo Arenas, Pablo Barceló, Leopoldo Bertossi, Mikaël Monet

While computing Shapley values is intractable in general, we prove a strong positive result: the $\mathsf{SHAP}$-score can be computed in polynomial time over deterministic and decomposable Boolean circuits.
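
For orientation, the $\mathsf{SHAP}$-score referred to here follows the standard Shapley-value form (the notation below is ours; the paper works with product distributions over features):

$$\mathsf{SHAP}(M, e, x) \;=\; \sum_{S \subseteq X \setminus \{x\}} \frac{|S|!\,(|X|-|S|-1)!}{|X|!}\,\Bigl(\phi_{M,e}(S \cup \{x\}) - \phi_{M,e}(S)\Bigr),$$

where $X$ is the set of features, $e$ is the entity being explained, and $\phi_{M,e}(S)$ is the expected output of $M$ when the features in $S$ are fixed to their values in $e$ and the remaining features are drawn at random. Read this way, the result says that this exponentially large sum collapses to polynomial time when $M$ is presented as a deterministic and decomposable Boolean circuit.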

Model Interpretability through the Lens of Computational Complexity

no code implementations NeurIPS 2020 Pablo Barceló, Mikaël Monet, Jorge Pérez, Bernardo Subercaseaux

We prove that this notion provides a good theoretical counterpart to current beliefs about the interpretability of models; in particular, we show that under our definition, and under standard complexity-theoretic assumptions (such as P$\neq$NP), both linear and tree-based models are strictly more interpretable than neural networks.
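
To make one of the underlying queries concrete: questions of this flavor include checking whether a partial instance is a sufficient reason for a prediction, i.e., whether every completion of it yields the same output (the exact formalization in the paper may differ). Below is a brute-force sketch of that check; the toy model, feature encoding, and helper names are ours, purely illustrative. The paper's point is that for trees and linear models such queries admit efficient algorithms, while for neural networks they do not, assuming P$\neq$NP.

```python
# Brute-force check of a "sufficient reason": a set of fixed features
# such that every completion of the remaining features yields the same
# prediction. Exponential in the number of free features; the paper
# studies when such queries can instead be answered efficiently.
from itertools import product

def is_sufficient_reason(model, partial, n_features):
    """partial: dict feature_index -> fixed bit (0/1)."""
    free = [i for i in range(n_features) if i not in partial]
    outputs = set()
    for bits in product([0, 1], repeat=len(free)):
        x = {**partial, **dict(zip(free, bits))}
        outputs.add(model([x[i] for i in range(n_features)]))
    return len(outputs) == 1

# Toy "decision tree": accept iff feature 0 and feature 2 are both 1.
model = lambda x: int(x[0] == 1 and x[2] == 1)
print(is_sufficient_reason(model, {0: 1, 2: 1}, 3))  # True
print(is_sufficient_reason(model, {0: 1}, 3))        # False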

The Logical Expressiveness of Graph Neural Networks

no code implementations ICLR 2020 Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan Reutter, Juan Pablo Silva

We show that AC-GNNs are too weak to capture all FOC$_2$ classifiers, and we provide a syntactic characterization of the largest subclass of FOC$_2$ classifiers that AC-GNNs can capture.
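
As a concrete illustration (the formula is our own example, not taken from the paper), the FOC$_2$ node classifier

$$\varphi(x) \;=\; \mathrm{Red}(x) \wedge \exists^{\geq 2} y\,\bigl(E(x,y) \wedge \mathrm{Blue}(y)\bigr)$$

labels a node positive iff it is red and has at least two blue neighbors. Every quantifier here is guarded by the edge relation, which places the formula in the graded modal fragment that AC-GNNs can capture; by contrast, an unguarded FOC$_2$ formula such as $\exists^{\geq 2} y\,\mathrm{Blue}(y)$ (there exist at least two blue nodes anywhere in the graph) falls outside that subclass.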

On the Turing Completeness of Modern Neural Network Architectures

no code implementations ICLR 2019 Jorge Pérez, Javier Marinković, Pablo Barceló

Alternatives to recurrent neural networks, in particular architectures based on attention or convolutions, have been gaining momentum for processing input sequences.
