1 code implementation • NeurIPS 2021 • Marcelo Arenas, Daniel Baez, Pablo Barceló, Jorge Pérez, Bernardo Subercaseaux
Several queries and scores have recently been proposed to explain individual predictions over ML models.
1 code implementation • NeurIPS 2021 • Pablo Barceló, Floris Geerts, Juan Reutter, Maksimilian Ryschkov
We propose local-graph-parameter-enabled GNNs as a framework for studying the latter kind of approach, and we precisely characterize their distinguishing power, both in terms of a variant of the WL test and in terms of the graph structural properties they can take into account.
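The WL test mentioned here is the classical one-dimensional Weisfeiler-Leman color-refinement procedure, which upper-bounds the distinguishing power of standard message-passing GNNs. Below is a minimal sketch of 1-WL, not code from the paper; the function name `wl_refinement` and the adjacency-dict representation are illustrative choices.

```python
from collections import Counter

def wl_refinement(adj, rounds=3):
    """One-dimensional Weisfeiler-Leman (1-WL) color refinement.

    adj: dict mapping each node to a list of its neighbors.
    Returns the multiset (Counter) of final node colors; two graphs
    with different color multisets are distinguishable by 1-WL.
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # New signature = (own color, sorted multiset of neighbor colors).
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Canonically relabel signatures as small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return Counter(colors.values())
```

For example, 1-WL separates a triangle from a three-node path (their degree multisets differ), but it cannot separate two disjoint triangles from a 6-cycle, since both are 2-regular. Limitations of this kind are what motivate enriching GNN features with local graph parameters such as subgraph counts.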
no code implementations • 16 Apr 2021 • Marcelo Arenas, Pablo Barceló, Leopoldo Bertossi, Mikaël Monet
While in general computing Shapley values is an intractable problem, we prove a strong positive result stating that the $\mathsf{SHAP}$-score can be computed in polynomial time over deterministic and decomposable Boolean circuits.
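To see why tractability is notable, it helps to look at the definition the result is about: the SHAP-score is a Shapley value, an average over all feature orderings of the marginal contribution of revealing a feature. The sketch below computes it by brute force over a Boolean model under the uniform input distribution; it runs in exponential time, which is exactly the naive cost that the paper's polynomial-time result for deterministic and decomposable circuits avoids. All function names here are illustrative, not from the paper.

```python
from itertools import permutations, product
from math import factorial

def expected_value(model, fixed, features):
    """Mean of `model` over uniform completions of the unfixed features."""
    free = [f for f in features if f not in fixed]
    total = sum(
        model({**fixed, **dict(zip(free, bits))})
        for bits in product([0, 1], repeat=len(free))
    )
    return total / 2 ** len(free)

def shap_score(model, entity, feature):
    """Brute-force SHAP-score of `feature` for the instance `entity`.

    Averages, over all orderings of the features, the change in the
    model's expected value caused by revealing entity[feature] after
    the features preceding it in the ordering are already revealed.
    """
    features = list(entity)
    score = 0.0
    for order in permutations(features):
        i = order.index(feature)
        fixed = {f: entity[f] for f in order[:i]}
        before = expected_value(model, fixed, features)
        fixed[feature] = entity[feature]
        after = expected_value(model, fixed, features)
        score += after - before
    return score / factorial(len(features))
```

A standard sanity check is the efficiency property of Shapley values: the scores of all features sum to the model's value on the instance minus the model's overall expected value.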
no code implementations • NeurIPS 2020 • Pablo Barceló, Mikaël Monet, Jorge Pérez, Bernardo Subercaseaux
We prove that this notion provides a good theoretical counterpart to current beliefs on the interpretability of models; in particular, we show that under our definition, and assuming standard complexity-theoretic assumptions (such as P$\neq$NP), both linear and tree-based models are strictly more interpretable than neural networks.
no code implementations • ICLR 2020 • Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan Reutter, Juan Pablo Silva
We show that this class of GNNs is too weak to capture all $\mathsf{FOC}_2$ classifiers, and provide a syntactic characterization of the largest subclass of $\mathsf{FOC}_2$ classifiers that can be captured by AC-GNNs.
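An AC-GNN, as the name suggests, updates each node's feature vector with two operations per layer: an aggregation over the neighbors' vectors and a combination of that aggregate with the node's own vector. The following is a schematic single layer (sum aggregation, linear combination with ReLU), assuming dense NumPy representations; the name `ac_gnn_layer` and the particular parameterization are illustrative, not the paper's definition verbatim.

```python
import numpy as np

def ac_gnn_layer(features, adj, W_self, W_agg, b):
    """One aggregate-combine (AC) GNN layer.

    features: (n, d) node feature matrix.
    adj:      (n, n) 0/1 adjacency matrix.
    Aggregate: each node sums its neighbors' feature vectors.
    Combine:   linear map of (own features, aggregate) followed by ReLU.
    """
    agg = adj @ features  # row v = sum of features over v's neighbors
    return np.maximum(0.0, features @ W_self + agg @ W_agg + b)
```

Because the aggregation only sees the local neighborhood, stacking such layers evaluates conditions in the graded-modal style ("has at least $k$ neighbors satisfying ..."), which is the intuition behind the syntactic characterization of the $\mathsf{FOC}_2$ fragment that AC-GNNs can capture.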
no code implementations • ICLR 2019 • Jorge Pérez, Javier Marinković, Pablo Barceló
Alternatives to recurrent neural networks, in particular architectures based on attention or convolutions, have been gaining momentum for processing input sequences.