Search Results for author: Sammy Khalife

Found 6 papers, 0 papers with code

Is uniform expressivity too restrictive? Towards efficient expressivity of graph neural networks

no code implementations • 2 Oct 2024 • Sammy Khalife, Josué Tonelli-Cueto

Uniform expressivity guarantees that a Graph Neural Network (GNN) can express a query without the parameters depending on the size of the input graphs.

Graph Neural Network
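
A hedged formalization of the definition above (my notation, not the paper's): uniform expressivity asks for one parameter setting that works for all input graphs, while non-uniform expressivity lets the parameters depend on the graph size.

```latex
% Sketch with illustrative notation: A_\theta is a GNN architecture A
% with parameters \theta, and q is the target query.
% Uniform expressivity of q by A:
\[
\exists \theta \;\; \forall G \;\; \forall v \in V(G): \quad A_\theta(G, v) = q(G, v)
\]
% Non-uniform expressivity: the parameters may depend on the graph size n.
\[
\forall n \;\; \exists \theta_n \;\; \forall G \text{ with } |V(G)| \le n \;\; \forall v \in V(G): \quad A_{\theta_n}(G, v) = q(G, v)
\]
```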

Sequence graphs realizations and ambiguity in language models

no code implementations • 13 Feb 2024 • Sammy Khalife, Yann Ponty, Laurent Bulteau

For a window of size at least 3, we prove hardness of all variants, even when the window size w is treated as a constant, with the notable exception of the undirected/unweighted case, for which we propose XP algorithms for both the realizability and enumeration problems; these algorithms are tight due to a corresponding W[1]-hardness result.
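
For concreteness, a minimal sketch of the undirected/unweighted object in question, under the assumption (mine) that a sequence graph joins tokens co-occurring within a window of w consecutive positions; the function and variable names are illustrative, not the paper's.

```python
from itertools import combinations

def sequence_graph(tokens, w):
    """Build the undirected, unweighted sequence graph of a token
    sequence: vertices are the distinct tokens, and two tokens are
    adjacent iff they co-occur in some window of w consecutive
    positions. Illustrative sketch; the paper's directed/weighted
    variants differ."""
    edges = set()
    for start in range(len(tokens) - w + 1):
        window = tokens[start:start + w]
        for a, b in combinations(window, 2):
            if a != b:
                edges.add(frozenset((a, b)))
    return set(tokens), edges

# Example with window size w = 3 over a short sequence.
vertices, edges = sequence_graph(["a", "b", "a", "c"], 3)
print(sorted(tuple(sorted(e)) for e in edges))  # [('a','b'), ('a','c'), ('b','c')]
```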

Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut

no code implementations • 4 Feb 2024 • Hongyu Cheng, Sammy Khalife, Barbara Fiedorowicz, Amitabh Basu

We build on recent work in this line of research by considering a setup where, instead of selecting a single algorithm with the best overall performance, the algorithm may be chosen as a function of the instance to be solved, using neural networks.
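
A minimal sketch of the instance-dependent selection setup, assuming (illustratively) that each instance is summarized by a feature vector and a small network scores k candidate algorithms; this is not the paper's construction or its sample-complexity analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a one-hidden-layer network maps instance features
# to one score per candidate algorithm (e.g., k branch-and-cut
# parameter settings); the argmax algorithm is then run.
d, h, k = 8, 16, 3  # feature dim, hidden width, number of algorithms
W1, b1 = rng.normal(size=(h, d)), np.zeros(h)
W2, b2 = rng.normal(size=(k, h)), np.zeros(k)

def select_algorithm(features):
    hidden = np.maximum(0.0, W1 @ features + b1)  # ReLU hidden layer
    scores = W2 @ hidden + b2                     # one score per algorithm
    return int(np.argmax(scores))

instance = rng.normal(size=d)  # stand-in for real instance features
print("chosen algorithm index:", select_algorithm(instance))
```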

The logic of rational graph neural networks

no code implementations • 19 Oct 2023 • Sammy Khalife

In this article, we prove that some GC2 queries of depth $3$ cannot be expressed by GNNs with any rational activation function.
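
For orientation, an illustrative depth-$3$ GC2 formula (counting quantifiers over the two variables x and y, nested three deep); this particular query is my own example for exposition, not one of the paper's witnesses.

```latex
% Illustrative GC2 query of depth 3 (not from the paper):
\[
\varphi(x) \;=\; \exists^{\ge 2} y \Big( E(x,y) \,\wedge\, \exists^{\ge 3} x \big( E(y,x) \,\wedge\, \exists^{\ge 1} y\, E(x,y) \big) \Big)
\]
```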

On the power of graph neural networks and the role of the activation function

no code implementations • 10 Jul 2023 • Sammy Khalife, Amitabh Basu

In contrast, it was already known that unbounded GNNs (those whose size is allowed to grow with the size of the input graph) with piecewise polynomial activations can distinguish these vertices in only two iterations.
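
To make "two iterations" concrete, a minimal NumPy sketch of two rounds of sum-aggregation message passing with a ReLU (piecewise polynomial) activation; the graph, widths, and weights are illustrative, not the construction from the paper.

```python
import numpy as np

# Toy graph as an adjacency matrix on 6 vertices.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
], dtype=float)

x = np.ones((6, 1))  # uninformative initial features

def gnn_layer(A, h, W, b):
    # Sum-aggregate neighbor features, then apply an affine map followed
    # by ReLU (a piecewise polynomial activation).
    return np.maximum(0.0, (A @ h) @ W + b)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(1, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 4)), rng.normal(size=4)

h1 = gnn_layer(A, x, W1, b1)   # iteration 1: each vertex sees its degree
h2 = gnn_layer(A, h1, W2, b2)  # iteration 2: 2-hop neighborhood structure
print(np.round(h2, 3))
```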

Neural networks with linear threshold activations: structure and algorithms

no code implementations • 15 Nov 2021 • Sammy Khalife, Hongyu Cheng, Amitabh Basu

We precisely characterize the class of functions that are representable by such neural networks and show that 2 hidden layers are necessary and sufficient to represent any function in this class.
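
A minimal sketch of the linear threshold activation and a 2-hidden-layer network built from it; the particular weights compute a simple indicator-style function on R^2 and are my illustration, not the paper's characterization.

```python
import numpy as np

def threshold(z):
    # Linear threshold activation: 1 if the input is positive, else 0.
    return (z > 0).astype(float)

def forward(x, layers):
    """Evaluate a feedforward net whose hidden units all use the linear
    threshold activation. `layers` is a list of (W, b) pairs; the last
    pair is the (linear) output layer."""
    h = x
    for W, b in layers[:-1]:
        h = threshold(W @ h + b)
    W, b = layers[-1]
    return W @ h + b

# Illustrative 2-hidden-layer instance (weights are mine): outputs 1
# exactly when both input coordinates exceed 0.5.
layers = [
    (np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([-0.5, -0.5])),  # hidden 1
    (np.array([[1.0, 1.0]]), np.array([-1.5])),                    # hidden 2
    (np.array([[1.0]]), np.array([0.0])),                          # output
]
print(forward(np.array([1.0, 1.0]), layers))  # -> [1.]
print(forward(np.array([1.0, 0.0]), layers))  # -> [0.]
```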
