We present a novel edge-level ego-network encoding for learning on graphs that can boost Message Passing Graph Neural Networks (MP-GNNs) by providing additional node and edge features or extending message-passing formats.
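The abstract does not spell out the exact encoding, but the idea of attaching ego-network statistics to each edge can be sketched generically. The following hypothetical helper computes, for every undirected edge, a few simple structural features of its 1-hop ego network (common neighbors, ego-network size, ego-network edge count); the feature choice is an illustrative assumption, not the paper's method.

```python
def edge_ego_features(adj):
    """For each undirected edge (u, v), compute simple structural
    features of its 1-hop ego network:
      - number of common neighbors (triangles through the edge),
      - number of nodes in the ego network,
      - number of edges in the ego network.
    `adj` maps each node to the set of its neighbors."""
    feats = {}
    for u in adj:
        for v in adj[u]:
            if u < v:  # visit each undirected edge once
                ego = {u, v} | adj[u] | adj[v]
                common = adj[u] & adj[v]
                # count edges with both endpoints inside the ego network
                n_edges = sum(1 for a in ego for b in adj[a]
                              if b in ego and a < b)
                feats[(u, v)] = (len(common), len(ego), n_edges)
    return feats
```

Features like these can be appended to the edge attributes consumed by an MP-GNN layer, which is the kind of augmentation the excerpt describes.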
Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit the structural information encoded in graph data.
In this work we present a novel approach to hierarchical reinforcement learning for linearly-solvable Markov decision processes.
We identify PICE as the infinite-smoothing limit of this technique and show that the sample-efficiency problems from which PICE suffers disappear at finite levels of smoothing.
Likelihood-based generative models are a promising tool for detecting out-of-distribution (OOD) inputs that could compromise the robustness or reliability of a machine learning system.
Can we design ranking models that understand the consequences of their proposed rankings and, more importantly, are able to avoid the undesirable ones?
Surprisingly, we observe that the representation learned by the neural network can be used as a feature space for the width-based planner without degrading its performance, thus removing the requirement of pre-defined features for the planner.
Ranking algorithms play a crucial role in online platforms ranging from search engines to recommender systems.
We present the Vent dataset, the largest annotated dataset of text, emotions, and social connections to date.
Social Live Stream Services (SLSS) enable a new level of social interaction.
The planning step hinges on the Iterated-Width (IW) planner, a state-of-the-art planner that makes explicit use of the state representation to perform structured exploration.
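The core of IW's structured exploration is a novelty test over the state representation. A minimal sketch of IW(1) is shown below, assuming states are collections of hashable boolean features; the function and domain names are illustrative, not taken from the paper.

```python
from collections import deque

def iw1_plan(initial_state, successors, is_goal, max_nodes=10000):
    """Breadth-first search pruned with an IW(1) novelty test:
    a state is expanded only if it makes at least one feature
    true for the first time during the search."""
    seen_features = set()
    queue = deque([(initial_state, [])])
    while queue and max_nodes > 0:
        state, plan = queue.popleft()
        max_nodes -= 1
        if is_goal(state):
            return plan
        # IW(1) novelty test: prune states with no unseen feature.
        novel = set(state) - seen_features
        if not novel:
            continue
        seen_features |= novel
        for action, next_state in successors(state):
            queue.append((next_state, plan + [action]))
    return None  # no plan found within the width-1 / node budget
```

Because pruning depends only on which features a state makes true, the quality of exploration hinges directly on the feature space, which is why the learned representation mentioned above can stand in for pre-defined features.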
We propose a general framework for entropy-regularized average-reward reinforcement learning in Markov decision processes (MDPs).
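One standard construction in this setting (not necessarily the paper's exact algorithm) is relative value iteration with a soft, KL-regularized Bellman operator: regularizing toward a uniform prior policy turns the max over actions into a log-average-exp. A minimal sketch, assuming a unichain MDP with tabular dynamics:

```python
import numpy as np

def soft_rvi(P, r, beta, ref=0, iters=2000):
    """Relative value iteration for an entropy-regularized
    average-reward MDP, with KL regularization toward a uniform
    prior policy at inverse temperature `beta`.
    P[a, s, s'] are transition probabilities, r[a, s] rewards.
    Returns (gain rho, bias h anchored at state `ref`)."""
    n_a, n_s, _ = P.shape
    h = np.zeros(n_s)
    rho = 0.0
    for _ in range(iters):
        # soft Bellman backup: log-average-exp over actions
        q = r + np.einsum('ast,t->as', P, h)          # shape (n_a, n_s)
        Th = np.log(np.mean(np.exp(beta * q), axis=0)) / beta
        rho = Th[ref] - h[ref]                        # gain estimate
        h = Th - Th[ref]                              # keep bias bounded
    return rho, h
```

For a single-state MDP with two actions of reward 0 and 1 and beta = 1, the gain converges to log((1 + e)/2), the soft maximum of the two rewards under a uniform prior.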
We present a hierarchical reinforcement learning framework that formulates each task in the hierarchy as a special type of Markov decision process for which the Bellman equation is linear and has an analytical solution.
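The linearity can be made concrete with the standard first-exit linearly-solvable MDP formulation (in the style of Todorov; the exact variant used in the paper may differ): writing the desirability z(s) = exp(-v(s)) for state costs q and uncontrolled transition matrix P, the Bellman equation becomes the linear system z = diag(exp(-q)) P z at interior states, with z fixed to exp(-q) at terminal states. A minimal numerical sketch:

```python
import numpy as np

def lmdp_desirability(P, q, terminal, num_iters=1000):
    """Solve the linear Bellman equation of a first-exit
    linearly-solvable MDP by fixed-point iteration:
        z = diag(exp(-q)) @ P @ z   at interior states,
        z = exp(-q)                 at terminal states.
    Returns the desirability z; the value function is -log(z)."""
    G = np.diag(np.exp(-q))
    z = np.ones(len(q))
    for _ in range(num_iters):
        z = G @ P @ z
        z[terminal] = np.exp(-q[terminal])  # boundary condition
    return z
```

On a three-state chain with unit interior costs and a free terminal state, the recovered value function -log(z) is simply the cost-to-go 2, 1, 0, matching the claim that the solution is obtained analytically, here by solving a linear system rather than a nonlinear optimization.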