Pointer Value Retrieval: A new benchmark for understanding the limits of neural network generalization

27 Jul 2021  ·  Chiyuan Zhang, Maithra Raghu, Jon Kleinberg, Samy Bengio

Central to the success of artificial neural networks is their ability to generalize. But does neural network generalization primarily rely on seeing highly similar training examples (memorization)? Or are neural networks capable of reasoning in the style of human intelligence, and if so, to what extent? These remain fundamental open questions about artificial neural networks. In this paper, as steps towards answering these questions, we introduce a new benchmark, Pointer Value Retrieval (PVR), to study the limits of neural network reasoning. The PVR suite of tasks is based on reasoning about indirection, a hallmark of human intelligence, in which a first stage of a task contains instructions for solving a second stage. In PVR, one part of the task input acts as a pointer, specifying a different input location whose value forms the output. We show that this simple rule can be applied to create a diverse set of tasks across different input modalities and configurations. Importantly, this use of indirection enables systematically varying task difficulty through distribution shifts and increasing functional complexity. We conduct a detailed empirical study of different PVR tasks, discovering large variations in performance across dataset sizes, neural network architectures and task complexity. Further, by incorporating distribution shift and increased functional complexity, we develop nuanced tests for reasoning, revealing subtle failures and surprising successes, and suggesting many promising directions of exploration on this benchmark.
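
To make the indirection rule concrete, the following is a minimal sketch of how examples for a digit-string PVR task could be generated: the first digit serves as the pointer, and the label is the value it selects among the remaining digits. The function name `make_pvr_example` and the specific parameters are illustrative assumptions, not the paper's released code.

```python
import random

def make_pvr_example(num_values=10, rng=random):
    """Generate one (input, label) pair for a digit-string PVR task.

    The first digit is the pointer: it indexes into the following
    `num_values` digits, and the digit at that position is the label.
    (Illustrative sketch; parameter choices are assumptions.)
    """
    pointer = rng.randint(0, num_values - 1)           # which position to retrieve
    values = [rng.randint(0, 9) for _ in range(num_values)]  # the value digits
    x = [pointer] + values                              # input: pointer followed by values
    y = values[pointer]                                 # label: the value the pointer selects
    return x, y

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        x, y = make_pvr_example()
        print(x, "->", y)
```

Under this framing, task difficulty can be varied as the abstract describes: for example, by withholding certain pointer values or value combinations at training time (distribution shift), or by replacing the single retrieved value with an aggregation over a window of values (increased functional complexity).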
