Relational reasoning and generalization using non-symbolic neural networks

14 Jun 2020  ·  Atticus Geiger, Alexandra Carstensen, Michael C. Frank, Christopher Potts ·

The notion of equality (identity) is simple and ubiquitous, making it a key case study for broader questions about the representations supporting abstract relational reasoning. Previous work suggested that neural networks were not suitable models of human relational reasoning because they could not represent mathematical identity, the most basic form of equality. We revisit this question. In our experiments, we assess out-of-sample generalization of equality using both arbitrary representations and representations that have been pretrained on separate tasks to imbue them with structure. We find that neural networks are able to learn (1) basic equality (mathematical identity), (2) sequential equality problems (learning ABA-patterned sequences) with only positive training instances, and (3) a complex, hierarchical equality problem with only basic equality training instances ("zero-shot" generalization). In the latter two cases, our models perform tasks proposed in previous work to demarcate human-unique symbolic abilities. These results suggest that essential aspects of symbolic reasoning can emerge from data-driven, non-symbolic learning processes.
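To make the basic equality setting concrete, the sketch below trains a small feed-forward network to decide whether two arbitrary (randomly generated) vector representations are identical, then evaluates it on vectors never seen during training. This is a minimal illustration of out-of-sample generalization on the equality task, not the authors' actual architecture or training regime; the dimensionality, hidden size, and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10  # dimensionality of each input vector (illustrative choice)

def make_batch(n):
    """Return n concatenated pairs: first half equal (x, x), second half (x, y)."""
    x = rng.standard_normal((n, DIM))
    y = x.copy()
    y[n // 2:] = rng.standard_normal((n - n // 2, DIM))
    labels = np.zeros(n)
    labels[: n // 2] = 1.0  # 1 = "equal"
    return np.concatenate([x, y], axis=1), labels

# one-hidden-layer MLP with a sigmoid output
H = 32
W1 = rng.standard_normal((2 * DIM, H)) * 0.1
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.1
b2 = 0.0

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)      # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid probability of "equal"
    return h, p

lr = 0.05
for step in range(2000):
    X, t = make_batch(64)
    h, p = forward(X)
    dz = (p - t) / len(t)          # gradient of binary cross-entropy wrt logits
    W2 -= lr * (h.T @ dz)
    b2 -= lr * dz.sum()
    dh = np.outer(dz, W2) * (h > 0)  # backprop through ReLU
    W1 -= lr * (X.T @ dh)
    b1 -= lr * dh.sum(axis=0)

# evaluate on freshly drawn vectors: out-of-sample generalization
X_test, t_test = make_batch(1000)
_, p_test = forward(X_test)
acc = ((p_test > 0.5) == (t_test == 1)).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Because the negative pairs are independent Gaussian draws, a distance-like feature separates the classes cleanly, so even this tiny network generalizes to unseen vectors; the harder questions studied in the paper concern sequential (ABA) and hierarchical versions of the task.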
