NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization

20 Dec 2024 · Danial Kamali, Elham J. Barezi, Parisa Kordjamshidi

Compositional generalization is crucial for artificial intelligence agents to solve complex vision-language reasoning tasks. Neuro-symbolic approaches have demonstrated promise in capturing compositional structures, but they face critical challenges: (a) reliance on predefined predicates for symbolic representations that limit adaptability, (b) difficulty in extracting predicates from raw data, and (c) using non-differentiable operations for combining primitive concepts. To address these issues, we propose NeSyCoCo, a neuro-symbolic framework that leverages large language models (LLMs) to generate symbolic representations and map them to differentiable neural computations. NeSyCoCo introduces three innovations: (a) augmenting natural language inputs with dependency structures to enhance the alignment with symbolic representations, (b) employing distributed word representations to link diverse, linguistically motivated logical predicates to neural modules, and (c) using the soft composition of normalized predicate scores to align symbolic and differentiable reasoning. Our framework achieves state-of-the-art results on the ReaSCAN and CLEVR-CoGenT compositional generalization benchmarks and demonstrates robust performance with novel concepts in the CLEVR-SYN benchmark.
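To make innovations (b) and (c) concrete, the sketch below is a rough, hypothetical illustration rather than the paper's implementation: a predicate is routed to the nearest learned concept module by cosine similarity between distributed word representations, each module returns a score normalized to [0, 1], and scores are composed softly with a product t-norm so the whole composition stays differentiable. The function names (embed, resolve_concept, concept_score, score_conjunction), the hash-seeded toy embeddings, and the dimensionalities are all assumptions made for the example.

```python
import hashlib
import numpy as np

def embed(word: str) -> np.ndarray:
    """Toy word embedding: a hash-seeded random unit vector.
    A real system would use pretrained distributed representations (e.g., GloVe)."""
    seed = int(hashlib.sha256(word.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).normal(size=64)
    return v / np.linalg.norm(v)

# Learned concepts are keyed by embeddings of their predicate names, so an
# unseen but related predicate (e.g., "crimson") can reuse the most similar
# existing module (e.g., "red") instead of requiring a predefined symbol.
concept_keys = {name: embed(name) for name in ["red", "cube", "left_of"]}

def resolve_concept(predicate: str) -> str:
    """Route a predicate to the closest concept by cosine similarity
    (meaningful only with real embeddings; toy vectors are random)."""
    sims = {name: float(embed(predicate) @ key) for name, key in concept_keys.items()}
    return max(sims, key=sims.get)

def concept_score(concept: str, obj_features: np.ndarray) -> float:
    """Stand-in neural module: a score normalized to [0, 1] via a sigmoid."""
    w = concept_keys[concept]
    return float(1.0 / (1.0 + np.exp(-(obj_features @ w))))

def score_conjunction(predicates, obj_features) -> float:
    """Soft composition of normalized predicate scores (product t-norm)."""
    scores = [concept_score(resolve_concept(p), obj_features) for p in predicates]
    return float(np.prod(scores))

if __name__ == "__main__":
    obj = 0.1 * np.ones(64)  # toy object feature vector
    # One soft truth value in [0, 1] for the composed query "crimson cube".
    print(score_conjunction(["crimson", "cube"], obj))
```

Because every step (embedding lookup, sigmoid normalization, product composition) is differentiable, gradients can flow from the final answer back into the concept modules, which is the point of replacing hard symbolic operations with soft ones.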


Datasets

CLEVR, CLEVR-CoGenT, ReaSCAN, CLEVR-SYN

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Visual Question Answering | CLEVR | NeSyCoCo | Neuro-Symbolic Accuracy | 99.7 | #1 |
| Visual Question Answering (VQA) | CLEVR | NeSyCoCo | Accuracy | 99.7 | #2 |
| Visual Question Answering (VQA) | CLEVR-CoGenT (Split B) | NeSyCoCo | Accuracy | 78.8 | #1 |
| Visual Question Answering (VQA) | CLEVR-CoGenT (Split A) | NeSyCoCo | Accuracy | 99.6 | #2 |
| Compositional Generalization (AVG) | ReaSCAN | NeSyCoCo | Accuracy | 97.5 | #1 |