Using Grounded Word Representations to Study Theories of Lexical Concepts
The fields of cognitive science and philosophy have proposed many different theories of how humans represent "concepts". Multiple such theories are compatible with state-of-the-art NLP methods and could in principle be operationalized using neural networks. We focus on two particularly prominent theories, the Classical Theory and the Prototype Theory, in the context of visually grounded lexical representations. We compare when and how the behavior of models based on these theories differs on categorization and entailment tasks. Our preliminary results suggest that Classical-based representations perform better for entailment and Prototype-based representations perform better for categorization. We discuss plans for additional experiments needed to confirm these initial observations.
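As a rough illustration of the contrast between the two theories, the sketch below uses toy vectors and hand-picked feature sets; the function names, categories, and features are hypothetical and are not taken from the paper's models or data. A Prototype-style categorizer assigns an item to the category with the nearest prototype (e.g., a mean embedding), while a Classical-style categorizer checks a conjunction of necessary-and-sufficient features.

```python
import numpy as np

def prototype_categorize(x, prototypes):
    """Prototype Theory sketch: assign x to the category whose
    prototype vector (e.g., mean of exemplar embeddings) is nearest."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

def classical_categorize(features, definitions):
    """Classical Theory sketch: a category applies iff all of its
    defining (necessary-and-sufficient) features are present."""
    return [c for c, required in definitions.items()
            if required.issubset(features)]

# Toy 2-D "embeddings" standing in for grounded representations.
prototypes = {"bird": np.array([1.0, 1.0]), "fish": np.array([-1.0, -1.0])}
print(prototype_categorize(np.array([0.8, 1.2]), prototypes))  # bird

# Toy binary feature definitions (purely illustrative).
definitions = {"bird": {"has_feathers", "lays_eggs"},
               "fish": {"has_gills", "lays_eggs"}}
print(classical_categorize({"has_feathers", "lays_eggs", "flies"},
                           definitions))  # ['bird']
```

The graded, similarity-based decision rule of the first function and the all-or-nothing membership test of the second mirror why the two representation styles might behave differently on categorization versus entailment.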