Using Grounded Word Representations to Study Theories of Lexical Concepts

WS 2019  ·  Dylan Ebert, Ellie Pavlick

The fields of cognitive science and philosophy have proposed many different theories for how humans represent "concepts". Multiple such theories are compatible with state-of-the-art NLP methods, and could in principle be operationalized using neural networks. We focus on two particularly prominent theories, Classical Theory and Prototype Theory, in the context of visually-grounded lexical representations. We compare when and how the behavior of models based on these theories differs in terms of categorization and entailment tasks. Our preliminary results suggest that Classical-based representations perform better for entailment and Prototype-based representations perform better for categorization. We discuss plans for additional experiments needed to confirm these initial observations.
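The paper does not include an implementation, but the contrast between the two theories can be sketched informally. Below is a minimal, hypothetical Python illustration (not the authors' code) of how each theory might be operationalized over visual feature vectors: Prototype Theory as nearest-centroid categorization over exemplar embeddings, and Classical Theory as a conjunction of necessary-and-sufficient feature conditions. The category names, data, dimensions, and thresholds are all invented for illustration.

    # Hypothetical sketch: two ways to operationalize a lexical concept
    # (e.g. "bird") over toy visual feature vectors. Not the authors' method.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "visual embeddings" for exemplars of two categories (assumed data).
    bird_exemplars = rng.normal(loc=1.0, scale=0.3, size=(50, 4))
    fish_exemplars = rng.normal(loc=-1.0, scale=0.3, size=(50, 4))

    # Prototype Theory: a concept is summarized by the centroid of its
    # exemplars; categorization assigns a new item to the nearest prototype.
    prototypes = {
        "bird": bird_exemplars.mean(axis=0),
        "fish": fish_exemplars.mean(axis=0),
    }

    def categorize_prototype(x):
        return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

    # Classical Theory: a concept is a conjunction of necessary-and-sufficient
    # conditions; here, hypothetical thresholds on each feature dimension.
    def is_bird_classical(x, threshold=0.5):
        return bool(np.all(x > threshold))  # every defining condition must hold

    x = rng.normal(loc=1.0, scale=0.3, size=4)  # a new exemplar
    print(categorize_prototype(x))              # nearest-prototype label
    print(is_bird_classical(x))                 # True only if all conditions hold

Under this framing, graded distance to a centroid naturally supports categorization, while definitional conditions compose into inclusion relations that resemble entailment, which is consistent with the preliminary pattern the abstract reports.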

PDF Abstract
No code implementations yet.

Results from the Paper



Methods


No methods listed for this paper.