COPEN is a COnceptual knowledge Probing benchmark that aims to analyze the conceptual understanding capabilities of Pre-trained Language Models (PLMs). Specifically, COPEN consists of three tasks:
- Conceptual Similarity Judgment (CSJ). Given a query entity and several candidate entities, PLMs need to select the candidate entity that is most conceptually similar to the query entity.
- Conceptual Property Judgment (CPJ). Given a statement describing a property of a concept, PLMs need to judge whether the statement is true.
- Conceptualization in Contexts (CiC). Given a sentence, an entity mentioned in the sentence, and several concept chains of the entity, PLMs need to select the concept that best fits the entity in the given context.
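To make the three task formats concrete, here is a minimal sketch of what one instance of each task might look like. The field names and example values are illustrative assumptions, not COPEN's actual data schema.

```python
# Hypothetical instance formats for the three COPEN tasks.
# Field names ("query", "candidates", etc.) are assumptions for illustration,
# not the benchmark's real schema.

# CSJ: pick the candidate most conceptually similar to the query entity.
csj_instance = {
    "query": "violin",
    "candidates": ["cello", "paintbrush", "hammer"],
    "answer": 0,  # index of the most conceptually similar candidate
}

# CPJ: judge whether a statement about a concept's property is true.
cpj_instance = {
    "statement": "A violin has strings.",
    "label": True,
}

# CiC: pick the concept chain matching the entity's sense in this context.
cic_instance = {
    "sentence": "Apple released a new phone this week.",
    "entity": "Apple",
    "concept_chains": [
        ["organization", "company"],  # matches the contextual sense
        ["plant", "fruit"],
    ],
    "answer": 0,
}


def format_csj_prompt(inst: dict) -> str:
    """Render a CSJ instance as a multiple-choice prompt string."""
    options = "\n".join(
        f"{chr(65 + i)}. {c}" for i, c in enumerate(inst["candidates"])
    )
    return (
        f"Which entity is most conceptually similar to '{inst['query']}'?\n"
        f"{options}"
    )


print(format_csj_prompt(csj_instance))
```

All three tasks reduce to classification over a small label set (candidate indices, true/false, or concept-chain indices), which is why they can be cast as multiple-choice or yes/no prompts for a PLM.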