COPEN (COnceptual knowledge Probing bENchmark)

Introduced by Peng et al. in COPEN: Probing Conceptual Knowledge in Pre-trained Language Models

COPEN is a COnceptual knowledge Probing bENchmark that analyzes the conceptual understanding capabilities of Pre-trained Language Models (PLMs). Specifically, COPEN consists of three tasks:

  1. Conceptual Similarity Judgment (CSJ). Given a query entity and several candidate entities, the CSJ task requires selecting the candidate entity that is most conceptually similar to the query entity (see the sketch after this list).
  2. Conceptual Property Judgment (CPJ). Given a statement describing a property of a concept, PLMs need to judge whether the statement is true.
  3. Conceptualization in Contexts (CiC). Given a sentence, an entity mentioned in the sentence, and several concept chains of the entity, PLMs need to select the concept that best fits the context in which the entity appears.
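
As a concrete illustration of the probing setup, the sketch below scores a CSJ instance zero-shot by comparing mean-pooled BERT embeddings of the query and candidate entities. The model choice, the mean-pooling similarity probe, and the example instance are illustrative assumptions, not the benchmark's official evaluation protocol.

    # Minimal zero-shot CSJ probe: pick the candidate whose PLM embedding
    # is closest to the query entity's. Model choice and pooling strategy
    # are assumptions for illustration, not COPEN's official protocol.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def embed(text):
        # Mean-pool the last hidden states into one vector per string.
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        return hidden.mean(dim=1).squeeze(0)

    def csj_predict(query, candidates):
        # Rank candidates by cosine similarity to the query embedding.
        q = embed(query)
        sims = [torch.cosine_similarity(q, embed(c), dim=0).item() for c in candidates]
        return candidates[sims.index(max(sims))]

    # Hypothetical instance: "viola" should be judged conceptually closest
    # to "cello" (both string instruments), not to entities that merely
    # co-occur with it, such as "orchestra" or "Mozart".
    print(csj_predict("viola", ["cello", "orchestra", "Mozart", "bow"]))

The other two tasks could be probed analogously: for CPJ, for example, by comparing the model's scores for a property statement and its negation, and for CiC by embedding the entity together with its sentence context before ranking the candidate concepts.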

