Testing the Ability of Language Models to Interpret Figurative Language

NAACL 2022  ·  Emmy Liu, Chen Cui, Kenneth Zheng, Graham Neubig

Figurative and metaphorical language are commonplace in discourse, and figurative expressions play an important role in communication and cognition. However, figurative language has been a relatively under-studied area in NLP, and it remains an open question to what extent modern language models can interpret nonliteral phrases. To address this question, we introduce Fig-QA, a Winograd-style nonliteral language understanding task that consists of correctly interpreting paired figurative phrases with divergent meanings. We evaluate several state-of-the-art language models on this task and find that, although they perform significantly above chance, they still fall short of human performance, particularly in zero- and few-shot settings. This suggests that further work is needed to improve the nonliteral reasoning capabilities of language models.
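The task pairs a figurative phrase with two candidate interpretations, only one of which is correct. A common zero-shot baseline for this kind of two-choice setup is to score each candidate continuation with a causal language model and pick the likelier one. The sketch below illustrates that approach; the model name (`gpt2`), the example phrase, and the candidate strings are illustrative assumptions, not the paper's exact data or evaluation code.

```python
# Minimal sketch of zero-shot, likelihood-based evaluation on a
# Winograd-style paired-interpretation item (not the authors' code).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # assumption: any causal LM could be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sequence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token;
    # multiply by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.shape[1] - 1)

# Hypothetical Fig-QA-style item: a figurative phrase and two
# candidate literal interpretations with divergent meanings.
phrase = "Her commitment was as sturdy as oak."
candidates = ["She was very committed.", "She was not committed at all."]

scores = [sequence_log_likelihood(f"{phrase} {c}") for c in candidates]
prediction = candidates[scores.index(max(scores))]
print(prediction)
```

Accuracy over a whole evaluation set would then be the fraction of items where the higher-scoring candidate matches the labeled interpretation; chance performance on this two-way choice is 50%.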


Datasets


Introduced in the Paper:

Fig-QA

Used in the Paper:

WinoGrande, ANLI

