1 code implementation • 12 Oct 2023 • Cheongwoong Kang, Jaesik Choi
Consequently, LLMs struggle to recall facts whose subject and object rarely co-occur in the pre-training dataset, even when those facts are seen during fine-tuning.
1 code implementation • 1 Sep 2022 • Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi
We demonstrate, in controlled experiments, the possibility of overcoming the limitations of MNLM-based RC models by enriching text with the required knowledge from an external commonsense knowledge repository.
no code implementations • 8 Nov 2019 • Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi
From the test, we observed that MNLMs partially understand various types of commonsense knowledge but do not accurately understand the semantic meaning of relations.