Search Results for author: Cheongwoong Kang

Found 3 papers, 2 papers with code

Impact of Co-occurrence on Factual Knowledge of Large Language Models

1 code implementation · 12 Oct 2023 · Cheongwoong Kang, Jaesik Choi

Consequently, LLMs struggle to recall facts whose subject and object rarely co-occur in the pre-training dataset, even though those facts are seen during fine-tuning.

Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering?

1 code implementation · 1 Sep 2022 · Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi

In controlled experiments, we demonstrate that the limitations of MNLM-based RC models can be overcome by enriching the text with the required knowledge from an external commonsense knowledge repository.

Question Answering · Reading Comprehension

Why Do Masked Neural Language Models Still Need Common Sense Knowledge?

no code implementations · 8 Nov 2019 · Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi

From these tests, we observed that MNLMs partially understand various types of commonsense knowledge but do not accurately grasp the semantic meaning of relations.

Common Sense Reasoning · Question Answering
