1 code implementation • ACL 2022 • Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
LM-BFF achieves strong few-shot performance by using auto-generated prompts and adding demonstrations similar to an input example.
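A minimal Python sketch of the demonstration-augmented prompting idea; the template, label words, and examples below are illustrative placeholders, not the authors' released code.

```python
# Hypothetical sketch of LM-BFF-style input construction: similar labeled
# demonstrations are prepended to the templated input, whose label slot
# stays masked for the language model to fill.

def build_prompt(x, demonstrations, template="{sent} It was {word}."):
    """Concatenate demonstrations (with label words filled in) before the
    templated input example (with its label slot masked)."""
    parts = [template.format(sent=s, word=w) for s, w in demonstrations]
    parts.append(template.format(sent=x, word="[MASK]"))
    return " ".join(parts)

prompt = build_prompt(
    "The movie was a complete waste of time.",
    demonstrations=[("A gripping, beautifully shot film.", "great"),
                    ("Dull plot and wooden acting.", "terrible")],
)
print(prompt)
# ... Dull plot and wooden acting. It was terrible.
# The movie was a complete waste of time. It was [MASK].
```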
no code implementations • SemEval (NAACL) 2022 • Daewook Kang, Sung-Min Lee, Eunhwan Park, Seung-Hoon Na
In this study, we examine the ability of contextualized representations from pretrained language models to distinguish whether sequences from instructional articles are plausible or implausible.
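As a rough illustration (not the authors' system), plausibility can be framed as binary sequence classification over a pretrained encoder's contextualized representations; the checkpoint and label mapping below are assumptions.

```python
# Sketch: score a sequence as plausible/implausible with a pretrained
# encoder plus a classification head. Checkpoint and labels are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # assumed: 0 = implausible, 1 = plausible

text = "Cut the onion, then dice it into small pieces."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("plausible" if logits.argmax(-1).item() == 1 else "implausible")
```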
no code implementations • COLING 2022 • Eunhwan Park, Jong-Hyeon Lee, Jeon Dong Hyeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
This study proposes Semantic-Infused SElective Graph Reasoning (SISER) for fact verification, which introduces semantic-level graph reasoning and injects its reasoning-enhanced representation into other graph-based and sequence-based reasoning methods.
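A toy PyTorch sketch of the injection idea: fusing a graph-reasoning representation into a sequence representation through a learned gate. The dimensions and gating mechanism are assumptions, not the actual SISER architecture.

```python
# Sketch: mix a graph-derived representation into a sequence representation
# with a per-dimension learned gate. Sizes are illustrative only.
import torch
import torch.nn as nn

class FusedReasoner(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, seq_repr, graph_repr):
        # The gate decides, per dimension, how much graph evidence to inject.
        g = torch.sigmoid(self.gate(torch.cat([seq_repr, graph_repr], dim=-1)))
        return g * graph_repr + (1 - g) * seq_repr

fuser = FusedReasoner()
seq, graph = torch.randn(2, 768), torch.randn(2, 768)
print(fuser(seq, graph).shape)  # torch.Size([2, 768])
```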
no code implementations • 2 Apr 2024 • Donghoon Han, Seunghyeon Seo, Eunhwan Park, Seong-Uk Nam, Nojun Kwak
Multimodal models and large language models (LLMs) have transformed how open-world knowledge is used, unlocking new capabilities across a wide range of tasks and applications.
Ranked #1 on Highlight Detection on QVHighlights
1 code implementation • Conference 2023 • Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
Transformer-based models for question answering (QA) over tables and texts must handle a “long” hybrid sequence of tabular and textual elements, which causes long-range reasoning problems (a toy linearization sketch follows below).
Ranked #1 on Question Answering on HybridQA
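A toy sketch of why these hybrid inputs grow long: flattening even a small table next to its passages quickly inflates the sequence. The separators and linearization scheme are illustrative, not the paper's method.

```python
# Sketch: linearize a table's rows alongside free-text passages into one
# hybrid sequence, the kind of input that strains long-range reasoning.

def linearize(table, passages, sep=" [SEP] "):
    rows = [" | ".join(f"{h}: {c}" for h, c in zip(table["header"], row))
            for row in table["rows"]]
    return sep.join(rows + passages)

table = {"header": ["Player", "Team", "Year"],
         "rows": [["A. Kim", "Tigers", "2019"], ["B. Lee", "Eagles", "2021"]]}
passages = ["A. Kim debuted for the Tigers after college.",
            "B. Lee was traded to the Eagles in 2021."]
seq = linearize(table, passages)
print(len(seq.split()), "whitespace tokens before subword expansion")
```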