1 code implementation • ACL 2022 • Eunhwan Park, Donghyeon Jeon, Seonhoon Kim, Inho Kang, Seung-Hoon Na
LM-BFF (CITATION) achieves significant few-shot performance by using auto-generated prompts and adding demonstrations similar to an input example.
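For readers unfamiliar with the prompt-plus-demonstrations format referenced above, here is a minimal sketch of how such an input can be assembled; the template, label words, and example sentences are hypothetical illustrations, not material from the paper.

```python
# Hypothetical sketch of an LM-BFF-style input: a cloze template plus
# demonstrations chosen for their similarity to the query example.
# Template, label words, and sentences are illustrative only.

template = "{sentence} It was [MASK]."          # cloze-style prompt
label_words = {"positive": "great", "negative": "terrible"}

# Demonstrations: labeled examples assumed to be semantically close to the query.
demonstrations = [
    ("A heartfelt and beautifully shot film.", "positive"),
    ("The plot drags and the jokes fall flat.", "negative"),
]

def build_input(query, demonstrations):
    """Concatenate the query prompt with filled-in demonstration prompts."""
    parts = [template.format(sentence=query)]
    for sent, label in demonstrations:
        filled = template.format(sentence=sent).replace("[MASK]", label_words[label])
        parts.append(filled)
    return " ".join(parts)

print(build_input("An unexpectedly moving story.", demonstrations))
# -> "An unexpectedly moving story. It was [MASK]. A heartfelt and beautifully
#     shot film. It was great. The plot drags ... It was terrible."
```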
no code implementations • 12 May 2025 • Ohjoon Kwon, Changsu Lee, Jihye Back, Lim Sun Suk, Inho Kang, Donghyeon Jeon
Large language models (LLMs) have been widely used for relevance assessment in information retrieval.
no code implementations • 30 May 2024 • Ohjoon Kwon, Donghyeon Jeon, Nayoung Choi, Gyu-Hwung Cho, Changbong Kim, Hyunwoo Lee, Inho Kang, Sun Kim, Taiwoo Park
In this paper, we leverage a smaller LLM for both harmful query detection and safeguard response generation.
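A minimal sketch of the dual role described above, assuming one small model handles both detection and safeguard response while a larger model serves benign queries; `small_lm`, `main_llm`, and the prompts are hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical sketch: a single small LM plays two roles, (1) harmful-query
# detection and (2) safeguard response generation, while a larger LLM serves
# benign queries. The callables below are placeholders, not APIs from the paper.

def is_harmful(small_lm, query: str) -> bool:
    # Role 1: the small LM is prompted as a binary harmfulness classifier.
    verdict = small_lm(f"Is the following query harmful? Answer yes or no.\n{query}")
    return verdict.strip().lower().startswith("yes")

def serve(small_lm, main_llm, query: str) -> str:
    if is_harmful(small_lm, query):
        # Role 2: the same small LM writes the safeguard response,
        # so the harmful query never reaches the main service LLM.
        return small_lm(f"Write a brief, polite refusal to this request:\n{query}")
    return main_llm(query)

# Toy usage with stub callables standing in for real LM calls.
stub_small = lambda prompt: "no" if "harmful?" in prompt else "I can't help with that."
stub_main = lambda query: f"(answer to: {query})"
print(serve(stub_small, stub_main, "What's the weather like today?"))
```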
no code implementations • 5 Apr 2024 • Hwiyeol Jo, Taiwoo Park, Hyunwoo Lee, Nayoung Choi, Changbong Kim, Ohjoon Kwon, Donghyeon Jeon, Eui-Hyeon Lee, Kyoungho Shin, Sun Suk Lim, Kyungmi Kim, Jihye Lee, Sun Kim
Although there is growing interest across industry in integrating generative LLMs into their services, limited experience and scarce resources act as barriers to launching and operating large-scale LLM-based services.
no code implementations • 22 Aug 2023 • Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak
Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in computer vision, igniting various creative approaches and applications.
no code implementations • 6 May 2023 • Seungwoo Lee, Chaerin Kong, Donghyeon Jeon, Nojun Kwak
Recent advances in diffusion models have showcased promising results in the text-to-video (T2V) synthesis task.
1 code implementation • Conference 2023 • Sung-Min Lee, Eunhwan Park, Daeryong Seo, Donghyeon Jeon, Inho Kang, Seung-Hoon Na
Transformer-based models for question answering (QA) over tables and texts must process a “long” hybrid sequence of tabular and textual elements, which causes long-range reasoning problems (a toy illustration of such a hybrid sequence follows this entry).
Ranked #1 on Question Answering on HybridQA
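To illustrate why such hybrid inputs grow long, the sketch below linearizes a tiny made-up table and concatenates it with linked passages; the linearization scheme, separator tokens, and data are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration of why hybrid table+text QA inputs get long:
# a small table is linearized cell by cell and concatenated with linked
# passages before being fed to a transformer. Names and data are made up.

table = {
    "header": ["Player", "Team", "Year"],
    "rows": [["A. Kim", "Seoul FC", "2019"], ["B. Lee", "Busan IPark", "2021"]],
}
passages = [
    "A. Kim joined Seoul FC after a standout college season.",
    "B. Lee was transferred to Busan IPark in the summer window.",
]

def linearize(table):
    """Flatten a table into 'header is value' statements, row by row."""
    chunks = []
    for row in table["rows"]:
        chunks.append(" ; ".join(f"{h} is {v}" for h, v in zip(table["header"], row)))
    return " [ROW] ".join(chunks)

hybrid_sequence = linearize(table) + " [TEXT] " + " ".join(passages)
print(len(hybrid_sequence.split()), "whitespace tokens in the hybrid sequence")
```

Even this two-row toy example produces a sequence several times longer than either the table or the passages alone, which is the scaling problem the paper targets.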
no code implementations • 21 Nov 2022 • Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak
Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations.
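As a generic illustration of that idea (not the objective or architecture used in this paper), an InfoNCE-style loss treats two related representations of the same input as positives and the other items in the batch as negatives:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE-style contrastive loss between two batches of related
    representations z1, z2 of shape (batch, dim); not the paper's exact loss."""
    z1 = F.normalize(z1, dim=1)          # unit-normalize so dot products are cosines
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))   # positives sit on the diagonal
    # symmetric loss: each view predicts its counterpart in the other view
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: two "related" views (e.g., two augmentations) of the same 4 inputs.
z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
loss = info_nce(z1, z2)
```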
no code implementations • 12 Oct 2022 • Chaerin Kong, Donghyeon Jeon, Ohjoon Kwon, Nojun Kwak
Fashion attribute editing is a task that aims to modify the semantic attributes of a given fashion image while preserving regions irrelevant to the edit.