1 code implementation • 31 Jul 2019 • Hoyeop Lee, Jinbae Im, Seongwon Jang, Hyunsouk Cho, Sehee Chung
This paper proposes a recommender system that alleviates the cold-start problem by estimating user preferences from only a small number of consumed items.
1 code implementation • ACL 2021 • Jinbae Im, Moonki Kim, Hoyeop Lee, Hyunsouk Cho, Sehee Chung
To use the abundant information contained in non-text data, we propose a self-supervised multimodal opinion summarization framework called MultimodalSum.
1 code implementation • 10 Jul 2022 • Jeong Choi, Seongwon Jang, Hyunsouk Cho, Sehee Chung
The common research goal of self-supervised learning is to extract a general representation from which an arbitrary downstream task can benefit.
1 code implementation • 11 Dec 2023 • Heegyu Kim, Hyunsouk Cho
Our findings reveal that gated toxicity avoidance efficiently achieves comparable levels of toxicity reduction to the original CTG methods while preserving the generation performance of the language model.
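The gating idea described above can be sketched minimally: route a prompt through the expensive controllable text generation (CTG) method only when a lightweight gate flags it as likely to produce toxic text, and otherwise use the unmodified language model. All function names and the thresholding scheme here are illustrative assumptions, not the paper's actual models:

```python
from typing import Callable

def gated_generate(prompt: str,
                   base_generate: Callable[[str], str],
                   ctg_generate: Callable[[str], str],
                   toxicity_gate: Callable[[str], float],
                   threshold: float = 0.5) -> str:
    """Apply the CTG method only when the gate predicts likely toxicity.

    Hypothetical sketch: `toxicity_gate` stands in for a lightweight
    classifier scoring the prompt's toxicity risk in [0, 1]; the paper's
    actual gating model and CTG methods are not reproduced here.
    """
    if toxicity_gate(prompt) >= threshold:
        return ctg_generate(prompt)   # costly, detoxifying decoding path
    return base_generate(prompt)      # plain LM, quality preserved

# Toy stand-ins for demonstration.
out = gated_generate(
    "hello",
    base_generate=lambda p: p + " (plain LM)",
    ctg_generate=lambda p: p + " (detoxified)",
    toxicity_gate=lambda p: 0.1,      # low risk -> plain LM path
)
print(out)  # hello (plain LM)
```

Because most prompts are benign, the gate lets the plain LM handle them, which is how generation quality is preserved while keeping toxicity reduction comparable to always-on CTG.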
no code implementations • LREC 2020 • Gyeongbok Lee, Seung-won Hwang, Hyunsouk Cho
Existing machine reading comprehension models are reported to be brittle against adversarially perturbed questions when optimized only for accuracy, which led to the creation of new reading comprehension benchmarks, such as SQuAD 2.0, that contain such questions.
no code implementations • 23 May 2021 • Seongwon Jang, Hoyeop Lee, Hyunsouk Cho, Sehee Chung
To address this issue, we propose CITIES, a framework that enhances the quality of tail-item embeddings by training an embedding-inference function on multiple contextual head items, so that recommendation performance improves not only for the tail items but also for the head items.
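The core idea, inferring an embedding for a rarely seen (tail) item from the embeddings of frequently seen (head) items that share its interaction contexts, can be sketched as follows. This is only an illustrative stand-in: CITIES trains the inference function, whereas this sketch uses simple mean pooling, and all array shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # hypothetical embedding dimension

# Hypothetical pretrained embeddings of head items that co-occur
# with a given tail item in user interaction sequences.
contextual_head_embeddings = rng.normal(size=(5, dim))

def infer_tail_embedding(context_embeddings: np.ndarray) -> np.ndarray:
    """Infer a tail-item embedding from contextual head-item embeddings.

    Mean pooling is used here purely for illustration; in CITIES this
    role is played by a *learned* embedding-inference function.
    """
    return context_embeddings.mean(axis=0)

tail_embedding = infer_tail_embedding(contextual_head_embeddings)
print(tail_embedding.shape)  # (8,)
```

The inferred vector can then replace the tail item's poorly trained embedding at recommendation time, without retraining the whole model.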
no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo
Motivated by these observations, we explore the impact of the inherent preferences of LLMs on providing emotional support, and we observe that a high preference for specific strategies hinders effective emotional support, undermining robustness in predicting the appropriate strategy.
no code implementations • 23 Feb 2024 • Heegyu Kim, Sehyun Yuk, Hyunsouk Cho
We propose self-refine with formatting, which achieves outstanding safety even in non-safety-aligned LMs, and we evaluate it alongside several defense baselines, demonstrating that it is the safest training-free method against jailbreak attacks.
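A self-refinement defense of this kind can be sketched as a loop in which the model critiques and rewrites its own answer until a safety check passes. The feedback prompt wording (the "formatting"), the safety check, and the iteration cap below are all illustrative assumptions, not the paper's actual components:

```python
from typing import Callable

def self_refine(prompt: str,
                generate: Callable[[str], str],
                is_safe: Callable[[str], bool],
                max_iters: int = 3) -> str:
    """Iteratively prompt the model to rewrite its own unsafe output.

    `generate` stands in for the LM, `is_safe` for a safety judgment;
    the feedback template below is a hypothetical stand-in for the
    paper's formatted refinement prompt.
    """
    response = generate(prompt)
    for _ in range(max_iters):
        if is_safe(response):
            return response
        feedback = (f"Your previous answer may be unsafe:\n{response}\n"
                    "Rewrite it as a safe answer.")
        response = generate(feedback)
    return response
```

Because the loop needs no gradient updates, the defense is training-free and can wrap any LM, aligned or not.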