Search Results for author: Hyunsouk Cho

Found 9 papers, 5 papers with code

MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation

1 code implementation • 31 Jul 2019 • Hoyeop Lee, Jinbae Im, Seongwon Jang, Hyunsouk Cho, Sehee Chung

This paper proposes a recommender system to alleviate the cold-start problem that can estimate user preferences based on only a small number of items.

Evidence Selection • Meta-Learning +1
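The cold-start setup MeLU targets — adapting to a new user from only a few rated items — is a few-shot meta-learning loop. A minimal numpy sketch of a MAML-style local update in that spirit; the linear model, learning rate, and synthetic data are illustrative stand-ins, not the paper's actual network:

```python
import numpy as np

def adapt_user(theta, item_feats, ratings, local_lr=0.1, steps=20):
    """Adapt globally meta-learned parameters to one cold-start user
    using only a handful of (item features, rating) pairs."""
    w = theta.copy()
    for _ in range(steps):
        preds = item_feats @ w
        grad = item_feats.T @ (preds - ratings) / len(ratings)
        w -= local_lr * grad  # inner-loop gradient step on squared error
    return w

rng = np.random.default_rng(0)
theta = np.zeros(4)                    # stand-in for a meta-learned init
item_feats = rng.normal(size=(3, 4))   # only 3 support items for this user
ratings = item_feats @ np.array([1.0, -0.5, 0.2, 0.0])
w_user = adapt_user(theta, item_feats, ratings)
```

In the full method, the initialization `theta` is itself trained across many users so that a few such inner steps suffice; here it is just zeros for illustration.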

Self-Supervised Multimodal Opinion Summarization

1 code implementation • ACL 2021 • Jinbae Im, Moonki Kim, Hoyeop Lee, Hyunsouk Cho, Sehee Chung

To use the abundant information contained in non-text data, we propose a self-supervised multimodal opinion summarization framework called MultimodalSum.

Opinion Summarization

Towards Proper Contrastive Self-supervised Learning Strategies For Music Audio Representation

1 code implementation • 10 Jul 2022 • Jeong Choi, Seongwon Jang, Hyunsouk Cho, Sehee Chung

The common research goal of self-supervised learning is to extract a general representation which an arbitrary downstream task would benefit from.

Contrastive Learning • Information Retrieval +3
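Contrastive self-supervised strategies of this kind typically train embeddings so that two views of the same clip agree. A toy numpy version of the standard NT-Xent (InfoNCE) objective commonly used in such setups; the batch size, temperature, and random data are illustrative assumptions:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of paired views.
    z1[i] and z2[i] are embeddings of two augmentations of item i."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarities
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
aligned = nt_xent(a, a + 0.01 * rng.normal(size=a.shape))  # matched views
shuffled = nt_xent(a, rng.permutation(a))                  # mismatched views
```

Matched views should yield a lower loss than mismatched ones, which is the signal the representation is trained on.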

GTA: Gated Toxicity Avoidance for LM Performance Preservation

1 code implementation • 11 Dec 2023 • Heegyu Kim, Hyunsouk Cho

Our findings reveal that gated toxicity avoidance efficiently achieves comparable levels of toxicity reduction to the original CTG methods while preserving the generation performance of the language model.

Language Modelling • Text Generation
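The premise of gated toxicity avoidance is that the controlled-generation adjustment should be applied only when needed, so the LM's own distribution survives on benign text. A hypothetical sketch of that gating idea at the next-token-distribution level (the paper's gate is learned; the gate value, function name, and distributions here are assumptions for illustration):

```python
import numpy as np

def gated_next_token(p_lm, p_ctg, gate):
    """Blend the original LM distribution with the detoxifying CTG
    distribution; gate in [0, 1] plays the role of a toxicity-risk signal."""
    p = gate * p_ctg + (1.0 - gate) * p_lm
    return p / p.sum()  # renormalize for numerical safety

p_lm = np.array([0.7, 0.2, 0.1])   # fluent but potentially toxic continuation
p_ctg = np.array([0.1, 0.3, 0.6])  # detoxified CTG distribution
safe_mix = gated_next_token(p_lm, p_ctg, gate=0.0)   # benign context: keep LM
risky_mix = gated_next_token(p_lm, p_ctg, gate=1.0)  # risky context: full CTG
```

With the gate near zero most of the time, generation quality stays close to the unmodified LM, which is the performance-preservation effect the abstract reports.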

SQuAD2-CR: Semi-supervised Annotation for Cause and Rationales for Unanswerability in SQuAD 2.0

no code implementations • LREC 2020 • Gyeongbok Lee, Seung-won Hwang, Hyunsouk Cho

Existing machine reading comprehension models are reported to be brittle against adversarially perturbed questions when optimized only for accuracy, which led to new reading comprehension benchmarks, such as SQuAD 2.0, that contain such questions.

Machine Reading Comprehension

CITIES: Contextual Inference of Tail-Item Embeddings for Sequential Recommendation

no code implementations • 23 May 2021 • Seongwon Jang, Hoyeop Lee, Hyunsouk Cho, Sehee Chung

To address this issue, we propose CITIES, a framework that enhances the quality of tail-item embeddings by training an embedding-inference function on multiple contextual head items, so that recommendation performance improves not only for the tail items but also for the head items.

Sequential Recommendation
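The core move in CITIES — learn an embedding-inference function on head items, whose embeddings are well trained, then apply it to tail items — can be sketched with a least-squares stand-in for that function. The pooling scheme, linear map, and synthetic data below are assumptions for illustration, not the paper's neural architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_head = 8, 200
head_embs = rng.normal(size=(n_head, d))  # reliable head-item embeddings
# Toy pooled context-item embeddings for each head item.
pooled_ctx = head_embs + 0.1 * rng.normal(size=(n_head, d))

# Fit the inference function f(pooled context) -> item embedding as a
# linear map; least squares is a minimal stand-in for a learned function.
W, *_ = np.linalg.lstsq(pooled_ctx, head_embs, rcond=None)

def infer_embedding(pooled_context):
    return pooled_context @ W

# Reuse f to infer an embedding for a tail item seen in similar contexts.
tail_ctx = head_embs[0] + 0.1 * rng.normal(size=d)
tail_emb = infer_embedding(tail_ctx)
```

The point of the construction is that the function is supervised only by head items, for which good target embeddings exist, yet transfers to tail items at inference time.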

Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation

no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo

Motivated by these observations, we explore the impact of the inherent preference in LLMs on providing emotional support, and we find that a high preference for specific strategies hinders effective emotional support, degrading the model's robustness in predicting the appropriate strategy.

Emotional Intelligence

Break the Breakout: Reinventing LM Defense Against Jailbreak Attacks with Self-Refinement

no code implementations • 23 Feb 2024 • Heegyu Kim, Sehyun Yuk, Hyunsouk Cho

We propose self-refine with formatting, which achieves outstanding safety even in non-safety-aligned LMs, and evaluate it alongside several defense baselines, demonstrating that it is the safest training-free method against jailbreak attacks.
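The defense described above is an iterate-until-safe loop: the model critiques and rewrites its own reply until a safety check passes. A hypothetical skeleton of that loop; `generate`, `is_safe`, and the rewrite prompt are placeholder stand-ins, not the paper's exact formatting scheme:

```python
def self_refine(prompt, generate, is_safe, max_iters=3):
    """Iteratively ask the model to rewrite its own reply until a
    safety check passes, or the iteration budget runs out."""
    reply = generate(prompt)
    for _ in range(max_iters):
        if is_safe(reply):
            break
        # Feed the unsafe reply back with a formatted refinement prompt.
        reply = generate(f"Rewrite the following reply so it is safe:\n{reply}")
    return reply

# Stub model: emits a harmful draft first, then complies with the rewrite.
def stub_generate(p):
    return "harmless reply" if p.startswith("Rewrite") else "harmful reply"

def stub_is_safe(reply):
    return "harmful" not in reply

result = self_refine("jailbreak attempt", stub_generate, stub_is_safe)
```

Because the loop requires no additional training, only extra inference calls, it applies to non-safety-aligned models as well, which matches the training-free claim in the abstract.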
