1 code implementation • 29 Jan 2025 • Wonbin Kweon, Sanghwan Jang, SeongKu Kang, Hwanjo Yu
Despite the widespread adoption of large language models (LLMs) for recommendation, we demonstrate that LLMs often exhibit uncertainty in their recommendations.
no code implementations • CVPR 2024 • Suyeon Kim, Dongha Lee, SeongKu Kang, Sukang Chae, Sanghwan Jang, Hwanjo Yu
In this paper, we propose the DynaCor framework, which distinguishes incorrectly labeled instances from correctly labeled ones based on the dynamics of the training signals.
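The core intuition can be sketched with a toy example. This is not the DynaCor algorithm itself, just a common training-dynamics heuristic: mislabeled instances tend to keep high loss across epochs while clean instances' losses decay, so scoring each instance by its mean loss over training already separates the two groups. The function name and threshold rule below are illustrative assumptions.

```python
import numpy as np

def detect_noisy_labels(loss_trajectories, threshold=None):
    """Flag likely mislabeled instances from per-epoch training losses.

    Hypothetical illustration (not the actual DynaCor method): score each
    instance by its mean loss over training; instances scoring more than
    one standard deviation above the mean are flagged as likely noisy.
    """
    traj = np.asarray(loss_trajectories, dtype=float)  # (n_instances, n_epochs)
    scores = traj.mean(axis=1)
    if threshold is None:
        threshold = scores.mean() + scores.std()
    return scores > threshold

# Toy data: two clean instances (loss decays) and one whose loss stays high.
losses = [
    [2.0, 1.0, 0.4, 0.1],   # clean
    [1.8, 0.9, 0.3, 0.1],   # clean
    [2.1, 2.0, 1.9, 1.9],   # likely mislabeled
]
flags = detect_noisy_labels(losses)  # -> [False, False, True]
```

Real training-dynamics methods use richer signals than the mean loss (e.g. per-epoch margins or representation trajectories), but the separation principle is the same.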
no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu
Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not yet interacted, by using knowledge from the user's "seen" domains.
no code implementations • 15 Mar 2024 • Seonghyeon Lee, Sanghwan Jang, Seongbo Jang, Dongha Lee, Hwanjo Yu
However, our analysis also reveals that the models underutilize their ability to call auxiliary functions, suggesting a future direction: improving implementations by eliciting the auxiliary-function-calling ability already encoded in the models.
1 code implementation • 14 Mar 2024 • Joonwon Jang, Sanghwan Jang, Wonbin Kweon, Minjin Jeon, Hwanjo Yu
However, LLMs often rely on the pre-trained semantic priors of the demonstrations rather than on the input-label relationships when making in-context learning (ICL) predictions.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task that generates a ranking list of personalized size for each user to maximize individual user satisfaction.
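The idea of a per-user list size can be illustrated with a minimal sketch. This is not the paper's objective, only an assumed utility model: the cutoff K is chosen to maximize cumulative relevance minus a hypothetical per-item browsing cost, so users with many relevant items get longer lists and users with few get shorter ones.

```python
import numpy as np

def personalized_k(relevance_scores, cost_per_item=0.1, max_k=10):
    """Pick a per-user list size K (illustrative sketch, not the paper's
    exact criterion): maximize cumulative relevance of the top-K items
    minus a per-item browsing cost."""
    scores = np.sort(np.asarray(relevance_scores, dtype=float))[::-1][:max_k]
    utilities = np.cumsum(scores) - cost_per_item * np.arange(1, len(scores) + 1)
    return int(np.argmax(utilities)) + 1  # best cutoff, 1-indexed

# User A has three strongly relevant items; user B has only one.
k_a = personalized_k([0.9, 0.8, 0.7, 0.05, 0.02])  # -> 3
k_b = personalized_k([0.9, 0.05, 0.03, 0.02, 0.01])  # -> 1
```

Under this toy model, the same global top-K (say K=5) would either pad user B's list with irrelevant items or truncate user A's, which is exactly the mismatch a personalized K avoids.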
1 code implementation • 7 Dec 2022 • Sanghwan Jang
In this report, I address the auto-formulation of problem descriptions: the task of converting a natural-language optimization problem into a canonical representation.
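A canonical representation can be made concrete with a small example. The schema below is a hypothetical one (not the report's actual format), using the standard LP canonical form: minimize c^T x subject to Ax <= b, with maximization handled by negating the objective.

```python
from dataclasses import dataclass, field

@dataclass
class CanonicalLP:
    """Hypothetical canonical form: minimize c^T x subject to A x <= b."""
    objective: list                                   # coefficient vector c
    constraints: list = field(default_factory=list)   # rows of matrix A
    bounds: list = field(default_factory=list)        # right-hand sides b

# "Maximize 3x + 2y subject to x + y <= 4" becomes, after negating the
# objective to turn maximization into minimization:
lp = CanonicalLP(objective=[-3, -2], constraints=[[1, 1]], bounds=[4])
```

Once every problem is mapped into a fixed schema like this, downstream solvers or evaluation scripts can consume the output uniformly regardless of how the original description was phrased.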