no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu
Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains.
1 code implementation • 7 Mar 2024 • SeongKu Kang, Shivam Agarwal, Bowen Jin, Dongha Lee, Hwanjo Yu, Jiawei Han
Document retrieval has greatly benefited from advances in large-scale pre-trained language models (PLMs).
no code implementations • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee
Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.
no code implementations • 1 Mar 2024 • Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, Dongha Lee
In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu
Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset to infer their preferences and improve the quality of initial recommendations.
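The general rating-elicitation workflow the entry above describes can be illustrated with a minimal sketch. The seed-selection heuristic (rating variance) and the similarity-weighted inference below are hypothetical stand-ins for illustration, not the paper's method.

```python
# Minimal sketch of generic rating elicitation (not the paper's method):
# choose a small "seed" itemset, have a new user rate it, and infer a
# full preference vector from similarity to existing users.
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(100, 50)).astype(float)  # users x items rating matrix

def select_seed_items(ratings: np.ndarray, k: int = 5) -> np.ndarray:
    """Pick the k items with the highest rating variance (an illustrative
    heuristic: high-variance items are informative about taste)."""
    return np.argsort(ratings.var(axis=0))[-k:]

def infer_preferences(ratings: np.ndarray, seed: np.ndarray,
                      new_user_seed_ratings: np.ndarray) -> np.ndarray:
    """Estimate the new user's ratings as a similarity-weighted average of
    existing users, with similarity computed on the seed items only."""
    diffs = ratings[:, seed] - new_user_seed_ratings   # (n_users, k)
    sim = 1.0 / (1.0 + np.linalg.norm(diffs, axis=1))  # closer users weigh more
    return sim @ ratings / sim.sum()

seed = select_seed_items(R)
new_ratings = np.array([5, 1, 4, 2, 5], dtype=float)   # elicited from the new user
print(infer_preferences(R, seed, new_ratings))         # estimated item preferences
```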
no code implementations • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
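A minimal sketch of the Top-Personalized-K interface: each user's ranked list is truncated at a user-specific K that maximizes an assumed utility trading relevance against list length. The penalty-based utility below is a hypothetical stand-in, not the paper's objective.

```python
# Choose a per-user list size K: reward cumulative relevance, penalize
# list length. The utility function is illustrative only.
import numpy as np

def personalized_k(scores: np.ndarray, max_k: int, penalty: float = 0.3) -> int:
    """Return the list size K in [1, max_k] with the highest utility."""
    ranked = np.sort(scores)[::-1]                       # best items first
    gains = np.cumsum(ranked)                            # cumulative relevance
    utility = gains[:max_k] - penalty * np.arange(1, max_k + 1)
    return int(np.argmax(utility)) + 1

user_scores = np.array([0.9, 0.8, 0.35, 0.2, 0.1, 0.05])
k = personalized_k(user_scores, max_k=6)
print(f"recommend top-{k} items")  # shorter lists for users with few strong matches
```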
1 code implementation • 5 Sep 2023 • Youngjune Lee, Yeongjong Jeong, Keunchan Park, SeongKu Kang
Feature selection, a technique for identifying the most informative features in recommender systems, has received increasing research attention.
1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
1 code implementation • 27 Feb 2023 • Su Kim, Dongha Lee, SeongKu Kang, Seonghyeon Lee, Hwanjo Yu
In this paper, motivated by this observation, we propose TopExpert to leverage topology-specific prediction models (referred to as experts), each of which is responsible for a molecular group sharing similar topological semantics.
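The structure described above can be sketched as a small mixture-of-experts: a gate soft-assigns each input to a group and group-specific heads make the prediction. The shapes and gating scheme are illustrative assumptions, not TopExpert's actual architecture.

```python
# Minimal mixture-of-experts sketch in the spirit of topology-specific
# experts: a gate assigns each molecule representation to a group, and
# one expert head per group produces the prediction.
import torch
import torch.nn as nn

class GroupedExperts(nn.Module):
    def __init__(self, in_dim: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(in_dim, n_experts)              # group assignment
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, 1) for _ in range(n_experts)]  # one head per group
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)             # (B, n_experts)
        preds = torch.cat([e(x) for e in self.experts], dim=-1)  # (B, n_experts)
        return (weights * preds).sum(dim=-1)                      # gated combination

model = GroupedExperts(in_dim=16, n_experts=4)
out = model(torch.randn(8, 16))  # one scalar property prediction per molecule
print(out.shape)
```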
1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.
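A minimal sketch of the multi-branch structure described above: a shared backbone with auxiliary heads, each trained under a different objective. The two losses below (pointwise BCE vs. pairwise BPR) are illustrative choices of "heterogeneous objectives", not necessarily the paper's.

```python
# Shared backbone, two heads, two different training objectives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranch(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.backbone = nn.Linear(64, dim)    # shared user-item encoder
        self.head_point = nn.Linear(dim, 1)   # head for a pointwise loss
        self.head_pair = nn.Linear(dim, 1)    # head for a pairwise loss

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        return self.head_point(h), self.head_pair(h)

model = MultiBranch()
pos, neg = torch.randn(8, 64), torch.randn(8, 64)   # positive / negative pairs
p_pos, q_pos = model(pos)
p_neg, q_neg = model(neg)
loss_point = F.binary_cross_entropy_with_logits(
    torch.cat([p_pos, p_neg]), torch.cat([torch.ones(8, 1), torch.zeros(8, 1)]))
loss_pair = -F.logsigmoid(q_pos - q_neg).mean()     # BPR-style ranking loss
(loss_point + loss_pair).backward()                 # heads trained jointly
```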
no code implementations • 18 Jan 2022 • Dongha Lee, Jiaming Shen, SeongKu Kang, Susik Yoon, Jiawei Han, Hwanjo Yu
Topic taxonomies, which represent the latent topic (or category) structure of document collections, provide valuable knowledge of contents in many applications such as web search and information filtering.
1 code implementation • 9 Dec 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance.
1 code implementation • 8 Jul 2021 • Junsu Cho, SeongKu Kang, Dongmin Hyun, Hwanjo Yu
Session-based Recommender Systems (SRSs) have been actively developed to recommend the next item of an anonymous short item sequence (i.e., session).
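The task setup can be illustrated with a generic GRU4Rec-style sketch: encode an anonymous session of item IDs with a GRU and score the whole catalog for the next step. This shows the SRS problem formulation, not the paper's model.

```python
# Generic session-based recommendation sketch (GRU4Rec-style).
import torch
import torch.nn as nn

class SessionRec(nn.Module):
    def __init__(self, n_items: int, dim: int = 32):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_items)      # scores over the item catalog

    def forward(self, session: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(self.emb(session))      # h: final session state
        return self.out(h.squeeze(0))           # next-item scores

model = SessionRec(n_items=1000)
session = torch.tensor([[3, 17, 42, 7]])        # one short anonymous session
print(model(session).topk(5).indices)           # top-5 next-item candidates
```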
no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.
1 code implementation • 5 Jun 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Recommender systems (RS) have started to employ knowledge distillation, a model compression technique that trains a compact model (student) with knowledge transferred from a cumbersome model (teacher).
no code implementations • 13 May 2021 • Dongha Lee, SeongKu Kang, Hyunjun Ju, Chanyoung Park, Hwanjo Yu
To make the representations of positively-related users and items similar to each other while avoiding a collapsed solution, BUIR adopts two distinct encoder networks that learn from each other; the first encoder is trained to predict the output of the second encoder as its target, while the second encoder provides the consistent targets by slowly approximating the first encoder.
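The two-encoder scheme described above follows a BYOL-style pattern, which can be sketched as follows: the online encoder (plus a predictor) learns to match the momentum encoder's output, while the momentum encoder is updated as a slow moving average of the online one. Dimensions, the predictor, and the momentum value are illustrative assumptions.

```python
# BYOL-style two-encoder sketch: first encoder predicts the second's
# output; the second slowly approximates the first via an EMA update.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

online = nn.Linear(16, 8)
target = copy.deepcopy(online)          # second encoder, receives no gradients
for p in target.parameters():
    p.requires_grad_(False)
predictor = nn.Linear(8, 8)             # maps online output to target space

x = torch.randn(4, 16)                  # positively-related user/item inputs
loss = F.mse_loss(predictor(online(x)), target(x).detach())
loss.backward()                         # only online encoder + predictor learn

momentum = 0.99                         # target slowly approximates online
with torch.no_grad():
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1 - momentum)
```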
1 code implementation • 29 Apr 2021 • Junsu Cho, Dongmin Hyun, SeongKu Kang, Hwanjo Yu
Existing studies regard the time information as a single type of feature and focus on how to associate it with user preferences on items.
no code implementations • 1 Jan 2021 • Hyunjun Ju, Dongha Lee, SeongKu Kang, Hwanjo Yu
Recent studies on one-class classification have achieved remarkable performance by employing a self-supervised classifier that predicts the geometric transformation applied to in-class images.
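The self-supervision scheme referenced above can be sketched with rotation prediction: each image is rotated by one of four angles, a classifier predicts which rotation was applied, and prediction confidence serves as a one-class score. The architecture and scoring rule here are simplified illustrations.

```python
# Geometric-transformation self-supervision for one-class classification:
# train on rotated in-class images; low rotation-prediction confidence at
# test time suggests an out-of-class input.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 4))  # 4 rotations

images = torch.randn(8, 1, 32, 32)        # in-class training images
k = torch.randint(0, 4, (8,))             # rotation label per image
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])

loss = F.cross_entropy(classifier(rotated), k)   # self-supervised objective
loss.backward()

# One-class score: average confidence in the correct rotation class.
with torch.no_grad():
    probs = F.softmax(classifier(rotated), dim=-1)
    score = probs[torch.arange(8), k].mean()     # higher = more "in-class"
print(score)
```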
2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
Recent recommender systems have started to employ knowledge distillation, a model compression technique that distills knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance.
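Several entries above rest on the teacher-student setup just described; the canonical distillation loss (Hinton et al.) is sketched below. The ranking-oriented distillation in these papers differs in detail, so this only illustrates the general technique.

```python
# Canonical knowledge distillation: the student matches the teacher's
# temperature-softened outputs alongside the hard labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 10)               # cumbersome model (kept frozen)
student = nn.Linear(16, 10)               # compact model being trained
x, y = torch.randn(32, 16), torch.randint(0, 10, (32,))
T = 4.0                                   # softening temperature

with torch.no_grad():
    soft_targets = F.softmax(teacher(x) / T, dim=-1)

logits = student(x)
kd = F.kl_div(F.log_softmax(logits / T, dim=-1), soft_targets,
              reduction="batchmean") * T * T  # rescale to keep gradients comparable
ce = F.cross_entropy(logits, y)               # standard hard-label loss
loss = 0.5 * kd + 0.5 * ce
loss.backward()
```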