no code implementations • 8 Nov 2024 • Sangam Lee, Ryang Heo, SeongKu Kang, Susik Yoon, Jinyoung Yeo, Dongha Lee
HyPE leverages hierarchical category paths as explanations, progressing from broad to specific semantic categories.
no code implementations • 25 Oct 2024 • SeongKu Kang, Yunyi Zhang, Pengcheng Jiang, Dongha Lee, Jiawei Han, Hwanjo Yu
Academic paper search is an essential task for efficient literature discovery and scientific advancement.
no code implementations • 16 Aug 2024 • Tongyoung Kim, Soojin Yoon, SeongKu Kang, Jinyoung Yeo, Dongha Lee
Our in-depth analysis finds a significant difference in the knowledge the model captures from heterogeneous item indices and diverse input prompts, suggesting a high potential for complementarity.
no code implementations • 12 Aug 2024 • Jieyong Kim, Hyunseo Kim, Hyunjin Cho, SeongKu Kang, Buru Chang, Jinyoung Yeo, Dongha Lee
Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems.
1 code implementation • 22 Jul 2024 • Soojin Yoon, Sungho Ko, Tongyoung Kim, SeongKu Kang, Jinyoung Yeo, Dongha Lee
In this paper, we propose ERAlign, an unsupervised and robust cross-lingual EA pipeline that jointly performs Entity-level and Relation-level Alignment via a neighbor triple matching strategy using semantic textual features of relations and entities.
1 code implementation • 19 Jul 2024 • SeongKu Kang
This dissertation is devoted to developing knowledge distillation methods for recommender systems that maximize the performance of a compact model.
1 code implementation • 17 Jul 2024 • Jeongeun Lee, SeongKu Kang, Won-Yong Shin, Jeongwhan Choi, Noseong Park, Dongha Lee
Cross-domain recommendation (CDR) extends conventional recommender systems by leveraging user-item interactions from dense domains to mitigate data sparsity and the cold start problem.
no code implementations • CVPR 2024 • Suyeon Kim, Dongha Lee, SeongKu Kang, Sukang Chae, Sanghwan Jang, Hwanjo Yu
In this paper, we propose the DynaCor framework, which distinguishes incorrectly labeled instances from correctly labeled ones based on the dynamics of the training signals.
2 code implementations • 29 May 2024 • Gyuseok Lee, SeongKu Kang, Wonbin Kweon, Hwanjo Yu
We expect this research direction to contribute to narrowing the gap between existing KD studies and practical applications, thereby enhancing the applicability of KD in real-world systems.
no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu
Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains.
1 code implementation • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee
Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.
1 code implementation • 7 Mar 2024 • SeongKu Kang, Shivam Agarwal, Bowen Jin, Dongha Lee, Hwanjo Yu, Jiawei Han
Document retrieval has greatly benefited from the advancements of large-scale pre-trained language models (PLMs).
1 code implementation • 1 Mar 2024 • Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, Dongha Lee
In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu
Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset to infer their preferences and improve the quality of initial recommendations.
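The elicitation idea above can be illustrated with a minimal toy sketch (a hypothetical illustration, not the paper's method): a new user rates a few seed items, and scores for the remaining items are inferred from item-item similarity to those seeds.

```python
# Toy rating elicitation: infer a new user's preferences on unseen items
# from their ratings on a small seed itemset, weighted by item similarity.
# The similarity values and item names here are made up for illustration.

def infer_preferences(seed_ratings, similarity):
    """seed_ratings: {seed_item: rating}; similarity: {item: {seed_item: sim in [0,1]}}.
    Returns a similarity-weighted average of the seed ratings per item."""
    scores = {}
    for item, sims in similarity.items():
        num = sum(sims[s] * r for s, r in seed_ratings.items())
        den = sum(sims[s] for s in seed_ratings) or 1.0
        scores[item] = num / den
    return scores
```

With this toy scoring, an item similar only to a highly rated seed inherits that high rating, which is the intuition behind choosing an informative seed itemset.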
no code implementations • 23 Oct 2023 • Yu Zhang, Yanzhen Shen, SeongKu Kang, Xiusi Chen, Bowen Jin, Jiawei Han
To address this issue, we propose a unified model for paper-reviewer matching that jointly considers semantic, topic, and citation factors.
1 code implementation • 5 Sep 2023 • Youngjune Lee, Yeongjong Jeong, Keunchan Park, SeongKu Kang
Feature selection, which is a technique to select key features in recommender systems, has received increasing research attention.
1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
1 code implementation • 27 Feb 2023 • Su Kim, Dongha Lee, SeongKu Kang, Seonghyeon Lee, Hwanjo Yu
In this paper, motivated by this observation, we propose TopExpert, which leverages topology-specific prediction models (referred to as experts), each of which is responsible for a group of molecules sharing similar topological semantics.
1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.
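The multi-branch construction described above can be sketched with a minimal toy example (hypothetical code in the spirit of ConCF, not the authors' implementation): a shared encoder feeds auxiliary heads, and each head is scored with a different objective, here a pointwise squared error and a pairwise BPR-style loss.

```python
# Toy multi-branch model: one shared encoder, heterogeneous per-head objectives.
# The encoder, head weights, and inputs below are invented for illustration.
import math

def shared_encoder(x, w):
    """Toy shared representation: scale each input feature by a shared weight."""
    return [w * xi for xi in x]

def pointwise_loss(score, label):
    """Pointwise objective: squared error against a binary relevance label."""
    return (score - label) ** 2

def pairwise_loss(pos_score, neg_score):
    """Pairwise (BPR-style) objective: push the positive item above the negative."""
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

def multi_branch_losses(x_pos, x_neg, w, head_weights):
    """Each head shares the encoder output but is trained with its own objective."""
    h_pos = sum(shared_encoder(x_pos, w))
    h_neg = sum(shared_encoder(x_neg, w))
    return {
        "pointwise": pointwise_loss(head_weights["pointwise"] * h_pos, 1.0),
        "pairwise": pairwise_loss(head_weights["pairwise"] * h_pos,
                                  head_weights["pairwise"] * h_neg),
    }
```

In training, the per-head losses would be summed so the shared encoder learns from all objectives at once.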
no code implementations • 18 Jan 2022 • Dongha Lee, Jiaming Shen, SeongKu Kang, Susik Yoon, Jiawei Han, Hwanjo Yu
Topic taxonomies, which represent the latent topic (or category) structure of document collections, provide valuable knowledge of contents in many applications such as web search and information filtering.
1 code implementation • 9 Dec 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance.
1 code implementation • 8 Jul 2021 • Junsu Cho, SeongKu Kang, Dongmin Hyun, Hwanjo Yu
Session-based Recommender Systems (SRSs) have been actively developed to recommend the next item of an anonymous short item sequence (i.e., session).
no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.
1 code implementation • 5 Jun 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Recommender systems (RS) have started to employ knowledge distillation, a model compression technique that trains a compact model (student) with knowledge transferred from a cumbersome model (teacher).
no code implementations • 13 May 2021 • Dongha Lee, SeongKu Kang, Hyunjun Ju, Chanyoung Park, Hwanjo Yu
To make the representations of positively-related users and items similar to each other while avoiding a collapsed solution, BUIR adopts two distinct encoder networks that learn from each other; the first encoder is trained to predict the output of the second encoder as its target, while the second encoder provides the consistent targets by slowly approximating the first encoder.
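The two-encoder scheme described above can be sketched with a one-dimensional toy (hypothetical code in the spirit of BUIR, not the authors' implementation): the first (online) encoder takes gradient steps toward the second (target) encoder's output, while the target is updated only as a slow exponential moving average of the online encoder.

```python
# Toy two-encoder bootstrapping: online encoder chases the target's output,
# target slowly approximates the online encoder via EMA. Encoders are reduced
# to single scalar weights purely for illustration.

def ema_update(target_w, online_w, momentum=0.99):
    """Target encoder slowly approximates the online encoder."""
    return momentum * target_w + (1.0 - momentum) * online_w

def online_step(online_w, target_w, x, lr=0.1):
    """Gradient step on the squared error between the two encoders' outputs."""
    grad = 2.0 * (online_w * x - target_w * x) * x  # d/dw of (w*x - t*x)**2
    return online_w - lr * grad

online, target = 1.0, 0.5
for _ in range(50):
    online = online_step(online, target, x=1.0)
    target = ema_update(target, online)
```

After a few dozen iterations the two weights converge toward each other without any negative samples; in the full method, the asymmetry between the encoders (predictor plus slow-moving target) is what prevents a degenerate collapsed solution.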
1 code implementation • 29 Apr 2021 • Junsu Cho, Dongmin Hyun, SeongKu Kang, Hwanjo Yu
Existing studies regard the time information as a single type of feature and focus on how to associate it with user preferences on items.
no code implementations • 1 Jan 2021 • Hyunjun Ju, Dongha Lee, SeongKu Kang, Hwanjo Yu
Recent studies on one-class classification have achieved a remarkable performance, by employing the self-supervised classifier that predicts the geometric transformation applied to in-class images.
2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
Recent recommender systems have started to employ knowledge distillation, which is a model compression technique distilling knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance.
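A minimal, generic sketch of the teacher-student setup mentioned above (a hypothetical toy, not this paper's method): the compact student is trained to match the cumbersome teacher's temperature-softened score distribution over items, measured by KL divergence.

```python
# Toy knowledge-distillation loss: KL divergence between softened teacher
# and student score distributions. Scores and temperature are illustrative.
import math

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution; higher temperature
    produces a softer (more uniform) distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(student_scores, teacher_scores, temperature=2.0):
    """KL(teacher || student) over the softened distributions."""
    p = softmax(teacher_scores, temperature)
    q = softmax(student_scores, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student reproduces the teacher's ranking scores exactly and grows as the two distributions diverge, which is what lets a small student retain the large teacher's performance at a fraction of the inference latency.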