Search Results for author: Wonbin Kweon

Found 11 papers, 7 papers with code

Rectifying Demonstration Shortcut in In-Context Learning

1 code implementation • 14 Mar 2024 • Joonwon Jang, Sanghwan Jang, Wonbin Kweon, Minjin Jeon, Hwanjo Yu

However, LLMs often rely on their pre-trained semantic priors about the demonstrations rather than on the input-label relationships when making ICL predictions.

In-Context Learning

Deep Rating Elicitation for New Users in Collaborative Filtering

1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu

Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset to infer their preferences, to improve the quality of initial recommendations.
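
As a rough illustration of the rating-elicitation workflow described above (not the paper's specific method), one could pick a small seed itemset, collect a new user's ratings on it, and infer a preference vector against pretrained item embeddings; the norm-based seed selection and ridge-regression inference below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained item embeddings from an existing CF model (toy random stand-in).
n_items, dim = 500, 16
item_emb = rng.normal(size=(n_items, dim))

# Hypothetical seed selection: items with the largest embedding norm as a
# crude proxy for "informative" items (the paper learns the seed set instead).
seed_size = 10
seed_items = np.argsort(-np.linalg.norm(item_emb, axis=1))[:seed_size]

# A new user rates the seed items (toy ratings in [1, 5]).
seed_ratings = rng.integers(1, 6, size=seed_size).astype(float)

# Infer the user's preference vector by ridge regression on the seed ratings.
A = item_emb[seed_items]
reg = 1.0
user_vec = np.linalg.solve(A.T @ A + reg * np.eye(dim), A.T @ seed_ratings)

# Initial recommendations: unrated items with the highest predicted scores.
scores = item_emb @ user_vec
scores[seed_items] = -np.inf
print("initial recommendations:", np.argsort(-scores)[:10])
```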

Collaborative Filtering • Recommendation Systems

Confidence Calibration for Recommender Systems and Its Applications

no code implementations • 26 Feb 2024 • Wonbin Kweon

Despite the importance of having a measure of confidence in recommendation results, confidence has been surprisingly overlooked in the literature compared to recommendation accuracy.

Recommendation Systems

Top-Personalized-K Recommendation

no code implementations • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu

To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a ranking list of personalized size to maximize individual user satisfaction.
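
The task statement above can be made concrete with a small sketch: for each user, truncate the ranked list at the cutoff K that maximizes an estimated utility. The calibrated relevance probabilities and the expected-F1 utility below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def personalized_k(relevance_probs, k_max=20):
    """Pick the list size K that maximizes a simple expected-utility proxy.

    `relevance_probs` are calibrated probabilities that each ranked item is
    relevant, sorted by rank. The expected-F1 proxy is an assumption.
    """
    probs = np.asarray(relevance_probs)[:k_max]
    expected_total_rel = probs.sum()  # expected number of relevant items (proxy)
    best_k, best_util = 1, -1.0
    for k in range(1, len(probs) + 1):
        exp_hits = probs[:k].sum()            # expected relevant items in top-k
        precision = exp_hits / k
        recall = exp_hits / max(expected_total_rel, 1e-8)
        f1 = 2 * precision * recall / max(precision + recall, 1e-8)
        if f1 > best_util:
            best_k, best_util = k, f1
    return best_k

rng = np.random.default_rng(0)
for user in range(3):
    probs = np.sort(rng.uniform(size=20))[::-1]  # toy calibrated, rank-sorted probabilities
    print(f"user {user}: personalized K = {personalized_k(probs)}")
```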

Doubly Calibrated Estimator for Recommendation on Data Missing Not At Random

no code implementations • 26 Feb 2024 • Wonbin Kweon, Hwanjo Yu

On this basis, we propose a Doubly Calibrated Estimator that involves the calibration of both the imputation and propensity models.
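
For context, a standard doubly robust estimator for missing-not-at-random feedback combines an imputation model and a propensity model; the paper's contribution is calibrating both components. The sketch below shows only the generic doubly robust estimator these two models plug into; the toy miscalibrated models are assumptions.

```python
import numpy as np

def doubly_robust_error(observed, true_err, imputed_err, propensity):
    """Generic doubly robust estimate of the average prediction error.

    observed:     binary mask O_ui (1 if the rating/click was observed)
    true_err:     error e_ui, only trusted where observed == 1
    imputed_err:  imputation model's estimate for every (u, i)
    propensity:   propensity model's estimate of P(O_ui = 1)
    """
    correction = observed * (true_err - imputed_err) / np.clip(propensity, 1e-3, 1.0)
    return np.mean(imputed_err + correction)

rng = np.random.default_rng(0)
n = 10_000
true_err = rng.uniform(size=n)
propensity_true = rng.uniform(0.05, 0.9, size=n)
observed = (rng.uniform(size=n) < propensity_true).astype(float)

# Miscalibrated components (assumptions for illustration): biased imputation,
# over-confident propensity estimates.
imputed_err = true_err * 0.5 + 0.2
propensity_hat = propensity_true ** 0.5

naive = np.mean(observed * true_err) / np.mean(observed)   # observed-only average
dr = doubly_robust_error(observed, true_err, imputed_err, propensity_hat)
print(f"ground truth {true_err.mean():.3f} | naive {naive:.3f} | DR {dr:.3f}")
```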

Imputation • Recommendation Systems +1

Distillation from Heterogeneous Models for Top-K Recommendation

1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu

Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
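
As a rough sketch of this setup (not the paper's specific distillation objective), one can average the heterogeneous teachers' softened item-score distributions and train the student to match them with a KL-divergence soft-target loss; the uniform teacher weighting and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_kd_loss(teacher_scores_list, student_scores, temperature=2.0):
    """Distill an ensemble of heterogeneous teachers into one student.

    teacher_scores_list: list of [batch, n_items] score tensors, one per teacher
    student_scores:      [batch, n_items] scores from the lightweight student
    Uniform averaging of teacher soft targets is an illustrative assumption.
    """
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(s / temperature, dim=-1) for s in teacher_scores_list]
        ).mean(dim=0)
    student_log_probs = F.log_softmax(student_scores / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

# Toy usage: three heterogeneous teachers, one small student.
batch, n_items = 4, 100
teachers = [torch.randn(batch, n_items) for _ in range(3)]
student = torch.randn(batch, n_items, requires_grad=True)
loss = ensemble_kd_loss(teachers, student)
loss.backward()
print(float(loss))
```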

Knowledge Distillation • Recommendation Systems +1

Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering

1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu

ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.
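
The sentence above can be illustrated with a minimal multi-branch model: a shared backbone plus auxiliary heads, each trained under a different objective (here pointwise BCE and pairwise BPR, chosen as assumed examples); the consensus-learning part of ConCF is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchCF(nn.Module):
    """Shared user/item embeddings with auxiliary heads for heterogeneous objectives."""
    def __init__(self, n_users, n_items, dim=32, n_heads=2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Each auxiliary head projects the shared representation into its own score space.
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_heads)])

    def scores(self, users, items, head):
        u = self.heads[head](self.user_emb(users))
        i = self.item_emb(items)
        return (u * i).sum(-1)

model = MultiBranchCF(n_users=100, n_items=500)
users = torch.randint(0, 100, (8,))
pos = torch.randint(0, 500, (8,))
neg = torch.randint(0, 500, (8,))

# Head 0 trains with pointwise BCE; head 1 with pairwise BPR (heterogeneous objectives).
bce = F.binary_cross_entropy_with_logits(
    model.scores(users, pos, head=0), torch.ones(8)
) + F.binary_cross_entropy_with_logits(
    model.scores(users, neg, head=0), torch.zeros(8)
)
bpr = -F.logsigmoid(model.scores(users, pos, head=1) - model.scores(users, neg, head=1)).mean()
(bce + bpr).backward()
print(float(bce), float(bpr))
```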

Collaborative Filtering

Obtaining Calibrated Probabilities with Personalized Ranking Models

1 code implementation • 9 Dec 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu

Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance.

Image Classification

Topology Distillation for Recommender System

no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu

To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.
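
As a simplified illustration of topology distillation (not HTD's hierarchical grouping), one can push the student's pairwise cosine-similarity structure over a batch of entities toward the teacher's; the MSE matching of full similarity matrices below is an assumption.

```python
import torch
import torch.nn.functional as F

def topology_distillation_loss(teacher_emb, student_emb):
    """Match the student's pairwise similarity structure to the teacher's.

    teacher_emb: [batch, d_t] embeddings from the large teacher
    student_emb: [batch, d_s] embeddings from the compact student
    Full pairwise cosine matching is a simplification of HTD's hierarchical scheme.
    """
    t = F.normalize(teacher_emb, dim=-1)
    s = F.normalize(student_emb, dim=-1)
    return F.mse_loss(s @ s.T, t @ t.T)

teacher = torch.randn(64, 128)                      # frozen teacher embeddings
student = torch.randn(64, 16, requires_grad=True)   # much smaller student
loss = topology_distillation_loss(teacher, student)
loss.backward()
print(float(loss))
```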

Knowledge Distillation • Model Compression +1

Bidirectional Distillation for Top-K Recommender System

1 code implementation • 5 Jun 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu

Recommender systems (RS) have started to employ knowledge distillation, a model compression technique that trains a compact model (student) with knowledge transferred from a cumbersome model (teacher).
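
Given the generic KD definition above, the bidirectional idea in the title can be sketched as two models that each distill from the other's softened predictions; the symmetric KL terms and the reverse-direction weight below are assumptions, not the paper's exact ranking-based scheme.

```python
import torch
import torch.nn.functional as F

def soft_kd(target_logits, source_logits, temperature=2.0):
    """KL divergence on temperature-softened item-score distributions."""
    with torch.no_grad():
        target = F.softmax(target_logits / temperature, dim=-1)
    log_source = F.log_softmax(source_logits / temperature, dim=-1)
    return F.kl_div(log_source, target, reduction="batchmean") * temperature**2

# Toy scores from a large teacher and a compact student over the item catalog.
teacher_scores = torch.randn(4, 100, requires_grad=True)
student_scores = torch.randn(4, 100, requires_grad=True)

# Bidirectional: the student learns from the teacher AND the teacher learns from
# the student, with a smaller weight on the reverse direction (weight is an assumption).
loss = soft_kd(teacher_scores, student_scores) + 0.1 * soft_kd(student_scores, teacher_scores)
loss.backward()
print(float(loss))
```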

Knowledge Distillation • Model Compression +1

DE-RRD: A Knowledge Distillation Framework for Recommender System

2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu

Recent recommender systems have started to employ knowledge distillation, a model compression technique that distills knowledge from a cumbersome model (teacher) into a compact model (student), to reduce inference latency while maintaining performance.

Knowledge Distillation • Model Compression +1
