Search Results for author: Junyoung Hwang

Found 7 papers, 3 papers with code

Towards 3D Acceleration for low-power Mixture-of-Experts and Multi-Head Attention Spiking Transformers

no code implementations • 7 Dec 2024 • Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Yuxuan Yin, Sung Kyu Lim, Peng Li

Spiking Neural Networks (SNNs) provide a brain-inspired and event-driven mechanism that is believed to be critical to unlocking energy-efficient deep learning.

Mixture-of-Experts

Spiking Transformer Hardware Accelerators in 3D Integration

no code implementations • 11 Nov 2024 • Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Sung Kyu Lim, Peng Li

Spiking neural networks (SNNs) are powerful models of spatiotemporal computation and are well suited for deployment on resource-constrained edge devices and neuromorphic hardware due to their low power consumption.

Multi-Domain Recommendation to Attract Users via Domain Preference Modeling

no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu

Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains.
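A minimal sketch of the general idea (not the paper's method): a user vector is aggregated from seen-domain interactions and used to score items in an unseen domain. All names and the aggregation scheme here are illustrative assumptions.

```python
# Sketch only: recommend "unseen"-domain items from a user representation
# built on "seen"-domain interactions. Embeddings are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Item embeddings per domain (e.g., produced by any base recommender).
domains = {
    "music": rng.normal(size=(100, dim)),   # seen domain
    "books": rng.normal(size=(80, dim)),    # seen domain
    "games": rng.normal(size=(120, dim)),   # unseen domain for this user
}
seen = {"music": [3, 17, 42], "books": [5, 9]}   # user's interaction history

# Aggregate seen-domain item embeddings into a single user preference vector.
user_vec = np.mean(
    [domains[d][ids].mean(axis=0) for d, ids in seen.items()], axis=0
)

# Score every item in the unseen domain and recommend the top-k.
scores = domains["games"] @ user_vec
top_k = np.argsort(-scores)[:10]
print("recommended unseen-domain items:", top_k)
```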

Deep Rating Elicitation for New Users in Collaborative Filtering

1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu

Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset to infer their preferences, to improve the quality of initial recommendations.

Collaborative Filtering • Recommendation Systems
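For intuition, a minimal sketch of rating elicitation, assuming a randomly chosen seed itemset and a simple decoder network (the paper learns the seed selection; everything here is illustrative):

```python
# Sketch only: new users rate a small seed itemset, and a network maps those
# seed ratings to predicted preferences over all items.
import torch
import torch.nn as nn

n_users, n_items, n_seed = 500, 1000, 20
ratings = torch.rand(n_users, n_items)            # toy dense rating matrix
seed_items = torch.randperm(n_items)[:n_seed]     # assumed (random) seed itemset

decoder = nn.Sequential(                          # seed ratings -> all items
    nn.Linear(n_seed, 128), nn.ReLU(), nn.Linear(128, n_items)
)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for _ in range(100):                              # fit on existing users
    pred = decoder(ratings[:, seed_items])
    loss = nn.functional.mse_loss(pred, ratings)
    opt.zero_grad(); loss.backward(); opt.step()

# A new user answers only the seed items; we infer their full preferences.
new_user_seed = torch.rand(1, n_seed)
full_preferences = decoder(new_user_seed)
```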

Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering

1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu

ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.

Collaborative Filtering
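The multi-branch idea can be sketched as follows (not the ConCF code): a shared embedding base gets auxiliary heads, each trained with a different objective on the same interactions; the consensus step that aggregates head predictions into a shared target is omitted here.

```python
# Sketch only: shared user/item embeddings with two auxiliary heads trained
# under heterogeneous objectives (pointwise BCE and pairwise BPR-style).
import torch
import torch.nn as nn
import torch.nn.functional as F

n_users, n_items, dim = 200, 300, 32
user_emb = nn.Embedding(n_users, dim)
item_emb = nn.Embedding(n_items, dim)
heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])  # auxiliary heads

def scores(head, u, i):
    return (head(user_emb(u)) * item_emb(i)).sum(-1)

params = list(user_emb.parameters()) + list(item_emb.parameters()) + list(heads.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

u = torch.randint(0, n_users, (64,))
pos = torch.randint(0, n_items, (64,))
neg = torch.randint(0, n_items, (64,))

# Head 0: pointwise binary cross-entropy; head 1: pairwise BPR-style loss.
loss_pointwise = F.binary_cross_entropy_with_logits(
    scores(heads[0], u, pos), torch.ones(64)
) + F.binary_cross_entropy_with_logits(scores(heads[0], u, neg), torch.zeros(64))
loss_pairwise = -F.logsigmoid(scores(heads[1], u, pos) - scores(heads[1], u, neg)).mean()

(loss_pointwise + loss_pairwise).backward()
opt.step()
```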

Topology Distillation for Recommender System

no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu

To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.

Knowledge Distillation • Model Compression +1
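A minimal sketch of topology distillation in general (not HTD itself): the student is trained so that pairwise relations among its entity embeddings match those of the larger teacher. HTD additionally groups entities and distills group-level and within-group topology separately, which this sketch omits.

```python
# Sketch only: match the pairwise cosine-similarity "topology" of a small
# student embedding space to that of a frozen, larger teacher.
import torch
import torch.nn.functional as F

teacher_dim, student_dim, n = 128, 16, 256
teacher_emb = torch.randn(n, teacher_dim)                 # frozen teacher embeddings
student_emb = torch.randn(n, student_dim, requires_grad=True)

def topology(embeddings):
    z = F.normalize(embeddings, dim=-1)
    return z @ z.t()                                      # pairwise cosine similarity

opt = torch.optim.Adam([student_emb], lr=1e-2)
for _ in range(50):
    loss = F.mse_loss(topology(student_emb), topology(teacher_emb))
    opt.zero_grad(); loss.backward(); opt.step()
```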

DE-RRD: A Knowledge Distillation Framework for Recommender System

2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu

Recent recommender systems have started to employ knowledge distillation, which is a model compression technique distilling knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance.

Knowledge Distillation • Model Compression +1
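As background, a minimal sketch of standard knowledge distillation applied to a recommender-style scorer (not the DE-RRD framework): a compact student is trained on interaction labels plus a soft-target term that mimics a frozen teacher's item-score distribution. Model definitions and hyperparameters are placeholders.

```python
# Sketch only: combine a hard label loss with a temperature-scaled KL term
# that distills the teacher's item-score distribution into the student.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, dim_t, dim_s, batch = 500, 256, 32, 64
teacher = nn.Linear(dim_t, n_items)        # stand-in for a trained teacher
student = nn.Linear(dim_s, n_items)        # compact student
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

user_feat_t = torch.randn(batch, dim_t)    # toy user representations
user_feat_s = torch.randn(batch, dim_s)
labels = torch.randint(0, n_items, (batch,))
temperature, alpha = 2.0, 0.5

with torch.no_grad():
    teacher_logits = teacher(user_feat_t)
student_logits = student(user_feat_s)

hard_loss = F.cross_entropy(student_logits, labels)
soft_loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2
loss = alpha * hard_loss + (1 - alpha) * soft_loss
loss.backward(); opt.step()
```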
