no code implementations • 7 Dec 2024 • Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Yuxuan Yin, Sung Kyu Lim, Peng Li
Spiking Neural Networks (SNNs) provide a brain-inspired, event-driven mechanism that is believed to be key to unlocking energy-efficient deep learning.
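For background on what "event-driven" means here, a minimal sketch of the leaky integrate-and-fire (LIF) dynamics commonly used in SNNs: a neuron integrates input current into a membrane potential and emits a binary spike only when a threshold is crossed. This is generic LIF behavior, not the specific architecture of this paper; all parameter values are illustrative.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over T timesteps.

    inputs: array of shape (T,) with input current per timestep.
    Returns a binary spike train of shape (T,).
    """
    v = v_reset                      # membrane potential
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        # Leaky integration: decay toward the reset value, then add input.
        v = v + (x - (v - v_reset)) / tau
        if v >= v_threshold:         # event-driven: output only on threshold crossing
            spikes[t] = 1.0
            v = v_reset              # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 1.5, size=20)   # toy input current
print(lif_forward(current))                # sparse 0/1 spike train
```

Because activity is a sparse binary spike train rather than dense activations, downstream computation can skip zero entries, which is the source of the energy-efficiency claim.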
no code implementations • 11 Nov 2024 • Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Sung Kyu Lim, Peng Li
Spiking neural networks (SNNs) are powerful models of spatiotemporal computation and are well suited for deployment on resource-constrained edge devices and neuromorphic hardware due to their low power consumption.
no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu
Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not yet interacted, using knowledge from the user's "seen" domains.
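To make the task setup concrete, a toy sketch of the MDRAU formulation under made-up data: each user has interaction histories in some domains ("seen") and none in others ("unseen"), and the system must pick unseen domains and items to recommend. The popularity heuristic below is a stand-in for illustration only, not the paper's preference-modeling method.

```python
# Toy MDRAU setup: data, domain names, and the selection rule are illustrative.
from collections import Counter

interactions = {            # user -> {seen domain -> interacted item ids}
    "u1": {"music": ["m1", "m2"], "video": ["v3"]},
    "u2": {"shopping": ["s1"]},
}
all_domains = {"music", "video", "shopping", "webtoon"}
catalog = {"music": ["m1", "m2", "m9"], "video": ["v3", "v7"],
           "shopping": ["s1", "s5"], "webtoon": ["w2", "w4"]}

# Global domain popularity, used here as a naive cross-domain signal.
domain_popularity = Counter(
    d for hist in interactions.values() for d, items in hist.items() for _ in items
)

def recommend_unseen(user, k_domains=2):
    seen = set(interactions[user])
    unseen = all_domains - seen
    # Rank unseen domains by popularity; a real MDRAU model would instead
    # transfer the user's preferences from seen domains to unseen ones.
    ranked = sorted(unseen, key=lambda d: -domain_popularity[d])[:k_domains]
    return {d: catalog[d][:2] for d in ranked}

print(recommend_unseen("u1"))   # recommends from domains u1 has never touched
```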
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu
Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset in order to infer their preferences and improve the quality of initial recommendations.
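As a sketch of the rating-elicitation idea: a new user rates a small seed itemset, and those ratings are propagated to the rest of the catalog via item-item similarity. The seed-selection rule (most-rated items) and the similarity-based inference below are simple illustrative stand-ins, not the method studied in the paper.

```python
import numpy as np

# Toy ratings: rows = existing users, cols = items (0 = unrated).
R = np.array([[5, 4, 2, 1, 0],
              [4, 5, 1, 0, 1],
              [0, 1, 5, 4, 5],
              [1, 0, 4, 5, 4]], dtype=float)

# Pick the seed itemset as the most-rated items (a simple, illustrative rule).
seed = np.argsort(-(R > 0).sum(axis=0))[:2]

# Item-item cosine similarity, used to propagate the seed ratings.
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-8
S = (R / norms).T @ (R / norms)

def infer_preferences(seed_ratings):
    """Predict scores for all items from the new user's seed ratings."""
    scores = S[:, seed] @ seed_ratings      # similarity-weighted sum
    scores[seed] = seed_ratings             # keep observed ratings as-is
    return scores

new_user = np.array([5.0, 1.0])             # ratings on the two seed items
print(seed, infer_preferences(new_user))
```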
1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with a different objective.
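A minimal PyTorch sketch of that multi-branch construction, with made-up dimensions: a shared embedding backbone feeds several auxiliary heads, and each head is paired with its own objective. The two objectives shown (pointwise BCE and a pairwise BPR-style loss) are illustrative examples; ConCF's actual objectives and its consensus mechanism differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchCF(nn.Module):
    """Shared user/item embeddings with one auxiliary head per objective."""
    def __init__(self, n_users, n_items, dim=32, n_heads=2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # shared backbone
        self.item_emb = nn.Embedding(n_items, dim)
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_heads))

    def score(self, head_idx, users, items):
        u = self.heads[head_idx](self.user_emb(users))
        return (u * self.item_emb(items)).sum(-1)    # dot-product score

model = MultiBranchCF(n_users=100, n_items=500)
users = torch.randint(0, 100, (8,))
pos, neg = torch.randint(0, 500, (8,)), torch.randint(0, 500, (8,))

# Head 0: pointwise binary cross-entropy on positive/negative items.
bce = F.binary_cross_entropy_with_logits(
    torch.cat([model.score(0, users, pos), model.score(0, users, neg)]),
    torch.cat([torch.ones(8), torch.zeros(8)]))

# Head 1: pairwise BPR loss, ranking positives above negatives.
bpr = -F.logsigmoid(model.score(1, users, pos) - model.score(1, users, neg)).mean()

loss = bce + bpr   # heads trained jointly on heterogeneous objectives
loss.backward()
```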
no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD), which distills the topology hierarchically to cope with the large capacity gap.
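A hedged sketch of topology distillation in PyTorch: the student is trained to match the teacher's pairwise similarity structure over entity embeddings rather than the embeddings themselves. HTD's hierarchical grouping is more elaborate; a single flat similarity-matching loss stands in for the idea here, with all dimensions illustrative.

```python
import torch
import torch.nn.functional as F

def topology_loss(teacher_emb, student_emb):
    """Match pairwise cosine-similarity matrices (the "topology")
    between the teacher and student embedding spaces."""
    t = F.normalize(teacher_emb, dim=-1)
    s = F.normalize(student_emb, dim=-1)
    return F.mse_loss(s @ s.T, t @ t.T)   # compare relational structure

# Teacher embeddings are much wider than the student's (the capacity gap).
teacher = torch.randn(64, 128)                       # frozen teacher embeddings
student = torch.randn(64, 16, requires_grad=True)    # compact student embeddings

loss = topology_loss(teacher, student)
loss.backward()
print(loss.item())
```

Matching similarity matrices sidesteps the dimensionality mismatch: both sides reduce to the same 64-by-64 relation structure regardless of embedding width.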
2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
Recent recommender systems have started to employ knowledge distillation, a model compression technique that transfers knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance.
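For context, a minimal sketch of soft-label distillation from a teacher recommender to a student: the student is trained so its item scores reproduce the teacher's softened score distribution. This is the generic knowledge-distillation recipe, shown with made-up dimensions; the paper's specific distillation targets and weighting differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student item-score
    distributions (standard soft-label KD, shown as a generic example)."""
    t = F.softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy scores over 1000 items for a batch of 4 users; dimensions illustrative.
teacher_scores = torch.randn(4, 1000)                       # cumbersome teacher
student_scores = torch.randn(4, 1000, requires_grad=True)   # compact student

loss = distillation_loss(student_scores, teacher_scores)
loss.backward()
```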