1 code implementation • 11 Apr 2022 • Kyushik Min, Hyunho Lee, Kwansu Shin, Taehak Lee, Hojoon Lee, Jinwon Choi, Sungho Son
Recently, Reinforcement Learning (RL) has been actively researched in both academia and industry.
1 code implementation • 27 Apr 2022 • Hojoon Lee, Dongyoon Hwang, Hyunseung Kim, Byungkun Lee, Jaegul Choo
To alleviate this problem, we propose DraftRec, a novel hierarchical model that recommends characters by considering each player's champion preferences and the interactions between players.
1 code implementation • 12 Oct 2022 • Junwoo Park, Youngwoo Cho, Gyuhyeon Sim, Hojoon Lee, Jaegul Choo
By exploiting the advantages of the game environment, we construct a gunshot dataset, namely BGG, for the firearm classification and gunshot localization tasks.
1 code implementation • 9 Jun 2023 • Hojoon Lee, Koanho Lee, Dongyoon Hwang, Hyunho Lee, Byungkun Lee, Jaegul Choo
To address this issue, we propose a novel URL framework that causally predicts future states while increasing the dimension of the latent manifold by decorrelating the features in the latent space.
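The decorrelation idea above can be sketched with a Barlow-Twins-style penalty on the off-diagonal of the feature correlation matrix; this is a minimal illustrative sketch of latent-feature decorrelation in general, not the authors' exact formulation or loss.

```python
import numpy as np

def decorrelation_loss(z: np.ndarray) -> float:
    """Penalize off-diagonal entries of the latent correlation matrix.

    z: (batch, dim) latent features. Each dimension is standardized,
    the (dim, dim) correlation matrix is computed, and the squared
    off-diagonal terms are summed. A value near 0 means the latent
    dimensions are pairwise decorrelated, i.e. the representation
    spreads over more independent directions of the latent space.
    """
    z = (z - z.mean(axis=0)) / (z.std(axis=0) + 1e-8)
    n = z.shape[0]
    corr = (z.T @ z) / n                      # correlation matrix
    off_diag = corr - np.diag(np.diag(corr))  # zero out the diagonal
    return float((off_diag ** 2).sum())

rng = np.random.default_rng(0)
independent = rng.standard_normal((1024, 8))           # near-zero loss
duplicated = np.repeat(rng.standard_normal((1024, 1)),  # collapsed features,
                       8, axis=1)                       # large loss
```

Minimizing such a loss during representation learning discourages redundant (collapsed) features, which is one way to increase the effective dimensionality of the learned manifold.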
1 code implementation • 21 Aug 2023 • Hojoon Lee, Hawon Jeong, Byungkun Lee, Kyungyup Lee, Jaegul Choo
In this paper, we introduce ST-RAP, a novel Spatio-Temporal framework for Real estate APpraisal.
no code implementations • 1 Jan 2021 • Junsoo Lee, Hojoon Lee, Inkyu Shin, Jaekyoung Bae, In So Kweon, Jaegul Choo
Learning visual representations using large-scale unlabelled images is a holy grail for most computer vision tasks.
no code implementations • 17 Aug 2021 • Hojoon Lee, Dongyoon Hwang, Sunghwan Hong, Changyeon Kim, Seungryong Kim, Jaegul Choo
Successful sequential recommendation systems rely on accurately capturing the user's short-term and long-term interests.
no code implementations • 5 Nov 2021 • Kha Dinh Duy, Taehyun Noh, Siwon Huh, Hojoon Lee
Hence, researchers have leveraged Trusted Execution Environments (TEEs) to build confidential ML computation systems.
no code implementations • 22 Aug 2023 • Hojoon Lee, Dongyoon Hwang, Kyushik Min, Jaegul Choo
In this work, we revisited experiments on interactive recommender systems (IRS) with review datasets and compared RL-based models with a simple reward model that greedily recommends the item with the highest one-step reward.
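The greedy baseline described here can be sketched as follows; the `reward_model` callable and item names are hypothetical placeholders for illustration, not the paper's actual interface.

```python
def greedy_recommend(reward_model, user_id, candidate_items):
    """Recommend the single item with the highest predicted one-step reward.

    `reward_model(user_id, item)` is assumed to return the predicted
    immediate reward for recommending `item` to `user_id`. No multi-step
    planning or long-horizon value estimation is involved, in contrast
    to RL-based recommenders.
    """
    scores = {item: reward_model(user_id, item) for item in candidate_items}
    return max(scores, key=scores.get)

# Toy reward model with fixed per-item scores, purely for illustration.
toy_scores = {"A": 0.2, "B": 0.9, "C": 0.5}
best = greedy_recommend(lambda user, item: toy_scores[item],
                        user_id=0, candidate_items=["A", "B", "C"])
```

Despite its simplicity, such a one-step baseline is the natural point of comparison for RL-based recommenders, since any benefit of RL must come from reasoning beyond the immediate reward.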