1 code implementation • 10 Jul 2022 • Jeong Choi, Seongwon Jang, Hyunsouk Cho, Sehee Chung
The common research goal of self-supervised learning is to extract a general representation from which arbitrary downstream tasks would benefit.
1 code implementation • ACL 2021 • Jinbae Im, Moonki Kim, Hoyeop Lee, Hyunsouk Cho, Sehee Chung
To use the abundant information contained in non-text data, we propose a self-supervised multimodal opinion summarization framework called MultimodalSum.
no code implementations • 23 May 2021 • Seongwon Jang, Hoyeop Lee, Hyunsouk Cho, Sehee Chung
To eliminate this issue, we propose CITIES, a framework that enhances the quality of tail-item embeddings by training an embedding-inference function on multiple contextual head items, so that recommendation performance improves not only for the tail items but also for the head items.
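As a rough illustration of the idea (a hypothetical simplification, not the trained model from the paper), an embedding-inference function can pool the embeddings of contextual head items and map the pooled vector to an inferred tail-item embedding; here the linear map stands in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Stand-in for learned parameters; in CITIES this map is trained.
W = rng.normal(size=(dim, dim)) / np.sqrt(dim)

def infer_tail_embedding(context_head_embeddings):
    """Mean-pool contextual head-item embeddings, then apply a
    (here random, in practice learned) linear map to produce an
    inferred embedding for a tail item."""
    pooled = np.mean(context_head_embeddings, axis=0)
    return pooled @ W

contexts = rng.normal(size=(5, dim))   # 5 contextual head-item embeddings
tail_emb = infer_tail_embedding(contexts)
print(tail_emb.shape)
```

The sketch only fixes the interface (context embeddings in, inferred embedding out); the pooling and mapping choices are assumptions for illustration.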
no code implementations • 11 Feb 2021 • Hoyeop Lee, Jinbae Im, Chang Ouk Kim, Sehee Chung
The predominant sequential recommendation models are based on natural language processing models, such as the gated recurrent unit (GRU), which embed items in some defined space and capture the user's long-term and short-term preferences from the item embeddings.
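The GRU-based setup described above can be sketched as follows (an assumed minimal version with untrained, random parameters, not the paper's model): embed each item in the user's history, run a GRU cell over the sequence, and score candidate next items against the final hidden state.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, dim = 50, 8
item_emb = rng.normal(size=(n_items, dim))  # item embedding table

# Random (untrained) GRU parameters, for illustration only.
Wz, Uz = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
Wr, Ur = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
Wh, Uh = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x):
    z = sigmoid(x @ Wz + h @ Uz)            # update gate
    r = sigmoid(x @ Wr + h @ Ur)            # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

def score_next_items(history):
    """Run the GRU over the item history, then score every candidate
    item by its dot product with the final hidden state."""
    h = np.zeros(dim)
    for item in history:
        h = gru_step(h, item_emb[item])
    return item_emb @ h

scores = score_next_items([3, 17, 42])
print(scores.shape)
```

The item indices and dimensions here are arbitrary; the point is the sequence-to-preference flow that the excerpt describes.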
1 code implementation • 31 Jul 2019 • Hoyeop Lee, Jinbae Im, Seongwon Jang, Hyunsouk Cho, Sehee Chung
This paper proposes a recommender system that alleviates the cold-start problem by estimating user preferences from only a small number of items.
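As a hedged sketch of the cold-start setting (an assumed baseline for illustration, not the paper's estimation method): with only a few rated items, a user vector can be formed as a rating-weighted average of those item embeddings and then used to score unseen items.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, dim = 20, 4
item_emb = rng.normal(size=(n_items, dim))  # toy item embedding table

def estimate_user(rated_items, ratings):
    """Rating-weighted average of the few observed item embeddings —
    a naive stand-in for a learned preference estimator."""
    weights = np.asarray(ratings, dtype=float)
    weights = weights / weights.sum()
    return weights @ item_emb[rated_items]

def recommend(user_vec, k=3):
    scores = item_emb @ user_vec
    return np.argsort(scores)[::-1][:k]     # indices of top-k items

u = estimate_user([1, 4, 7], ratings=[5, 3, 4])
top_k = recommend(u)
print(len(top_k))
```

Only the interface (a handful of rated items in, a ranked list out) is taken from the excerpt; the weighted-average estimator is an assumption.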