1 code implementation • 2 May 2024 • Minjin Choi, Hye-Young Kim, Hyunsouk Cho, Jongwuk Lee
Session-based recommendation (SBR) aims to predict the next item a user will interact with during an ongoing session.
1 code implementation • 6 Nov 2023 • Sunkyung Lee, Minjin Choi, Jongwuk Lee
For training, GLEN effectively exploits a dynamic lexical identifier using a two-phase index learning strategy, enabling it to learn meaningful lexical identifiers and relevance signals between queries and documents.
1 code implementation • 22 May 2023 • Hye-Young Kim, Minjin Choi, Sunkyung Lee, Eunseong Choi, Young-In Song, Jongwuk Lee
One extracts core terms from an original query at the term level, and the other determines whether a sub-query is a suitable reduction for the original query at the sequence level.
2 code implementations • 13 Sep 2022 • Eunseong Choi, Sunkyung Lee, Minjin Choi, Hyeseon Ko, Young-In Song, Jongwuk Lee
Sparse document representations have been widely used to retrieve relevant documents via exact lexical matching.
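Exact lexical matching over sparse representations is conventionally served by an inverted index. A minimal sketch of that idea (a toy corpus and a term-overlap scorer, not the paper's model):

```python
from collections import defaultdict

# Toy corpus: doc id -> text (illustrative only).
docs = {
    0: "sparse document representations for retrieval",
    1: "dense neural retrieval with embeddings",
    2: "exact lexical matching over sparse representations",
}

# Build an inverted index: term -> set of doc ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def retrieve(query):
    """Score documents by the number of query terms they match exactly."""
    scores = defaultdict(int)
    for term in query.split():
        for doc_id in index.get(term, set()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(retrieve("sparse lexical matching"))  # doc 2 matches all three terms
```

Real sparse retrievers replace the binary term counts above with learned term weights, but the index structure and exact-match lookup stay the same.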
1 code implementation • 4 Jan 2022 • Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, Jongwuk Lee
Session-based recommendation (SR) predicts the next items from a sequence of previous items consumed by an anonymous user.
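The next-item prediction setup can be illustrated with a first-order Markov baseline that ranks candidates by transition counts from the session's last item (a hedged sketch of the task, not this paper's method):

```python
from collections import Counter, defaultdict

# Toy anonymous sessions: each is an ordered list of item ids.
sessions = [
    ["a", "b", "c"],
    ["a", "b", "d"],
    ["b", "c", "d"],
]

# Count first-order transitions: item -> Counter of next items.
transitions = defaultdict(Counter)
for session in sessions:
    for prev_item, next_item in zip(session, session[1:]):
        transitions[prev_item][next_item] += 1

def predict_next(session, k=2):
    """Rank candidate next items by transition counts from the last item."""
    last = session[-1]
    return [item for item, _ in transitions[last].most_common(k)]

print(predict_next(["a", "b"]))  # "b" -> "c" occurs twice, "b" -> "d" once
```

Neural SR models replace these raw counts with learned representations of the whole item sequence, but the prediction target is the same next-item ranking.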
1 code implementation • NAACL 2021 • Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon Lee, Jongwuk Lee
Automated metaphor detection is a challenging task to identify metaphorical expressions of words in a sentence.
2 code implementations • 30 Mar 2021 • Minjin Choi, Yoonki Jeong, Joonseok Lee, Jongwuk Lee
Top-N recommendation is a challenging problem because complex and sparse user-item interactions should be adequately addressed to achieve high-quality recommendation results.
3 code implementations • 30 Mar 2021 • Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, Jongwuk Lee
Session-based recommendation aims at predicting the next item given a sequence of previous items consumed in the session, e.g., on e-commerce or multimedia streaming services.
no code implementations • 13 Nov 2019 • Jae-woong Lee, Minjin Choi, Jongwuk Lee, Hyunjung Shim
Knowledge distillation (KD) is a well-known method to reduce inference latency by compressing a cumbersome teacher model to a small student model.
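The standard KD objective trains the student to match the teacher's temperature-softened output distribution. A minimal NumPy sketch of that loss (generic KD, with illustrative logits, not this paper's specific recommendation setup):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax with max-subtraction for numerical stability."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher targets and student predictions."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that reproduces the teacher's logits exactly incurs zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
```

A higher temperature flattens the teacher distribution, exposing the relative ordering among non-top classes ("dark knowledge") that the small student learns from.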