no code implementations • 31 Aug 2024 • Unggi Lee, Jiyeong Bae, Yeonji Jung, Minji Kang, Gyuri Byun, Yeonseo Lee, Dohee Kim, Sookbun Lee, Jaekwon Park, Taekyung Ahn, Gunho Lee, Hyeoncheol Kim
Knowledge Tracing (KT) is a critical component in online learning, but traditional approaches face limitations in interpretability and cross-domain adaptability.
no code implementations • 29 Aug 2024 • Younghwi Kim, Dohee Kim, Sunghyun Sim
Although attention-based models have limitations in representing the individual influence of each variable, they can capture the temporal dependencies in time series prediction and the magnitude of influence of individual variables.
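As a rough illustration of how such attention weights are usually exposed for inspection, the toy PyTorch sketch below scores each input variable at each time step and returns the weights alongside the forecast; the layer sizes and data are made up, and this is not the model proposed in the paper.

```python
import torch
import torch.nn as nn

class VariableAttention(nn.Module):
    """Toy variable-wise attention: scores each input variable per time step,
    so the weights can be inspected as a rough measure of influence."""
    def __init__(self, n_vars, hidden):
        super().__init__()
        self.score = nn.Linear(n_vars, n_vars)            # one score per variable
        self.rnn = nn.GRU(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                  # x: (batch, time, n_vars)
        weights = torch.softmax(self.score(x), dim=-1)     # per-variable weights
        out, _ = self.rnn(x * weights)                     # reweighted inputs
        return self.head(out[:, -1]), weights              # forecast + weights

model = VariableAttention(n_vars=4, hidden=16)
x = torch.randn(8, 24, 4)                                  # 8 series, 24 steps, 4 variables
y_hat, w = model(x)
print(y_hat.shape, w.shape)                                # (8, 1) and (8, 24, 4)
```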
no code implementations • 5 Jun 2024 • Unggi Lee, Jiyeong Bae, Dohee Kim, Sookbun Lee, Jaekwon Park, Taekyung Ahn, Gunho Lee, Damji Stratton, Hyeoncheol Kim
By leveraging the power of language models to capture semantic representations, LKT effectively incorporates textual information and significantly outperforms previous KT models on large benchmark datasets.
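The paper's exact pipeline is not reproduced here, but the general recipe of language-model-based knowledge tracing can be sketched as follows: a frozen pretrained encoder (bert-base-uncased is an arbitrary choice) embeds each exercise's text, and a small recurrent model over the student's attempt history predicts the probability of a correct response. All dimensions, names, and example items are illustrative.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Frozen pretrained encoder turns exercise text into semantic embeddings.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return enc(**batch).last_hidden_state[:, 0]        # [CLS] embeddings

class TextKT(nn.Module):
    """Sequence model over (item embedding, correctness) pairs."""
    def __init__(self, dim=768, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(dim + 1, hidden, batch_first=True)  # +1 for correctness flag
        self.out = nn.Linear(hidden, 1)

    def forward(self, item_emb, correct):                  # (B, T, dim), (B, T)
        x = torch.cat([item_emb, correct.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h))                  # P(correct) at each step

emb = embed(["Solve 2x + 3 = 7", "Factor x^2 - 1"]).unsqueeze(0)  # one student, two items
p = TextKT()(emb, torch.tensor([[1.0, 0.0]]))
print(p.shape)                                             # (1, 2, 1)
```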
no code implementations • 17 Mar 2024 • Suyong Park, Duc Giap Nguyen, Jinrak Park, Dohee Kim, Jeong Soo Eo, Kyoungseok Han
This study presents a deep neural network-based nonlinear model predictive control (DNN-MPC) method that reduces computational complexity, and demonstrates its practical utility by applying it to energy management optimization for hybrid electric vehicles (HEVs).
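One common way a DNN cuts MPC's online cost is by imitating the MPC control law offline, so that the online controller only evaluates a forward pass instead of solving an optimization at every step. The sketch below shows that generic idea with made-up state and control dimensions and placeholder data; it is not the paper's specific HEV formulation.

```python
import torch
import torch.nn as nn

# Offline: fit a small network to logged MPC solutions (imitation learning).
policy = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                       nn.Linear(64, 64), nn.ReLU(),
                       nn.Linear(64, 2))                   # e.g. engine/motor torque split

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
states = torch.randn(1024, 6)                              # placeholder for logged HEV states
u_mpc = torch.randn(1024, 2)                               # placeholder for offline MPC solutions

for _ in range(200):
    loss = nn.functional.mse_loss(policy(states), u_mpc)
    opt.zero_grad(); loss.backward(); opt.step()

# Online: a single forward pass replaces the per-step optimization.
u_online = policy(torch.randn(1, 6))
print(u_online)
```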
1 code implementation • 29 Jun 2023 • Yujin Kim, Dogyun Park, Dohee Kim, Suhyun Kim
We introduce NaturalInversion, a novel model inversion-based method that synthesizes images agreeing well with the original data distribution without using any real data.
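At its core, model inversion optimizes synthetic inputs so that a fixed, pretrained classifier assigns them a chosen label. The minimal loop below shows only that core recipe (the ResNet-18 backbone, class index, and L2 image prior are arbitrary choices); NaturalInversion itself builds further components on top of this idea that are not shown here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Frozen pretrained classifier; no real data is used anywhere below.
net = resnet18(weights="IMAGENET1K_V1").eval()
for p in net.parameters():
    p.requires_grad_(False)

x = torch.randn(4, 3, 224, 224, requires_grad=True)        # synthetic images (optimized)
target = torch.tensor([207, 207, 207, 207])                # arbitrary target class
opt = torch.optim.Adam([x], lr=0.05)

for step in range(100):
    loss = nn.functional.cross_entropy(net(x), target)     # push logits toward the target
    loss = loss + 1e-4 * x.pow(2).mean()                   # simple image prior (L2)
    opt.zero_grad(); loss.backward(); opt.step()

print(net(x).argmax(dim=1))                                # classes the classifier now prefers
```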
no code implementations • 30 Nov 2022 • Sunghyun Sim, Dohee Kim, Hyerim Bae
However, because this approach trains an independent model for each component, it cannot learn the relationships between time-series components.
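For contrast, a single model that consumes all decomposed components in one input can in principle learn such cross-component relationships; the minimal sketch below illustrates this with made-up trend, seasonal, and residual tensors and is not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical decomposed components of 32 series, 48 time steps each.
trend = torch.randn(32, 48, 1)
season = torch.randn(32, 48, 1)
resid = torch.randn(32, 48, 1)

# One shared network over all components, instead of one model per component.
joint = nn.GRU(3, 32, batch_first=True)
head = nn.Linear(32, 1)

h, _ = joint(torch.cat([trend, season, resid], dim=-1))    # components seen together
forecast = head(h[:, -1])
print(forecast.shape)                                      # (32, 1)
```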
1 code implementation • 2 Jul 2022 • Jinwoo Hwang, Minsu Kim, Daeun Kim, Seungho Nam, Yoonsung Kim, Dohee Kim, Hardik Sharma, Jongse Park
This paper presents CoVA, a novel cascade architecture that splits the cascade computation between compressed domain and pixel domain to address the decoding bottleneck, supporting both temporal and spatial queries.
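The cascade idea can be pictured as follows; every function in the sketch is a stand-in (not CoVA's actual API): a cheap filter scores frames from compressed-domain features such as motion vectors, and only the frames that pass are decoded and handed to the expensive pixel-domain detector.

```python
import torch

def compressed_features(frame_id):                 # stand-in for bitstream parsing
    return torch.randn(64)

def cheap_filter(feat):                            # stand-in compressed-domain model
    return feat.mean().item() > 0.0

def decode(frame_id):                              # stand-in for the video decoder
    return torch.randn(3, 720, 1280)

def heavy_detector(pixels):                        # stand-in pixel-domain DNN
    return {"objects": int(pixels.mean() > 0)}

results = []
for frame_id in range(100):
    if cheap_filter(compressed_features(frame_id)):    # most frames stop here, undecoded
        results.append(heavy_detector(decode(frame_id)))
print(len(results), "frames reached the pixel-domain stage")
```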