no code implementations • 9 Feb 2024 • Unggi Lee, Minji Jeon, Yunseo Lee, Gyuri Byun, Yoorim Son, Jaeyoon Shin, Hongkyu Ko, Hyeoncheol Kim
This study explores the application of multi-modal large language models (MLLMs) in art appreciation education, focusing on developing LLaVA-Docent, a model that leverages these advancements.
no code implementations • 19 Dec 2023 • Unggi Lee, SungJun Yoon, Joon Seo Yun, KyoungSoo Park, YoungHoon Jung, Damji Stratton, Hyeoncheol Kim
This paper presents novel techniques for enhancing the performance of knowledge tracing (KT) models by focusing on the crucial factor of question and concept difficulty level.
1 code implementation • 19 Aug 2022 • Unggi Lee, Yonghyun Park, Yujin Kim, Seongyune Choi, Hyeoncheol Kim
Models that achieve both interpretability and improved performance have been insufficient.
no code implementations • 4 Nov 2021 • Seokjun Kim, Jaeeun Jang, Hyeoncheol Kim
In this paper, we introduce an imagine network that can simulate itself through artificial association networks.
no code implementations • 3 Nov 2021 • Seokjun Kim, Jaeeun Jang, Yeonju Jang, Seongyune Choi, Hyeoncheol Kim
We introduce memory association networks (MANs), which memorize and recall arbitrary data.
no code implementations • 2 Nov 2021 • Seokjun Kim, Jaeeun Jang, Hyeoncheol Kim
We introduce deductive association networks (DANs), networks that perform deductive reasoning.
no code implementations • 31 Oct 2021 • Seokjun Kim, Jaeeun Jang, Hyeoncheol Kim
Further, we propose a new neural data structure that can express all basic models of existing neural networks in a tree structure.
no code implementations • 29 Sep 2021 • Seokjun Kim, Jaeeun Jang, Heeseok Jung, Hyeoncheol Kim
Instead of using fixed sequence layers, we create a GT for each data instance and train the GTNN according to the tree's structure.
no code implementations • 29 Sep 2021 • Jaeeun Jang, Seokjun Kim, Hyeoncheol Kim
To understand the internal behaviors of convolutional neural networks (CNNs), many class activation mapping (CAM) based methods, which generate an explanation map as a linear combination of channels and corresponding weights, have been proposed.
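The linear-combination step shared by CAM-style methods can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the array shapes and the name `cam_map` are assumptions for the example.

```python
import numpy as np

def cam_map(feature_maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Class activation map: ReLU of the weighted sum of channels.

    feature_maps: (C, H, W) activations from the last conv layer.
    weights:      (C,) class-specific channel weights.
    """
    cam = np.tensordot(weights, feature_maps, axes=1)  # contract over C -> (H, W)
    cam = np.maximum(cam, 0)                           # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

# Toy example: 3 channels over a 4x4 spatial grid.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 4))
w = np.array([0.5, -0.2, 0.8])
explanation = cam_map(A, w)  # (4, 4) heatmap over spatial locations
```

Variants of CAM differ mainly in how the per-channel weights are obtained (e.g. from a global-average-pooling classifier or from gradients); the combination step itself stays this simple weighted sum.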
no code implementations • SEMEVAL 2017 • Joosung Yoon, Kigon Lyu, Hyeoncheol Kim
We propose a sentiment analyzer for the prediction of document-level sentiments of English micro-blog messages from Twitter.