1 code implementation • 13 Nov 2024 • Suhyeok Jang, Seojin Kim, Jinwoo Shin, Jongheon Jeong
We also find that such fine-tuning can be performed by updating only a small fraction of the classifier's parameters.
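The idea of updating only a small fraction of a classifier's parameters can be illustrated with a minimal sketch, assuming a generic PyTorch model in which everything except the final classification head is frozen; the architecture and layer choice here are illustrative assumptions, not the paper's exact setup.

```python
import torch.nn as nn

# Toy classifier; the paper's actual architecture is not specified here.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),  # final classification head
)

# Freeze every parameter, then unfreeze only the final layer,
# so the optimizer updates a small fraction of the weights.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable}/{total} = {trainable / total:.2%}")
```

Passing only `filter(lambda p: p.requires_grad, model.parameters())` to the optimizer then realizes the parameter-efficient update.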
1 code implementation • 29 Jun 2024 • Sheo Yon Jhin, Seojin Kim, Noseong Park
In addition, we demonstrate the low prediction delay of our method on a variety of datasets.
1 code implementation • 5 May 2024 • Seojin Kim, Jaehyun Nam, Sihyun Yu, Younghoon Shin, Jinwoo Shin
Compared to the conventional textual inversion method in the image domain, which uses a single-level token embedding, our multi-level token embeddings allow the model to effectively learn the underlying low-shot molecule distribution.
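The contrast between a single-level and a multi-level token embedding can be sketched minimally as follows; the embedding dimension, the number of levels, and the `lookup` helper are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

dim, num_levels = 8, 3

# Single-level: one learnable vector reused everywhere (textual-inversion style).
single_level = np.zeros(dim)

# Multi-level: a separate learnable vector per level, giving the model
# more capacity to fit the target distribution from few examples.
multi_level = np.zeros((num_levels, dim))

def lookup(level, multi=True):
    # Hypothetical lookup: multi-level indexes by level; single-level ignores it.
    return multi_level[level] if multi else single_level

print(lookup(0).shape, lookup(2, multi=False).shape)
```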
no code implementations • 12 Nov 2023 • Yujin Cho, Mingeon Kim, Seojin Kim, Oyun Kwon, Ryan Donghan Kwon, Yoonha Lee, Dohyun Lim
This study investigates the efficacy of Large Language Models (LLMs) in interactive language therapy for high-functioning autistic adolescents.
no code implementations • 8 Nov 2023 • Seonkyu Lim, Jaehyeon Park, Seojin Kim, Hyowon Wi, Haksoo Lim, Jinsung Jeon, Jeongwhan Choi, Noseong Park
Long-term time series forecasting (LTSF) is a challenging task that has been investigated in various domains such as finance investment, health care, traffic, and weather forecasting.
1 code implementation • 18 Dec 2022 • Jongheon Jeong, Seojin Kim, Jinwoo Shin
Under smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness has been well evidenced in the literature, i.e., increasing the robustness of a classifier for one input can come at the expense of decreased accuracy on other inputs.
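A smoothed classifier predicts, at a point x, the class its base classifier outputs most often under Gaussian noise. The sketch below, using a toy 1-D base classifier and an arbitrary noise level sigma (both assumptions, not the paper's setup), shows the mechanism: far from the decision boundary the smoothed prediction agrees with the base one, while points near the boundary trade accuracy for robustness.

```python
import numpy as np

def base_classifier(x):
    # Toy 1-D base classifier: class 1 if x > 0, else class 0.
    return int(x > 0)

def smoothed_predict(x, sigma=0.5, n=1000, seed=0):
    # Monte Carlo estimate of the smoothed prediction: majority vote
    # of the base classifier over Gaussian perturbations of x.
    rng = np.random.default_rng(seed)
    noisy = x + sigma * rng.standard_normal(n)
    votes = np.array([base_classifier(v) for v in noisy])
    return int(np.argmax(np.bincount(votes, minlength=2)))

print(smoothed_predict(2.0))   # well inside class 1 → smoothed vote is 1
print(smoothed_predict(-2.0))  # well inside class 0 → smoothed vote is 0
```

Increasing sigma enlarges the region over which predictions are stable (more robustness) but also smears the decision boundary, which is one way to see the accuracy/robustness tension the excerpt refers to.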