no code implementations • COLING (PEOPLES) 2020 • Jonggu Kim, Hyeonmok Ko, Seoha Song, Saebom Jang, Jiyeon Hong
We are the first to use ELECTRA, a state-of-the-art pretrained language model, and validate its performance on emotion recognition in conversations.
no code implementations • 25 Feb 2025 • Junhyun Lee, Harshith Goka, Hyeonmok Ko
Hallucination refers to inaccurate, irrelevant, or inconsistent text generated by large language models (LLMs).
no code implementations • 18 Dec 2024 • Seoha Song, Junhyun Lee, Hyeonmok Ko
We demonstrate this by finetuning four different LLMs with Hansel and show that the mean absolute error of the output sequence length decreases significantly for every model and dataset compared to prompt-based length-control finetuning.
no code implementations • 18 Jan 2023 • Hyungtak Choi, Hyeonmok Ko, Gurpreet Kaur, Lohith Ravuru, Kiranmayi Gandikota, Manisha Jhawar, Simma Dharani, Pranamya Patil
Our evaluations on dialogue datasets of users planning a schedule show that our model outperforms the baseline model.