no code implementations • 19 Sep 2023 • Changyoon Lee, Junho Myung, Jieun Han, Jiho Jin, Alice Oh
To compare the learners' interaction with and perception of AI and human TAs, we conducted a between-subject study with 20 novice programming learners.
no code implementations • 31 Aug 2023 • Nayeon Lee, Chani Jung, Junho Myung, Jiho Jin, Jose Camacho-Collados, Juho Kim, Alice Oh
This confirms the utility of CREHate for constructing culturally sensitive hate speech classifiers.
no code implementations • 31 Jul 2023 • Jiho Jin, Jiseon Kim, Nayeon Lee, Haneul Yoo, Alice Oh, Hwaran Lee
In this paper, we present KoBBQ, a Korean bias benchmark dataset, and we propose a general framework that addresses considerations for cultural adaptation of a dataset.
1 code implementation • Findings (NAACL) 2022 • Haneul Yoo, Jiho Jin, Juhee Son, JinYeong Bak, Kyunghyun Cho, Alice Oh
Historical records in Korea before the 20th century were primarily written in Hanja, an extinct language based on Chinese characters and not understood by modern Korean or Chinese speakers.
1 code implementation • 1 Sep 2022 • Dongkwan Kim, Jiho Jin, Jaimeen Ahn, Alice Oh
Subgraphs are rich substructures in graphs, and their nodes and edges can be partially observed in real-world tasks.
no code implementations • 20 May 2022 • Juhee Son, Jiho Jin, Haneul Yoo, JinYeong Bak, Kyunghyun Cho, Alice Oh
Built on top of multilingual neural machine translation, H2KE learns to translate a historical document written in Hanja from both a full dataset of outdated Korean translations and a small dataset of more recently translated contemporary Korean and English.
1 code implementation • Findings (ACL) 2022 • Yeon Seonwoo, Juhee Son, Jiho Jin, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh
These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to the retriever-reader models.
no code implementations • 29 Sep 2021 • Dongkwan Kim, Jiho Jin, Jaimeen Ahn, Alice Oh
Subgraphs are important substructures of graphs, but learning their representations has not been well studied.