1 code implementation • CCGPK (COLING) 2022 • Young-Jun Lee, Chae-Gyun Lim, Yunsu Choi, Ji-Hui Im, Ho-Jin Choi
However, since this dataset is frozen in 2018, dialogue agents trained on it would not know how to interact with a human who loves “WandaVision.” One way to alleviate this problem is to create a large-scale, up-to-date dataset.
1 code implementation • COLING 2022 • Young-Jun Lee, Chae-Gyun Lim, Ho-Jin Choi
Although several studies have investigated few-shot in-context learning for empathetic dialogue generation, an in-depth analysis of how in-context learning shapes empathetic dialogue generation is still lacking, especially for GPT-3 (Brown et al., 2020).
no code implementations • LREC 2020 • Young-Jun Lee, Chae-Gyun Lim, Ho-Jin Choi
In order to construct our dataset, we used a large-scale sentiment movie review corpus as the unlabeled dataset.
no code implementations • LREC 2016 • Young-Seob Jeong, Won-Tae Joo, Hyun-Woo Do, Chae-Gyun Lim, Key-Sun Choi, Ho-Jin Choi
Before developing the system, it is first necessary to define or design the structure of temporal information.