1 code implementation • CCGPK (COLING) 2022 • Young-Jun Lee, Chae-Gyun Lim, Yunsu Choi, Ji-Hui Im, Ho-Jin Choi
However, since this dataset was frozen in 2018, dialogue agents trained on it would not know how to interact with a human who loves “WandaVision.” One way to alleviate this problem is to create a large-scale dataset.
1 code implementation • COLING 2022 • Young-Jun Lee, Chae-Gyun Lim, Ho-Jin Choi
Although several studies have investigated few-shot in-context learning for empathetic dialogue generation, an in-depth analysis of empathetic dialogue generation with in-context learning is still lacking, especially for GPT-3 (Brown et al., 2020).
1 code implementation • 23 Oct 2023 • Young-Jun Lee, Jonghwan Hyeon, Ho-Jin Choi
To our knowledge, this is the first study to assess the image-sharing ability of LLMs in a zero-shot setting without visual foundation models.
no code implementations • 13 Oct 2023 • Jinwoo Kim, Janghyuk Choi, Jaehyun Kang, Changyeon Lee, Ho-Jin Choi, Seon Joo Kim
The binding problem in artificial neural networks is actively explored with the goal of achieving human-level recognition skills through the comprehension of the world in terms of symbol-like entities.
1 code implementation • CVPR 2023 • Jinwoo Kim, Janghyuk Choi, Ho-Jin Choi, Seon Joo Kim
Object-centric learning (OCL) aspires to a general and compositional understanding of scenes by representing a scene as a collection of object-centric representations.
no code implementations • 8 Dec 2022 • Young-Jun Lee, Byungsoo Ko, Han-Gyu Kim, Ho-Jin Choi
As sharing images in instant messaging is a crucial factor, there has been active research on learning an image-text multi-modal dialogue model.
no code implementations • 31 Oct 2022 • Nyoungwoo Lee, ChaeHun Park, Ho-Jin Choi, Jaegul Choo
To overcome these limitations, this paper proposes a simple but efficient method for generating adversarial negative responses leveraging a large-scale language model.
no code implementations • 26 Oct 2022 • Yeongmin Kim, Huiwon Jang, DongKeon Lee, Ho-Jin Choi
Motivated by these observations, we propose a simple solution, AltUB, which introduces alternating training to update the base distribution of the normalizing flow for anomaly detection.
Ranked #2 on Anomaly Detection on BTAD (using extra training data)
no code implementations • 14 Sep 2022 • Bum Chul Kwon, Jungsoo Lee, Chaeyeon Chung, Nyoungwoo Lee, Ho-Jin Choi, Jaegul Choo
We call the unwanted correlations "data biases," and the visual features causing data biases "bias factors."
no code implementations • 1 Sep 2021 • Nyoungwoo Lee, ChaeHun Park, Ho-Jin Choi
In open-domain dialogues, predictive uncertainties are mainly evaluated in a domain shift setting to cope with out-of-distribution inputs.
1 code implementation • ACL 2021 • Nyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, Sung-Hyun Myaeng
In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation.
no code implementations • LREC 2020 • Young-Jun Lee, Chae-Gyun Lim, Ho-Jin Choi
In order to construct our dataset, we used a large-scale sentiment movie review corpus as the unlabeled dataset.
1 code implementation • 6 Jan 2020 • Giang Nguyen, Shuan Chen, Thao Do, Tae Joon Jun, Ho-Jin Choi, Daeyoung Kim
Interpreting the behaviors of Deep Neural Networks (usually considered black boxes) is critical, especially as they are now being widely adopted across diverse aspects of human life.
no code implementations • LREC 2016 • Young-Seob Jeong, Won-Tae Joo, Hyun-Woo Do, Chae-Gyun Lim, Key-Sun Choi, Ho-Jin Choi
Before developing the system, it is first necessary to define or design the structure of temporal information.