1 code implementation • ICCV 2023 • Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, Jun-Yan Zhu
To address this, we propose DenseDiffusion, a training-free method that adapts a pre-trained text-to-image model to handle such dense captions while offering control over the scene layout.
1 code implementation • 5 Jun 2023 • Minjoon Jung, Youwon Jang, SeongHo Choi, Joochan Kim, Jin-Hwa Kim, Byoung-Tak Zhang
Video moment retrieval (VMR) identifies a specific moment in an untrimmed video for a given natural language query.
Ranked #2 on Moment Retrieval on Charades-STA
1 code implementation • 27 May 2023 • Deokjae Lee, JunYeong Lee, Jung-Woo Ha, Jin-Hwa Kim, Sang-Woo Lee, Hwaran Lee, Hyun Oh Song
To this end, we propose Bayesian Red Teaming (BRT), a novel query-efficient black-box red-teaming method based on Bayesian optimization that iteratively identifies diverse positive test cases leading to model failures by utilizing a pre-defined user input pool and past evaluations.
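For intuition, a pool-based Bayesian optimization loop of this kind can be sketched as below. This is a generic sketch, not BRT's exact method (which adds diversity-aware acquisition and an edit-based variant); `embed`, `victim_score`, and the 0.5 positivity threshold are illustrative assumptions.

```python
# Minimal pool-based black-box red-teaming loop with a GP surrogate and UCB.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def red_team(pool, embed, victim_score, budget=100, kappa=2.0):
    X = np.stack([embed(p) for p in pool])   # featurize the fixed user input pool
    queried, scores, failures = [], [], []
    idx = int(np.random.randint(len(pool)))  # random initial query
    for _ in range(budget):
        queried.append(idx)
        s = victim_score(pool[idx])          # expensive black-box evaluation
        scores.append(s)
        if s > 0.5:                          # assumed threshold for a "positive" case
            failures.append(pool[idx])
        gp = GaussianProcessRegressor().fit(X[queried], scores)
        mu, sigma = gp.predict(X, return_std=True)
        acq = mu + kappa * sigma             # UCB acquisition over the whole pool
        acq[queried] = -np.inf               # never re-query the same input
        idx = int(np.argmax(acq))
    return failures
```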
no code implementations • 11 Apr 2023 • Soohyun Kim, Junho Kim, Taekyung Kim, Hwan Heo, Seungryong Kim, Jiyoung Lee, Jin-Hwa Kim
This task is difficult due to the geometric distortion of panoramic images and the lack of a panoramic image dataset with diverse conditions, like weather or time.
no code implementations • ICCV 2023 • Jaewoong Lee, Sangwon Jang, Jaehyeong Jo, Jaehong Yoon, Yunji Kim, Jin-Hwa Kim, Jung-Woo Ha, Sung Ju Hwang
Token-based masked generative models are gaining popularity for their fast inference time with parallel decoding.
1 code implementation • 14 Mar 2023 • Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Jaehoon Ko, Hyeonsu Kim, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim
Text-to-3D generation has shown rapid progress recently with the advent of score distillation, a methodology that uses pretrained text-to-2D diffusion models to optimize a neural radiance field (NeRF) in the zero-shot setting.
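For context, the score distillation gradient introduced by DreamFusion, which this line refers to, backpropagates the pretrained 2D diffusion model's noise residual through the rendering $x = g(\theta)$:

$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta) = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right],$$

where $x_t$ is the noised rendering at timestep $t$, $y$ is the text prompt, $\epsilon$ is the injected noise, and $w(t)$ is a timestep weighting function.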
1 code implementation • 23 Feb 2023 • Hyungyung Lee, Da Young Lee, Wonjae Kim, Jin-Hwa Kim, Tackeun Kim, Jihang Kim, Leonard Sunwoo, Edward Choi
We also find that view-specific special tokens can distinguish between different views and properly generate specific views even when they do not exist in the dataset, and that utilizing multi-view chest X-rays faithfully captures abnormal findings in the additional X-rays.
1 code implementation • ICCV 2023 • Hyunsu Kim, Gayoung Lee, Yunjey Choi, Jin-Hwa Kim, Jun-Yan Zhu
Image blending aims to combine multiple images seamlessly.
no code implementations • 3 Feb 2023 • Hwan Heo, Taekyung Kim, Jiyoung Lee, Jaewon Lee, Soohyun Kim, Hyunwoo J. Kim, Jin-Hwa Kim
Multi-resolution hash encoding has recently been proposed to reduce the computational cost of neural renderings, such as NeRF.
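As a rough illustration of the encoding itself, a 1D simplification of the Instant-NGP scheme follows; the table size, hash constant, and initialization are assumptions here, and real NeRFs use 3D trilinear interpolation.

```python
# Multi-resolution hash encoding sketch: look coordinates up in several hashed
# feature grids at increasing resolutions and concatenate interpolated features.
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    def __init__(self, levels=4, table_size=2**14, feat_dim=2, base_res=16):
        super().__init__()
        self.tables = nn.ParameterList(
            [nn.Parameter(torch.randn(table_size, feat_dim) * 1e-4)
             for _ in range(levels)])
        self.res = [base_res * 2**l for l in range(levels)]
        self.table_size = table_size

    def forward(self, x):  # x: [N] coordinates in [0, 1]
        feats = []
        for table, r in zip(self.tables, self.res):
            pos = x * (r - 1)
            i0 = pos.floor().long()
            w = (pos - i0).unsqueeze(-1)
            f0 = table[(i0 * 2654435761) % self.table_size]        # hashed lookup
            f1 = table[((i0 + 1) * 2654435761) % self.table_size]
            feats.append((1 - w) * f0 + w * f1)                    # linear interp
        return torch.cat(feats, dim=-1)
```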
no code implementations • 27 Jan 2023 • Sungdong Kim, Jin-Hwa Kim, Jiyoung Lee, Minjoon Seo
Efficient video-language modeling should consider the computational cost because of a large, sometimes intractable, number of video frames.
Ranked #6 on Video Question Answering on NExT-QA
1 code implementation • 4 Nov 2022 • Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, Byoung-Tak Zhang
Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
1 code implementation • 23 Oct 2022 • Minjoon Jung, SeongHo Choi, Joochan Kim, Jin-Hwa Kim, Byoung-Tak Zhang
Video corpus moment retrieval (VCMR) is the task of retrieving the most relevant video moment from a large video corpus using a natural language query.
Ranked #2 on Video Corpus Moment Retrieval on TVR
no code implementations • 8 Oct 2022 • Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee
To combine parameter-efficient adaptation and model compression, we propose AlphaTuning, which consists of post-training quantization of the pre-trained language model and fine-tuning of only some parts of the quantized parameters for a target task.
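A minimal sketch of the idea follows, simplified to 1-bit per-row quantization; the paper's actual recipe uses group-wise binary-coding quantization, possibly with multiple bits, so treat this as illustrative.

```python
# AlphaTuning-style layer: freeze binary weight codes, train only the scales.
import torch
import torch.nn as nn

class AlphaTunedLinear(nn.Module):
    def __init__(self, linear: nn.Linear):
        super().__init__()
        W = linear.weight.data                      # [out, in]
        self.register_buffer("B", torch.sign(W))    # frozen binary codes
        alpha = W.abs().mean(dim=1, keepdim=True)   # per-row scaling factors
        self.alpha = nn.Parameter(alpha)            # trainable during fine-tuning
        self.bias = linear.bias

    def forward(self, x):
        # reconstruct the weight as alpha * B and apply the affine map
        return nn.functional.linear(x, self.alpha * self.B, self.bias)
```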
1 code implementation • CVPR 2023 • Gi-Cheon Kang, Sungdong Kim, Jin-Hwa Kim, Donghyun Kwak, Byoung-Tak Zhang
As a result, GST scales the amount of training data up to an order of magnitude larger than VisDial (from 1.2M to 12.9M QA data).
Conditional Text Generation • Out-of-Distribution Detection • +1
1 code implementation • 25 May 2022 • Jin-Hwa Kim, Yunji Kim, Jiyoung Lee, Kang Min Yoo, Sang-Woo Lee
Based on a recent trend in which multimodal generative evaluations exploit a vision-and-language pre-trained model, we propose the negative Gaussian cross-mutual information using the CLIP features as a unified metric, coined Mutual Information Divergence (MID); a minimal sketch follows this entry.
Ranked #1 on Human Judgment Classification on Pascal-50S
Hallucination Pair-wise Detection (1-ref) • Hallucination Pair-wise Detection (4-ref) • +4
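Under a Gaussian assumption on CLIP embeddings, mutual information reduces to a log-determinant expression; the sketch below shows only that core computation, not the full MID definition (which involves cross-mutual information against reference statistics), and the function name is illustrative.

```python
# Gaussian mutual information between paired CLIP image and text features.
import numpy as np

def gaussian_mi(img_feats, txt_feats, eps=1e-6):
    # img_feats, txt_feats: [N, d] CLIP embeddings of images and captions
    X = img_feats - img_feats.mean(0)
    Y = txt_feats - txt_feats.mean(0)
    d = X.shape[1]
    joint = np.cov(np.concatenate([X, Y], axis=1).T) + eps * np.eye(2 * d)
    sx, sy = joint[:d, :d], joint[d:, d:]
    # I(X;Y) = 1/2 (logdet Sx + logdet Sy - logdet S_joint) for Gaussians
    return 0.5 * (np.linalg.slogdet(sx)[1] + np.linalg.slogdet(sy)[1]
                  - np.linalg.slogdet(joint)[1])
```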
no code implementations • 11 May 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available under extreme differences between the source and target domains, has recently attracted considerable attention.
2 code implementations • 1 Feb 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
3 code implementations • 31 May 2021 • Jin-Hwa Kim, Do-Hyeong Kim, Saehoon Yi, Taehoon Lee
We demonstrate the efficiency of semi-orthogonal embedding for unsupervised anomaly segmentation (see the sketch after this entry).
Ranked #1 on Unsupervised Anomaly Detection on KolektorSDD (using extra training data)
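The core idea can be sketched as projecting per-location features with a random semi-orthogonal matrix before computing Mahalanobis anomaly scores, instead of randomly selecting channels; the function names and scoring details below are illustrative assumptions.

```python
# Semi-orthogonal projection + Mahalanobis scoring for anomaly segmentation.
import numpy as np

def semi_orthogonal(d, k, seed=0):
    A = np.random.default_rng(seed).standard_normal((d, k))
    Q, _ = np.linalg.qr(A)          # Q: [d, k] with orthonormal columns
    return Q

def anomaly_scores(train_feats, test_feats, k=100):
    # feats: [N, d] per-location CNN features (normal-only for training)
    W = semi_orthogonal(train_feats.shape[1], k)
    Ztr, Zte = train_feats @ W, test_feats @ W
    mu = Ztr.mean(0)
    cov = np.cov(Ztr.T) + 1e-5 * np.eye(k)
    prec = np.linalg.inv(cov)
    diff = Zte - mu
    return np.einsum("nd,dk,nk->n", diff, prec, diff)   # squared Mahalanobis
```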
no code implementations • 1 Jan 2021 • Duhyeon Bang, Yunho Jeon, Jin-Hwa Kim, Jiwon Kim, Hyunjung Shim
When a person identifies objects, they can reason by associating the objects with many classes and conclude by taking inter-class relations into account.
no code implementations • 8 Jun 2020 • Jin-Hwa Kim, Junyoung Park, Yongseok Choi
To validate our method, we experiment on meta-transfer learning and few-shot learning tasks for multiple settings.
no code implementations • LREC 2020 • Jin-Hwa Kim, Yoon Jo Kim, Mitra Behzadi, Ian G. Harris
The task is divided into two stages: 1) the classification of each message, and 2) the classification of the entire conversation.
1 code implementation • Findings (EMNLP) 2021 • Gi-Cheon Kang, Junseok Park, Hwaran Lee, Byoung-Tak Zhang, Jin-Hwa Kim
Visual dialog is a task of answering a sequence of questions grounded in an image using the previous dialog history as context.
no code implementations • ECCV 2018 • Kyung-Min Kim, Seong-Ho Choi, Jin-Hwa Kim, Byoung-Tak Zhang
We confirm the best performance of the dual attention mechanism combined with late fusion by ablation studies.
8 code implementations • NeurIPS 2018 • Jin-Hwa Kim, Jaehyun Jun, Byoung-Tak Zhang
In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly (see the sketch after this entry).
Ranked #11 on Phrase Grounding on Flickr30k Entities Test
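A minimal sketch of a low-rank bilinear attention map in the spirit of BAN: each (i, j) attention logit is a bilinear interaction between visual object i and question word j through shared low-rank projections. The full BAN also pools bilinear features under this map and stacks multiple glimpses; dimensions and activations here are simplified.

```python
import torch
import torch.nn as nn

class BilinearAttentionMap(nn.Module):
    def __init__(self, v_dim, q_dim, rank):
        super().__init__()
        self.U = nn.Linear(v_dim, rank)
        self.V = nn.Linear(q_dim, rank)
        self.p = nn.Linear(rank, 1, bias=False)

    def forward(self, v, q):
        # v: [B, Nv, v_dim] object features, q: [B, Nq, q_dim] word features
        hv = torch.relu(self.U(v)).unsqueeze(2)    # [B, Nv, 1, rank]
        hq = torch.relu(self.V(q)).unsqueeze(1)    # [B, 1, Nq, rank]
        logits = self.p(hv * hq).squeeze(-1)       # [B, Nv, Nq] bilinear logits
        # normalize over all (object, word) pairs
        return torch.softmax(logits.flatten(1), 1).view_as(logits)
```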
no code implementations • 18 Dec 2017 • Jin-Hwa Kim, Byoung-Tak Zhang
Kim et al. (2016) show that the Hadamard product in multimodal deep networks, which is widely used as a joint function in visual question answering tasks, implicitly performs an attentional mechanism over visual inputs.
2 code implementations • ACL 2019 • Jin-Hwa Kim, Nikita Kitaev, Xinlei Chen, Marcus Rohrbach, Byoung-Tak Zhang, Yuandong Tian, Dhruv Batra, Devi Parikh
The game involves two players: a Teller and a Drawer.
1 code implementation • NeurIPS 2017 • Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, Byoung-Tak Zhang
Catastrophic forgetting is a problem in which neural networks lose the information learned on a first task after being trained on a second task.
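This is the incremental moment matching (IMM) paper; its simplest variant, mean-IMM, merges two task-specific networks by averaging matched parameters (mode-IMM additionally weights by per-parameter Fisher information). A minimal sketch of mean-IMM, with the mixing weight as an assumption:

```python
# Mean-IMM: weighted average of matched parameters from two task-specific models.
import torch

def mean_imm(model_a, model_b, alpha=0.5):
    pa = dict(model_a.named_parameters())
    return {name: alpha * pa[name].data + (1 - alpha) * pb.data
            for name, pb in model_b.named_parameters()}
```

The merged dictionary can then be loaded into a fresh model with `model.load_state_dict(merged, strict=False)`.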
7 code implementations • 14 Oct 2016 • Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
Bilinear models provide rich representations compared with linear models.
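The low-rank bilinear pooling this paper proposes approximates a full bilinear form $x^\top W y$ with two projections and a Hadamard (element-wise) product; a minimal sketch:

```python
# Low-rank bilinear pooling via the Hadamard product (MLB-style).
import torch
import torch.nn as nn

class LowRankBilinearPooling(nn.Module):
    def __init__(self, x_dim, y_dim, rank, out_dim):
        super().__init__()
        self.U = nn.Linear(x_dim, rank)
        self.V = nn.Linear(y_dim, rank)
        self.P = nn.Linear(rank, out_dim)

    def forward(self, x, y):
        # element-wise product of low-rank projections, then output projection
        return self.P(torch.tanh(self.U(x)) * torch.tanh(self.V(y)))
```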
1 code implementation • NeurIPS 2016 • Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
We present Multimodal Residual Networks (MRN) for multimodal residual learning in visual question answering, extending the idea of deep residual learning.
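A minimal sketch of one residual block in the spirit of MRN, where the question representation is updated by a vision-gated joint term; the shared dimensionality and layer sizes are simplified assumptions.

```python
# One MRN-style block: q is refined by a residual joint term gated by vision.
import torch
import torch.nn as nn

class MultimodalResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.Wq = nn.Linear(dim, dim)
        self.Wv1 = nn.Linear(dim, dim)
        self.Wv2 = nn.Linear(dim, dim)

    def forward(self, q, v):
        joint = torch.tanh(self.Wq(q)) * torch.tanh(self.Wv2(torch.tanh(self.Wv1(v))))
        return q + joint   # residual learning over the joint representation
```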
1 code implementation • 24 Nov 2015 • Nicholas Léonard, Sagar Waghmare, Yang Wang, Jin-Hwa Kim
The rnn package provides components for implementing a wide range of Recurrent Neural Networks.