Search Results for author: Donghoon Han

Found 5 papers, 2 papers with code

Unleash the Potential of CLIP for Video Highlight Detection

no code implementations • 2 Apr 2024 • Donghoon Han, Seunghyeon Seo, Eunhwan Park, Seong-Uk Nam, Nojun Kwak

Multimodal and large language models (LLMs) have revolutionized the utilization of open-world knowledge, unlocking novel potentials across various tasks and applications.

Highlight Detection

ConcatPlexer: Additional Dim1 Batching for Faster ViTs

no code implementations • 22 Aug 2023 • Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak

Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications.

MixNeRF: Modeling a Ray with Mixture Density for Novel View Synthesis from Sparse Inputs

1 code implementation • CVPR 2023 • Seunghyeon Seo, Donghoon Han, Yeonjin Chang, Nojun Kwak

In this work, we propose MixNeRF, an effective training strategy for novel view synthesis from sparse inputs by modeling a ray with a mixture density model.
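The core idea of modeling a ray with a mixture density can be sketched in a toy NumPy example. This is a rough illustration only, not the paper's implementation; the function names, mixture parameters, and per-sample colors below are all hypothetical. Depth samples along a ray are weighted by a normalized Gaussian mixture, and the weights blend per-sample predictions into a rendered color and an expected depth:

```python
import numpy as np

def gaussian_pdf(t, mu, sigma):
    """Evaluate a 1-D Gaussian PDF at sample depths t."""
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_weights(t, mus, sigmas, pis):
    """Sum weighted Gaussian components into a mixture density along the ray,
    then normalize over the samples to get blending weights."""
    pdf = sum(pi * gaussian_pdf(t, mu, s) for pi, mu, s in zip(pis, mus, sigmas))
    return pdf / pdf.sum()

# Toy ray: 64 depth samples, a 2-component mixture (illustrative parameters).
t = np.linspace(0.0, 1.0, 64)
w = mixture_weights(t, mus=[0.3, 0.7], sigmas=[0.05, 0.1], pis=[0.6, 0.4])

colors = np.random.rand(64, 3)           # hypothetical per-sample RGB predictions
rendered = (w[:, None] * colors).sum(0)  # expected color along the ray
depth = (w * t).sum()                    # expected depth along the ray
```

In the sparse-input setting, a density parameterized this way gives a smooth, low-parameter description of where mass lies along each ray, which is the regularizing effect the abstract alludes to.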

Depth Estimation Novel View Synthesis +1

Few-shot Image Generation with Mixup-based Distance Learning

3 code implementations • 23 Nov 2021 • Chaerin Kong, Jeesoo Kim, Donghoon Han, Nojun Kwak

Producing diverse and realistic images with generative models such as GANs typically requires large-scale training with vast amounts of images.

Image Generation

The U-Net based GLOW for Optical-Flow-free Video Interframe Generation

no code implementations • 17 Mar 2021 • Saem Park, Donghoon Han, Nojun Kwak

Through experiments, we confirmed the feasibility of the proposed algorithm and would like to suggest the U-Net based Generative Flow as a new possible baseline for video frame interpolation.

Occlusion Handling Optical Flow Estimation +1
