Search Results for author: Junbum Cha

Found 12 papers, 9 papers with code

Honeybee: Locality-enhanced Projector for Multimodal LLM

1 code implementation • 11 Dec 2023 • Junbum Cha, Wooyoung Kang, Jonghwan Mun, Byungseok Roh

In Multimodal Large Language Models (MLLMs), a visual projector plays a crucial role in bridging pre-trained vision encoders with LLMs, enabling profound visual understanding while harnessing the LLMs' robust capabilities.

Ranked #1 on Science Question Answering on ScienceQA (using extra training data)

Science Question Answering
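
A minimal sketch of the locality-enhanced projector idea, assuming a PyTorch-style setup: visual tokens are compressed with a local convolution and pooling rather than a global resampler before being projected to the LLM width. All names and sizes below are illustrative, not Honeybee's actual implementation.

```python
import torch
import torch.nn as nn

class ConvAbstractor(nn.Module):
    """Illustrative locality-preserving projector (hypothetical, not the
    paper's code): mixes neighboring visual tokens with a convolution,
    pools to fewer tokens, and projects to the LLM's hidden size."""

    def __init__(self, vis_dim=1024, llm_dim=4096, out_grid=12):
        super().__init__()
        self.local_mix = nn.Conv2d(vis_dim, vis_dim, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(out_grid)    # reduce token count
        self.proj = nn.Linear(vis_dim, llm_dim)       # match LLM width

    def forward(self, vis_tokens):                    # (B, N, C), N = H*W
        b, n, c = vis_tokens.shape
        h = w = int(n ** 0.5)
        x = vis_tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.pool(torch.relu(self.local_mix(x)))    # (B, C, g, g)
        return self.proj(x.flatten(2).transpose(1, 2))  # (B, g*g, llm_dim)

tokens = torch.randn(2, 24 * 24, 1024)                # e.g. a 24x24 ViT grid
print(ConvAbstractor()(tokens).shape)                 # torch.Size([2, 144, 4096])
```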

Learning Pseudo-Labeler beyond Noun Concepts for Open-Vocabulary Object Detection

no code implementations • 4 Dec 2023 • Sunghun Kang, Junbum Cha, Jonghwan Mun, Byungseok Roh, Chang D. Yoo

Specifically, the proposed method, Pseudo-Labeling for Arbitrary Concepts (PLAC), learns an arbitrary image-to-text mapping for pseudo-labeling arbitrary concepts.

Object Detection • Open Vocabulary Object Detection • +2
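
A toy sketch of the image-to-text pseudo-labeling idea, with hypothetical names and shapes (not PLAC's actual code): region features are mapped into a text embedding space and each region is assigned its nearest concept embedding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: learn a mapper from region features to a text
# embedding space, then pseudo-label each region with its closest concept.
region_to_text = nn.Linear(256, 512)                  # learned mapper

region_feats = torch.randn(10, 256)                   # 10 region proposals
concepts = F.normalize(torch.randn(50, 512), dim=-1)  # 50 concept embeddings

mapped = F.normalize(region_to_text(region_feats), dim=-1)
pseudo_labels = (mapped @ concepts.T).argmax(dim=-1)  # concept id per region
print(pseudo_labels.shape)                            # torch.Size([10])
```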

Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs

1 code implementation • CVPR 2023 • Junbum Cha, Jonghwan Mun, Byungseok Roh

Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and transferring the learned image-level understanding to the segmentation task.

Contrastive Learning • Open Vocabulary Semantic Segmentation • +4
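
For reference, the image-text contrastive objective mentioned above can be written in a few lines; this is the generic symmetric InfoNCE loss used in CL, not TCL's specific text-grounded variant.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Generic symmetric image-text contrastive (InfoNCE) loss:
    matched pairs lie on the diagonal of the similarity matrix."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / temperature
    targets = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```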

Domain Generalization by Mutual-Information Regularization with Pre-trained Models

1 code implementation • 21 Mar 2022 • Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun

Domain generalization (DG) aims to learn a model that generalizes to an unseen target domain using only limited source domains.

Domain Generalization
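
A simplified stand-in for the regularization idea in the title (the paper itself uses a variational mutual-information bound, not a plain L2 term): the task loss is augmented with a penalty pulling learned features toward those of a frozen pre-trained model.

```python
import torch
import torch.nn.functional as F

def regularized_loss(logits, labels, feat, oracle_feat, lam=0.1):
    """Task loss + regularizer toward frozen pre-trained ('oracle')
    features. Simplified: an L2 penalty standing in for the paper's
    variational mutual-information lower bound."""
    task = F.cross_entropy(logits, labels)
    reg = F.mse_loss(feat, oracle_feat.detach())   # oracle stays frozen
    return task + lam * reg

loss = regularized_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)),
                        torch.randn(4, 512), torch.randn(4, 512))
```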

Few-shot Font Generation with Weakly Supervised Localized Representations

2 code implementations • 22 Dec 2021 • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim

Existing methods learn to disentangle style and content elements by developing a universal style representation for each font style.

Font Generation

Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts

4 code implementations • ICCV 2021 • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim

MX-Font extracts multiple style features that are not explicitly conditioned on component labels but are instead obtained automatically by multiple experts, each representing a different local concept, e.g., a left-side sub-glyph.

Disentanglement • Font Generation • +1
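
The multiple-experts idea can be sketched as k parallel heads over a shared encoder, each emitting one localized style feature; the layer shapes below are placeholders, not MX-Font's architecture.

```python
import torch
import torch.nn as nn

class MultiExpertEncoder(nn.Module):
    """Toy sketch: k expert heads each extract one localized style
    feature from a glyph image (shapes are illustrative only)."""

    def __init__(self, num_experts=6, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.experts = nn.ModuleList(
            [nn.Linear(32, feat_dim) for _ in range(num_experts)])

    def forward(self, glyph):                        # (B, 1, H, W)
        shared = self.backbone(glyph)                # (B, 32)
        return torch.stack([e(shared) for e in self.experts], dim=1)

feats = MultiExpertEncoder()(torch.randn(2, 1, 64, 64))
print(feats.shape)                                   # torch.Size([2, 6, 128])
```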

Few-shot Font Generation with Localized Style Representations and Factorization

3 code implementations • 23 Sep 2020 • Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim

However, learning component-wise styles solely from reference glyphs is infeasible in the few-shot font generation scenario when a target script has a large number of components, e.g., over 200 for Chinese.

Font Generation
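
The factorization in the title is what makes component-wise styles tractable; as a toy illustration (not the paper's code), each (font, component) style can be composed from a per-font factor and a per-component factor, so hundreds of components need only two small embedding tables.

```python
import torch
import torch.nn as nn

# Toy sketch of localized-style factorization: compose a (font, component)
# style from per-font and per-component factors instead of storing one
# vector for every pair.
num_fonts, num_components, dim = 20, 200, 64
style_factor = nn.Embedding(num_fonts, dim)           # one vector per font
component_factor = nn.Embedding(num_components, dim)  # one per component

font_id, comp_id = torch.tensor([3]), torch.tensor([57])
local_style = style_factor(font_id) * component_factor(comp_id)  # (1, 64)
print(local_style.shape)
```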

Few-shot Compositional Font Generation with Dual Memory

3 code implementations • ECCV 2020 • Junbum Cha, Sanghyuk Chun, Gayoung Lee, Bado Lee, Seonghyeon Kim, Hwalsuk Lee

By leveraging the compositionality of compositional scripts, we propose a novel font generation framework, the Dual Memory-augmented Font Generation Network (DM-Font), which generates a high-quality font library from only a few samples.

Font Generation
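
The dual-memory idea can be pictured as two component-keyed stores: a persistent memory of learned script-level knowledge and a dynamic memory written at inference from the few reference glyphs. This is a toy illustration with hypothetical names, not DM-Font's implementation.

```python
import torch
import torch.nn as nn

class DualMemory(nn.Module):
    """Toy dual memory: persistent memory holds learned per-component
    knowledge; dynamic memory is written from reference glyph features."""

    def __init__(self, num_components=200, dim=64):
        super().__init__()
        self.persistent = nn.Embedding(num_components, dim)  # learned
        self.dynamic = {}                                    # per target font

    def write(self, comp_id, style_feat):
        self.dynamic[comp_id] = style_feat    # from a reference glyph

    def read(self, comp_id):
        dyn = self.dynamic.get(
            comp_id, torch.zeros(self.persistent.embedding_dim))
        return torch.cat([self.persistent.weight[comp_id], dyn])

mem = DualMemory()
mem.write(5, torch.randn(64))
print(mem.read(5).shape)                      # torch.Size([128])
```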

Group-Transformer: Towards A Lightweight Character-level Language Model

no code implementations • 25 Sep 2019 • Sungrae Park, Geewook Kim, Junyeop Lee, Junbum Cha, Ji-Hoon Kim, Hwalsuk Lee

When compared to Transformers with a comparable number of parameters and time complexity, the proposed model shows better performance.

Language Modelling
