Search Results for author: Junhyeong Cho

Found 5 papers, 3 papers with code

Object-Centric Domain Randomization for 3D Shape Reconstruction in the Wild

no code implementations · 21 Mar 2024 · Junhyeong Cho, Kim Youwang, Hunmin Yang, Tae-Hyun Oh

One of the biggest challenges in single-view 3D shape reconstruction in the wild is the scarcity of <3D shape, 2D image>-paired data from real-world environments.

3D Shape Reconstruction · Object

PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization

no code implementations · ICCV 2023 · Junhyeong Cho, Gilhyun Nam, Sungyeon Kim, Hunmin Yang, Suha Kwak

In a joint vision-language space, a text feature (e.g., from "a photo of a dog") could effectively represent its relevant image features (e.g., from dog photos); see the sketch below.

Image Classification · Multi-modal Classification · +5
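
As a rough illustration of the joint vision-language space idea in the entry above, the sketch below embeds a text prompt and an image with the public CLIP model from Hugging Face transformers and compares them by cosine similarity. This is not the authors' PromptStyler code; the checkpoint name, placeholder image, and prompt are assumptions for demonstration only.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint (assumption: any joint vision-language model would do).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))  # placeholder standing in for a real dog photo
inputs = processor(text=["a photo of a dog"], images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    text_feat = model.get_text_features(input_ids=inputs["input_ids"],
                                        attention_mask=inputs["attention_mask"])
    image_feat = model.get_image_features(pixel_values=inputs["pixel_values"])

# In the joint space, a text feature can stand in for its relevant image
# features; cosine similarity measures how well it does so.
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
print((text_feat @ image_feat.T).item())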

Cross-Attention of Disentangled Modalities for 3D Human Mesh Recovery with Transformers

1 code implementation · 27 Jul 2022 · Junhyeong Cho, Kim Youwang, Tae-Hyun Oh

Transformer encoder architectures have recently achieved state-of-the-art results on monocular 3D human mesh reconstruction, but they require a substantial number of parameters and expensive computations; see the cross-attention sketch below.

3D Hand Pose Estimation · 3D Reconstruction
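
To make the cross-attention idea concrete, here is a toy PyTorch sketch in which learnable joint queries attend to image features supplied as a separate (disentangled) input. All names and dimensions are invented for illustration; this is not the authors' released implementation.

import torch
import torch.nn as nn

d_model, num_img_tokens, num_queries = 256, 49, 14  # hypothetical sizes

img_tokens = torch.randn(1, num_img_tokens, d_model)  # e.g. CNN backbone features
joint_queries = torch.randn(1, num_queries, d_model)  # learnable 3D joint queries

# Cross-attention: queries come from one modality, keys/values from the other.
cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
out, attn = cross_attn(query=joint_queries, key=img_tokens, value=img_tokens)
print(out.shape)  # torch.Size([1, 14, 256]): one feature per joint query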

Collaborative Transformers for Grounded Situation Recognition

3 code implementations · CVPR 2022 · Junhyeong Cho, Youngseok Yoon, Suha Kwak

To implement this idea, we propose Collaborative Glance-Gaze TransFormer (CoFormer) that consists of two modules: Glance transformer for activity classification and Gaze transformer for entity estimation (see the sketch below).

Grounded Situation Recognition · Image Classification · +4
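
A toy sketch of the two-module split described above, assuming (hypothetically) that Glance is a transformer encoder feeding a verb classifier and Gaze is a transformer decoder whose role queries cross-attend to image tokens. Sizes and module names are placeholders, not the released CoFormer code.

import torch
import torch.nn as nn

d_model, num_verbs, num_roles, num_nouns = 256, 504, 6, 1000  # placeholder sizes

img_tokens = torch.randn(1, 49, d_model)  # backbone features

# Glance: encode image tokens, pool, classify the activity (verb).
glance = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=3)
verb_head = nn.Linear(d_model, num_verbs)
verb_logits = verb_head(glance(img_tokens).mean(dim=1))

# Gaze: role queries cross-attend to image tokens to estimate entities (nouns).
role_queries = torch.randn(1, num_roles, d_model)
gaze = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=3)
noun_head = nn.Linear(d_model, num_nouns)
noun_logits = noun_head(gaze(role_queries, img_tokens))

print(verb_logits.shape, noun_logits.shape)  # (1, 504) and (1, 6, 1000)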

Grounded Situation Recognition with Transformers

1 code implementation · 19 Nov 2021 · Junhyeong Cho, Youngseok Yoon, Hyeonjun Lee, Suha Kwak

Grounded Situation Recognition (GSR) is the task that not only classifies a salient action (verb), but also predicts entities (nouns) associated with semantic roles and their locations in the given image (see the structure sketch below).

Grounded Situation Recognition · Image Classification · +4
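
Based only on the task definition above, here is a small sketch of the structure a GSR prediction might take: one verb plus, for each semantic role, a noun and an optional bounding box. All field names and example values are illustrative.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class RoleEntity:
    noun: str                                         # entity label, e.g. "dog"
    box: Optional[Tuple[float, float, float, float]]  # (x1, y1, x2, y2); None if not grounded

@dataclass
class GSRPrediction:
    verb: str                       # salient action classified for the image
    roles: Dict[str, RoleEntity]    # semantic role name -> grounded entity

pred = GSRPrediction(
    verb="jumping",
    roles={
        "agent": RoleEntity("dog", (10.0, 20.0, 120.0, 200.0)),
        "place": RoleEntity("park", None),  # some roles have no visible grounding
    },
)
print(pred.verb, pred.roles["agent"].noun)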
