Search Results for author: Junyoung Seo

Found 11 papers, 8 papers with code

Pose-dIVE: Pose-Diversified Augmentation with Diffusion Model for Person Re-Identification

no code implementations · 23 Jun 2024 · Inès Hyeonsu Kim, Joungbin Lee, Woojeong Jin, Soowon Son, Kyusun Cho, Junyoung Seo, Min-Seop Kwak, Seokju Cho, JeongYeol Baek, Byeongwon Lee, Seungryong Kim

Person re-identification (Re-ID) often faces challenges due to variations in human poses and camera viewpoints, which significantly affect the appearance of individuals across images.

Data Augmentation · Diversity +1

GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping

1 code implementation · 27 May 2024 · Junyoung Seo, Kazumi Fukuda, Takashi Shibuya, Takuya Narihira, Naoki Murata, Shoukang Hu, Chieh-Hsin Lai, Seungryong Kim, Yuki Mitsufuji

In existing approaches, an input view is geometrically warped to novel views using estimated depth maps, and the warped image is then inpainted by T2I models.

Diversity · Monocular Depth Estimation +1
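The warp-then-inpaint pipeline described in the excerpt can be sketched as depth-based forward warping; unmapped regions are left empty for a T2I model to inpaint. This is a minimal NumPy illustration under assumed pinhole-camera conventions, not the paper's code, and all function and variable names are illustrative.

```python
import numpy as np

def warp_to_novel_view(image, depth, K, R, t):
    """Forward-warp an image to a novel view using per-pixel depth.

    Pixels with no source mapping stay at zero, leaving holes that a
    T2I model would later inpaint. K is the intrinsic matrix; (R, t)
    is the relative pose of the novel view.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x HW
    # Unproject to 3D camera coordinates, then transform into the novel view.
    cam_pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    novel_pts = R @ cam_pts + t.reshape(3, 1)
    proj = K @ novel_pts
    uv = (proj[:2] / np.clip(proj[2:], 1e-6, None)).round().astype(int)
    warped = np.zeros_like(image)
    # Keep only points that land inside the image and in front of the camera.
    mask = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (novel_pts[2] > 0)
    warped[uv[1, mask], uv[0, mask]] = image.reshape(-1, image.shape[-1])[mask]
    return warped
```

With an identity pose and unit depth, the warp reproduces the input, which is a quick sanity check on the projection conventions.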

Retrieval-Augmented Score Distillation for Text-to-3D Generation

1 code implementation · 5 Feb 2024 · Junyoung Seo, Susung Hong, Wooseok Jang, Inès Hyeonsu Kim, Minseop Kwak, Doyup Lee, Seungryong Kim

We leverage the retrieved asset to incorporate its geometric prior in the variational objective and adapt the diffusion model's 2D prior toward view consistency, achieving drastic improvements in both geometry and fidelity of generated scenes.

3D Generation · 3D geometry +3
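The idea of injecting a retrieved asset's geometric prior into the variational objective can be sketched as an extra gradient term pulling the rendered geometry toward the retrieved asset's. The L2 depth prior and the weight `lam` are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def retrieval_augmented_grad(sds_grad, rendered_depth, retrieved_depth, lam):
    """Combine a score-distillation gradient with a geometric prior from
    a retrieved 3D asset: an L2 term pulls the rendered depth toward the
    retrieved asset's depth rendered from the same viewpoint (a sketch).
    """
    geo_grad = 2.0 * (rendered_depth - retrieved_depth)  # d/d(depth) of the L2 prior
    return sds_grad + lam * geo_grad
```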

DirecT2V: Large Language Models are Frame-Level Directors for Zero-Shot Text-to-Video Generation

1 code implementation · 23 May 2023 · Susung Hong, Junyoung Seo, Heeseong Shin, Sunghwan Hong, Seungryong Kim

In the paradigm of AI-generated content (AIGC), there has been increasing attention to transferring knowledge from pre-trained text-to-image (T2I) models to text-to-video (T2V) generation.

Text-to-Video Generation · Video Generation +1

Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation

1 code implementation · 14 Mar 2023 · Junyoung Seo, Wooseok Jang, Min-Seop Kwak, Hyeonsu Kim, Jaehoon Ko, Junho Kim, Jin-Hwa Kim, Jiyoung Lee, Seungryong Kim

Text-to-3D generation has recently shown rapid progress with the advent of score distillation, a methodology that uses pretrained text-to-2D diffusion models to optimize a neural radiance field (NeRF) in the zero-shot setting.

3D Generation · Single-View 3D Reconstruction +1
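The score-distillation step that the excerpt refers to (in the style of DreamFusion's SDS) can be sketched with NumPy arrays standing in for a rendered image and a frozen text-to-image model. `noise_pred_fn` is an illustrative stand-in for the pretrained model's noise prediction; this is a sketch of the general technique, not this paper's implementation.

```python
import numpy as np

def sds_grad(render, noise_pred_fn, t, sigma, rng):
    """One Score Distillation Sampling step, sketched.

    The rendering is perturbed with noise, the frozen diffusion model
    predicts that noise, and the weighted residual becomes the gradient
    on the rendering (the U-Net Jacobian is dropped, as in SDS).
    """
    eps = rng.standard_normal(render.shape)   # sample noise
    noisy = render + sigma * eps              # perturb the rendering
    eps_hat = noise_pred_fn(noisy, t)         # frozen model's noise prediction
    w = sigma ** 2                            # timestep-dependent weight (assumed form)
    return w * (eps_hat - eps)
```

A model that recovers the added noise exactly yields a zero gradient, i.e. the rendering already sits on the model's data manifold.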

DiffFace: Diffusion-based Face Swapping with Facial Guidance

1 code implementation · 27 Dec 2022 · Kihong Kim, Yunho Kim, Seokju Cho, Junyoung Seo, Jisu Nam, Kychul Lee, Seungryong Kim, Kwanghee Lee

In this paper, we propose the first diffusion-based face swapping framework, called DiffFace, composed of ID-conditional DDPM training, sampling with facial guidance, and target-preserving blending.

Face Swapping

MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation

1 code implementation · 22 Sep 2022 · Junyoung Seo, Gyuseong Lee, Seokju Cho, Jiyoung Lee, Seungryong Kim

Specifically, we formulate a diffusion-based matching-and-generation framework that interleaves cross-domain matching and diffusion steps in the latent space by iteratively feeding the intermediate warp into the noising process and denoising it to generate a translated image.

Denoising · Translation
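The interleaving described in the excerpt — matching, re-noising the intermediate warp, then denoising — can be written as a short loop. The helper functions here are illustrative stand-ins for the matching and denoising networks, not the authors' API.

```python
import numpy as np

def interleaved_translate(exemplar, source, denoise_fn, match_fn, noise_levels, rng):
    """Matching-interleaved diffusion, sketched.

    At each step, the current estimate is cross-domain matched against
    the exemplar to produce a warp, the warp is fed back through the
    noising process, and one denoising step is applied.
    """
    x = rng.standard_normal(source.shape)                # start from noise in latent space
    for sigma in noise_levels:                           # from coarse (high noise) to fine
        warp = match_fn(x, exemplar, source)             # cross-domain matching -> warp
        x = warp + sigma * rng.standard_normal(x.shape)  # re-noise the intermediate warp
        x = denoise_fn(x, sigma)                         # one denoising step
    return x
```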

ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization

1 code implementation · 18 Aug 2022 · Jiwon Kim, Youngjo Min, Daehwan Kim, Gyuseong Lee, Junyoung Seo, Kwangrok Ryoo, Seungryong Kim

We present a novel semi-supervised learning framework, dubbed ConMatch, that intelligently leverages consistency regularization between the model's predictions from two strongly augmented views of an image, weighted by the confidence of the pseudo-label.

Pseudo Label
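The confidence-weighted consistency term described above can be sketched as cross-entropy between the two strongly augmented views' predictions, with each direction weighted by the confidence of the view acting as the pseudo-label. This is a minimal sketch of the idea under assumed conventions, not the paper's loss; all names are illustrative.

```python
import numpy as np

def conmatch_style_loss(p_view_a, p_view_b):
    """Confidence-guided consistency between two views' predictions.

    Each view supervises the other with its hard pseudo-label (argmax),
    and each direction is weighted by the supervising view's confidence
    (its peak probability).
    """
    def ce(target_probs, pred_probs):
        hard = target_probs.argmax(axis=1)  # hard pseudo-label from the target view
        return -np.log(pred_probs[np.arange(len(hard)), hard] + 1e-12)

    conf_a = p_view_a.max(axis=1)           # confidence of view A's pseudo-label
    conf_b = p_view_b.max(axis=1)
    loss = conf_a * ce(p_view_a, p_view_b) + conf_b * ce(p_view_b, p_view_a)
    return loss.mean()
```

Agreeing confident predictions yield a much smaller loss than disagreeing ones, which is the behavior the regularizer relies on.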

Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels

no code implementations · CVPR 2022 · Jiwon Kim, Kwangrok Ryoo, Junyoung Seo, Gyuseong Lee, Daehwan Kim, Hansang Cho, Seungryong Kim

In this paper, we present a simple but effective solution for semantic correspondence, called SemiMatch, that trains the network in a semi-supervised manner by supplementing a few ground-truth correspondences with a large number of confident correspondences used as pseudo-labels.

Data Augmentation · Semantic correspondence +1
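Selecting "confident correspondences as pseudo-labels", as the excerpt describes, can be sketched as thresholding a dense matching score map. The softmax-and-threshold rule here is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

def select_pseudo_correspondences(score_map, tau=0.9):
    """From a dense matching score map (source pixels x target pixels),
    keep each source pixel's argmax target as a pseudo-label only when
    its softmaxed confidence exceeds a threshold (a sketch).
    """
    probs = np.exp(score_map - score_map.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)                  # peak matching probability per source pixel
    target = probs.argmax(axis=1)             # best-matching target pixel
    keep = conf >= tau                        # confidence gate
    return target[keep], np.nonzero(keep)[0]  # (target indices, kept source indices)
```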

AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning

no code implementations · 25 Jan 2022 · Jiwon Kim, Kwangrok Ryoo, Gyuseong Lee, Seokju Cho, Junyoung Seo, Daehwan Kim, Hansang Cho, Seungryong Kim

In this paper, we address this limitation with a novel SSL framework for aggregating pseudo labels, called AggMatch, which refines initial pseudo labels by using different confident instances.

Pseudo Label
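Refining initial pseudo-labels "using different confident instances", as the excerpt puts it, can be sketched as a feature-similarity-weighted average over confident instances. This is a minimal stand-in for the aggregation idea, not the authors' implementation; the threshold and attention form are assumptions.

```python
import numpy as np

def aggmatch_style_refine(pseudo_probs, features, confidence, conf_thresh=0.9):
    """Refine each instance's pseudo-label distribution by attending over
    confident instances, weighted by cosine feature similarity (a sketch).
    """
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                       # cosine similarity between instances
    confident = confidence >= conf_thresh       # restrict supports to confident instances
    weights = np.exp(sim) * confident[None, :]  # soft attention over confident supports
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ pseudo_probs               # refined pseudo-label distributions
```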
