Search Results for author: Shuo-Yen Lin

Found 3 papers, 1 paper with code

MeDM: Mediating Image Diffusion Models for Video-to-Video Translation with Temporal Correspondence Guidance

1 code implementation • 19 Aug 2023 • Ernie Chu, Tzuhsuan Huang, Shuo-Yen Lin, Jun-Cheng Chen

This study introduces an efficient and effective method, MeDM, that utilizes pre-trained image Diffusion Models for video-to-video translation with consistent temporal flow.

Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector

no code implementations • 16 Jul 2023 • Shuo-Yen Lin, Ernie Chu, Che-Hsien Lin, Jun-Cheng Chen, Jia-Ching Wang

To the best of our knowledge, we are the first to propose DM-based naturalistic adversarial patch generation for object detectors.

Video ControlNet: Towards Temporally Consistent Synthetic-to-Real Video Translation Using Conditional Image Diffusion Models

no code implementations • 30 May 2023 • Ernie Chu, Shuo-Yen Lin, Jun-Cheng Chen

To the best of our knowledge, our proposed method is the first to accomplish diverse and temporally consistent synthetic-to-real video translation using conditional image diffusion models.

Optical Flow Estimation • Translation
