Search Results for author: Yixin Yang

Found 6 papers, 0 papers with code

Learning Event Guided High Dynamic Range Video Reconstruction

CVPR 2023 · Yixin Yang, Jin Han, Jinxiu Liang, Imari Sato, Boxin Shi

Limited by the trade-off between frame rate and exposure time when capturing moving scenes with conventional cameras, frame-based HDR video reconstruction suffers from scene-dependent exposure ratio balancing and ghosting artifacts.

Video Reconstruction

All-in-Focus Imaging From Event Focal Stack

CVPR 2023 · Hanyue Lou, Minggui Teng, Yixin Yang, Boxin Shi

Given an RGB image focused at an arbitrary distance, we explore the high temporal resolution of event streams, from which we automatically select refocusing timestamps and reconstruct corresponding refocused images with events to form a focal stack.
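Once refocused images are reconstructed at the selected timestamps, they form a focal stack that can be merged into an all-in-focus image. A minimal sketch of such a merge, using per-pixel Laplacian sharpness to pick the best-focused slice (an illustrative baseline, not the paper's event-based method; the function name and sharpness measure are assumptions):

```python
import numpy as np

def all_in_focus(focal_stack):
    """Merge a focal stack (S, H, W) into one all-in-focus image by
    picking, per pixel, the slice with the highest local sharpness.
    Sharpness is approximated by the discrete Laplacian magnitude."""
    stack = np.asarray(focal_stack, dtype=np.float64)
    sharp = np.empty_like(stack)
    for i, img in enumerate(stack):
        # 4-neighbour Laplacian (wrap-around borders, for brevity)
        lap = (-4.0 * img
               + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
        sharp[i] = np.abs(lap)
    best = np.argmax(sharp, axis=0)  # (H, W) index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

In practice a smoothed sharpness map and blended (rather than hard) slice selection avoid seams at depth boundaries.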


Coherent Event Guided Low-Light Video Enhancement

ICCV 2023 · Jinxiu Liang, Yixin Yang, Boyu Li, Peiqi Duan, Yong Xu, Boxin Shi

With frame-based cameras, capturing fast-moving scenes without suffering from blur often comes at the cost of low SNR and low contrast.

Video Enhancement

BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization

5 Dec 2022 · Yixin Yang, Zhongzheng Peng, Xiaoyu Du, Zhulin Tao, Jinhui Tang, Jinshan Pan

To overcome this problem, we further develop a mixed-expert block that extracts semantic information to model object boundaries in frames, so that the semantic image prior can better guide the colorization process.
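The abstract does not detail the mixed-expert block's design; as a rough illustration of the general mixture-of-experts pattern it names, here is a minimal gated blend of linear experts (all names, shapes, and the linear-expert choice are assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_expert_block(features, expert_weights, gate_weights):
    """Generic mixture of experts over feature vectors.

    features:       (N, D) input features
    expert_weights: (E, D, K) one linear map per expert
    gate_weights:   (D, E) gating projection

    A softmax gate blends the E expert outputs per sample."""
    gates = softmax(features @ gate_weights)                          # (N, E)
    expert_out = np.einsum('nd,edk->nek', features, expert_weights)   # (N, E, K)
    return np.einsum('ne,nek->nk', gates, expert_out)                 # (N, K)
```

A useful sanity check of the gating: when all experts are identical, the convex gate weights cancel out and the block reduces to a single expert.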

Colorization · Semantic Correspondence

EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-Resolution

ICCV 2021 · Jin Han, Yixin Yang, Chu Zhou, Chao Xu, Boxin Shi

To reconstruct high-resolution intensity images from event data, we propose EvIntSR-Net, which converts event data into multiple latent intensity frames to achieve super-resolution on intensity images.
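The standard preprocessing behind event-to-frame conversion is to bin the asynchronous event stream into a fixed number of per-pixel polarity-accumulation frames ordered by timestamp. A minimal sketch of that binning step (only the generic preprocessing, not the EvIntSR-Net network; the function name and event tuple layout are assumptions):

```python
import numpy as np

def events_to_latent_frames(events, n_frames, height, width):
    """Bin events, each a (t, x, y, polarity) tuple, into n_frames
    frames by splitting the stream's time span into equal windows and
    accumulating +1 / -1 per pixel for positive / negative polarity."""
    frames = np.zeros((n_frames, height, width), dtype=np.float64)
    if not events:
        return frames
    ts = np.array([e[0] for e in events], dtype=np.float64)
    t0 = ts.min()
    span = max(ts.max() - t0, 1e-9)  # avoid division by zero
    for t, x, y, p in events:
        k = min(int((t - t0) / span * n_frames), n_frames - 1)
        frames[k, y, x] += 1.0 if p > 0 else -1.0
    return frames
```

Learning-based pipelines typically feed such binned frames (or finer-grained voxel grids) into the network that produces the latent intensity frames.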

