Search Results for author: Yixin Yang

Found 11 papers, 4 papers with code

E2VIDiff: Perceptual Events-to-Video Reconstruction using Diffusion Priors

no code implementations • 11 Jul 2024 • Jinxiu Liang, Bohan Yu, Yixin Yang, Yiming Han, Boxin Shi

Event cameras, mimicking the human retina, capture brightness changes with unparalleled temporal resolution and dynamic range.

Image Generation • Video Generation • +1

ColorMNet: A Memory-based Deep Spatial-Temporal Feature Propagation Network for Video Colorization

1 code implementation • 9 Apr 2024 • Yixin Yang, Jiangxin Dong, Jinhui Tang, Jinshan Pan

To explore this property for better spatial and temporal feature utilization, we develop a local attention module to aggregate the features from adjacent frames in a spatial-temporal neighborhood.

Colorization

Can Large Multimodal Models Uncover Deep Semantics Behind Images?

1 code implementation • 17 Feb 2024 • Yixin Yang, Zheng Li, Qingxiu Dong, Heming Xia, Zhifang Sui

Understanding the deep semantics of images is essential in the era dominated by social media.

Latency Correction for Event-guided Deblurring and Frame Interpolation

no code implementations • CVPR 2024 • Yixin Yang, Jinxiu Liang, Bohan Yu, Yan Chen, Jimmy S. Ren, Boxin Shi

Event cameras, with their high temporal resolution, high dynamic range, and low power consumption, are particularly well suited to time-sensitive applications such as deblurring and frame interpolation.

Deblurring

EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset

no code implementations • 13 Dec 2023 • Peiqi Duan, Boyu Li, Yixin Yang, Hanyue Lou, Minggui Teng, Yi Ma, Boxin Shi

Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.

Benchmarking • Deblurring • +6

Learning Event Guided High Dynamic Range Video Reconstruction

1 code implementation • CVPR 2023 • Yixin Yang, Jin Han, Jinxiu Liang, Imari Sato, Boxin Shi

Limited by the trade-off between frame rate and exposure time when capturing moving scenes with conventional cameras, frame-based HDR video reconstruction suffers from scene-dependent exposure-ratio balancing and ghosting artifacts.

Video Reconstruction

All-in-Focus Imaging From Event Focal Stack

no code implementations • CVPR 2023 • Hanyue Lou, Minggui Teng, Yixin Yang, Boxin Shi

Given an RGB image focused at an arbitrary distance, we explore the high temporal resolution of event streams, from which we automatically select refocusing timestamps and reconstruct corresponding refocused images with events to form a focal stack.

Deblurring

Coherent Event Guided Low-Light Video Enhancement

1 code implementation • ICCV 2023 • Jinxiu Liang, Yixin Yang, Boyu Li, Peiqi Duan, Yong Xu, Boxin Shi

With frame-based cameras, capturing fast-moving scenes without suffering from blur often comes at the cost of low SNR and low contrast.

Video Enhancement

BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization

no code implementations • 5 Dec 2022 • Yixin Yang, Zhongzheng Peng, Xiaoyu Du, Zhulin Tao, Jinhui Tang, Jinshan Pan

To overcome this problem, we further develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process for better performance.

Colorization • Semantic Correspondence

EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-Resolution

no code implementations • ICCV 2021 • Jin Han, Yixin Yang, Chu Zhou, Chao Xu, Boxin Shi

To reconstruct high-resolution intensity images from event data, we propose EvIntSR-Net, which converts event data into multiple latent intensity frames to achieve super-resolution on intensity images.

Super-Resolution
