Search Results for author: Yunfan Lu

Found 15 papers, 8 papers with code

EvLight++: Low-Light Video Enhancement with an Event Camera: A Large-Scale Real-World Dataset, Novel Method, and More

no code implementations · 29 Aug 2024 · Kanghao Chen, Guoqiang Liang, Hangyu Li, Yunfan Lu, Lin Wang

This dataset was curated using a robotic arm that traces a consistent non-linear trajectory, achieving spatial alignment precision under 0.03 mm and temporal alignment with errors under 0.01 s for 90% of the dataset.
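The temporal-alignment claim above is a coverage statistic: the fraction of samples whose alignment error falls under a tolerance. A minimal sketch of how such a figure could be computed, with made-up error values (the function name and data are illustrative, not from the paper):

```python
def fraction_within(errors_s, tol_s):
    """Fraction of alignment errors (seconds) strictly below the tolerance."""
    return sum(1 for e in errors_s if e < tol_s) / len(errors_s)

# Hypothetical per-sample temporal alignment errors, in seconds.
errors_s = [0.002, 0.004, 0.009, 0.012, 0.003, 0.008, 0.006, 0.005, 0.007, 0.011]

frac = fraction_within(errors_s, tol_s=0.01)
# 8 of the 10 sample errors fall under 0.01 s, so frac == 0.8
```

A dataset meeting the paper's stated criterion would yield a fraction of at least 0.9 at the 0.01 s tolerance.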

Feature Selection · Monocular Depth Estimation +2

Revisit Event Generation Model: Self-Supervised Learning of Event-to-Video Reconstruction with Implicit Neural Representations

no code implementations · 26 Jul 2024 · Zipeng Wang, Yunfan Lu, Lin Wang

Reconstructing intensity frames from event data while maintaining high temporal resolution and dynamic range is crucial for bridging the gap between event-based and frame-based computer vision.

Optical Flow Estimation · Self-Supervised Learning +1

BiGym: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark

1 code implementation · 10 Jul 2024 · Nikita Chernyadev, Nicholas Backshall, Xiao Ma, Yunfan Lu, Younggyo Seo, Stephen James

To validate the usability of BiGym, we thoroughly benchmark state-of-the-art imitation learning and demo-driven reinforcement learning algorithms within the environment and discuss future opportunities.

Imitation Learning

HR-INR: Continuous Space-Time Video Super-Resolution via Event Camera

no code implementations · 22 May 2024 · Yunfan Lu, Zipeng Wang, Yusheng Wang, Hui Xiong

However, the highly ill-posed nature of C-STVSR limits the effectiveness of current INR-based methods: they assume linear motion between frames and use interpolation or feature warping to generate features at arbitrary spatiotemporal positions with two consecutive frames.
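The linear-motion assumption criticized above amounts to generating a feature at an intermediate timestamp by linearly blending two consecutive frames. A minimal sketch of that baseline behavior (names, shapes, and values are illustrative, not from the paper):

```python
def linear_interp(frame0, frame1, tau):
    """Blend two consecutive frames at normalized time tau in [0, 1].

    This is the simple interpolation the abstract says current INR-based
    C-STVSR methods rely on; it cannot represent non-linear motion
    between the two frames.
    """
    return [(1 - tau) * a + tau * b for a, b in zip(frame0, frame1)]

# Toy 1-D "frames" standing in for per-pixel features.
f0 = [0.0, 10.0, 20.0]
f1 = [10.0, 20.0, 30.0]

mid = linear_interp(f0, f1, tau=0.5)
# mid == [5.0, 15.0, 25.0]
```

Any true motion that accelerates or curves between the two frames is lost under this blending, which is the limitation the paper targets.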

Space-time Video Super-resolution · Video Restoration +1

Event Camera Demosaicing via Swin Transformer and Pixel-focus Loss

1 code implementation · 3 Apr 2024 · Yunfan Lu, Yijie Xu, Wenzong Ma, Weiyu Guo, Hui Xiong

To this end, we present a Swin-Transformer-based backbone and a pixel-focus loss function for demosaicing with missing pixel values in RAW domain processing.

Demosaicking

Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames

no code implementations · 27 Jun 2023 · Yunfan Lu, Guoqiang Liang, Lin Wang

Although events possess high temporal resolution, which is beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired global shutter (GS) frames.

Self-Supervised Learning · Video Frame Interpolation

UniINR: Event-guided Unified Rolling Shutter Correction, Deblurring, and Interpolation

2 code implementations · 24 May 2023 · Yunfan Lu, Guoqiang Liang, Yusheng Wang, Lin Wang, Hui Xiong

To query a specific sharp frame (GS or RS), we embed the exposure time into STR and decode the embedded features pixel-by-pixel to recover a sharp frame.

Deblurring · Image Restoration +1

Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks

1 code implementation · 17 Feb 2023 · Xu Zheng, Yexin Liu, Yunfan Lu, Tongyan Hua, Tianbo Pan, Weiming Zhang, Dacheng Tao, Lin Wang

Event cameras are bio-inspired sensors that capture the per-pixel intensity changes asynchronously and produce event streams encoding the time, pixel position, and polarity (sign) of the intensity changes.
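The abstract describes an event stream as a sequence of records encoding time, pixel position, and polarity. A minimal sketch of that representation, plus a naive per-pixel polarity accumulation (all names and values are illustrative; real event pipelines use more sophisticated representations such as voxel grids):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One event record, following the (time, position, polarity) description."""
    t: float  # timestamp in seconds
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 for intensity increase, -1 for decrease

def accumulate(events, width, height):
    """Sum polarities per pixel into a simple event-count frame."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.p
    return frame

events = [Event(0.001, 1, 0, +1), Event(0.002, 1, 0, +1), Event(0.003, 0, 1, -1)]
frame = accumulate(events, width=2, height=2)
# frame[0][1] == 2 (two positive events), frame[1][0] == -1 (one negative event)
```

Because events arrive asynchronously per pixel, such streams have no fixed frame rate; converting them into dense representations like this one is a common preprocessing step for the deep models the survey covers.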

Deblurring · Deep Learning +6

Priors in Deep Image Restoration and Enhancement: A Survey

1 code implementation · 4 Jun 2022 · Yunfan Lu, Yiqi Lin, Hao Wu, Yunhao Luo, Xu Zheng, Hui Xiong, Lin Wang

Image restoration and enhancement improve image quality by removing degradations such as noise, blur, and reduced resolution.

Image Restoration · Survey

INVIGORATE: Interactive Visual Grounding and Grasping in Clutter

no code implementations · 25 Aug 2021 · Hanbo Zhang, Yunfan Lu, Cunjun Yu, David Hsu, Xuguang Lan, Nanning Zheng

This paper presents INVIGORATE, a robot system that interacts with humans through natural language and grasps a specified object in clutter.

Blocking · Object +5

Ab Initio Particle-based Object Manipulation

no code implementations · 19 Jul 2021 · Siwei Chen, Xiao Ma, Yunfan Lu, David Hsu

Like the model-based analytic approaches to manipulation, the particle representation enables the robot to reason about the object's geometry and dynamics in order to choose suitable manipulation actions.

Object · Robot Manipulation
