Search Results for author: Yawen Lu

Found 6 papers, 1 paper with code

ProMotion: Prototypes As Motion Learners

no code implementations CVPR 2024 Yawen Lu, Dongfang Liu, Qifan Wang, Cheng Han, Yiming Cui, Zhiwen Cao, Xueling Zhang, Yingjie Victor Chen, Heng Fan

We capitalize on a dual mechanism involving the feature denoiser and the prototypical learner to decipher the intricacies of motion.

Prototypical Transformer as Unified Motion Learners

no code implementations 3 Jun 2024 Cheng Han, Yawen Lu, Guohao Sun, James C. Liang, Zhiwen Cao, Qifan Wang, Qiang Guan, Sohail A. Dianat, Raghuveer M. Rao, Tong Geng, Zhiqiang Tao, Dongfang Liu

In this work, we introduce the Prototypical Transformer (ProtoFormer), a general and unified framework that approaches various motion tasks from a prototype perspective.

Tasks: Object Tracking, Representation Learning, +1

TransFlow: Transformer as Flow Learner

no code implementations CVPR 2023 Yawen Lu, Qifan Wang, Siqi Ma, Tong Geng, Yingjie Victor Chen, Huaijin Chen, Dongfang Liu

Optical flow is an indispensable building block for various important computer vision tasks, including motion estimation, object tracking, and disparity measurement.

Tasks: Motion Estimation, Object Detection, +4

Unsupervised Simultaneous Depth-from-defocus and Depth-from-focus

no code implementations 1 Jan 2021 Yawen Lu, Guoyu Lu

The proposed network is able to learn optimal depth mapping from the information contained in the blurring of a single image, generate a simulated image focal stack and all-in-focus image, and train a depth estimator from an image focal stack.

Depth Estimation
