no code implementations • 7 Apr 2024 • Qiaole Dong, Yanwei Fu
To this end, we present MemFlow, a real-time method for optical flow estimation and prediction with memory.
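The memory mechanism can be sketched as a buffer of past-frame feature tokens that the current frame reads via cross-attention. This is a minimal illustrative sketch; the class and method names (`FlowMemory`, `write`, `read`) are assumptions, not MemFlow's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class FlowMemory:
    """Toy memory buffer: stores feature tokens from past frames and
    lets the current frame aggregate them via cross-attention."""
    def __init__(self, dim, capacity=4):
        self.dim, self.capacity = dim, capacity
        self.tokens = np.empty((0, dim))

    def write(self, feats):
        # keep only tokens from the most recent `capacity` frames
        keep = self.capacity * len(feats)
        self.tokens = np.concatenate([self.tokens, feats])[-keep:]

    def read(self, queries):
        # cross-attention: current-frame queries attend over stored tokens
        if len(self.tokens) == 0:
            return np.zeros_like(queries)
        attn = softmax(queries @ self.tokens.T / np.sqrt(self.dim))
        return attn @ self.tokens  # memory-aggregated context
```

At inference time, the estimator would `read` context before predicting flow for the current frame, then `write` the frame's features back into memory, keeping the per-frame cost constant for real-time use.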
1 code implementation • 30 Jan 2024 • Yikai Wang, Chenjie Cao, Ke Fan, Qiaole Dong, YiFan Li, Xiangyang Xue, Yanwei Fu
Our research reveals that the fundamental sub-tasks of subject repositioning (filling the void left by the repositioned subject, reconstructing obscured portions of the subject, and blending the subject to be consistent with the surrounding areas) can be effectively reformulated as a single, unified prompt-guided inpainting task.
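The unification can be sketched as three calls to one prompt-guided inpainting model, one per sub-task. The `inpaint` callable and the prompt strings below are hypothetical stand-ins, not the paper's actual interface.

```python
def repositioning_calls(source_mask, target_mask):
    """Return the (mask, prompt) pairs that the three repositioning
    sub-tasks reduce to. Prompts are illustrative placeholders."""
    return [
        (source_mask, "fill background behind removed subject"),
        (target_mask, "complete occluded parts of subject"),
        (target_mask, "blend subject with surrounding area"),
    ]

def reposition(image, source_mask, target_mask, inpaint):
    # run the same prompt-guided inpainting model once per sub-task
    for mask, prompt in repositioning_calls(source_mask, target_mask):
        image = inpaint(image, mask, prompt)
    return image
```

The point of the sketch is the design choice: one inpainting backbone serves all three sub-tasks, with only the mask and prompt changing between calls.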
1 code implementation • 4 Dec 2023 • Qiaole Dong, Bo Zhao, Yanwei Fu
Recently, Google proposed DDVM, which demonstrates for the first time that a general diffusion model for image-to-image translation works impressively well on optical flow estimation, without any task-specific designs such as RAFT.
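Casting flow estimation as image-to-image diffusion can be sketched as a toy denoising loop: start from Gaussian noise and iteratively refine it toward a flow field. The `denoiser` callable and the linear schedule below are illustrative assumptions, not DDVM's real implementation.

```python
import numpy as np

def sample_flow(denoiser, shape, steps=8, rng=None):
    """Toy DDPM-style sampler for a flow field. `denoiser(flow, t)` is a
    placeholder for a learned network that predicts the clean flow."""
    rng = rng or np.random.default_rng(0)
    flow = rng.standard_normal(shape)       # pure noise at t = steps
    for t in range(steps, 0, -1):
        pred = denoiser(flow, t)            # network's clean-flow estimate
        alpha = (t - 1) / steps             # fraction of noise kept (toy schedule)
        flow = alpha * flow + (1 - alpha) * pred
    return flow                             # fully denoised at t = 1
```

In the real model the denoiser would also be conditioned on the two input frames; that conditioning is omitted here for brevity.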
3 code implementations • 19 May 2023 • Chenjie Cao, Yunuo Cai, Qiaole Dong, Yikai Wang, Yanwei Fu
As an exemplar, we leverage LeftRefill to address two different challenges: reference-guided inpainting and novel view synthesis, based on the pre-trained StableDiffusion.
1 code implementation • CVPR 2023 • Qiaole Dong, Chenjie Cao, Yanwei Fu
In this paper, we rethink previous approaches to optical flow estimation.
no code implementations • CVPR 2023 • Xiang Li, Xuelin Qian, Litian Liang, Lingjie Kong, Qiaole Dong, Jiejun Chen, Dingxia Liu, Xiuzhong Yao, Yanwei Fu
In particular, we build a causal graph and train a model on the images to estimate the intraoperative attributes for final OS prediction.
2 code implementations • 12 Oct 2022 • Chenjie Cao, Qiaole Dong, Yanwei Fu
Specifically, given a corrupted image, we present the Transformer Structure Restorer (TSR) module to restore holistic structural priors at low image resolution, which are further upsampled to higher resolution by the Simple Structure Upsampler (SSU) module.
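The coarse-to-fine pipeline can be sketched as: predict a structural prior at low resolution, then upsample it. Both functions below are minimal stand-ins (the real TSR is a transformer and the real SSU is learned, not nearest-neighbour).

```python
import numpy as np

def upsample_nearest(x, factor):
    """Toy stand-in for the SSU module: nearest-neighbour upsampling
    of a low-resolution structure map along both spatial axes."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def restore_structure(corrupt_lowres, tsr, factor=4):
    """tsr: any callable predicting holistic structure at low resolution,
    standing in for the Transformer Structure Restorer."""
    prior = tsr(corrupt_lowres)             # low-res structural prior
    return upsample_nearest(prior, factor)  # lift prior to higher resolution
```

Doing the expensive attention-based restoration only at low resolution keeps its cost bounded, while the cheap upsampler carries the structure back to image resolution.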
1 code implementation • 3 Aug 2022 • Chenjie Cao, Qiaole Dong, Yanwei Fu
To this end, this paper incorporates a pre-trained Masked AutoEncoder (MAE) into the inpainting model, which provides richer informative priors to enhance the inpainting process.
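The role of the MAE-style prior can be sketched as encoding only the patches untouched by the hole and exposing them as context for the inpainting model. The patch-mean "encoding" below is a toy stand-in for MAE's learned ViT encoder, and the function name is hypothetical.

```python
import numpy as np

def visible_patch_prior(image, hole_mask, patch=4):
    """Toy MAE-flavoured prior for inpainting: collect a feature (here
    just the patch mean) for every patch fully outside the hole."""
    H, W = hole_mask.shape
    prior = {}
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            # keep only patches with no corrupted pixels
            if not hole_mask[i:i + patch, j:j + patch].any():
                prior[(i // patch, j // patch)] = image[i:i + patch,
                                                        j:j + patch].mean()
    return prior  # context the inpainting decoder can attend to
```

The masked-patch setting mirrors inpainting itself (holes are masked patches), which is why features from a pre-trained MAE transfer naturally as priors here.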
2 code implementations • CVPR 2022 • Qiaole Dong, Chenjie Cao, Yanwei Fu
The proposed model restores holistic image structures with a powerful attention-based transformer model in a fixed low-resolution sketch space.