no code implementations • 4 Oct 2023 • Siyuan Yang, Lu Zhang, Liqian Ma, Yu Liu, Jingjing Fu, You He
In this paper, we propose MagicRemover, a tuning-free method that leverages the powerful diffusion models for text-guided image inpainting.
2 code implementations • 24 Jan 2023 • Kaidong Zhang, Jialun Peng, Jingjing Fu, Dong Liu
Transformers have been widely used for video processing owing to the multi-head self-attention (MHSA) mechanism.
Ranked #1 on Video Inpainting on DAVIS (SSIM (square) metric)
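The MHSA mechanism mentioned above can be sketched in a few lines. This is a generic, minimal NumPy illustration of standard multi-head self-attention, not the paper's specific architecture; the projection matrices and shapes are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mhsa(x, wq, wk, wv, wo, num_heads):
    """Multi-head self-attention over a token sequence x of shape (n, d)."""
    n, d = x.shape
    dh = d // num_heads  # per-head dimension
    # Project to queries/keys/values and split into heads: (heads, n, dh)
    q = (x @ wq).reshape(n, num_heads, dh).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, num_heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention within each head
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)
    out = attn @ v                               # (heads, n, dh)
    out = out.transpose(1, 0, 2).reshape(n, d)   # merge heads back
    return out @ wo                              # output projection

rng = np.random.default_rng(0)
n, d = 6, 8
ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(4)]
y = mhsa(rng.standard_normal((n, d)), *ws, num_heads=2)
print(y.shape)  # (6, 8)
```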
no code implementations • ICCV 2023 • Hewei Guo, Liping Ren, Jingjing Fu, Yuwang Wang, Zhizheng Zhang, Cuiling Lan, Haoqian Wang, Xinwen Hou
To detect anomalies of various sizes within complicated normal patterns, we propose a Template-guided Hierarchical Feature Restoration method, which introduces two key techniques, bottleneck compression and template-guided compensation, for anomaly-free feature restoration.
Ranked #11 on Anomaly Detection on MVTec LOCO AD
1 code implementation • 14 Aug 2022 • Kaidong Zhang, Jingjing Fu, Dong Liu
In the spatial transformer in particular, we design a dual-perspective spatial MHSA, which integrates global tokens into the window-based attention.
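The idea of combining window-based attention with global tokens can be illustrated as follows. This is a hedged sketch of the general pattern, not the paper's actual module: the choice of mean-pooled window summaries as global tokens and the single-head, unprojected attention are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention_with_global(x, window):
    """Sketch: each window attends to its own tokens plus a shared set of
    global tokens, giving every query both a local and a global perspective.
    Global tokens here are mean-pooled window summaries (an assumption).
    x: (n, d) token sequence with n divisible by `window`."""
    n, d = x.shape
    g = np.stack([x[i:i + window].mean(0) for i in range(0, n, window)])
    out = np.empty_like(x)
    for i in range(0, n, window):
        w = x[i:i + window]
        kv = np.concatenate([w, g], axis=0)  # local tokens + global tokens
        attn = softmax(w @ kv.T / np.sqrt(d), axis=-1)
        out[i:i + window] = attn @ kv
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 8))
y = window_attention_with_global(x, window=3)
print(y.shape)  # (6, 8)
```

In a real model the local and global branches would use learned projections per head; the sketch only shows how the key/value set is enlarged with global context.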
1 code implementation • CVPR 2022 • Kaidong Zhang, Jingjing Fu, Dong Liu
We propose a flow completion network to align and aggregate flow features from the consecutive flow sequences based on the inertia prior.
no code implementations • CVPR 2018 • Yao Zhai, Jingjing Fu, Yan Lu, Houqiang Li
The RoI-based sub-region attention map and the aspect-ratio attention map are selectively pooled from the banks and then used to refine the original RoI features for RoI classification.