Search Results for author: Maomao Li

Found 9 papers, 6 papers with code

A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing

no code implementations 10 Dec 2023 Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu

This paper presents a video inversion approach for zero-shot video editing, which models the input video with a low-rank representation during the inversion process.

Video Editing
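
As a rough illustration of the low-rank idea above, here is a minimal sketch of a generic EM loop that soft-assigns flattened spatial-temporal tokens to a small set of bases and re-estimates the bases from those assignments. The shapes, basis count, and the `em_lowrank_bases` helper are illustrative assumptions, not the paper's code.

```python
import numpy as np

def em_lowrank_bases(tokens, num_bases=256, iters=5, temperature=1.0):
    """tokens: (N, C) array of flattened spatial-temporal features."""
    rng = np.random.default_rng(0)
    bases = rng.standard_normal((num_bases, tokens.shape[1]))
    bases /= np.linalg.norm(bases, axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: soft responsibilities of each token for each basis
        logits = tokens @ bases.T / temperature            # (N, K)
        logits -= logits.max(axis=1, keepdims=True)        # numerical stability
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: bases become responsibility-weighted means of the tokens
        bases = (resp.T @ tokens) / (resp.sum(axis=0)[:, None] + 1e-6)
    # low-rank reconstruction of the video features from the learned bases
    return resp @ bases, bases

frames = np.random.rand(8 * 32 * 32, 64)   # e.g. 8 frames of 32x32 tokens, C=64
recon, bases = em_lowrank_bases(frames)
```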

ReliableSwap: Boosting General Face Swapping Via Reliable Supervision

1 code implementation 8 Jun 2023 Ge Yuan, Maomao Li, Yong Zhang, Huicheng Zheng

To avoid potential artifacts and drive the distribution of the network output closer to the natural one, we reverse the usual setup during the training stage of face swapping: synthetic images serve as input while real faces provide reliable supervision.

Face Reenactment Face Swapping
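
A minimal sketch of the reversed-supervision idea, assuming a placeholder `swap_net` and random tensors in place of real data: a synthetic swapped face is fed to the network and its output is penalized against the real face, pulling the model toward the natural image distribution. This is not the paper's training code.

```python
import torch
import torch.nn as nn

swap_net = nn.Sequential(                      # stand-in for a face-swapping network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

real_face = torch.rand(1, 3, 64, 64)           # ground truth acting as reliable supervision
synthetic_face = torch.rand(1, 3, 64, 64)      # pre-made synthetic swap used as input

output = swap_net(synthetic_face)
loss = nn.functional.l1_loss(output, real_face)  # a real image supervises a synthetic input
loss.backward()
```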

Inserting Anybody in Diffusion Models via Celeb Basis

1 code implementation NeurIPS 2023 Ge Yuan, Xiaodong Cun, Yong Zhang, Maomao Li, Chenyang Qi, Xintao Wang, Ying Shan, Huicheng Zheng

Empowered by the proposed celeb basis, the new identity in our customized model showcases a better concept combination ability than previous personalization methods.
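
A minimal sketch of the celeb-basis idea, assuming illustrative sizes and a placeholder optimization target: the new identity's embedding is constrained to a weighted combination of frozen celebrity embeddings, so only a small coefficient vector is learned.

```python
import torch

num_celebs, embed_dim = 691, 768                  # assumed sizes, for illustration only
celeb_basis = torch.randn(num_celebs, embed_dim)  # frozen celebrity name embeddings
coeffs = torch.zeros(num_celebs, requires_grad=True)  # the only trainable parameters

optimizer = torch.optim.Adam([coeffs], lr=1e-3)
target = torch.randn(embed_dim)                   # placeholder for a diffusion-guided target

for _ in range(100):
    new_identity = torch.softmax(coeffs, dim=0) @ celeb_basis  # convex combination of basis
    loss = torch.nn.functional.mse_loss(new_identity, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```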

Fine-Grained Face Swapping via Regional GAN Inversion

no code implementations CVPR 2023 Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang, Yongwei Nie

We rethink face swapping from the perspective of fine-grained face editing, i.e., "editing for swapping" (E4S), and propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.

Disentanglement Face Swapping
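
A minimal sketch of the "editing for swapping" intuition, assuming per-component style codes and hypothetical region ids: the source's texture codes replace the target's for selected facial components, while the target's shape is left untouched. The `regional_swap` helper and all shapes are illustrative assumptions.

```python
import numpy as np

def regional_swap(target_tex, source_tex, swap_ids):
    """target_tex/source_tex: (num_components, style_dim) per-region texture codes."""
    out = target_tex.copy()
    out[swap_ids] = source_tex[swap_ids]   # replace e.g. eyes/nose/mouth textures only
    return out

num_components, style_dim = 12, 512        # assumed: 12 facial regions, 512-d styles
target_tex = np.random.rand(num_components, style_dim)
source_tex = np.random.rand(num_components, style_dim)
swapped = regional_swap(target_tex, source_tex, swap_ids=[2, 3, 4])  # hypothetical ids
```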

Motion-aware Contrastive Video Representation Learning via Foreground-background Merging

1 code implementation CVPR 2022 Shuangrui Ding, Maomao Li, Tianyu Yang, Rui Qian, Haohang Xu, Qingyi Chen, Jue Wang, Hongkai Xiong

To alleviate such bias, we propose Foreground-background Merging (FAME) to deliberately compose the moving foreground region of the selected video onto the static background of others.

Action Recognition Contrastive Learning
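
A minimal sketch of the merging operation itself, assuming a soft foreground mask is already available: the moving foreground of clip A is composited onto a single static frame of clip B. The clips and mask here are random placeholders.

```python
import numpy as np

def fame_merge(clip_a, mask_a, clip_b):
    """clip_*: (T, H, W, 3) videos; mask_a: (T, H, W, 1) soft foreground mask."""
    static_bg = clip_b[0:1]                        # one frame acts as a static background
    return mask_a * clip_a + (1.0 - mask_a) * static_bg

clip_a = np.random.rand(16, 112, 112, 3)
clip_b = np.random.rand(16, 112, 112, 3)
mask_a = np.random.rand(16, 112, 112, 1)           # stand-in for a real foreground mask
merged = fame_merge(clip_a, mask_a, clip_b)        # a view for contrastive training
```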

Self-Attention ConvLSTM for Spatiotemporal Prediction

2 code implementations AAAI 2020 Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, Chun Yuan

To extract spatial features with both global and local dependencies, we introduce the self-attention mechanism into ConvLSTM.

Video Prediction
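
A minimal sketch of the general idea of injecting self-attention into a ConvLSTM hidden state, not the paper's exact module: 1x1 convolutions produce queries, keys, and values, and attention over all spatial positions adds global context to the locally convolved features.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.q = nn.Conv2d(channels, hidden, 1)
        self.k = nn.Conv2d(channels, hidden, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, h):
        b, c, H, W = h.shape
        q = self.q(h).flatten(2).transpose(1, 2)       # (B, HW, hidden)
        k = self.k(h).flatten(2)                       # (B, hidden, HW)
        v = self.v(h).flatten(2).transpose(1, 2)       # (B, HW, C)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, H, W)
        return h + out                                 # residual global context

h = torch.rand(2, 32, 16, 16)                          # a ConvLSTM hidden state
h_global = SpatialSelfAttention(32)(h)
```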
