Search Results for author: Zhihui Lin

Found 5 papers, 4 papers with code

A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing

no code implementations • 10 Dec 2023 • Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu

This paper presents a video inversion approach for zero-shot video editing, which aims to model the input video with low-rank representation during the inversion process.

Video Editing
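The abstract above describes modeling a video with a low-rank representation during inversion (the "256 bases" of the title). As a rough illustration only, not the paper's actual method, here is a minimal numpy sketch of an EM-attention-style loop that compresses a set of feature vectors into a small dictionary of bases; the function name, shapes, and iteration count are all assumptions for illustration.

```python
import numpy as np

def em_bases(X, K=16, iters=3, rng=None):
    """Hypothetical sketch: compress N feature vectors (rows of X) into
    K bases via EM-style soft-assignment iterations. Shapes and
    normalization choices are illustrative assumptions."""
    rng = np.random.default_rng(0) if rng is None else rng
    N, C = X.shape
    mu = rng.standard_normal((K, C))                 # random initial bases
    mu /= np.linalg.norm(mu, axis=1, keepdims=True) + 1e-6
    for _ in range(iters):
        # E-step: soft-assign each feature vector to each basis
        logits = X @ mu.T                            # (N, K) similarity scores
        z = np.exp(logits - logits.max(axis=1, keepdims=True))
        z /= z.sum(axis=1, keepdims=True)            # rows sum to 1
        # M-step: each basis becomes a responsibility-weighted average
        mu = (z.T @ X) / (z.sum(axis=0)[:, None] + 1e-6)
        mu /= np.linalg.norm(mu, axis=1, keepdims=True) + 1e-6
    return mu, z

# usage: 1024 feature vectors of dimension 64, compressed into 16 bases
X = np.random.default_rng(1).standard_normal((1024, 64))
mu, z = em_bases(X, K=16)
X_hat = z @ mu   # low-rank reconstruction of the features from the bases
```

The low-rank structure comes from `X_hat` being a product of an (N, K) assignment matrix and a (K, C) basis matrix, so its rank is at most K.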

CMS-LSTM: Context Embedding and Multi-Scale Spatiotemporal Expression LSTM for Predictive Learning

1 code implementation • 6 Feb 2021 • Zenghao Chai, Zhengzhuo Xu, Yunpeng Bai, Zhihui Lin, Chun Yuan

To tackle the increasing ambiguity during forecasting, we design CMS-LSTM to focus on context correlations and multi-scale spatiotemporal flow with fine-grained local details, using two elaborately designed blocks: Context Embedding (CE) and Spatiotemporal Expression (SE) blocks.

Video Prediction

Self-Attention ConvLSTM for Spatiotemporal Prediction

2 code implementations • AAAI 2020 • Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, Chun Yuan

To extract spatial features with both global and local dependencies, we introduce the self-attention mechanism into ConvLSTM.

Video Prediction
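The abstract above introduces self-attention into ConvLSTM so spatial features capture both global and local dependencies. As a hedged sketch of the core idea only, not the paper's exact module, the snippet below applies plain spatial self-attention to a hidden-state feature map: every spatial position attends to every other position, so the update is no longer limited to the local convolutional receptive field. The function name and the 1x1-projection matrices `Wq`, `Wk`, `Wv` are illustrative assumptions.

```python
import numpy as np

def spatial_self_attention(h, Wq, Wk, Wv):
    """Hypothetical sketch: global self-attention over the spatial
    positions of a ConvLSTM hidden state h of shape (C, H, W).
    Wq/Wk/Wv play the role of 1x1 convolutions (matrix multiplies
    over the channel dimension)."""
    C, H, W = h.shape
    x = h.reshape(C, H * W)                    # flatten spatial positions
    q, k = Wq @ x, Wk @ x                      # (d, H*W) queries and keys
    v = Wv @ x                                 # (C, H*W) values
    att = q.T @ k / np.sqrt(q.shape[0])        # (H*W, H*W) pairwise scores
    att = np.exp(att - att.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)      # softmax over key positions
    out = v @ att.T                            # globally aggregated values
    return (x + out).reshape(C, H, W)          # residual connection

# usage: an 8-channel 4x4 hidden state with d=16 query/key projections
rng = np.random.default_rng(0)
h = rng.standard_normal((8, 4, 4))
Wq, Wk = rng.standard_normal((16, 8)), rng.standard_normal((16, 8))
Wv = rng.standard_normal((8, 8))
out = spatial_self_attention(h, Wq, Wk, Wv)
```

Because `att` relates all H*W positions to each other, each output location mixes information from the whole frame, which is the global dependency a convolution alone cannot provide.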
